This work investigates how the amenities offered by Intelligent Environments can be used to shape new types of useful, exciting and fulfilling experiences while watching sports or movies. To this end, two ambient media players were developed, aspiring to offer live access to secondary information via the available displays of an Intelligent Living Room and to appropriately exploit the technological equipment so as to support natural interaction. Expert-based evaluation experiments revealed several factors that can significantly influence the overall experience without hindering the viewers’ immersion in the main media.
High stress levels and sleep deprivation may cause several mental or physical health issues, such as depression, impaired memory, decreased motivation, and obesity. The COVID-19 pandemic has produced unprecedented changes in our lives, generating significant stress and worries about health, social isolation, employment, and finances. Now more than ever, it is therefore crucial to deliver solutions that can help people manage and control their stress and reduce sleep disturbances, so as to improve their health and overall quality of life. Technology, and in particular Ambient Intelligence Environments, can help in that direction, considering that such environments are able to understand the needs of their users, identify their behavior, learn their preferences, and act and react in their interest. This work presents two systems that have been designed and developed in the context of an Intelligent Home, namely CaLmi and HypnOS, which aim to assist users who struggle with stress and poor sleep quality, respectively. Both systems rely on real-time data collected by wearable devices, as well as contextual information retrieved from the ambient facilities of the Intelligent Home, so as to offer appropriate pervasive relaxation programs (CaLmi) or personalized insights regarding sleep hygiene (HypnOS) to the residents. This article describes the design process that was followed and the functionality of both systems, reports the results of the user studies conducted to evaluate their end-user applications, and concludes with a discussion of future plans.
While the body of research on the programming of Intelligent Environments (IEs) by adults is steadily growing, informed insights about children as programmers of such environments remain limited. Previous work has already established that young children can learn programming basics. Yet, it remains to be investigated whether this capability transfers to the context of IEs, since encouraging children to participate in the management of their intelligent surroundings can foster responsibility, independence, and a spirit of cooperation. We performed a user study (N=15) with children aged 7-12, using a block-based, gamified AR spatial coding prototype that allows them to manipulate smart artifacts in an Intelligent Living Room. Our results validated that children can indeed understand and program IEs. Based on our findings, we contribute preliminary implications regarding the use of specific technologies and paradigms (e.g. AR, trigger-action programming) to inspire future systems that enable children to create enriching experiences in IEs.
Sleep is important for many vital functions. Unfortunately, many people suffer from sleep-related problems, which have negative consequences on sleep quality and therefore on quality of life. Considering the important health benefits of a good night’s sleep, it is crucial to investigate technological solutions that promote and improve sleep hygiene. To that end, the HypnOS framework for “Intelligent Homes” is introduced, aiming to improve the sleep quality of home residents by monitoring their sleep and providing personalized recommendations to overcome sleep-related issues. This paper describes the design process that was followed, presents the framework’s functionality, reports the findings of an expert-based evaluation of the HypnOS mobile app, and discusses future plans.
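The idea of mapping monitored sleep data to personalized recommendations can be illustrated with a minimal rule-based sketch. This is purely illustrative: the thresholds, metric names, and tip texts below are assumptions for demonstration, not part of the published HypnOS system.

```python
# Hypothetical sketch: rule-based sleep-hygiene tips from one night's
# summary metrics. All field names, thresholds, and messages are
# illustrative assumptions, not HypnOS internals.

def sleep_recommendations(night):
    """Map a night's summary metrics to personalized sleep-hygiene tips."""
    tips = []
    if night.get("total_sleep_hours", 8) < 7:
        tips.append("Aim for at least 7 hours of sleep tonight.")
    if night.get("caffeine_after_6pm", False):
        tips.append("Avoid caffeine in the evening; it can delay sleep onset.")
    if night.get("screen_time_before_bed_min", 0) > 30:
        tips.append("Reduce screen use in the hour before bedtime.")
    return tips

print(sleep_recommendations({"total_sleep_hours": 5.5,
                             "caffeine_after_6pm": True}))
```

In a deployed system such rules would be derived from wearable data and refined per user, rather than hard-coded.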
A basic understanding of problem-solving and computational thinking is undoubtedly a benefit for all ages. At the same time, the proliferation of Intelligent Environments has raised the need to configure their behaviors to address their users’ needs. This configuration can take the form of programming and, coupled with advances in Augmented Reality and Conversational Agents, can enable users to take control of their intelligent surroundings in an efficient and natural manner. Focusing on children, who can greatly benefit from being immersed in programming from an early age, this paper presents an authoring framework in the form of an Augmented Reality serious game, named MagiPlay, that allows children to manipulate and program their Intelligent Environment. This is achieved through a handheld device, which children can use to capture smart objects via its camera and subsequently create rules dictating their behavior. An intuitive user interface permits players to combine LEGO-like 3D bricks as part of the rule-creation process, aiming to make the experience more natural. Additionally, children can communicate with the system in natural language through a Conversational Agent, in order to configure rules by talking with a human-like agent; the agent also serves as a guide/helper for the player, providing context-sensitive tips for every part of the rule-creation process. Finally, MagiPlay enables networked collaboration, allowing parental and teacher guidance and support. The main objective of this research work is to provide young learners with a fun and engaging way to program their intelligent surroundings. This paper describes the game logic of MagiPlay and its implementation details, and discusses the statistically significant results of an evaluation conducted with end-users, i.e. a group of children aged seven to twelve.
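The rule-creation paradigm described above follows the classic trigger-action model: a rule pairs a condition over the environment's state with an effect on a smart object. The following is a minimal sketch of that model, assuming invented device names and state keys; it does not reflect MagiPlay's actual implementation.

```python
# Hypothetical sketch of the trigger-action paradigm that the LEGO-like
# bricks expose; device names and state keys are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: Callable[[dict], bool]  # predicate over the environment state
    action: Callable[[], str]        # effect on a smart object

def run_rules(rules, state):
    """Fire every rule whose trigger holds in the current state."""
    return [rule.action() for rule in rules if rule.trigger(state)]

# "When the room gets dark, turn on the smart lamp."
rules = [Rule(trigger=lambda s: s["lux"] < 50,
              action=lambda: "lamp: ON")]
print(run_rules(rules, {"lux": 20}))  # → ['lamp: ON']
```

In the game, each brick would correspond to one trigger or action fragment, so snapping bricks together amounts to composing such a rule.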
Due to the proliferation of Internet of Things (IoT) devices and the emergence of the Ambient Intelligence (AmI) paradigm, the need to facilitate interaction between users and the services integrated in Intelligent Environments has surfaced. As a result, Conversational Agents are increasingly used in this context in order to achieve natural, intuitive and seamless interaction between the user and the system. However, despite the continuous progress and advancements in the area of Conversational Agents, current approaches still face considerable limitations. The system proposed in this paper addresses some of the main drawbacks by: (a) automatically integrating new services based on their formal specification, (b) incorporating error handling via follow-up questions, and (c) processing multiple user intents through segmentation of the input. The paper describes the main components of the system, as well as the technologies they utilize. Additionally, it analyses the pipeline that processes user input, resulting in the generation of a response and the invocation of the appropriate intelligent services.
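Point (c), handling multiple intents by segmenting the input, can be sketched in a few lines: split a compound utterance into clauses, then match each clause to a registered service intent. The keyword table, intent names, and matching heuristic below are assumptions for illustration, not the paper's actual NLU pipeline.

```python
# Illustrative sketch of multi-intent segmentation: split an utterance
# into clauses and match each against registered intents by keyword
# overlap. Intent names and keywords are invented for this example.

import re

INTENTS = {
    "lights_on":  {"turn", "on", "lights"},
    "play_music": {"play", "music"},
}

def segment(utterance):
    """Split a compound command on simple conjunctions and commas."""
    return [c.strip() for c in re.split(r"\band\b|,", utterance) if c.strip()]

def match_intent(clause):
    """Pick the intent whose keywords overlap the clause the most."""
    words = set(clause.lower().split())
    return max(INTENTS, key=lambda i: len(INTENTS[i] & words))

print([match_intent(c) for c in segment("turn on the lights and play some music")])
```

A production agent would replace the keyword overlap with a trained intent classifier, but the segmentation step keeps the same shape: one service invocation per recognized clause.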
The emergence of the Ambient Intelligence (AmI) paradigm and the proliferation of Internet of Things (IoT) devices and services have unveiled new potential for the domain of domestic living, where the line between “the computer” and the (intelligent) environment becomes altogether invisible. In particular, the residents of a house can use the living room not only as a traditional social and individual space where many activities take place, but also as a smart ecosystem that (a) enhances leisure activities by providing a rich suite of entertainment applications, (b) implements a home control middleware, (c) acts as an intervention host that is able to display appropriate content when the users need help or support, (d) behaves as an intelligent agent that communicates with the users in a natural manner and assists them throughout their daily activities, (e) serves as a notification hub that provides personalized alerts according to contextual information, and (f) becomes an intermediary communication center for the family. This paper (i) describes how the “Intelligent Living Room” realizes these newly emerged roles, (ii) presents the process that was followed in order to design the living room environment, (iii) introduces the hardware and software facilities that were developed in order to improve quality of life, and (iv) reports the findings of various evaluation experiments conducted to assess the overall User Experience (UX).
With the proliferation of Intelligent Environments, the need to configure their behaviors to address their users’ needs emerges. In combination with current advances in Augmented and Virtual Reality and Conversational Agents, new opportunities arise for systems that allow people to program their environment. Whereas today this requires programming skills, soon, when most spaces will include smart objects, tools that allow their collaborative management by non-technical users will become a necessity. To that end, we present BricklAyeR, a novel collaborative platform that allows non-programmers to define the behavior of Intelligent Environments through an intuitive 3D building-block User Interface in Augmented Reality, following the trigger-action programming principle, with the help of a Conversational Agent.
In today's fast-paced and demanding society, more and more people suffer from stress-related problems; however, Intelligent Environments can be equipped with facilities that assist in keeping stress under control. This paper presents CaLmi, a system for Intelligent Homes that aims to reduce the stress of its residents by: (a) monitoring their stress levels through a combination of biometric measurements from a wearable device and information about the user's everyday life, and (b) enabling the ubiquitous presentation of relaxation programs, which deliver multi-sensory, context-aware, personalized interventions.
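The fusion of wearable biometrics with everyday-life context into a single stress estimate can be sketched as a weighted score. The metrics, weights, and thresholds below are invented for illustration; CaLmi's actual stress-detection model is not described at this level of detail.

```python
# Minimal sketch of fusing wearable biometrics with contextual
# information into a 0-1 stress estimate. Weights and thresholds are
# illustrative assumptions, not CaLmi's actual model.

def stress_score(heart_rate, hrv_ms, calendar_load):
    """Estimate stress from heart rate (bpm), heart-rate variability
    (ms), and the number of upcoming calendar events (context)."""
    hr_term  = min(max((heart_rate - 60) / 60, 0), 1)  # elevated pulse
    hrv_term = min(max((60 - hrv_ms) / 60, 0), 1)      # suppressed HRV
    ctx_term = min(calendar_load / 5, 1)               # busy schedule
    return round(0.4 * hr_term + 0.4 * hrv_term + 0.2 * ctx_term, 2)

print(stress_score(heart_rate=95, hrv_ms=25, calendar_load=4))  # → 0.63
```

Crossing a threshold on such a score is what would trigger the ubiquitous, context-aware relaxation programs the abstract describes.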
Intelligent Conversational Agents are already employed in different scenarios, both in commerce and in research. In particular, they can play an important role in defining a new natural interaction paradigm between humans and machines. When these Intelligent Agents take a human-like form (embodied Virtual Agents) in the virtual world, we refer to them as Virtual Humans. In this context, they can communicate with humans through storytelling, where the Virtual Human plays the role of a narrator and/or demonstrator, and the user can listen to, as well as interact with, the story. We propose that the behavior and actions of multiple, concurrently active Virtual Humans can be the result of communication between them, based on a dynamic script that resembles a screenplay in structure. This paper presents CasandRA, a framework enabling real-time user interaction with Virtual Humans whose actions are based on such scripts. CasandRA can be integrated into any Ambient Intelligence setting, and the Virtual Humans provide contextual information, assistance, and narration, accessible through various mobile devices in Augmented Reality. Finally, they allow users to manipulate smart objects in AmI Environments.
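The screenplay-like dynamic script can be pictured as an ordered sequence of cues, each assigned to one of the concurrently active Virtual Humans. The scene content, cue fields, and agent names below are invented assumptions, sketched only to convey the structure.

```python
# Toy sketch of a screenplay-style script coordinating multiple Virtual
# Humans: each cue names a speaker role and a line, and a dispatcher
# routes cues to embodied agents. All names and content are invented.

script = [
    {"speaker": "Narrator", "line": "Welcome to the intelligent living room."},
    {"speaker": "Guide",    "line": "Shall I dim the lights for the movie?"},
]

def perform(script, agents):
    """Dispatch each scripted cue to the matching embodied agent."""
    log = []
    for cue in script:
        agent = agents[cue["speaker"]]
        log.append(f'{agent}: "{cue["line"]}"')
    return log

for entry in perform(script, {"Narrator": "VH-1", "Guide": "VH-2"}):
    print(entry)
```

In a dynamic setting, the script would be rewritten at runtime in response to user interaction, rather than fixed in advance as here.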