This work blends the domain of Precision Agriculture with the prevalent paradigm of Ambient Intelligence in order to enhance the interaction between farmers and Intelligent Environments, support their various daily agricultural activities, and ultimately improve the quality and quantity of cultivated plants. In this paper, two systems are presented, namely the Intelligent Greenhouse and the AmI Seedbed, targeting a wide range of agricultural activities: from planting the seeds and caring for each individual sprouted plant, up to their transplantation into the greenhouse, where care for the entire plantation continues until the harvesting period.
This work aims to investigate how the amenities offered by Intelligent Environments can be used to shape new types of useful, exciting and fulfilling experiences while watching sports or movies. Towards this direction, two ambient media players were developed, aspiring to offer live access to secondary information via the available displays of an Intelligent Living Room, and to appropriately exploit the technological equipment so as to support natural interaction. Expert-based evaluation experiments revealed several factors that can significantly influence the overall experience without hindering the viewers’ immersion in the main media.
Considering the prevalence of Ambient Intelligence, this work aims to enhance the interaction between farmers and Intelligent Environments, in order to support their various daily agricultural activities, aspiring to improve the quality and quantity of cultivated species. Towards this direction, the Greta system was designed and developed, following a user-centered design process, permitting farmers/agronomists to easily monitor and control an Intelligent Greenhouse via a set of useful and usable applications. Greta offers a progressive web app (PWA) targeting PCs, handheld devices, and technologically-enhanced artifacts of Smart Homes, while it also delivers an Augmented Reality application that visualizes the greenhouse’s interior conditions in a sophisticated manner, and provides context-sensitive assistance regarding cultivation activities. In more detail, the system interoperates with the ambient facilities of an Intelligent Greenhouse, allowing end-users to: monitor the conditions inside the greenhouse, remotely control the state of various actuators, be notified regarding the available/active automations, be aware of the optimal conditions for their plants to grow and receive relevant guidelines, be informed regarding any diseases, and communicate with experts for receiving treatment advice. This work describes the design methodology and functionality of Greta, and documents the results of a series of expert-based evaluation experiments.
While the body of research focusing on Intelligent Environments (IEs) programming by adults is steadily growing, informed insights about children as programmers of such environments are limited. Previous work has already established that young children can learn programming basics. Yet, there is still a need to investigate whether this capability can be transferred to the context of IEs, since encouraging children to participate in the management of their intelligent surroundings can enhance responsibility, independence, and the spirit of cooperation. We performed a user study (N=15) with children aged 7-12, using a block-based, gamified AR spatial coding prototype that allows them to manipulate smart artifacts in an Intelligent Living Room. Our results validated that children understand and can indeed program IEs. Based on our findings, we contribute preliminary implications regarding the use of specific technologies and paradigms (e.g. AR, trigger-action programming) to inspire future systems that enable children to create enriching experiences in IEs.
A basic understanding of problem-solving and computational thinking is undoubtedly a benefit for all ages. At the same time, the proliferation of Intelligent Environments has raised the need for configuring their behaviors to address their users’ needs. This configuration can take the form of programming and, coupled with advances in Augmented Reality and Conversational Agents, can enable users to take control of their intelligent surroundings in an efficient and natural manner. Focusing on children, who can greatly benefit by being immersed in programming from an early age, this paper presents an authoring framework in the form of an Augmented Reality serious game, named MagiPlay, allowing children to manipulate and program their Intelligent Environment. This is achieved through a handheld device, which children can use to capture smart objects via its camera and subsequently create rules dictating their behavior. An intuitive user interface permits players to combine LEGO-like 3D bricks as part of the rule creation process, aiming to make the experience more natural. Additionally, children can configure the rules through natural language by talking with a human-like Conversational Agent, which also serves as a guide/helper for the player, providing context-sensitive tips for every part of the rule creation process. Finally, MagiPlay enables networked collaboration, to allow parental and teacher guidance and support. The main objective of this research work is to provide young learners with a fun and engaging way to program their intelligent surroundings. This paper describes the game logic of MagiPlay and its implementation details, and discusses the statistically significant results of an evaluation conducted with end-users, i.e. a group of children aged seven to twelve.
The emergence of the Ambient Intelligence (AmI) paradigm and the proliferation of Internet of Things (IoT) devices and services unveiled new potentials for the domain of domestic living, where the line between “the computer” and the (intelligent) environment becomes altogether invisible. Particularly, the residents of a house can use the living room not only as a traditional social and individual space where many activities take place, but also as a smart ecosystem that (a) enhances leisure activities by providing a rich suite of entertainment applications, (b) implements a home control middleware, (c) acts as an intervention host that is able to display appropriate content when the users need help or support, (d) behaves as an intelligent agent that communicates with the users in a natural manner and assists them throughout their daily activities, (e) serves as a notification hub that provides personalized alerts according to contextual information, and (f) becomes an intermediary communication center for the family. This paper (i) describes how the “Intelligent Living Room” realizes these newly emerged roles, (ii) presents the process that was followed in order to design the living room environment, (iii) introduces the hardware and software facilities that were developed in order to improve quality of life, and (iv) reports the findings of various evaluation experiments conducted to assess the overall User Experience (UX).
With the proliferation of Intelligent Environments, the need for configuring their behaviors to address their users’ needs emerges. In combination with the current advances in Augmented and Virtual Reality and Conversational Agents, new opportunities arise for systems which allow people to program their environment. Whereas today this requires programming skills, soon, when most spaces will include smart objects, tools that allow their collaborative management by non-technical users will become a necessity. To that end, we present BricklAyeR, a novel collaborative platform that allows non-programmers to define the behavior of Intelligent Environments through an intuitive, 3D building-block user interface in Augmented Reality, following the Trigger-Action programming principle, with the help of a Conversational Agent.
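The Trigger-Action principle underlying platforms of this kind can be illustrated with a minimal sketch. The rule structure, attribute names (e.g. `motion`, `lux`, `lamp`), and evaluation logic below are illustrative assumptions, not BricklAyeR’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A trigger-action rule: when every trigger holds, run all actions."""
    triggers: list  # list of (attribute, predicate) pairs
    actions: list   # list of callables applied to the environment state

def evaluate(rule, state):
    """Fire the rule's actions if all triggers match the current state."""
    if all(pred(state.get(attr)) for attr, pred in rule.triggers):
        for action in rule.actions:
            action(state)
    return state

# Hypothetical rule: "IF motion is detected AND it is dark, THEN turn on the lamp"
rule = Rule(
    triggers=[("motion", lambda v: v is True),
              ("lux", lambda v: v is not None and v < 50)],
    actions=[lambda s: s.__setitem__("lamp", "on")],
)
state = {"motion": True, "lux": 12, "lamp": "off"}
evaluate(rule, state)
print(state["lamp"])  # → on
```

In an end-user tool, each trigger and action would correspond to one building block that the user snaps together, rather than a lambda written by hand.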
In today's fast-paced and demanding society, more and more people are suffering from stress-related problems; however, intelligent environments can be equipped with facilities that assist in keeping stress under control. This paper presents CaLmi, a system for Intelligent Homes that aims to reduce the stress of their residents by: (a) monitoring stress levels through a combination of biometric measurements from a wearable device along with information about the user's everyday life, and (b) enabling the ubiquitous presentation of relaxation programs, which deliver multi-sensory, context-aware, personalized interventions.
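The abstract’s two-part pipeline, fusing a physiological signal with everyday-life context before deciding to intervene, can be sketched as follows. The scoring formula, thresholds, and parameter names are purely illustrative assumptions; the paper does not specify CaLmi’s actual stress model:

```python
def stress_level(heart_rate_bpm, resting_bpm, busy_calendar):
    """Combine elevated heart rate with schedule pressure into a 0-1 score."""
    # Physiological component: heart rate elevation above resting, capped at 1.0
    physiological = max(0.0, min(1.0, (heart_rate_bpm - resting_bpm) / 60.0))
    # Contextual component: a fixed bump when the user's calendar is packed
    contextual = 0.3 if busy_calendar else 0.0
    return min(1.0, physiological + contextual)

def should_intervene(score, threshold=0.6):
    """Trigger a relaxation program only above a hypothetical threshold."""
    return score >= threshold

score = stress_level(heart_rate_bpm=105, resting_bpm=60, busy_calendar=True)
print(should_intervene(score))  # → True
```

A real system would replace the fixed bump with richer context (location, activity, time of day) and a learned, per-user model.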
The proliferation of Internet of Things (IoT) devices and services and their integration in intelligent environments create the need for a simple yet effective way of controlling and communicating with them. Towards this direction, this work presents ParlAmI, a conversational framework featuring a multimodal chatbot that permits users to create simple “if-then” rules to define the behavior of an intelligent environment. ParlAmI delivers a disembodied conversational agent in the form of a messaging application named MAI, and an embodied conversational agent named nAoMI employing the programmable humanoid robot NAO. This paper describes the requirements and architecture of ParlAmI, the infrastructure of the “Intelligent Home” in which ParlAmI is deployed, the characteristics and functionality of both MAI and nAoMI, and finally presents the findings of a user experience evaluation that was conducted with the participation of sixteen users.
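The core idea of turning a chat message into an “if-then” rule can be sketched with a toy parser. This is an illustrative sketch only, assuming a rigid sentence template; ParlAmI’s actual natural-language understanding is not detailed at this level in the abstract:

```python
import re

def parse_if_then(message):
    """Parse a simple 'if <trigger>, then <action>' sentence into a rule dict.

    Returns None when the message does not follow the template.
    """
    m = re.match(r"\s*if\s+(.+?)\s*,?\s*then\s+(.+?)\s*[.!]?\s*$",
                 message, flags=re.IGNORECASE)
    if not m:
        return None
    return {"trigger": m.group(1).strip(), "action": m.group(2).strip()}

rule = parse_if_then("If the living room is empty, then turn off the TV.")
print(rule)
# {'trigger': 'the living room is empty', 'action': 'turn off the TV'}
```

A conversational agent such as MAI would additionally ground the extracted trigger and action phrases to concrete devices and services, asking follow-up questions when the mapping is ambiguous.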
ICS-FORTH has recently initiated AmI-Garden, a smart farming project in the framework of its Ambient Intelligence Research Programme. A small experimental IoT greenhouse has been constructed and equipped with polycarbonate cover sheets and all the necessary infrastructure and hardware (automatic window/roof opening and closing, sliding door, fan installation for heating/cooling, vegetable breeding lamps, etc.). Inside the greenhouse, a network of wireless sensors is used to measure environmental conditions and parameters, such as air/soil temperature and moisture, sunlight level, soil conductivity, and the quality and level of chemical ions in irrigation water. The sensors communicate through IoT gateways with the greenhouse’s data centre for storage and post-processing. The system comes with pre-installed agricultural scenarios: sets of activity flows based on the environmental conditions that are ideal for each plant species and that are monitored in the greenhouse as explained above. The scenarios currently contain parameters to predict common plant diseases, as well as unexpected changes in the greenhouse’s microclimate. For example, the irrigation process is built as an agricultural scenario that uses current plant status and past data in order to establish the optimal amount of irrigation water; the parameters of this scenario depend on the specific plant breed and on environmental variables. The intelligence behind the scenarios is based on critical limits and thresholds that form cultivation rules. On top of this rule-based process, event-driven activation of various automations in the greenhouse is provided, for example automatic humidity/temperature control, soil fertilisation (hydro fusion) and precise irrigation. Various sets of raw data are produced and ingested into the system as the life cycle of each plant evolves, to be used as the main input for the system’s actuations based on the agricultural treatment scenarios.
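The threshold-driven cultivation rules described above can be illustrated with a minimal sketch. The scenario name, parameter values, and command strings below are hypothetical assumptions, not values from the deployed AmI-Garden system:

```python
# Hypothetical agricultural scenario: critical limits for one plant breed.
TOMATO_SCENARIO = {
    "soil_moisture_min": 0.35,  # volumetric fraction below which irrigation starts
    "air_temp_max": 30.0,       # degrees Celsius above which cooling engages
}

def decide_actuations(readings, scenario=TOMATO_SCENARIO):
    """Map current sensor readings to actuator commands via threshold rules."""
    commands = []
    if readings["soil_moisture"] < scenario["soil_moisture_min"]:
        commands.append("start_irrigation")
    if readings["air_temp"] > scenario["air_temp_max"]:
        commands.append("enable_cooling_fan")
    return commands

print(decide_actuations({"soil_moisture": 0.28, "air_temp": 31.5}))
# ['start_irrigation', 'enable_cooling_fan']
```

In the full system, such rules would additionally be conditioned on past data and plant life-cycle stage, and their firing would be wired to the event-driven automation layer.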