Abstract— Considering the prevalence of Ambient Intelligence, this work aims to enhance the interaction between farmers and Intelligent Environments, in order to support their various daily agricultural activities, aspiring to improve the quality and quantity of cultivated species. To this end, the Greta system was designed and developed, following a user-centered design process, permitting farmers/agronomists to easily monitor and control an Intelligent Greenhouse via a set of useful and usable applications. Greta offers a progressive web app (PWA) targeting PCs, handheld devices, and technologically-enhanced artifacts of Smart Homes, while it also delivers an Augmented Reality application that visualizes the greenhouse’s interior conditions in a sophisticated manner and provides context-sensitive assistance regarding cultivation activities. In more detail, the system interoperates with the ambient facilities of an Intelligent Greenhouse, allowing end-users to: monitor the conditions inside the greenhouse, remotely control the state of various actuators, be notified regarding the available/active automations, be aware of the optimal conditions for their plants to grow and receive relevant guidelines, be informed regarding any diseases, and communicate with experts to receive treatment advice. This work describes the design methodology and functionality of Greta, and documents the results of a series of expert-based evaluation experiments.
Culture is a field that is currently entering a revolutionary phase, no longer being a privilege for the few, but expanding to new audiences who are urged to not only passively consume cultural heritage content, but actually participate and assimilate it on their own. In this context, museums have already embraced new technologies as part of their exhibitions, many of them featuring augmented or virtual reality artifacts. The presented work proposes the synthesis of augmented, virtual and mixed reality technologies to provide unified X-Reality experiences in realistic virtual museums, engaging visitors in an interactive and seamless fusion of physical and virtual worlds that will feature virtual agents exhibiting naturalistic behavior. Visitors will be able to interact with the virtual agents, as they would with real-world counterparts. The envisioned approach is expected to not only provide refined experiences for museum visitors, but also achieve high-quality entertainment combined with more effective knowledge acquisition.
The proliferation of Internet of Things devices and services and their integration in everyday environments has led to the emergence of intelligent offices, classrooms, conference rooms, and meeting rooms that adhere to the paradigm of Ambient Intelligence. Usually, the type of activities performed in such environments (i.e., presentations and lectures) can be enhanced by the use of large Interactive Boards that—among others—allow access to digital content, promote collaboration, enhance the process of exchanging ideas, and increase the engagement of the audience. Additionally, the board contents are expected to be plentiful in quantity and diverse in type (e.g., textual data, pictorial data, multimedia, figures, and charts), which unavoidably makes their manipulation over a large display tiring and cumbersome, especially when the interaction lasts for a considerable amount of time (e.g., during a class hour). Acknowledging both the shortcomings and potential of Interactive Boards in intelligent conference rooms, meeting rooms, and classrooms, this work introduces a sophisticated framework named CognitOS Board, which takes advantage of (i) the intelligent facilities offered by the environment and (ii) the amenities offered by wall-to-wall displays, in order to enhance presentation-related activities. In this article, we describe the design process of CognitOS Board, elaborate on the available functionality, and discuss the results of a user-based evaluation study.
Sleep is important for many vital functions. Unfortunately, many people suffer from sleep-related problems, which have negative consequences on sleep quality and therefore on quality of life. Considering the important health benefits of a good night’s sleep, it is crucial to investigate technological solutions that promote and improve sleep hygiene. To that end, the HypnOS framework for “Intelligent Homes” is introduced, aiming to improve the sleep quality of home residents by monitoring their sleep and providing personalized recommendations to overcome sleep-related issues. This work describes the design process that was followed, presents the framework’s functionality, reports the findings of an expert-based evaluation of the HypnOS mobile app, and discusses future plans.
Virtual reality (VR) has re-emerged as a low-cost, highly accessible consumer product, and training on simulators is rapidly becoming standard in many industrial sectors. However, available systems either focus on gaming contexts and feature limited capabilities, or support only the creation of virtual-environment content without any means for rapid prototyping and modification. In this project, we propose a code-free, visual scripting platform to replicate gamified training scenarios through rapid prototyping and VR software design patterns. We implemented and compared two authoring tools: (a) visual scripting and (b) a VR editor for the rapid reconstruction of VR training scenarios. Our visual scripting module is capable of generating training applications utilizing a node-based scripting system, whereas the VR editor gives the user/developer the ability to customize and populate new VR training scenarios directly from within the virtual environment. We also introduce action prototypes, a new software design pattern suitable for replicating behavioral tasks in VR experiences. In addition, we present the training scenegraph architecture as the main model for representing training scenarios on a modular, dynamic, and highly adaptive acyclic graph based on a structured educational curriculum. Finally, a user-based evaluation of the proposed solution indicated that users—regardless of their programming expertise—can effectively use the tools to create and modify training scenarios in VR.
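The training scenegraph described above organizes scenario steps on an acyclic graph derived from a curriculum. As a rough illustration only (the node names and helper functions below are hypothetical, not taken from the actual platform), such a graph and its step ordering might be sketched as:

```python
from collections import deque

class TrainingNode:
    """One step of a training scenario (hypothetical structure, for illustration)."""
    def __init__(self, name):
        self.name = name
        self.children = []  # steps unlocked after this one

    def then(self, child):
        """Link a follow-up step and return it, to allow chaining."""
        self.children.append(child)
        return child

def traversal_order(root):
    """Breadth-first walk of the scenegraph, yielding each step name once."""
    seen, order, queue = set(), [], deque([root])
    while queue:
        node = queue.popleft()
        if node.name in seen:
            continue  # nodes reachable via multiple paths are visited once
        seen.add(node.name)
        order.append(node.name)
        queue.extend(node.children)
    return order

# Example curriculum: intro -> (tools, safety) -> assessment
intro = TrainingNode("intro")
tools = intro.then(TrainingNode("tools"))
safety = intro.then(TrainingNode("safety"))
assessment = TrainingNode("assessment")
tools.then(assessment)
safety.then(assessment)

print(traversal_order(intro))  # ['intro', 'tools', 'safety', 'assessment']
```

Because the graph is acyclic, steps shared by several branches (such as a final assessment) appear once, which is what makes the structure modular and easy to rearrange.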
Acknowledging the profound impact of technology on the domain of education, we have pursued the vision of a student-oriented and educator-friendly “Intelligent Classroom” that supports students in their journey towards acquiring knowledge. The classroom simulation space located within the AmI Facility provides an ideal testbed for assessing the effects of intelligent technologies on key aspects of the educational process. Designing and developing the ambient applications that shape the “Intelligent Classroom” is an ongoing activity and an open research endeavour, integrating and assessing emerging technologies that introduce new educational opportunities and interaction paradigms.
A basic understanding of problem-solving and computational thinking is undoubtedly a benefit for all ages. At the same time, the proliferation of Intelligent Environments has raised the need for configuring their behaviors to address their users’ needs. This configuration can take the form of programming, and coupled with advances in Augmented Reality and Conversational Agents, can enable users to take control of their intelligent surroundings in an efficient and natural manner. Focusing on children, who can greatly benefit from being immersed in programming from an early age, this paper presents an authoring framework in the form of an Augmented Reality serious game, named MagiPlay, allowing children to manipulate and program their Intelligent Environment. This is achieved through a handheld device, which children can use to capture smart objects via its camera and subsequently create rules dictating their behavior. An intuitive user interface permits players to combine LEGO-like 3D bricks as a part of the rule-based creation process, aiming to make the experience more natural. Additionally, children can communicate with the system via natural language through a Conversational Agent, in order to configure the rules by talking with a human-like agent, while the agent also serves as a guide/helper for the player, providing context-sensitive tips for every part of the rule creation process. Finally, MagiPlay enables networked collaboration, to allow parental and teacher guidance and support. The main objective of this research work is to provide young learners with a fun and engaging way to program their intelligent surroundings. This paper describes the game logic of MagiPlay, its implementation details, and discusses the statistically significant results of an evaluation conducted with end-users, i.e., a group of children aged seven to twelve.
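Rules of the kind children assemble from bricks typically follow a trigger-action pattern. As an illustration only (the object names and commands below are invented for the example, not drawn from MagiPlay), such a rule might be modeled as:

```python
class Rule:
    """Trigger-action rule: when the trigger object reports the given state,
    return the command to send to the action object (hypothetical sketch)."""
    def __init__(self, trigger_obj, trigger_state, action_obj, action_cmd):
        self.trigger_obj, self.trigger_state = trigger_obj, trigger_state
        self.action_obj, self.action_cmd = action_obj, action_cmd

    def evaluate(self, states):
        """Check observed smart-object states; fire the action if the trigger matches."""
        if states.get(self.trigger_obj) == self.trigger_state:
            return (self.action_obj, self.action_cmd)
        return None

# "When the door opens, turn on the lamp"
rule = Rule("door", "open", "lamp", "turn_on")
print(rule.evaluate({"door": "open"}))    # ('lamp', 'turn_on')
print(rule.evaluate({"door": "closed"}))  # None
```

In a brick-based UI, each of the four constructor arguments would correspond to one snapped-on brick, which is what makes rule composition tangible for children.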
In the domain of education, an Intelligent Classroom that employs Ambient Intelligence technologies can not only improve learning and student performance, but also support educators with various educational tasks, such as lecturing, course preparation and classroom management. Given that the board is one of the key artifacts of any classroom, using technology to enhance it offers students and educators rich opportunities by providing access to a wide range of applications, capturing and maintaining a simultaneous focus of attention for large learner groups, supporting collaboration and encouraging discussion. To this end, this work presents the CognitOS Classboard, an educator- and student-oriented framework, deployed on the “Intelligent Classroom Board” (a wall-to-wall projected interactive board), offering a variety of tools and applications that aim to support lecturing and enhance the learning process. Aiming to create highly engaging learning experiences for students, the CognitOS Classboard, apart from offering access to useful educational applications, features sophisticated mechanisms that can transform the classroom into an immersive environment on demand. It supports multimodal interaction through touch, mid-air gestures, voice commands, and user position tracking, while tablet and desktop applications were developed to permit the management and overview of the board. This paper reports the functionality of the “CognitOS Classboard” and the findings of an evaluation experiment conducted with User Experience experts.
This paper explores a new approach to a teacher’s workstation in the context of the intelligent classroom of the 21st century. Nowadays, the term “intelligent” is not only associated with efforts to incorporate smart/mobile devices into the learning experience (distance learning, educational games/apps, etc.), but also with efforts to equip the physical environment of the classroom with technologically enhanced objects. These technologically augmented artefacts (Student Desk, Interactive Classroom Board and Educator’s Workstation) are embedded discreetly in the classroom’s environment. One of the main concerns in designing and developing such artefacts is to facilitate seamless interaction between educators and students, as well as to enable unobtrusive monitoring and supervision of the students by the educators. This paper presents LECTOR Podium, a system that liberates teachers from the confinement of a desk and introduces a flexible and empowering workstation in the form of a smart armchair. This armchair assumes the role of a control center, enabling the educator to monitor and operate every feature and artefact of the intelligent classroom.
Due to the proliferation of Internet of Things (IoT) devices and the emergence of the Ambient Intelligence (AmI) paradigm, the need to facilitate the interaction between the user and the services that are integrated in Intelligent Environments has surfaced. As a result, Conversational Agents are increasingly used in this context, in order to achieve a natural, intuitive and seamless interaction between the user and the system. However, in spite of the continuous progress and advancements in the area of Conversational Agents, there are still some considerable limitations in current approaches. The system proposed in this paper addresses some of the main drawbacks by: (a) automatically integrating new services based on their formal specification, (b) incorporating error handling via follow-up questions, and (c) processing multiple user intents through the segmentation of the input. The paper describes the main components of the system, as well as the technologies that they utilize. Additionally, it analyses the pipeline that processes the user input, resulting in the generation of a response and the invocation of the appropriate intelligent services.
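The combination of input segmentation, intent matching, and follow-up questions can be pictured with a deliberately naive sketch (the intent phrases, slot names, and service identifiers below are invented for the example, not drawn from the described system):

```python
# Hypothetical intent registry: phrase -> target service and required slots
INTENTS = {
    "turn on": {"service": "lights.on", "slots": ["room"]},
    "set temperature": {"service": "hvac.set", "slots": ["room", "degrees"]},
}

def segment(utterance):
    """Split a compound utterance into candidate single-intent segments."""
    return [part.strip() for part in utterance.split(" and ")]

def handle(segment_text):
    """Match a segment to an intent; ask a follow-up question if a slot is missing."""
    for phrase, spec in INTENTS.items():
        if phrase in segment_text:
            # Naive slot check: look for the slot's literal name in the text
            missing = [s for s in spec["slots"] if s not in segment_text]
            if missing:
                return f"Which {missing[0]}?"  # error handling via follow-up question
            return f"invoke {spec['service']}"
    return "Sorry, I did not understand."

for seg in segment("turn on the room lights and set temperature"):
    print(handle(seg))
# invoke lights.on
# Which room?
```

A production agent would of course use a trained NLU model for segmentation and slot filling rather than string matching; the sketch only shows how the three drawbacks listed above map onto distinct pipeline stages.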