The proliferation of Internet of Things devices and services and their integration into everyday environments have led to the emergence of intelligent offices, classrooms, and conference and meeting rooms that adhere to the paradigm of Ambient Intelligence. Usually, the type of activities performed in such environments (i.e., presentations and lectures) can be enhanced by the use of large Interactive Boards that, among other things, allow access to digital content, promote collaboration, enhance the process of exchanging ideas, and increase the engagement of the audience. Additionally, the board contents are expected to be abundant in quantity and diverse in type (e.g., textual data, pictorial data, multimedia, figures, and charts), which unavoidably makes their manipulation on a large display tiring and cumbersome, especially when the interaction lasts for a considerable amount of time (e.g., during a class hour). Acknowledging both the shortcomings and the potential of Interactive Boards in intelligent conference rooms, meeting rooms, and classrooms, this work introduces a sophisticated framework named CognitOS Board, which takes advantage of (i) the intelligent facilities offered by the environment and (ii) the amenities offered by wall-to-wall displays in order to enhance presentation-related activities. In this article, we describe the design process of CognitOS Board, elaborate on its available functionality, and discuss the results of a user-based evaluation study.
Sleep is important for many vital functions. Unfortunately, many people suffer from sleep-related problems, which have negative consequences on sleep quality and therefore on quality of life. Considering the important health benefits of a good night’s sleep, it is crucial to investigate technological solutions that promote and improve sleep hygiene. To that end, the HypnOS framework for “Intelligent Homes” is introduced, aiming to improve the sleep quality of home residents by monitoring their sleep and providing personalized recommendations to overcome sleep-related issues. This article describes the design process that was followed, presents the framework’s functionality, reports the findings of an expert-based evaluation of the HypnOS mobile app, and discusses future plans.
Virtual reality (VR) has re-emerged as a low-cost, highly accessible consumer product, and training on simulators is rapidly becoming standard in many industrial sectors. However, available systems either focus on a gaming context and feature limited capabilities, or support only the content creation of virtual environments without any rapid prototyping and modification. In this project, we propose a code-free, visual scripting platform for replicating gamified training scenarios through rapid prototyping and VR software design patterns. We implemented and compared two authoring tools: (a) visual scripting and (b) a VR editor for the rapid reconstruction of VR training scenarios. Our visual scripting module is capable of generating training applications utilizing a node-based scripting system, whereas the VR editor gives the user/developer the ability to customize and populate new VR training scenarios directly from within the virtual environment. We also introduce action prototypes, a new software design pattern suitable for replicating behavioral tasks in VR experiences. In addition, we present the training scenegraph architecture as the main model for representing training scenarios on a modular, dynamic, and highly adaptive acyclic graph based on a structured educational curriculum. Finally, a user-based evaluation of the proposed solution indicated that users, regardless of their programming expertise, can effectively use the tools to create and modify training scenarios in VR.
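The training scenegraph described above can be pictured as a directed acyclic graph whose nodes follow a structured curriculum. The following minimal Python sketch is an illustration of that idea only; all class and function names are assumptions, not the project's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingNode:
    """One step (module, task, or action) in the training curriculum."""
    name: str
    children: list["TrainingNode"] = field(default_factory=list)

    def add(self, child: "TrainingNode") -> "TrainingNode":
        """Attach a sub-step and return it, so scenarios can be built fluently."""
        self.children.append(child)
        return child

def traverse(node: TrainingNode, depth: int = 0) -> list[str]:
    """Depth-first walk yielding an indented outline of the scenario."""
    lines = ["  " * depth + node.name]
    for child in node.children:
        lines.extend(traverse(child, depth + 1))
    return lines

# Build a tiny scenario: one curriculum module with an ordered task.
course = TrainingNode("Knot-tying module")
task = course.add(TrainingNode("Task: surgeon's knot"))
task.add(TrainingNode("Action: grasp suture"))
task.add(TrainingNode("Action: double wrap"))
print("\n".join(traverse(course)))
```

Because each node only references its children, sub-scenarios can be re-parented or swapped at runtime, which is what makes a graph-based representation adaptive.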
Acknowledging the pervasive impact of technology on the domain of education, we have pursued the vision of a student-oriented and educator-friendly “Intelligent Classroom” that supports students in their journey to acquiring knowledge. The classroom simulation space located within the AmI Facility provides an ideal testbed for assessing the effects of intelligent technologies on key aspects of the educational process. Designing and developing ambient applications that shape the “Intelligent Classroom” is an ongoing activity and an open research endeavour, integrating and assessing emerging technologies that introduce new educational opportunities and interaction paradigms.
A basic understanding of problem-solving and computational thinking is undoubtedly a benefit for all ages. At the same time, the proliferation of Intelligent Environments has raised the need for configuring their behaviors to address their users’ needs. This configuration can take the form of programming and, coupled with advances in Augmented Reality and Conversational Agents, can enable users to take control of their intelligent surroundings in an efficient and natural manner. Focusing on children, who can greatly benefit from being immersed in programming from an early age, this paper presents an authoring framework in the form of an Augmented Reality serious game, named MagiPlay, allowing children to manipulate and program their Intelligent Environment. This is achieved through a handheld device, which children can use to capture smart objects via its camera and subsequently create rules dictating their behavior. An intuitive user interface permits players to combine LEGO-like 3D bricks as part of the rule-creation process, aiming to make the experience more natural. Additionally, children can communicate with the system via natural language through a Conversational Agent, in order to configure the rules by talking with a human-like agent, while the agent also serves as a guide/helper for the player, providing context-sensitive tips for every part of the rule-creation process. Finally, MagiPlay enables networked collaboration, to allow parental and teacher guidance and support. The main objective of this research work is to provide young learners with a fun and engaging way to program their intelligent surroundings. This paper describes the game logic of MagiPlay and its implementation details, and discusses the statistically significant results of an evaluation conducted with end-users, i.e., a group of children aged seven to twelve years old.
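The rules children assemble from bricks follow the familiar trigger-action pattern: a condition over the state of the smart objects paired with an effect on them. The sketch below illustrates that pattern in its simplest form; the names and the lamp scenario are illustrative assumptions, not MagiPlay's actual rule engine:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A trigger-action rule of the kind a player might assemble from bricks."""
    trigger: Callable[[dict], bool]   # predicate over the environment state
    action: Callable[[dict], None]    # effect applied to the environment state

def evaluate(rules: list[Rule], state: dict) -> dict:
    """Fire every rule whose trigger holds for the current state."""
    for rule in rules:
        if rule.trigger(state):
            rule.action(state)
    return state

# "WHEN the room is dark THEN turn the smart lamp on"
rules = [Rule(trigger=lambda s: s["light_level"] < 20,
              action=lambda s: s.update(lamp="on"))]
state = evaluate(rules, {"light_level": 5, "lamp": "off"})
print(state["lamp"])  # the darkness trigger holds, so the lamp rule fires
```

Each brick in the game's interface would correspond to one trigger or one action, so snapping bricks together maps directly onto constructing a `Rule`.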
In the domain of education, an Intelligent Classroom that employs Ambient Intelligence technologies can not only improve learning and student performance, but also support educators with various educational tasks, such as lecturing, course preparation, and classroom management. Given that the board is one of the key artifacts of any classroom, using technology to enhance it offers students and educators rich opportunities by providing access to a wide range of applications, capturing and maintaining a simultaneous focus of attention for large learner groups, supporting collaboration, and encouraging discussion. To this end, this work presents the CognitOS Classboard, an educator- and student-oriented framework employed on the “Intelligent Classroom Board” (a wall-to-wall projected interactive board), offering a variety of tools and applications that aim to support lecturing and enhance the learning process. Aiming to create highly engaging learning experiences for the students, the CognitOS Classboard, apart from offering access to useful educational applications, features sophisticated mechanisms that can transform the classroom into an immersive environment on demand. It supports multimodal interaction through touch, mid-air gestures, voice commands, and user position tracking, while tablet and desktop applications were developed to permit the management and overview of the board. This paper reports the functionality of the CognitOS Classboard and the findings of an evaluation experiment conducted with User Experience experts.
This paper explores a new approach to a teacher’s workstation in the context of the intelligent classroom of the 21st century. Nowadays, the term “intelligent” is not only associated with efforts to incorporate smart/mobile devices into the learning experience (distance learning, educational games/apps, etc.), but also with equipping the physical environment of the classroom with technologically enhanced objects. These technologically augmented artefacts (Student Desk, Interactive Classroom Board, and Educator’s Workstation) are embedded discreetly in the classroom’s environment. One of the main concerns in designing and developing such artefacts is to facilitate seamless interaction between educators and students, as well as to enable unobtrusive monitoring and supervision of the students by the educators. This paper presents LECTOR Podium, a system that liberates teachers from the confinement of a desk and introduces a flexible and empowering workstation in the form of a smart armchair. This armchair assumes the role of a control center, enabling the educator to monitor and operate every feature and artefact of the intelligent classroom.
Due to the proliferation of Internet of Things (IoT) devices and the emergence of the Ambient Intelligence (AmI) paradigm, the need to facilitate the interaction between the user and the services that are integrated in Intelligent Environments has surfaced. As a result, Conversational Agents are increasingly used in this context, in order to achieve a natural, intuitive, and seamless interaction between the user and the system. However, in spite of the continuous progress and advancements in the area of Conversational Agents, there are still considerable limitations in current approaches. The system proposed in this paper addresses some of the main drawbacks by: (a) automatically integrating new services based on their formal specification, (b) incorporating error handling via follow-up questions, and (c) processing multiple user intents through the segmentation of the input. The paper describes the main components of the system, as well as the technologies that they utilize. Additionally, it analyses the pipeline through which user input is processed, resulting in the generation of a response and the invocation of the appropriate intelligent services.
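Points (b) and (c) above can be illustrated with a toy pipeline: split a compound utterance into single-intent clauses, match each clause to an intent, and emit a follow-up question whenever a required slot is missing. Everything below, including the keyword-based matcher and the intent registry, is a simplifying assumption for illustration, not the paper's actual implementation:

```python
import re

# Hypothetical intent registry: keyword -> (intent name, required slot)
INTENTS = {
    "lights": ("SetLights", "room"),
    "music": ("PlayMusic", "genre"),
}

def segment(utterance: str) -> list[str]:
    """Split a compound utterance into candidate single-intent clauses."""
    return [part.strip() for part in re.split(r"\band\b|,", utterance) if part.strip()]

def interpret(clause: str) -> dict:
    """Match a clause to an intent; flag a follow-up question if the slot is missing."""
    for keyword, (intent, slot) in INTENTS.items():
        if keyword in clause:
            slot_match = re.search(r"in the (\w+)", clause)
            if slot_match:
                return {"intent": intent, slot: slot_match.group(1)}
            return {"intent": intent, "follow_up": f"Which {slot}?"}
    return {"intent": "Unknown", "follow_up": "Could you rephrase that?"}

# Two intents in one utterance; the second lacks its slot and triggers a follow-up.
results = [interpret(c) for c in
           segment("turn on the lights in the kitchen and play some music")]
```

In a real system the keyword matcher would be replaced by a trained intent classifier, but the control flow (segment, match, ask back when information is missing, then invoke the service) stays the same.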
The emergence of the Ambient Intelligence (AmI) paradigm and the proliferation of Internet of Things (IoT) devices and services unveiled new potentials for the domain of domestic living, where the line between “the computer” and the (intelligent) environment becomes altogether invisible. Particularly, the residents of a house can use the living room not only as a traditional social and individual space where many activities take place, but also as a smart ecosystem that (a) enhances leisure activities by providing a rich suite of entertainment applications, (b) implements a home control middleware, (c) acts as an intervention host that is able to display appropriate content when the users need help or support, (d) behaves as an intelligent agent that communicates with the users in a natural manner and assists them throughout their daily activities, (e) presents a notification hub that provides personalized alerts according to contextual information, and (f) becomes an intermediary communication center for the family. This paper (i) describes how the “Intelligent Living Room” realizes these newly emerged roles, (ii) presents the process that was followed in order to design the living room environment, (iii) introduces the hardware and software facilities that were developed in order to improve quality of life, and (iv) reports the findings of various evaluation experiments conducted to assess the overall User Experience (UX).
With the proliferation of Intelligent Environments, the need to configure their behaviors to address their users’ needs emerges. In combination with current advances in Augmented and Virtual Reality and Conversational Agents, new opportunities arise for systems that allow people to program their environment. Whereas today this requires programming skills, soon, when most spaces will include smart objects, tools that allow their collaborative management by non-technical users will become a necessity. To that end, we present BricklAyeR, a novel collaborative platform that allows non-programmers to define the behavior of Intelligent Environments through an intuitive, 3D building-block User Interface in Augmented Reality that follows the Trigger-Action programming principle, with the help of a Conversational Agent.