In the domain of education, an Intelligent Classroom that employs Ambient Intelligence technologies can not only improve learning and student performance, but also support educators in various educational tasks, such as lecturing, course preparation and classroom management. Given that the board is one of the key artifacts of any classroom, using technology to enhance it offers students and educators rich opportunities by providing access to a wide range of applications, capturing and maintaining a simultaneous focus of attention for large learner groups, supporting collaboration and encouraging discussion. To this end, this work presents the CognitOS Classboard, an educator- and student-oriented framework employed on the “Intelligent Classroom Board” (a wall-to-wall projected interactive board), offering a variety of tools and applications that aim to support lecturing and enhance the learning process. Aiming to create highly engaging and fascinating learning experiences for students, the CognitOS Classboard, apart from offering access to useful educational applications, features sophisticated mechanisms that can transform the classroom into an immersive environment on demand. It supports multimodal interaction through touch, mid-air gestures, voice commands, and user position tracking, while companion tablet and desktop applications permit the management and overview of the board. This paper reports the functionality of the “CognitOS Classboard” and the findings of an evaluation experiment conducted with User Experience experts.
This paper explores a new approach to a teacher’s workstation in the context of the intelligent classroom of the 21st century. Nowadays, the term “intelligent” is not only associated with efforts to incorporate smart/mobile devices into the learning experience (distance learning, educational games/apps, etc.), but also with efforts to equip the physical environment of the classroom with technologically enhanced objects. These technologically augmented artefacts (Student Desk, Interactive Classroom Board and Educator’s Workstation) are embedded discreetly in the classroom’s environment. One of the main concerns in designing and developing such artefacts is to facilitate seamless interaction between educators and students, as well as to enable unobtrusive monitoring and supervision of the students by the educators. This paper presents LECTOR Podium, a system that liberates teachers from the confinement of a desk and introduces a flexible and empowering workstation in the form of a smart arm-chair. This arm-chair assumes the role of a control center, enabling the educator to monitor and operate every feature and artefact of the intelligent classroom.
Due to the proliferation of Internet of Things (IoT) devices and the emergence of the Ambient Intelligence (AmI) paradigm, the need to facilitate the interaction between the user and the services that are integrated in Intelligent Environments has surfaced. As a result, Conversational Agents are increasingly used in this context, in order to achieve a natural, intuitive and seamless interaction between the user and the system. However, in spite of the continuous progress and advancements in the area of Conversational Agents, current approaches still exhibit some considerable limitations. The system proposed in this paper addresses some of the main drawbacks by: (a) automatically integrating new services based on their formal specification, (b) incorporating error handling via follow-up questions, and (c) processing multiple user intents through the segmentation of the input. The paper describes the main components of the system, as well as the technologies that they utilize. Additionally, it analyses the processing pipeline of the user input, which results in the generation of a response and the invocation of the appropriate intelligent services.
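The multi-intent processing described in point (c) of this abstract can be illustrated with a minimal sketch. The conjunction-based splitting below is a naive illustrative assumption, not the segmentation method the paper actually uses:

```python
import re

# Hypothetical sketch: split a compound utterance into candidate
# single-intent segments on common conjunctions. Real systems would
# use a trained segmentation model rather than a fixed word list.
CONJUNCTIONS = re.compile(r"\b(?:and then|and|then)\b", re.IGNORECASE)

def segment_intents(utterance: str) -> list[str]:
    """Return the non-empty segments of an utterance, one per candidate intent."""
    parts = CONJUNCTIONS.split(utterance)
    return [p.strip(" ,.") for p in parts if p.strip(" ,.")]
```

Each segment would then be routed independently through intent classification and service invocation.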
The emergence of the Ambient Intelligence (AmI) paradigm and the proliferation of Internet of Things (IoT) devices and services unveiled new potential for the domain of domestic living, where the line between “the computer” and the (intelligent) environment becomes altogether invisible. In particular, the residents of a house can use the living room not only as a traditional social and individual space where many activities take place, but also as a smart ecosystem that (a) enhances leisure activities by providing a rich suite of entertainment applications, (b) implements a home control middleware, (c) acts as an intervention host that is able to display appropriate content when the users need help or support, (d) behaves as an intelligent agent that communicates with the users in a natural manner and assists them throughout their daily activities, (e) provides a notification hub that delivers personalized alerts according to contextual information, and (f) becomes an intermediary communication center for the family. This paper (i) describes how the “Intelligent Living Room” realizes these newly emerged roles, (ii) presents the process that was followed in order to design the living room environment, (iii) introduces the hardware and software facilities that were developed in order to improve quality of life, and (iv) reports the findings of various evaluation experiments conducted to assess the overall User Experience (UX).
With the proliferation of Intelligent Environments, the need to configure their behaviors to address their users’ needs emerges. In combination with the current advances in Augmented and Virtual Reality and Conversational Agents, new opportunities arise for systems that allow people to program their environment. Whereas today this requires programming skills, soon, when most spaces will include smart objects, tools that allow their collaborative management by non-technical users will become a necessity. To that end, we present BricklAyeR, a novel collaborative platform for non-programmers that allows users to define the behavior of Intelligent Environments through an intuitive, 3D building-block User Interface in Augmented Reality, following the Trigger-Action programming principle, with the help of a Conversational Agent.
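The Trigger-Action principle that BricklAyeR builds on can be sketched as a minimal rule engine. The `Rule` class, the state dictionary, and the example device names below are illustrative assumptions, not part of the BricklAyeR platform itself:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    # A trigger is a predicate over the current environment state;
    # the action is a command dispatched to a smart object when it holds.
    trigger: Callable[[Dict[str, object]], bool]
    action: str

def fired_actions(rules: List[Rule], state: Dict[str, object]) -> List[str]:
    """Return the actions of every rule whose trigger matches the state."""
    return [rule.action for rule in rules if rule.trigger(state)]

# Example rule: "if the door opens, turn on the hallway light"
rules = [Rule(trigger=lambda s: s.get("door") == "open",
              action="hallway_light:on")]
```

In a building-block UI, each trigger and action would correspond to one visual block, with the connection between blocks composing such a rule.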
In today's fast-paced and demanding society, more and more people are suffering from stress-related problems; however, intelligent environments can be equipped with facilities that assist in keeping stress under control. This paper presents CaLmi, a system for Intelligent Homes that aims to reduce the stress of residents by: (a) monitoring their stress levels through a combination of biometric measurements from a wearable device and information about the user's everyday life, and (b) enabling the ubiquitous presentation of relaxation programs, which deliver multi-sensory, context-aware, personalized interventions.
Intelligent Conversational Agents are already employed in different scenarios, both in commerce and in research. In particular, they can play an important role in defining a new, natural interaction paradigm between humans and intelligent systems. When these Intelligent Agents take a human-like form (embodied Virtual Agents) in the virtual world, we refer to them as Virtual Humans. In this context, they can communicate with humans through storytelling, where the Virtual Human plays the role of a narrator and/or demonstrator, and the user can listen to, as well as interact with, the story. We propose that the behavior and actions of multiple, concurrently active Virtual Humans can be the result of communication between them, based on a dynamic script, which resembles a screenplay in structure. This paper presents CasandRA, a framework enabling real-time user interaction with Virtual Humans whose actions are based on this kind of script. CasandRA can be integrated into any Ambient Intelligence setting, and the Virtual Humans provide contextual information, assistance, and narration, accessible through various mobile devices in Augmented Reality. Finally, they allow users to manipulate smart objects in AmI Environments.
The concept of Ambient Intelligence (AmI) is already playing an important role in enriching the educational experience. Such technologies offer students increased access to information within an augmented teaching environment, which encourages active learning and collaboration, enhancing their motivation to learn. Research work in this domain includes the learner-centered design and implementation of infrastructure technologies, prototypes of intelligent systems and applications, smart artifacts for learning, and serious games. “Home Game” is an innovative augmented table-top educational game that combines tangible interaction with a virtual environment, falling under the category of serious games. The system is structured as a set of mini-games (such as ‘Locate the room’ and ‘Find the wrong object’), which can be personalized for each player in terms of content and interaction paradigm, either automatically based on their profile settings or manually by the educators. Home Game is deployed in the Rehabilitation Centre for Children with Disabilities in Heraklion, Crete, Greece. The full paper will contain a detailed presentation of the User Interface and of the learning analytics data displayed in each student’s personal dashboard, which help educators adjust the learning process according to the needs of each student. Furthermore, the results of an expert-based evaluation of the tool will be reported.
A fundamental assumption in most contemporary person re-identification research is that all query persons that need to be re-identified belong to a closed gallery of known persons, i.e., they have been observed and a representation of their appearance is available. For several real-world applications, this closed-world assumption does not hold, as image queries may contain people that the re-identification system has never observed before. In this work, we remove this constraining assumption. To do so, we introduce a novelty detection mechanism that decides whether a person in a query image exists in the gallery. The re-identification of persons existing in the gallery is easily achieved based on the person representation employed by the novelty detection mechanism. The proposed method operates on a hybrid person descriptor that consists of both supervised (learnt) and unsupervised (hand-crafted) components. A series of experiments on public, state-of-the-art datasets, in comparison with state-of-the-art methods, shows that the proposed approach is very accurate in identifying persons that have not been observed before, and that this has a positive impact on re-identification accuracy.
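The novelty-detection step described in this abstract can be illustrated with a distance-threshold sketch: a query descriptor is declared novel when even its nearest gallery descriptor is too far away. The Euclidean metric and the threshold below are illustrative assumptions; the paper's actual method operates on its hybrid (learnt plus hand-crafted) descriptor:

```python
import numpy as np

def reidentify(query: np.ndarray, gallery: np.ndarray, threshold: float):
    """Return (is_novel, gallery_index); the index is None for unseen persons.

    query:   1-D person descriptor of the query image.
    gallery: (N, D) matrix, one descriptor per known person.
    """
    dists = np.linalg.norm(gallery - query, axis=1)  # distance to every known person
    nearest = int(np.argmin(dists))
    if dists[nearest] > threshold:
        return True, None      # no gallery person is close enough: novel
    return False, nearest      # re-identified as the nearest gallery entry
```

Both decisions reuse the same descriptor, mirroring the paper's point that re-identification of gallery persons follows directly from the representation used for novelty detection.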
The proliferation of Internet of Things (IoT) devices and services and their integration in intelligent environments creates the need for a simple yet effective way of controlling and communicating with them. Towards such a direction, this work presents ParlAmI, a conversational framework featuring a multimodal chatbot that permits users to create simple “if-then” rules to define the behavior of an intelligent environment. ParlAmI delivers a disembodied conversational agent in the form of a messaging application named MAI, and an embodied conversational agent named nAoMI employing the programmable humanoid robot NAO. This paper describes the requirements and architecture of ParlAmI, the infrastructure of the “Intelligent Home” in which ParlAmI is deployed, the characteristics and functionality of both MAI and nAoMI, and finally presents the findings of a user experience evaluation that was conducted with the participation of sixteen users.
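The simple “if-then” rules that ParlAmI's chatbot elicits can be sketched as follows. The pattern-matching parser below is a hypothetical illustration of turning one user utterance into a rule; the actual framework relies on conversational natural-language understanding rather than a fixed regular expression:

```python
import re
from typing import Optional

# Hypothetical sketch: extract the trigger and action parts of an
# "if <trigger> then <action>" utterance.
RULE_PATTERN = re.compile(r"if (?P<trigger>.+?) then (?P<action>.+)", re.IGNORECASE)

def parse_rule(utterance: str) -> Optional[dict]:
    """Return {'trigger': ..., 'action': ...} or None if no rule is recognized."""
    match = RULE_PATTERN.match(utterance.strip())
    if not match:
        return None  # the chatbot would ask a follow-up question here
    return {"trigger": match.group("trigger").strip(),
            "action": match.group("action").strip()}
```

A disembodied agent like MAI would receive such an utterance as a chat message, while an embodied agent like nAoMI would receive it via speech recognition; both would then register the resulting rule with the environment.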