The emergence of Intelligent Classrooms, and in particular classrooms equipped with facilities for identifying the students’ attention levels, has raised the need for appropriate student-friendly tools that not only facilitate application hosting, but also act as a means to re-engage inattentive students in the educational process. This work presents CognitOS, a web-based working environment that hosts several types of applications (i.e., exercises, multimedia viewer, digital book) that are utilized as channels to present interventions dictated by the intelligent decision-making mechanisms of the attention-aware classroom. This paper presents the functionality of CognitOS and the design process followed for its development.
The Internet of Things is based on ecosystems of networked devices, referred to as smart objects, effectively enabling the blending of physical things with digital artifacts in an unprecedented way. In principle, endless automations may be introduced in the context of daily life by exploring the numerous opportunities offered by the deployment and utilization of such smart objects. However, in practice the demands for such automations are highly personalized and fluid, thereby minimizing the chances of building commercially successful general‐purpose applications. In this context, our vision is to empower end‐users with the appropriate tools enabling them to easily and quickly craft, test and modify the automations they need. In this chapter we initially discuss a few possible future scenarios for automations relying on smart objects. Then, we elaborate on the visual tools we are currently developing, followed by a brief case study using the tools. Finally, the potential of publishing such automations in typical digital markets is considered.
The proliferation of Ambient Intelligence (AmI) devices and services and their integration in smart environments creates the need for a simple yet effective way of controlling and communicating with them. Towards that direction, the application of the Trigger-Action model has attracted a lot of research attention, with many systems and applications having been developed following that approach. This work introduces ParlAmI, a multimodal conversational interface aiming to give its users the ability to determine the behavior of AmI environments by creating rules using natural language as well as a GUI. The paper describes ParlAmI, its requirements and functionality, and presents the findings of a user-based evaluation of the system.
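The Trigger-Action model referred to above can be illustrated with a minimal sketch. All class, sensor, and device names below are hypothetical placeholders for illustration; they are not part of ParlAmI or any particular AmI platform.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

# A trigger-action rule: when the trigger predicate matches an event
# coming from the environment, the rule's action is executed.
@dataclass
class Rule:
    name: str
    trigger: Callable[[Dict[str, Any]], bool]
    action: Callable[[], str]

def evaluate(rules: List[Rule], event: Dict[str, Any]) -> List[str]:
    """Run the actions of every rule whose trigger matches the event."""
    return [rule.action() for rule in rules if rule.trigger(event)]

# Hypothetical rule: "if motion is detected in the living room after dark,
# turn on the living-room lights".
rules = [
    Rule(
        name="evening-lights",
        trigger=lambda e: e.get("sensor") == "motion.living_room" and e.get("dark"),
        action=lambda: "lights.living_room -> on",
    )
]

print(evaluate(rules, {"sensor": "motion.living_room", "dark": True}))
# ['lights.living_room -> on']
```

A conversational front end such as the one described above would translate a natural-language utterance ("turn on the lights when I come home at night") into a rule of this shape.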
In the present Internet of Things (IoT) era, smart city components (e.g., smart buildings and smart infrastructures) are increasingly embracing cutting-edge technologies to support complex scenarios that include decision-making, prediction and intelligent actuation. In this context, there is an increased need for information visualization, so as to propagate information to end users in a smart, sustainable, and resilient way. Currently, despite the growth of the IoT sector, many IoT operators only provide static visualizations. However, interactive data visualizations are required to achieve deeper and faster insights, beyond what is available in existing infrastructure, towards supporting decision-making by city authorities, while offering real-time information to citizens. This paper builds on top of ongoing research work carried out at the Human Computer Interaction (HCI) Laboratory of ICS-FORTH in the domain of visualizing and interacting with information in Ambient Intelligence environments, in order to propose the design of an interactive Smart City Visualization framework. In this context, advanced user interaction techniques can be employed, including gesture-based interaction with high-resolution large screen displays in alternative contexts of use and immersive VR experiences. To this end, several gesture-based interaction techniques have been validated to propose a sufficiently rich set of gestures that are adaptable to user and context requirements and are ergonomic, intuitive, and easy to perform and remember, while remaining metaphorically appropriate for the addressed functionality. Additionally, Big Data visualization is accomplished by employing 3D solutions. The proposed design supports experiencing and interacting with information through VR technologies and large displays, offering improved data visualization capacity and enhanced data dimensionality, thus overcoming issues related to data complexity and heterogeneity.
Play is a voluntary activity in which individuals engage for pleasure. It is particularly important for children, since through play they learn to explore, develop and master physical and social skills. Play development is part of the child’s growth and maturation process from birth. As such, it is widely used in the context of Occupational Therapy (OT). Occupational therapists use activity analysis to shape play activities for therapeutic use and promote an environment where the child can approach various activities while playing. This paper builds on knowledge stemming from the processes and theories used in OT and activity analysis to present the design, implementation and deployment of a new version of the popular farm game as deployed within an Ambient Intelligence (AmI) simulation space. Within this space, an augmented interactive table and a three-dimensional avatar are employed to extend the purpose and objectives of the game, thus also expanding its applicability to the age group of preschool children from 3 to 6 years old. More importantly, through the environment, the game monitors and follows the progress of each young player, adapts accordingly and provides important information regarding the abilities and skills of the child and their development over time. The developed game was evaluated through a small-scale study with children of the aforementioned age group, their parents, and child care professionals. The outcomes of the evaluation were positive for all target groups and provided significant evidence regarding the game’s potential to offer a novel play experience to children, but also to act as a valuable tool for child care professionals.
This paper describes an educational game that aims to help children with cognitive impairments become familiar with household objects, the overall home environment and the daily activities that take place in it. In addition to touch-based interaction, the game supports physical manipulation through printed cards on a tabletop setup, using a webcam to detect and track the cards placed on the game board.
This paper presents a user experience study of interaction with printed maps for providing digitally augmented tourism information. The Interactive Maps system has been implemented based on an interactive printed matter framework which provides all the necessary components for developing smart applications that offer printed matter interaction, and has been deployed and evaluated in the context of the publicly available Tourism InfoPoint of the Municipality of Heraklion. The results of the evaluation highlight that interacting with digitally augmented paper is quite easy and natural, while the overall user experience is positive.
Although activities of daily living are often difficult for individuals with cognitive impairments, their autonomy and independence can be fostered through interactive technologies. The use of traditional computer interfaces has, however, proved to be difficult for these users, bringing to the surface the need for novel interaction methods. This paper proposes Let’s Cook, an innovative Augmented Reality game designed to teach children with cognitive impairments how to prepare simple meals, following a playful approach. Let’s Cook supports multimodal interaction techniques utilizing tangible objects on a tabletop surface, as well as multimedia output. Additionally, it can be personalized to accommodate the diverse needs of children with cognitive impairments by employing individual user profiling. The system is currently installed in the kitchen of the Rehabilitation Centre for Children with Disabilities in Heraklion, Crete, where it was evaluated by the students.
Advanced Driver Assistance Systems (ADAS) are receiving increased research focus, as they promote a safer and more comfortable driving experience. In this context, personalization can play a key role, as the different driver/rider needs, the environmental context and the driver’s/rider’s state can be taken into account towards delivering custom-tailored interaction and performing intelligent decision making. This paper presents an ontology-based approach for personalizing Human Machine Interaction (HMI) elements in ADAS. The main features of the presented research work include: (a) semantic modelling of relevant data in the form of an ontology meta-model that includes the driver/rider information, the vehicle and its HMI elements, as well as the external environment, (b) rule-based reasoning on top of the meta-model to derive appropriate personalization decisions, and (c) adaptation of the vehicle’s HMI elements and interaction paradigms to best fit the particular driver or rider, as well as the overall driving context.
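The rule-based reasoning step described above can be sketched in a minimal form: facts about the driver, vehicle and environment feed a set of if-then rules that yield HMI adaptation decisions. The attribute names and rules below are illustrative assumptions, not the actual ontology or rule set of the presented work.

```python
from typing import Dict, List

def personalize(context: Dict[str, str]) -> List[str]:
    """Derive HMI adaptation decisions from driver/vehicle/environment facts.

    The context keys ('vision', 'experience', 'weather') and the decisions
    returned are hypothetical examples of rule-based personalization.
    """
    decisions = []
    # Rule 1: reduced vision or night driving -> more legible displays.
    if context.get("vision") == "reduced" or context.get("time_of_day") == "night":
        decisions.append("increase display contrast and font size")
    # Rule 2: novice drivers/riders -> richer guidance.
    if context.get("experience") == "novice":
        decisions.append("enable verbose voice guidance")
    # Rule 3: adverse weather -> safety-critical information first.
    if context.get("weather") == "rain":
        decisions.append("prioritize collision warnings on the display")
    return decisions

print(personalize({"vision": "reduced", "experience": "novice", "weather": "clear"}))
# ['increase display contrast and font size', 'enable verbose voice guidance']
```

In an ontology-based system, the same logic would be expressed as semantic rules (e.g., in SWRL or a production-rule engine) over instances of the meta-model rather than as hard-coded conditionals.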
The population of elderly and disabled people has increased significantly thanks to advances in medicine, which allow people to live longer and healthier lives than previous generations. In this context, Ambient Assisted Living (AAL) applications that promote independent living are more necessary than ever. Meanwhile, the Internet of Things (IoT) is proliferating as the dominant technological paradigm for the open deployment of networked smart objects in the environment, including physical things, smart devices and entire applications. In our work, a primary objective was the delivery of an AAL framework on top of smart objects that uses the full range of IoT technologies. Very early, it became evident that the demand for personalized applications in the context of AAL is very intense. This is mainly due to the highly individualized and fluid nature of the required applications. Along these lines, we focus on providing an end-user programming environment to empower carers, and possibly the elderly and their families themselves, with the necessary tools to easily and quickly craft, test, modify and deploy the smart object applications they would like to have in their everyday life. In this paper, we support personalized automations using smart objects for outdoor daily activities, outside the elderly's protected home environment. We initially outline possible useful mobility scenarios. Then, we elaborate on the visual tools we are developing, followed by a brief case study using them.