The population of elderly and disabled people has increased significantly thanks to advances in medicine that allow people to live longer and healthier lives than previous generations. In this context, Ambient Assisted Living (AAL) applications, which promote independent living, are more necessary than ever. At the same time, the Internet of Things (IoT) is proliferating as the dominant technological paradigm for the open deployment of networked smart objects in the environment, including physical things, smart devices and entire applications. In our work, a primary objective was the delivery of an AAL framework on top of smart objects that uses the full range of IoT technologies. Very early, it became evident that the demand for personalized applications in the context of AAL is very intense, mainly due to the highly individualized and fluid nature of the required applications. Along these lines, we focus on providing an end-user programming environment that empowers carers, and possibly the elderly and their families themselves, with the necessary tools to easily and quickly craft, test, modify and deploy the smart object applications they would like to have in their everyday life. In this paper, we support personalized automations using smart objects for outdoor daily activities, outside the elderly's protected home environment. We initially outline possible useful mobility scenarios. Then, we elaborate on the visual tools we are developing, followed by a brief case study using them.
Ambient Assisted Living (AAL) promotes independent living, while the Internet of Things (IoT) proliferates as the dominant technology for the deployment of pervasive smart objects. In this work, we focus on the delivery of an AAL framework utilizing IoT technologies, while addressing the demand for highly customized automations stemming from diverse and fluid (i.e., changing over time) user requirements. The latter renders the idea of a general-purpose application suite that fits all users mostly unrealistic and suboptimal. Driven by the popularity of visual programming tools, especially for children, we focus on directly enabling end-users, including carers, family or friends, and even the elderly/disabled themselves, to easily craft and modify custom automations. In this paper, we first discuss scenarios of highly personalized AAL automations through smart objects, and then elaborate on the capabilities of the visual tools we are currently developing, on the basis of a brief case study.
The case of mixed-reality projector-camera systems is considered, in particular those that employ hand-held boards as interactive displays. This work focuses on the accurate, robust, and timely detection and pose estimation of such boards, in order to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near-infrared spectrum to filter out the optical projection from the sensory input. However, the monochromaticity of the input precludes the use of color for the detection of boards. In this context, two methods are proposed. The first regards the pose estimation of boards and, being computationally demanding and frequently used by the system, is highly parallelized. The second uses this pose estimation method to detect and track boards, and is efficient in its use of computational resources so that accurate results are provided in real time. Accurate pose estimation facilitates touch detection upon designated areas on the boards and high-quality projection of visual content upon them. An implementation of the proposed approach is extensively and quantitatively evaluated with respect to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicates the suitability of the proposed approach for use in interactive, mixed-reality applications.
Although activities of daily living are often difficult for individuals with cognitive impairments, their autonomy and independence can be fostered through interactive technologies. The use of traditional computer interfaces has, however, proved difficult for these users, bringing to the surface the need for novel interaction methods. This paper proposes Let’s Cook, an innovative Augmented Reality game, designed to teach children with cognitive impairments how to prepare simple meals, following a playful approach. Let’s Cook supports multimodal interaction techniques utilizing tangible objects on a table-top surface, as well as multimedia output. Additionally, it can be personalized to accommodate the diverse needs of children with cognitive impairments by employing individual user profiling. The system is currently installed in the kitchen of the Rehabilitation Centre for Children with Disabilities in Heraklion, Crete, where it was evaluated by its students.
This paper describes an educational game that aims to familiarize children with cognitive impairments with household objects, the overall home environment, and the daily activities that take place in it. In addition to touch-based interaction, the game supports physical manipulation through printed cards on a tabletop setup, using a webcam to detect and track the cards placed on the game board.
This paper presents a user experience study of interaction with printed maps for providing digitally augmented tourism information. The Interactive Maps system has been implemented on top of an interactive printed matter framework that provides all the necessary components for developing smart applications offering printed matter interaction, and has been deployed and evaluated in the context of the publicly available Tourism InfoPoint of the Municipality of Heraklion. The results of the evaluation highlight that interacting with digitally augmented paper is quite easy and natural, while the overall user experience is positive.
Play is a voluntary activity in which individuals engage for pleasure. It is very important for children because, through playing, they learn to explore, develop and master physical and social skills. Play development has been part of the child’s growth and maturation process since birth. As such, it is widely used in the context of Occupational Therapy (OT). Occupational therapists use activity analysis to shape play activities for therapeutic use and to promote an environment where the child can approach various activities while playing. This paper builds on knowledge stemming from the processes and theories used in OT and activity analysis to present the design, implementation and deployment of a new version of the popular farm game, as deployed within an Ambient Intelligence (AmI) simulation space. Within this space, an augmented interactive table and a three-dimensional avatar are employed to extend the purpose and objectives of the game, thus also expanding its applicability to preschool children aged 3 to 6 years. More importantly, through the environment, the game monitors and follows the progress of each young player, adapts accordingly, and provides important information regarding the abilities and skills of the child and their development over time. The developed game was evaluated through a small-scale study with children of the aforementioned age group, their parents, and child care professionals. The outcomes of the evaluation were positive for all target groups and provided significant evidence regarding the game’s potential to offer a novel play experience to children, but also to act as a valuable tool for child care professionals.
This paper discusses technology acceptance in the context of Ambient Intelligence (AmI) environments. Determining what makes a technology acceptable to users has been widely recognized as a significant field of research since the 1970s. Ever since, several models have been developed, while recent advances in technology have led to increased research interest in assessing technology acceptance in a variety of domains. This has resulted in a plethora of studies and an extensive number of parameters that can be considered important towards predicting the acceptance of a given technology by its target audience. An important concern is how to practically employ these models for the assessment of AmI environments, given their high complexity and the wide range of potential contexts and target users. To this end, this paper carries out a review of the most important models and their evolution over time, as well as a review of studies extending these models in a variety of domains beyond the workplace. Furthermore, a classification of the parameters studied across these models is carried out, identifying a common feature across existing technology acceptance studies, namely that all assessments are based on self-reported metrics. This highlights the need for a synergistic evaluation approach, where assessment will move beyond self-reported or observed metrics and will be supported and assisted by the AmI environment itself.
Advanced Driver Assistance Systems (ADAS) are receiving increased research focus as they promote a safer and more comfortable driving experience. In this context, personalization can play a key role, as the different driver/rider needs, the environmental context and the driver’s/rider’s state can be taken into account towards delivering custom-tailored interaction and performing intelligent decision making. This paper presents an ontology-based approach for personalizing Human Machine Interaction (HMI) elements in ADAS. The main features of the presented research work include: (a) semantic modelling of relevant data in the form of an ontology meta-model that includes the driver/rider information, the vehicle and its HMI elements, as well as the external environment; (b) rule-based reasoning on top of the meta-model to derive appropriate personalization decisions; and (c) adaptation of the vehicle’s HMI elements and interaction paradigms to best fit the particular driver or rider, as well as the overall driving context.
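To give a flavor of how rule-based reasoning over a driver/context model can yield HMI personalization decisions, the following minimal sketch evaluates a few hand-written rules over a flat profile dictionary. This is purely illustrative and is not the paper's implementation: the profile keys, rules and decision names are all hypothetical, and a real system would reason over an ontology meta-model rather than a dictionary.

```python
# Illustrative sketch: rule-based HMI personalization over driver/context
# facts. All field names and rules below are hypothetical examples.

def personalize_hmi(profile):
    """Derive HMI adaptation decisions from driver/rider and context facts."""
    decisions = {}
    # Rule: older drivers receive larger fonts and voice prompts.
    if profile.get("age", 0) >= 65:
        decisions["font_size"] = "large"
        decisions["voice_prompts"] = True
    # Rule: night driving dims the display to reduce glare.
    if profile.get("time_of_day") == "night":
        decisions["display_brightness"] = "low"
    # Rule: a stressed driver state suppresses non-critical notifications.
    if profile.get("driver_state") == "stressed":
        decisions["notifications"] = "critical_only"
    return decisions

print(personalize_hmi({"age": 70, "time_of_day": "night"}))
```

In an ontology-based system, such rules would typically be expressed declaratively (e.g., in a rule language over the semantic model) so that they can be inspected and extended without changing application code; the dictionary-and-`if` form above only mirrors the decision logic.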