Ambient Assisted Living (AAL) promotes independent living, while the Internet of Things (IoT) proliferates as the dominant technology for the deployment of pervasive smart objects. In this work, we focus on the delivery of an AAL framework utilizing IoT technologies, while addressing the demand for highly customized automations arising from diverse and fluid (i.e., changing over time) user requirements. The latter renders the idea of a general-purpose application suite that fits all users largely unrealistic and suboptimal. Driven by the popularity of visual programming tools, especially for children, we focus on directly enabling end-users, including carers, family, or friends, and even the elderly/disabled themselves, to easily craft and modify custom automations. In this paper, we first discuss scenarios of highly personalized AAL automations through smart objects, and then elaborate on the capabilities of the visual tools we are currently developing, on the basis of a brief case study.
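Such end-user-crafted automations are typically built from trigger-action rules. The following is a minimal, purely illustrative sketch (not the paper's implementation; all names and the sensor schema are assumptions) of how a visual tool might represent and evaluate one such rule:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch: a trigger-action rule, the basic unit that a visual
# programming tool for AAL automations might compose. Names are illustrative.
@dataclass
class Rule:
    name: str
    trigger: Callable[[Dict[str, float]], bool]  # predicate over sensor readings
    action: Callable[[], str]                    # effect to run when triggered

def evaluate(rules: List[Rule], readings: Dict[str, float]) -> List[str]:
    """Run every rule whose trigger matches the current sensor readings."""
    return [rule.action() for rule in rules if rule.trigger(readings)]

# Example: notify a carer when night-time motion is detected in the hallway.
night_motion = Rule(
    name="night hallway motion",
    trigger=lambda r: r["hour"] >= 23 and r["hallway_motion"] == 1,
    action=lambda: "notify carer: motion in hallway",
)

print(evaluate([night_motion], {"hour": 23, "hallway_motion": 1}))
# → ['notify carer: motion in hallway']
```

A visual editor would let users assemble the trigger predicate and action from graphical blocks, rather than writing such code directly.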
The case of mixed-reality projector-camera systems is considered, in particular those which employ hand-held boards as interactive displays. This work focuses on the accurate, robust, and timely detection and pose estimation of such boards, in order to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near-infrared spectrum to filter the optical projection out of the sensory input. However, the monochromaticity of this input precludes the use of color for the detection of boards. In this context, two methods are proposed. The first regards the pose estimation of boards which, being computationally demanding and frequently used by the system, is highly parallelized. The second uses this pose estimation method to detect and track boards, and is efficient in its use of computational resources, so that accurate results are provided in real time. Accurate pose estimation facilitates touch detection upon designated areas of the boards and high-quality projection of visual content upon them. An implementation of the proposed approach is extensively and quantitatively evaluated with respect to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicates the suitability of the proposed approach for use in interactive, mixed-reality applications.
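Since the boards are planar, their pose can in principle be recovered from a plane-to-image homography. The sketch below illustrates this general technique (not the paper's specific, parallelized method; the camera intrinsics and board geometry are synthetic assumptions): a homography is fitted by the Direct Linear Transform and decomposed into a rotation and translation.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): estimating the pose of
# a planar board from point correspondences, as used for projection mapping.
# Board corners lie on the plane Z = 0 in board coordinates.

def homography_dlt(board_xy, img_uv):
    """Direct Linear Transform: homography mapping the board plane to the image."""
    rows = []
    for (X, Y), (u, v) in zip(board_xy, img_uv):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)          # null-space vector, reshaped to 3x3

def pose_from_homography(H, K):
    """Decompose a plane-to-image homography into rotation R and translation t."""
    B = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(B[:, 0])  # homography is defined only up to scale
    if (lam * B[:, 2])[2] < 0:           # board must lie in front of the camera
        lam = -lam
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    r3 = np.cross(r1, r2)                # third rotation column from orthogonality
    return np.column_stack([r1, r2, r3]), t

# Synthetic check with an assumed pinhole camera and a 20 x 15 cm board.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
R_true = np.array([[1, 0, 0],
                   [0, np.cos(0.1), -np.sin(0.1)],
                   [0, np.sin(0.1), np.cos(0.1)]])
t_true = np.array([0.1, -0.05, 1.0])
board = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.15), (0.0, 0.15)]
img = []
for X, Y in board:
    p = K @ (R_true @ np.array([X, Y, 0.0]) + t_true)
    img.append((p[0] / p[2], p[1] / p[2]))

R, t = pose_from_homography(homography_dlt(board, img), K)
print(np.allclose(t, t_true, atol=1e-4))  # → True
```

With four or more detected board corners this yields the full 6-DoF pose needed to warp projected content onto the board.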
Although activities of daily living are often difficult for individuals with cognitive impairments, their autonomy and independence can be fostered through interactive technologies. The use of traditional computer interfaces has, however, proved to be difficult for these users, bringing to the surface the need for novel interaction methods. This paper proposes Let’s Cook, an innovative Augmented Reality game designed to teach children with cognitive impairments how to prepare simple meals, following a playful approach. Let’s Cook supports multimodal interaction techniques utilizing tangible objects on a table-top surface, as well as multimedia output. Additionally, it can be personalized to accommodate the diverse needs of children with cognitive impairments by employing individual user profiling. The system is currently installed in the kitchen of the Rehabilitation Centre for Children with Disabilities in Heraklion, Crete, where it was evaluated by the students.
Robots are an increasingly discussed solution for assisting seniors; testing natural interaction with them therefore becomes crucial. This paper presents the first results of a study with an autonomous mobile social service robot prototype that was deployed in 18 private households of senior adults aged 75 years and older for a total of 371 days. Findings show that the robot's utility met the users' expectations. However, the robot was seen rather as a toy than as supportive of independent living. Furthermore, despite the robot's emergency function, perceived safety did not increase. Reasons for this might be the good health condition of our users, a lack of technological robustness, and the slow performance of the prototype. Nevertheless, users believed that a market-ready version of the robot would be vital for supporting people who are more fragile and more socially isolated.
This work regards fingertip contact detection and localization upon planar surfaces, for the purpose of providing interactivity on augmented, interactive displays implemented upon these surfaces. The proposed approach differs from the widely employed approach, in which user hands are observed from above, in that user hands are imaged laterally. An algorithmic approach for the treatment of the corresponding visual input is proposed, extensively evaluated, and compared to the top-view approach. Advantages of the proposed approach include increased sensitivity and localization accuracy, and scalability, as well as practicality and cost efficiency of installation.
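The intuition behind the lateral view can be sketched as follows (a hypothetical toy illustration, not the paper's algorithm): seen from the side, the surface appears as a known image row, and a fingertip is in contact when the hand silhouette comes within a small pixel gap of that row.

```python
import numpy as np

# Hypothetical sketch: lateral-view contact test. The surface edge is assumed
# to lie on a known image row; a column of the hand silhouette counts as
# touching when its lowest hand pixel is within `max_gap` pixels of that row.

def contact_columns(hand_mask, surface_row, max_gap=2):
    """Return the x-coordinates where the hand silhouette touches the surface.

    hand_mask   : 2D boolean array (True = hand pixel), lateral view
    surface_row : image row of the surface edge
    max_gap     : tolerated gap in pixels between fingertip and surface
    """
    touching = []
    for x in range(hand_mask.shape[1]):
        ys = np.flatnonzero(hand_mask[:, x])
        if ys.size and surface_row - ys.max() <= max_gap:
            touching.append(x)
    return touching

# Toy silhouette: a "finger" in columns 3-4 reaching down to row 8,
# with the surface at row 9 -> contact within the 2-pixel gap.
mask = np.zeros((10, 8), dtype=bool)
mask[2:9, 3:5] = True   # finger, reaching down to row 8
mask[2:5, 5:7] = True   # rest of hand, far above the surface
print(contact_columns(mask, surface_row=9))  # → [3, 4]
```

The lateral geometry makes the fingertip-to-surface distance directly observable in image space, which is one way to understand the increased sensitivity reported above.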
This paper reports on ongoing work regarding interactive 3D visualization of large-scale data centres in the context of Big Data and data centre infrastructure management. The proposed approach renders a virtual area of real data centres, preserving the actual arrangement of their servers, visualizes their current state, and notifies users of potential server anomalies. The visualization includes several condition indicators, updated in real time, as well as a color-coding scheme representing each server's current condition on a scale from normal to critical. Furthermore, the system supports on-demand exploration of an individual server, providing detailed information about its condition for a specific timespan by combining historical analysis of previous values with the prediction of its potential future state. Additionally, natural interaction through hand gestures is supported for 3D navigation and item selection, based on a computer-vision approach.
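A normal-to-critical color scale of this kind is commonly realized as a piecewise linear blend. The following is an illustrative sketch only (the function name, score range, and thresholds are assumptions, not the paper's actual scheme):

```python
# Illustrative sketch: map a server-condition score in [0, 1]
# (0 = normal, 1 = critical) to an RGB color from green through yellow to red.

def condition_color(severity):
    """Linearly blend green (normal) -> yellow -> red (critical)."""
    s = min(max(severity, 0.0), 1.0)                # clamp to [0, 1]
    if s < 0.5:                                     # green -> yellow
        return (int(255 * (s / 0.5)), 255, 0)
    return (255, int(255 * ((1.0 - s) / 0.5)), 0)   # yellow -> red

print(condition_color(0.0))   # → (0, 255, 0)   normal: green
print(condition_color(0.5))   # → (255, 255, 0) warning: yellow
print(condition_color(1.0))   # → (255, 0, 0)   critical: red
```

Each rendered server can then simply be tinted with the color of its current score, keeping the scheme legible at a glance across thousands of machines.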
Adaptation and content personalization in the context of multi-user museum exhibits. Conference Paper, June 2016. 1st Workshop on Advanced Visual Interfaces for Cultural Heritage, co-located with the International Working Conference on Advanced Visual Interfaces (AVI 2016), Bari, Italy. Nikolaos Partarakis, Margherita Antona, Emmanouil Zidianakis, Constantine Stephanidis (Foundation for Research and Technology - Hellas). Abstract: Two-dimensional paintings have been exhibited in museums and art galleries in the same manner for at least three centuries. However, the emergence of novel interaction techniques and metaphors provides the opportunity to change this status quo by supporting the mixing of physical and digital Cultural Heritage experiences. This paper presents the design and implementation of a technological framework, based on Ambient Intelligence, to enhance visitor experiences within Cultural Heritage Institutions (CHIs) by augmenting two-dimensional paintings. Among the major contributions of this research work is the support of personalized multi-user access to exhibits, which also facilitates adaptation mechanisms for altering the interaction style and content to the requirements of each CHI visitor. A standards-compliant knowledge representation and the appropriate authoring tools guarantee the effective integration of this approach into the CHI context.
This paper presents the conversion of an electric cargo vehicle into a portable platform for interacting with information applications. The cargo vehicle hosts two seats, for the driver and one extra passenger, and three interactive systems installed on the cargo's right, left, and back exterior sides. The vehicle is intended to follow predefined routes from central ports to the nearest city center, making long-term stops. During these stops, the embedded interactive systems entertain and provide visitors and other passersby with information of local interest. This paper focuses on the vehicle's conversion process, from the installation of the hardware components needed by the interactive systems to the development of a portable control panel designed to address the driver's needs.