The Internet of Things is based on ecosystems of networked devices, referred to as smart objects, blending physical things with digital artifacts in an unprecedented way. In principle, endless automations may be introduced into daily life by exploiting the numerous opportunities offered by the deployment and utilization of such smart objects. In practice, however, the demand for such automations is highly personalized and fluid, minimizing the chances of building commercially successful general-purpose applications. In this context, our vision is to empower end-users with appropriate tools enabling them to easily and quickly craft, test and modify the automations they need. In this chapter we initially discuss a few possible future scenarios for automations relying on smart objects. Then, we elaborate on the visual tools we are currently developing, followed by a brief case study using the tools. Finally, the potential of publishing such automations in typical digital markets is considered.
This work concerns the automatic registration of spectral images of paintings on planar, or approximately planar, surfaces. An approach that capitalizes upon this planarity is proposed, estimating homography transforms that register the spectral images into an aligned spectral cube. Homography estimation methods are comparatively evaluated for this purpose, and a non-linear, robust estimation method based on keypoint features is adopted as the most accurate. A marker-based, quantitative evaluation method is proposed for measuring multispectral image registration accuracy and is, in turn, utilized to compare the proposed registration method to the state of the art. For the same purpose, benchmark datasets characteristic of this application domain, annotated with correctly corresponding points, have been compiled and made publicly available.
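The abstract does not include code; as a rough illustration of the underlying idea, the sketch below estimates a homography from point correspondences with the classic direct linear transform (DLT). This is a minimal, non-robust baseline in Python/NumPy, not the paper's method: the robust keypoint-based estimator described above would additionally match features across spectral bands and reject outliers (e.g. via RANSAC).

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H mapping src -> dst via the DLT algorithm.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Each correspondence contributes two linear constraints on the 9
    entries of H; the solution is the null vector of the stacked system,
    obtained from the SVD.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

def apply_homography(H, pts):
    """Map (N, 2) points through H, dividing out the projective scale."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With exact correspondences the transform is recovered up to scale; with noisy keypoint matches, a robust wrapper around this estimator is what makes registration practical.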
The proliferation of Internet of Things (IoT) devices and services and their integration in Ambient Intelligence (AmI) Environments have revealed a new range of roles that TVs are expected to play so as to improve quality of life. This work introduces AmITV, an integrated multimodal system that permits end-users to use the TV not only as a traditional entertainment center, but also as (i) a control center for manipulating any intelligent device, (ii) an intervention host that presents appropriate content when they need help or support, (iii) an intelligent agent that communicates with the users in a natural manner and assists them throughout their daily activities, (iv) a notification medium that informs them about interesting or urgent events, and (v) a communication hub that permits them to exchange messages in real time or asynchronously. This paper presents two motivational scenarios inspired by Home and Hotel Intelligent Environments, along with the infrastructure behind AmITV. Additionally, it describes how AmITV realizes the newly emerged roles of the TV as a multimodal, intelligent and versatile interaction hub with the ambient facilities of the entire technologically-augmented environment.
This paper describes the implementation of an Internet of Things (IoT) and Open Data infrastructure by the Institute of Computer Science of the Foundation for Research and Technology—Hellas (FORTH-ICS) for the city of Heraklion, focusing on the application of mature research and development outcomes in a Smart City context. These outcomes mainly fall under the domains of Telecommunications and Networks, Information Systems, Signal Processing and Human Computer Interaction. The infrastructure is currently being released and made available to the municipality and the public through the Heraklion Smart City web portal. It is expected that in the future such infrastructure will act as one of the pillars of sustainable growth and prosperity in the city, providing the municipality with an enhanced overview of the city that will foster better planning, enhanced social services and improved decision-making, ultimately leading to improved quality of life for all citizens and visitors.
In the present Internet of Things (IoT) era, smart city components (e.g. smart buildings, smart infrastructures) are increasingly embracing cutting-edge technologies to support complex scenarios that include decision-making, prediction and intelligent actuation. In this context, there is an increased need for information visualization, so as to propagate information to end users in a smart, sustainable, and resilient way. Currently, despite the growth of the IoT sector, many IoT operators provide only static visualizations. However, interactive data visualizations are required to achieve deeper and faster insights, beyond what is available in existing infrastructure, towards supporting decision-making by city authorities while offering real-time information to citizens. This paper builds on ongoing research work carried out at the Human Computer Interaction (HCI) Laboratory of ICS-FORTH in the domain of visualizing and interacting with information in Ambient Intelligence environments, in order to propose the design of an interactive Smart City Visualization framework. In this context, advanced user interaction techniques can be employed, including gesture-based interaction with high-resolution large-screen displays in alternative contexts of use, as well as immersive VR experiences. To this end, several gesture-based interaction techniques have been validated in order to propose a sufficiently rich set of gestures that are adaptable to user and context requirements and are ergonomic, intuitive, and easy to perform and remember, while remaining metaphorically appropriate for the addressed functionality. Additionally, Big Data visualization is accomplished by employing 3D solutions. The proposed design supports experiencing and interacting with information through VR technologies and large displays, offering improved data visualization capacity and enhanced data dimensionality, thus overcoming issues related to data complexity and heterogeneity.
Advanced Driver Assistance Systems (ADAS) are receiving increased research focus as they promote a safer and more comfortable driving experience. In this context, personalization can play a key role, as the different driver/rider needs, the environmental context and the driver's/rider's state can be taken into account towards delivering custom-tailored interaction and performing intelligent decision making. This paper presents an ontology-based approach for personalizing Human Machine Interaction (HMI) elements in ADAS. The main features of the presented research work include: (a) semantic modelling of relevant data in the form of an ontology meta-model that includes the driver/rider information, the vehicle and its HMI elements, as well as the external environment, (b) rule-based reasoning on top of the meta-model to derive appropriate personalization decisions, and (c) adaptation of the vehicle's HMI elements and interaction paradigms to best fit the particular driver or rider, as well as the overall driving context.
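To make the rule-based personalization idea concrete, the toy sketch below evaluates condition/action rules over a driving-context model. This is purely illustrative: the context attributes, rules and HMI settings are invented for this example, and the paper's actual approach reasons over an ontology meta-model with semantic rules rather than plain Python predicates.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified context model; the ontology meta-model in the
# paper covers driver/rider information, the vehicle's HMI elements and
# the external environment in far richer detail.
@dataclass
class DrivingContext:
    driver_age: int
    ambient_light: str       # "day" | "night"
    vehicle_type: str        # "car" | "motorcycle"
    speed_kmh: float
    hmi: dict = field(default_factory=dict)

# Each rule is a (condition, action) pair; firing a rule updates the
# HMI configuration, mirroring rule-based reasoning over the model.
RULES = [
    (lambda c: c.ambient_light == "night",
     lambda c: c.hmi.update(display_theme="dark", brightness="low")),
    (lambda c: c.driver_age >= 65,
     lambda c: c.hmi.update(font_size="large")),
    (lambda c: c.vehicle_type == "motorcycle" and c.speed_kmh > 90,
     lambda c: c.hmi.update(notifications="audio_only")),
]

def personalize(ctx):
    """Apply every rule whose condition holds; return the adapted HMI settings."""
    for condition, action in RULES:
        if condition(ctx):
            action(ctx)
    return ctx.hmi
```

For instance, an older rider on a motorcycle at night and at highway speed would receive a dark, low-brightness display with large fonts and audio-only notifications, i.e. all three rules fire together.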
The population of elderly and disabled people has increased significantly thanks to advances in medicine, which allow people to live longer and healthier lives than previous generations. In this context, Ambient Assisted Living (AAL) applications, which promote independent living, are more necessary than ever. At the same time, the Internet of Things (IoT) proliferates as the dominant technological paradigm for the open deployment of networked smart objects in the environment, including physical things, smart devices and entire applications. In our work, a primary objective was the delivery of an AAL framework on top of smart objects that uses the full range of IoT technologies. Very early, it became evident that the demand for personalized applications in the context of AAL is very intense, mainly due to the highly individualized and fluid nature of the required applications. Along these lines, we focus on providing an end-user programming environment to empower carers, and possibly the elderly and their families themselves, with the necessary tools to easily and quickly craft, test, modify and deploy the smart object applications they would like to have in their everyday life. In this paper, we support personalized automations using smart objects for outdoor daily activities, outside the elderly's protected home environment. We initially outline possible useful mobility scenarios. Then, we elaborate on the visual tools we are developing, followed by a brief case study using them.
Ambient Assisted Living (AAL) promotes independent living, while the Internet of Things (IoT) proliferates as the dominant technology for the deployment of pervasive smart objects. In this work, we focus on delivering an AAL framework utilizing IoT technologies, while addressing the demand for highly customized automations arising from diverse and fluid (changing over time) user requirements. The latter renders the idea of a general-purpose application suite fitting all users largely unrealistic and suboptimal. Driven by the popularity of visual programming tools, especially for children, we focus on directly enabling end-users, including carers, family or friends, and even the elderly/disabled themselves, to easily craft and modify custom automations. In this paper we first discuss scenarios of highly personalized AAL automations through smart objects, and then elaborate on the capabilities of the visual tools we are currently developing on the basis of a brief case study.
The case of mixed-reality projector-camera systems is considered and, in particular, those which employ hand-held boards as interactive displays. This work focuses upon the accurate, robust, and timely detection and pose estimation of such boards, to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near infrared spectrum to filter out the optical projection from the sensory input. However, the monochromaticity of input restricts the use of color for the detection of boards. In this context, two methods are proposed. The first regards the pose estimation of boards which, being computationally demanding and frequently used by the system, is highly parallelized. The second uses this pose estimation method to detect and track boards, being efficient in the use of computational resources so that accurate results are provided in real-time. Accurate pose estimation facilitates touch detection upon designated areas on the boards and high-quality projection of visual content upon boards. An implementation of the proposed approach is extensively and quantitatively evaluated, as to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicate the suitability of the proposed approach for use in interactive, mixed-reality applications.
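The abstract above does not detail its pose estimation algorithm; as background, the sketch below shows the standard recovery of a planar board's rotation and translation from a board-to-image homography and known camera intrinsics, a classic technique for this setting. It is a minimal NumPy illustration under assumed inputs, not the highly parallelized implementation described in the paper.

```python
import numpy as np

def pose_from_homography(K, H):
    """Recover rotation R and translation t of a planar board from a
    plane-to-image homography H and camera intrinsics K.

    For a plane at Z=0, H is proportional to K @ [r1 r2 t], so the first
    two rotation columns and the translation can be read off K^-1 @ H
    after normalizing the projective scale.
    """
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if M[2, 2] < 0:            # choose the sign so the board lies in front of the camera
        lam = -lam
    r1 = lam * M[:, 0]
    r2 = lam * M[:, 1]
    r3 = np.cross(r1, r2)      # complete the right-handed frame
    R = np.column_stack([r1, r2, r3])
    # Project onto SO(3) to absorb noise in the estimated homography.
    U, _, Vt = np.linalg.svd(R)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    t = lam * M[:, 2]
    return R, t
```

Given an accurate board pose, projecting visual content onto the board and detecting touch on designated regions both reduce to transforming points between the board plane and the image.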
Although activities of daily living are often difficult for individuals with cognitive impairments, their autonomy and independence can be fostered through interactive technologies. The use of traditional computer interfaces has, however, proved to be difficult for these users, bringing to the surface the need for novel interaction methods. This paper proposes Let's Cook, an innovative Augmented Reality game designed to teach children with cognitive impairments how to prepare simple meals, following a playful approach. Let's Cook supports multimodal interaction techniques utilizing tangible objects on a table-top surface, as well as multimedia output. Additionally, it can be personalized to accommodate the diverse needs of children with cognitive impairments by employing individual user profiling. The system is currently installed in the kitchen of the Rehabilitation Centre for Children with Disabilities in Heraklion, Crete, where it was evaluated by the students.