In the present Internet of Things (IoT) era, smart city components (e.g., smart buildings and smart infrastructures) are increasingly embracing cutting-edge technologies to support complex scenarios involving decision-making, prediction, and intelligent actuation. In this context, there is an increased need for information visualization that propagates information to end users in a smart, sustainable, and resilient way. Currently, despite the growth of the IoT sector, many IoT operators provide only static visualizations. However, interactive data visualizations are required to achieve deeper and faster insights than existing infrastructure allows, supporting decision-making by city authorities while offering real-time information to citizens. This paper builds on ongoing research work carried out at the Human Computer Interaction (HCI) Laboratory of ICS-FORTH on visualizing and interacting with information in Ambient Intelligence environments, in order to propose the design of an interactive Smart City Visualization framework. In this context, advanced user interaction techniques can be employed, including gesture-based interaction with high-resolution large-screen displays in alternative contexts of use, as well as immersive VR experiences. To this end, several gesture-based interaction techniques have been validated so as to propose a sufficiently rich set of gestures that are adaptable to user and context requirements and are ergonomic, intuitive, and easy to perform and remember, while remaining metaphorically appropriate for the addressed functionality. Additionally, Big Data visualization is accomplished by employing 3D solutions. The proposed design supports experiencing and interacting with information through VR technologies and large displays, offering improved data visualization capacity and enhanced data dimensionality, thus overcoming issues related to data complexity and heterogeneity.
ICS-FORTH has recently initiated AmI-Garden, a smart farming project in the framework of its Ambient Intelligence Research Programme. A small experimental IoT greenhouse has been constructed with polycarbonate cover sheets and equipped with all the necessary infrastructure and hardware (automatic window/roof opening and closing, a sliding door, fans for heating and cooling, vegetable breeding lamps, etc.). Inside the greenhouse, a network of wireless sensors measures environmental conditions and parameters, such as air/soil temperature and moisture, sunlight level, soil conductivity, and the quality and level of chemical ions in irrigation water. The sensors communicate through IoT gateways with the greenhouse's data centre for storage and post-processing. The system comes with pre-installed agricultural scenarios: sets of activity flows based on the environmental conditions that are ideal for each plant species and are monitored in the greenhouse as explained above. The scenarios currently contain parameters to predict common plant diseases, as well as unexpected changes in the greenhouse's microclimate. For example, the irrigation process is built as an agricultural scenario that uses current plant status and past data to establish the optimal amount of irrigation water; its parameters depend on the specific plant breed and on environmental variables. The intelligence behind the scenarios is based on critical limits and thresholds that define cultivation rules. On top of this rule-based process, event-driven activation of various greenhouse automations is provided, for example automatic humidity/temperature control, soil fertilisation (hydro fusion), and precise irrigation. As the life cycle of each plant evolves, various sets of raw data are produced and ingested into the system, serving as the main input for the system's actuations under the agricultural treatment scenarios.
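The threshold-driven cultivation rules described above can be illustrated with a minimal sketch. All sensor names, threshold values, and actuation labels below are hypothetical examples for illustration, not the actual AmI-Garden implementation:

```python
# Minimal sketch of a threshold-based cultivation rule, in the spirit of the
# agricultural scenarios described above. Names, limits, and actions are
# hypothetical illustrations.

def cultivation_rule(readings, limits):
    """Return the actuations triggered by the current sensor readings."""
    actions = []
    # Irrigation scenario: soil moisture below its critical limit.
    if readings["soil_moisture"] < limits["soil_moisture_min"]:
        actions.append("start_irrigation")
    # Microclimate scenario: air temperature above its critical limit.
    if readings["air_temperature"] > limits["air_temperature_max"]:
        actions.append("open_roof_window")
        actions.append("start_cooling_fan")
    return actions

readings = {"soil_moisture": 18.0, "air_temperature": 31.5}   # hypothetical values
limits = {"soil_moisture_min": 25.0, "air_temperature_max": 28.0}
print(cultivation_rule(readings, limits))
# -> ['start_irrigation', 'open_roof_window', 'start_cooling_fan']
```

In a real deployment the readings would arrive through the IoT gateways, and the triggered actions would drive the greenhouse automations (window/roof motors, fans, irrigation valves).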
The proliferation of Ambient Intelligence (AmI) devices and services, and their integration in smart environments, creates the need for a simple yet effective way of controlling and communicating with them. Towards that direction, the Trigger-Action model has attracted considerable research attention, and many systems and applications have been developed following that approach. This work introduces ParlAmI, a multimodal conversational interface that gives its users the ability to determine the behavior of AmI environments by creating rules through natural language as well as a GUI. The paper describes ParlAmI, its requirements and functionality, and presents the findings of a user-based evaluation.
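The Trigger-Action model that ParlAmI builds on can be sketched as a set of rules mapping events to actuations. The event and action names below are hypothetical examples, not ParlAmI's actual vocabulary:

```python
# Illustrative sketch of the Trigger-Action model: each rule binds a trigger
# (an event in the AmI environment) to an action (an actuation to perform).
# Device names and events are hypothetical.

from dataclasses import dataclass


@dataclass
class Rule:
    trigger: str  # event name, e.g. "motion_detected:living_room"
    action: str   # actuation to perform when the trigger fires


rules = [
    Rule("motion_detected:living_room", "lights_on:living_room"),
    Rule("door_opened:front", "announce:welcome_home"),
]


def dispatch(event, rules):
    """Fire every rule whose trigger matches the incoming event."""
    return [rule.action for rule in rules if rule.trigger == event]


print(dispatch("motion_detected:living_room", rules))
# -> ['lights_on:living_room']
```

A conversational interface such as ParlAmI would sit on top of such a rule store, translating a user's natural-language statement (or GUI selections) into new Rule entries.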
Public interaction displays contribute to upgrading the quality of public spaces, since they attract many users and stimulate social interaction. In this paper, BubbleFeed is presented: a system for visualizing RSS news from multiple sources in public spaces. RSS news headlines are displayed inside virtual interactive bubbles that ascend from the bottom of a vertical screen to the top, resembling the bubbles formed in a glass of soft drink. Besides touching the bubbles to expand and read the respective news stories, playful user interaction is supported to promote engagement and motivate multiple users to participate. To support custom news feeds and Facebook posts in addition to RSS feeds, we have built a tool and a library that produce RSS files from the respective sources. BubbleFeed also supports displaying weather information, hosting media galleries, and providing useful information such as Wi-Fi hotspot maps.
The population of elderly and disabled people has increased rapidly thanks to advances in medicine, which allow people to live longer and healthier lives than previous generations. In this context, Ambient Assisted Living (AAL) applications, which promote independent living, are more necessary than ever. At the same time, the Internet of Things (IoT) proliferates as the dominant technological paradigm for the open deployment of networked smart objects in the environment, including physical things, smart devices, and entire applications. In our work, a primary objective was the delivery of an AAL framework on top of smart objects that uses the full range of IoT technologies. Very early, it became evident that the demand for personalized applications in the context of AAL is very intense, mainly due to the highly individualized and fluid nature of the required applications. Along these lines, we focus on providing an end-user programming environment that empowers carers, and possibly the elderly and their families themselves, with the necessary tools to easily and quickly craft, test, modify, and deploy the smart object applications they would like to have in their everyday life. In this paper, we support personalized automations using smart objects for outdoor daily activities, outside the elderly person's protected home environment. We initially outline possible useful mobility scenarios; then, we elaborate on the visual tools we are developing, followed by a brief case study using them.
Ambient Assisted Living (AAL) promotes independent living, while the Internet of Things (IoT) proliferates as the dominant technology for the deployment of pervasive smart objects. In this work, we focus on the delivery of an AAL framework utilizing IoT technologies, while addressing the demand for highly customized automations arising from diverse and fluid (changing over time) user requirements. The latter renders the idea of a general-purpose application suite that fits all users largely unrealistic and suboptimal. Driven by the popularity of visual programming tools, especially for children, we focus on directly enabling end users, including carers, family, or friends, and even the elderly or disabled themselves, to easily craft and modify custom automations. In this paper, we first discuss scenarios of highly personalized AAL automations through smart objects, and then elaborate on the capabilities of the visual tools we are currently developing, on the basis of a brief case study.
The case of mixed-reality projector-camera systems is considered, in particular those which employ hand-held boards as interactive displays. This work focuses on the accurate, robust, and timely detection and pose estimation of such boards, in order to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near-infrared spectrum to filter out the optical projection from the sensory input. However, the monochromaticity of the input precludes the use of color for detecting the boards. In this context, two methods are proposed. The first regards the pose estimation of boards and, being computationally demanding and frequently invoked, is highly parallelized. The second uses this pose estimation method to detect and track boards, and is efficient in its use of computational resources, so that accurate results are provided in real time. Accurate pose estimation facilitates both touch detection upon designated areas of the boards and high-quality projection of visual content upon them. An implementation of the proposed approach is extensively and quantitatively evaluated as to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicates the suitability of the proposed approach for use in interactive, mixed-reality applications.
Although activities of daily living are often difficult for individuals with cognitive impairments, their autonomy and independence can be fostered through interactive technologies. The use of traditional computer interfaces has, however, proved to be difficult for these users, bringing to the surface the need for novel interaction methods. This paper proposes Let’s Cook, an innovative Augmented Reality game designed to teach children with cognitive impairments how to prepare simple meals through a playful approach. Let’s Cook supports multimodal interaction techniques utilizing tangible objects on a table-top surface, as well as multimedia output. Additionally, it can be personalized to accommodate the diverse needs of children with cognitive impairments by employing individual user profiling. The system is currently installed in the kitchen of the Rehabilitation Centre for Children with Disabilities in Heraklion, Crete, where it was evaluated by the students.
This paper describes an educational game that aims to familiarize cognitively impaired children with household objects, the overall home environment, and the daily activities that take place in it. In addition to touch-based interaction, the game supports physical manipulation through printed cards on a tabletop setup, using a webcam to detect and track the cards placed on the game board.
This paper presents a user experience study of interaction with printed maps for providing digitally augmented tourism information. The Interactive Maps system has been implemented on top of an interactive printed-matter framework, which provides all the necessary components for developing smart applications that offer printed-matter interaction, and has been deployed and evaluated in the context of the publicly available Tourism InfoPoint of the Municipality of Heraklion. The results of the evaluation highlight that interacting with digitally augmented paper is quite easy and natural, while the overall user experience is positive.