This work blends the domain of Precision Agriculture with the prevalent paradigm of Ambient Intelligence in order to enhance the interaction between farmers and Intelligent Environments, support their various daily agricultural activities, and ultimately improve the quality and quantity of cultivated plants. In this paper, two systems are presented, namely the Intelligent Greenhouse and the AmI seedbed, which together target a wide range of agricultural activities: planting the seeds, caring for each individual sprouted plant up to its transplantation in the greenhouse, and providing for the entire plantation until the harvesting period.
Abstract— Considering the prevalence of Ambient Intelligence, this work aims to enhance the interaction between farmers and Intelligent Environments in order to support their various daily agricultural activities, aspiring to improve the quality and quantity of cultivated species. Towards this direction, the Greta system was designed and developed, following a user-centered design process, permitting farmers/agronomists to easily monitor and control an Intelligent Greenhouse via a set of useful and usable applications. Greta offers a progressive web app (PWA) targeting PCs, handheld devices, and technologically enhanced artifacts of Smart Homes, while it also delivers an Augmented Reality application that visualizes the greenhouse’s interior conditions in a sophisticated manner and provides context-sensitive assistance regarding cultivation activities. In more detail, the system interoperates with the ambient facilities of an Intelligent Greenhouse, allowing end-users to: monitor the conditions inside the greenhouse, remotely control the state of various actuators, be notified regarding the available/active automations, be aware of the optimal conditions for their plants to grow and receive relevant guidelines, be informed regarding any diseases, and communicate with experts for receiving treatment advice. This work describes the design methodology and functionality of Greta, and documents the results of a series of expert-based evaluation experiments.
The proliferation of Internet of Things devices and services and their integration in everyday environments have led to the emergence of intelligent offices, classrooms, conference rooms, and meeting rooms that adhere to the paradigm of Ambient Intelligence. Usually, the type of activities performed in such environments (i.e., presentations and lectures) can be enhanced by the use of large Interactive Boards that—among others—allow access to digital content, promote collaboration, enhance the process of exchanging ideas, and increase the engagement of the audience. Additionally, the board contents are expected to be abundant in quantity and diverse in type (e.g., textual data, pictorial data, multimedia, figures, and charts), which unavoidably makes their manipulation over a large display tiring and cumbersome, especially when the interaction lasts for a considerable amount of time (e.g., during a class hour). Acknowledging both the shortcomings and potential of Interactive Boards in intelligent conference rooms, meeting rooms, and classrooms, this work introduces a sophisticated framework named CognitOS Board, which takes advantage of (i) the intelligent facilities offered by the environment and (ii) the amenities offered by wall-to-wall displays, in order to enhance presentation-related activities. In this article, we describe the design process of CognitOS Board, elaborate on the available functionality, and discuss the results of a user-based evaluation study.
In the domain of education, an Intelligent Classroom that employs Ambient Intelligence technologies can not only improve learning and student performance, but also support educators with various educational tasks, such as lecturing, course preparation, and classroom management. Given that the board is one of the key artifacts of any classroom, using technology to enhance it offers students and educators rich opportunities by providing access to a wide range of applications, capturing and maintaining a simultaneous focus of attention for large learner groups, supporting collaboration, and encouraging discussion. To this end, this work presents the CognitOS Classboard, an educator- and student-oriented framework, employed on the “Intelligent Classroom Board”, a wall-to-wall projected interactive board, offering a variety of tools and applications that aim to support lecturing and enhance the learning process. Aiming to create highly engaging and fascinating learning experiences for the students, the CognitOS Classboard not only offers access to useful educational applications, but also features sophisticated mechanisms that can transform the classroom into an immersive environment on demand. It supports multimodal interaction through touch, mid-air gestures, voice commands, and user position tracking, while tablet and desktop applications were developed to permit the management and overview of the board. This paper reports the functionality of the “CognitOS Classboard” and the findings of an evaluation experiment conducted with User Experience experts.
ICS-FORTH has recently initiated AmI-Garden, a smart farming project in the framework of its Ambient Intelligence Research Programme. A small experimental IoT greenhouse has been constructed and equipped with polycarbonate cover sheets and all the necessary infrastructure and hardware (automatic window-roof opening/closing, a sliding door, a fan installation for heating/cooling, vegetable breeding lamps, etc.). Inside the greenhouse, a network of wireless sensors measures environmental conditions and parameters, such as air/soil temperature and moisture, sunlight level, soil conductivity, and the quality and level of chemical ions in irrigation water. The sensors communicate through IoT gateways with the greenhouse’s data centre for storage and post-processing. The system comes with pre-installed agricultural scenarios, a set of activity flows based on the environmental conditions that are ideal for each plant species and are monitored in the greenhouse as explained above. The scenarios currently contain parameters to predict common plant diseases, as well as unexpected changes in the greenhouse’s microclimate. For example, the irrigation process is built as an agricultural scenario that uses the current plant status and past data in order to establish the optimal amount of water to irrigate; the parameters of this scenario depend on the specific plant breed and on environmental variables. The intelligence behind the scenarios relies on critical limits and thresholds that define cultivation rules. On top of this rule-based process, event-driven activation of various automations in the greenhouse is provided, for example automatic humidity/temperature control, soil fertilisation (hydro fusion), and precise irrigation. As the life cycle of each plant evolves, various sets of raw data are produced and ingested into the system, serving as the main input for the system’s actuations based on the agricultural treatment scenarios.
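The threshold-driven cultivation rules described above can be illustrated with a minimal sketch; the class and field names below are hypothetical assumptions for illustration, not the actual AmI-Garden API.

```python
# Illustrative sketch of a threshold-based cultivation rule: an actuation fires
# when a sensed value leaves its critical limits. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CultivationRule:
    """Fires an actuation when a sensed value crosses its critical limits."""
    sensor: str        # e.g. "soil_moisture" (percent)
    lower: float       # critical lower threshold
    upper: float       # critical upper threshold
    actuation: str     # e.g. "start_irrigation"

    def evaluate(self, readings: dict) -> Optional[str]:
        value = readings.get(self.sensor)
        if value is None:
            return None  # no reading available for this sensor
        in_band = self.lower <= value <= self.upper
        return None if in_band else self.actuation


# Example: trigger irrigation when soil moisture leaves the 30-80% band.
irrigation_rule = CultivationRule("soil_moisture", 30.0, 80.0, "start_irrigation")
```

Event-driven activation would then amount to re-evaluating such rules whenever a new sensor reading arrives, rather than on a fixed schedule.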
Public interactive displays contribute to upgrading the quality of public spaces, since they attract many users and stimulate social interaction. In this paper, BubbleFeed is presented, a system for visualizing RSS news from multiple sources in public spaces. RSS news headlines are displayed inside virtual interactive bubbles ascending from the bottom of a vertical screen to the top, resembling the bubbles formed in a glass of soft drink. Besides touching the bubbles to expand and read the respective news stories, playful user interaction is supported to promote better engagement and motivate multiple users to participate. To support custom news feeds and Facebook posts in addition to RSS feeds, we have built a tool and a library that produce RSS files from the respective sources. BubbleFeed also supports displaying weather information, hosting media galleries, and providing useful information such as Wi-Fi hotspot maps.
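The feed-generation tool mentioned above is not specified in detail; as a minimal sketch, producing RSS 2.0 markup from arbitrary items (e.g. Facebook posts) could look like the following, where the function name and item fields are illustrative assumptions.

```python
# Minimal sketch of turning custom items into RSS 2.0 markup, in the spirit of
# the feed-generation tool described above; names and fields are assumptions.
import xml.etree.ElementTree as ET


def items_to_rss(channel_title: str, items: list) -> str:
    """Build an RSS 2.0 document string from dicts with 'title' and 'link' keys."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for item in items:
        node = ET.SubElement(channel, "item")
        ET.SubElement(node, "title").text = item["title"]
        ET.SubElement(node, "link").text = item["link"]
    return ET.tostring(rss, encoding="unicode")


feed = items_to_rss("Custom feed", [{"title": "Hello", "link": "http://example.com/1"}])
```

A real implementation would also emit per-item `description` and `pubDate` elements, but the structure is the same.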
The case of mixed-reality projector-camera systems is considered, in particular systems that employ hand-held boards as interactive displays. This work focuses on the accurate, robust, and timely detection and pose estimation of such boards, in order to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near-infrared spectrum to filter out the optical projection from the sensory input. However, the monochromaticity of the input precludes the use of color for the detection of boards. In this context, two methods are proposed. The first regards the pose estimation of boards; being computationally demanding and frequently used by the system, it is highly parallelized. The second uses this pose estimation method to detect and track boards, and is efficient in its use of computational resources, so that accurate results are provided in real time. Accurate pose estimation facilitates touch detection upon designated areas on the boards and high-quality projection of visual content upon them. An implementation of the proposed approach is extensively and quantitatively evaluated as to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicates the suitability of the proposed approach for use in interactive, mixed-reality applications.
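The abstract does not detail the estimation method itself; as a generic, illustrative sketch of planar-board pose recovery, the standard homography decomposition under known camera intrinsics (not the paper's own parallelized algorithm) can be written as:

```python
# Generic sketch of planar-board pose recovery via homography decomposition,
# assuming known camera intrinsics K. This illustrates the standard textbook
# technique, not the parallelized method of the paper.
import numpy as np


def homography_dlt(src, dst):
    """Direct Linear Transform: homography mapping src (board plane) to dst (image)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]


def pose_from_homography(H, K):
    """Decompose a plane-to-image homography into rotation R and translation t."""
    B = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(B[:, 0])
    if B[2, 2] < 0:                   # keep the board in front of the camera (t_z > 0)
        lam = -lam
    r1, r2 = lam * B[:, 0], lam * B[:, 1]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    t = lam * B[:, 2]
    return R, t


# Synthetic check: a 20 cm board facing the camera 1 m away.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
board = np.array([[0, 0], [0.2, 0], [0.2, 0.2], [0, 0.2]])          # corners, z = 0
image = np.array([[320, 240], [480, 240], [480, 400], [320, 400]], dtype=float)
R_est, t_est = pose_from_homography(homography_dlt(board, image), K)
```

With an accurate pose, touch detection reduces to mapping detected fingertip positions through the recovered transform onto designated regions of the board plane.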
Augmented reality fitting rooms enrich customers’ experience and expedite the shopping process. This paper presents an Augmented Reality (AR) mirror that provides motion-based interaction to users and suggests various outfits. The proposed system can be easily installed inside or at the window of a retail shop, enabling users to stand in front of it and see themselves wearing clothes that the system suggests, while naturally interacting with it remotely, using gestures to like or dislike the recommended outfit. Users can also choose to post photos of themselves wearing the proposed clothes on their social media accounts, as well as buy the clothes either directly from the store or online.
This work regards fingertip contact detection and localization upon planar surfaces, for the purpose of providing interactivity in augmented, interactive displays implemented upon these surfaces. The proposed approach differs from the widely employed approach in which user hands are observed from above, in that user hands are imaged laterally. An algorithmic approach for the treatment of the corresponding visual input is proposed. The proposed approach is extensively evaluated and compared to the top-view approach. Advantages of the proposed approach include increased sensitivity, localization accuracy, and scalability, as well as practicality and cost efficiency of installation.
This paper reports on the design, development, and evaluation of a framework which implements virtual humans for information provision. The framework can be used to create interactive multimedia information visualizations (e.g., images, text, audio, videos, 3D models), provides a dynamic data modeling mechanism for storage and retrieval, and implements communication through multimodal interaction techniques. The interaction may involve human-to-agent, agent-to-environment, or agent-to-agent communication. The framework supports alternative roles for the virtual agents, who may act as assistants for existing systems, standalone “applications”, or even integral parts of emerging smart environments. Finally, an evaluation study was conducted with the participation of 10 people to assess the developed system in terms of usability and effectiveness when it is employed as an assisting mechanism for another application. The evaluation results were highly positive and promising, confirming the system’s usability and encouraging further research in this area.