Public interactive displays contribute to improving the quality of public spaces, since they attract many users and stimulate social interaction. This paper presents BubbleFeed, a system for visualizing RSS news from multiple sources in public spaces. RSS news headlines are displayed inside virtual interactive bubbles that ascend from the bottom of a vertical screen to the top, resembling the bubbles formed in a glass of soft drink. Besides touching the bubbles to expand and read the respective news stories, playful user interaction is supported to promote better engagement and motivate multiple users to participate. To support custom news feeds and Facebook posts in addition to RSS feeds, we have built a tool and a library that produce RSS files from the respective sources. BubbleFeed also supports displaying weather information, hosting media galleries, and providing useful information such as Wi-Fi hotspot maps.
The case of mixed-reality projector-camera systems is considered and, in particular, systems that employ hand-held boards as interactive displays. This work focuses on the accurate, robust, and timely detection and pose estimation of such boards, to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near-infrared spectrum to filter out the optical projection from the sensory input. However, the monochromaticity of the input precludes the use of color for the detection of boards. In this context, two methods are proposed. The first concerns the pose estimation of boards; being computationally demanding and frequently invoked by the system, it is highly parallelized. The second uses this pose estimation method to detect and track boards, and is efficient in its use of computational resources so that accurate results are provided in real time. Accurate pose estimation facilitates touch detection upon designated areas on the boards and high-quality projection of visual content upon them. An implementation of the proposed approach is extensively and quantitatively evaluated as to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicates the suitability of the proposed approach for use in interactive, mixed-reality applications.
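As an illustrative sketch of the kind of planar pose estimation described above (not the paper's actual, parallelized pipeline): given the four detected board corners in a monochrome NIR frame and the camera intrinsics, the board pose can be recovered from the plane-induced homography. The board dimensions, intrinsics, and function names below are assumptions for illustration.

```python
import numpy as np

def homography_dlt(plane_pts, image_pts):
    """Direct Linear Transform: homography mapping plane coords (X, Y) to pixels (u, v)."""
    rows = []
    for (X, Y), (u, v) in zip(plane_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Decompose K^-1 H = [r1 r2 t] (up to scale) into rotation R and translation t."""
    M = np.linalg.solve(K, H)          # K^-1 H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if M[2, 2] < 0:                    # enforce positive depth: board in front of camera
        lam = -lam
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)        # re-orthonormalize the rotation
    return U @ Vt, t
```

With a calibrated camera, corner detection (the hard part in monochrome input, and the subject of the abstract's second method) is the only remaining prerequisite before calling these two functions per frame.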
Augmented reality fitting rooms enrich customers’ experience and expedite the shopping process. This paper presents an Augmented Reality (AR) mirror that provides motion-based interaction and suggests various outfits. The proposed system can be easily installed inside, or at the window of, a retail shop, enabling users to stand in front of it and see themselves wearing the clothes that the system suggests, while being able to interact naturally with the system from a distance, using gestures, in order to like or dislike the recommended outfit. Users can also choose to post photos of themselves wearing the proposed clothes on their social media accounts, as well as to buy the clothes either directly from the store or online.
This work regards fingertip contact detection and localization upon planar surfaces, for the purpose of providing interactivity in augmented, interactive displays implemented upon these surfaces. The proposed approach differs from the widely employed approach in which user hands are observed from above, in that user hands are imaged laterally. An algorithmic approach for the treatment of the corresponding visual input is proposed, extensively evaluated, and compared to the top-view approach. Advantages of the proposed approach include increased sensitivity, localization accuracy, and scalability, as well as practicality and cost efficiency of installation.
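A minimal sketch of why lateral imaging makes touch detection sensitive, under simplifying assumptions that are not the paper's actual method: with the camera viewing the surface edge-on, the display surface appears as a known image row, and a fingertip touches when its silhouette reaches that row. The mask representation and pixel threshold below are illustrative.

```python
import numpy as np

def lateral_touches(hand_mask, surface_row, touch_px=3):
    """hand_mask: HxW boolean silhouette of laterally imaged hands.
    surface_row: image row where the display surface lies (hands appear above it).
    Returns the center column of each contiguous run of columns in which the
    silhouette reaches within touch_px pixels of the surface."""
    touching = []
    for c in np.flatnonzero(hand_mask.any(axis=0)):
        lowest = np.flatnonzero(hand_mask[:, c]).max()   # bottom-most hand pixel
        if surface_row - lowest <= touch_px:
            touching.append(c)
    # group adjacent touching columns into one contact per fingertip
    contacts, run = [], []
    for c in touching:
        if run and c != run[-1] + 1:
            contacts.append(sum(run) / len(run))
            run = []
        run.append(c)
    if run:
        contacts.append(sum(run) / len(run))
    return contacts
```

Because the finger-to-surface gap maps almost directly to image rows in this view, a hover of a few millimeters is visibly separated from contact, which is harder to achieve when the gap is observed from above.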
This paper reports on the design and implementation of BeThereNow, a public interactive information system in which users are depicted immersed in various sceneries. The work focuses on the domain of infotainment in public spaces using large displays and aims at short-term usage. The implemented system employs a mixed reality application through which users are informed about different sceneries and can also create personalized digital postcards. This process is accomplished using computer vision algorithms that depict users and objects while removing the background of the scene. Finally, the lessons learned from the long-term deployment of the system in the wild are presented, providing insight into users’ actions and reactions, and feedback on future research directions.
This paper reports on the design, development and evaluation of a framework which implements virtual humans for information provision. The framework can be used to create interactive multimedia information visualizations (e.g., images, text, audio, videos, 3D models), provides a dynamic data modeling mechanism for storage and retrieval, and implements communication through multimodal interaction techniques. The interaction may involve human-to-agent, agent-to-environment or agent-to-agent communication. The framework supports alternative roles for the virtual agents, who may act as assistants for existing systems, standalone “applications” or even as integral parts of emerging smart environments. Finally, an evaluation study was conducted with 10 participants to assess the developed system in terms of usability and effectiveness when employed as an assisting mechanism for another application. The evaluation results were highly positive and promising, confirming the system’s usability and encouraging further research in this area.
This work regards fingertip contact detection and localization upon planar surfaces, to provide interactivity in augmented displays implemented upon these surfaces by projector-camera systems. In contrast to the widely employed approach in which user hands are observed from above, lateral camera placement affords increased sensitivity in touch detection. An algorithmic approach for the treatment of the laterally acquired visual input is proposed and comparatively evaluated against the conventional top-view approach.
Touchless remote interaction empowers users to interact with systems at a distance, without the burden of coming into physical contact with any tangible object. The research presented in this paper focuses on motion-based interaction in public spaces through hand detection using Microsoft’s Kinect, in order to allow natural interaction in mid-air. The paper presents the development of a system that allows browsing and exploring large collections of multimedia information (images and videos).
An active field of research today is the technological enrichment of everyday activities using augmented reality and ambient intelligence technologies. To this end, augmenting dinner tables is a challenging task, requiring a high-quality user experience unobtrusively supporting and enhancing the user’s main goal: eating and socializing. This work presents an augmented restaurant table, facilitating customers’ ordering and enhancing their experience through entertainment and socialization features, as well as through interaction with physical objects placed upon the table surface.
In this paper, a disk-shaped, steerable, interactive projection display is presented. Interactivity is provided through sensitivity to the contact of multiple fingertips and is achieved through the use of an RGBD camera. The surface is mounted on two gimbals which, in turn, provide two rotational degrees of freedom. Modulation of surface posture supports the ergonomics of the device but can, alternatively, be used as a means of user-interface input. The geometry for mapping visual content and localizing fingertip contacts upon this steerable display is provided, along with pertinent calibration methods for the proposed system. An accurate technique for touch detection is proposed, while touch detection and projection accuracy are studied and evaluated through extensive experimentation. Most importantly, the system is thoroughly evaluated as to its usability, through a pilot application developed for this purpose. We show that the outcome meets the real-time performance, accuracy, and usability requirements for employing the approach in human-computer interaction.
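The plane-distance principle underlying RGBD touch detection of this kind can be sketched as follows; the thresholds and plane parameters are illustrative assumptions, and the paper's actual technique additionally handles the rotating surface geometry and system calibration:

```python
import numpy as np

def touch_candidates(points, n, d, low=0.005, high=0.03):
    """points: Nx3 camera-space points from the RGBD sensor.
    Surface plane: n . x + d = 0, with unit normal n pointing toward the camera.
    Points in a thin band (low, high) meters above the surface are candidate
    fingertip contacts; points on the plane are the surface itself, and points
    farther away are hovering hands."""
    dist = points @ n + d              # signed distance of each point to the plane
    return points[(dist > low) & (dist < high)]
```

For a steerable surface, n and d change with the gimbal angles, so they would be recomputed from the current surface posture before each frame is classified.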