The case of mixed-reality projector-camera systems is considered, in particular systems that employ hand-held boards as interactive displays. This work focuses on the accurate, robust, and timely detection and pose estimation of such boards, to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near-infrared spectrum to filter out the optical projection from the sensory input. However, the monochromaticity of the input precludes the use of color for the detection of boards. In this context, two methods are proposed. The first regards the pose estimation of boards; being computationally demanding and frequently invoked by the system, it is highly parallelized. The second uses this pose estimation method to detect and track boards, and is efficient in its use of computational resources so that accurate results are provided in real time. Accurate pose estimation facilitates touch detection upon designated areas on the boards and high-quality projection of visual content upon them. An implementation of the proposed approach is extensively and quantitatively evaluated as to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicates the suitability of the proposed approach for use in interactive, mixed-reality applications.
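The abstract above does not fix an implementation, but the planar-board pose estimation it describes can be sketched once corner points detected in the monochrome (NIR) image have been matched to their known positions on the board: estimate the board-to-image homography, then decompose it with the camera intrinsics into a rotation and translation. The following is a minimal numerical sketch of that standard technique; the function names and the use of a plain DLT are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def homography_dlt(obj_pts, img_pts):
    """Estimate the 3x3 homography mapping planar board points (X, Y)
    to image points (u, v) via the direct linear transform."""
    A = []
    for (X, Y), (u, v) in zip(obj_pts, img_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The homography is the right null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def planar_pose(H, K):
    """Recover rotation R and translation t of the board plane (Z = 0)
    from homography H and camera intrinsics K."""
    M = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(M[:, 0])
    if M[2, 2] < 0:              # enforce board in front of the camera (t_z > 0)
        s = -s
    r1 = s * M[:, 0]
    r2 = s * M[:, 1]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)  # project onto the nearest rotation matrix
    return U @ Vt, s * M[:, 2]
```

With exact correspondences this recovers the pose to numerical precision; with noisy corner detections one would typically refine it by iterative reprojection-error minimization, which is also where the parallelization mentioned in the abstract would pay off.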
This work regards fingertip contact detection and localization upon planar surfaces, for the purpose of providing interactivity in augmented, interactive displays implemented upon these surfaces. The proposed approach differs from the widely employed approach in which user hands are observed from above, in that user hands are imaged laterally. An algorithmic approach for the treatment of the corresponding visual input is proposed. The proposed approach is extensively evaluated and compared to the top-view approach. Advantages of the proposed approach include increased sensitivity, localization accuracy, and scalability, as well as practicality and cost efficiency of installation.
Two-dimensional paintings have been exhibited in museums and art galleries in the same manner for at least three centuries. However, the emergence of novel interactive technologies provides the opportunity to change this status quo. By 2006, according to the Institute for Museum and Library Services, 43% of museum visits in the U.S. were remote. According to the Institute for the Future, “Emerging technologies are transforming everything that constitutes our notion of ‘reality’ – our ability to sense our surroundings, our capacity to reason, our perception of the world”. In the present age, technology is becoming woven into the fabric of reality to offer novel experiences in Cultural Heritage Institutions. This work presents the design and implementation of a technological framework based on Ambient Intelligence to enhance visitor experiences within Heritage Institutions by augmenting two-dimensional paintings. Among the major contributions of this chapter is the support of personalized multi-user access to exhibits, which also facilitates adaptation mechanisms for altering the interaction style and content based on the requirements of each Heritage Institution’s visitor. A standards-compliant knowledge representation and the appropriate authoring tools guarantee the effective integration of this approach in any relevant context. The developed applications have been deployed within a simulation space of the FORTH-ICS AmI facility and evaluated by users in the context of a pilot study.
Adaptation and content personalization in the context of multi-user museum exhibits. Conference Paper, June 2016. Conference: 1st Workshop on Advanced Visual Interfaces for Cultural Heritage, co-located with the International Working Conference on Advanced Visual Interfaces (AVI 2016), Bari, Italy. Authors: Nikolaos Partarakis, Margherita Antona, Emmanouil Zidianakis, and Constantine Stephanidis (Foundation for Research and Technology - Hellas). Abstract: Two-dimensional paintings have been exhibited in museums and art galleries in the same manner for at least three centuries. However, the emergence of novel interaction techniques and metaphors provides the opportunity to change this status quo by supporting the mixing of physical and digital Cultural Heritage experiences. This paper presents the design and implementation of a technological framework based on Ambient Intelligence to enhance visitor experiences within Cultural Heritage Institutions (CHIs) by augmenting two-dimensional paintings. Among the major contributions of this research work is the support of personalized multi-user access to exhibits, which also facilitates adaptation mechanisms for altering the interaction style and content to the requirements of each CHI visitor. A standards-compliant knowledge representation and the appropriate authoring tools guarantee the effective integration of this approach in the CHI context.
This work regards fingertip contact detection and localization upon planar surfaces, to provide interactivity in augmented displays implemented upon these surfaces by projector-camera systems. In contrast to the widely employed approach in which user hands are observed from above, lateral camera placement affords increased touch detection sensitivity. An algorithmic approach for the treatment of the laterally acquired visual input is proposed and comparatively evaluated against the conventional approach.
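The intuition behind the lateral view is that a side-facing camera observes the gap between the fingertip and the surface directly, whereas a top-view camera must infer it. A minimal sketch of that idea, assuming a binary hand silhouette and a known image row for the surface (the function, thresholds, and mask layout are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def detect_contact(mask, surface_row, gap_thresh=2):
    """Given a binary hand silhouette from a lateral (side-view) camera and
    the image row of the surface, report whether a fingertip touches the
    surface and, if so, at which image column."""
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return False, None
    # For each occupied column, the lowest (largest-row) silhouette pixel
    # is the candidate fingertip point in that column.
    lowest = np.array([np.max(np.where(mask[:, c])[0]) for c in cols])
    gaps = surface_row - lowest          # pixels between fingertip and surface
    touching = gaps <= gap_thresh
    if not touching.any():
        return False, None
    # Localize the contact at the column whose silhouette comes closest
    # to the surface line.
    contact_col = int(cols[touching][np.argmin(gaps[touching])])
    return True, contact_col
```

The column of the contact point, combined with the calibrated geometry between the lateral camera and the projected display, would then yield the touch position on the surface.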
Today, many forms of art are influenced by the emergence of interactive technologies, including the mixing of physical media with digital technology to form new hybrid works of art and the use of mobile phones to create art projected onto public spaces. Many artists and painters use digital technology to augment their work technically and creatively. In the same context, many believe that the time of transition from traditional analogue art to postmodern digital art, that is, to an art grounded in codes rather than images, has arrived. The research work described in this paper contributes towards supporting, through the use of Ambient Intelligence technologies, traditional painters’ creativity, as well as the methods and techniques of art masters. The paper presents the design and implementation of an intelligent environment and its software infrastructure, forming a digitally augmented Art Workshop. Its practical exploitation was conducted in an Ambient Intelligence (AmI) simulation space, where four feasibility studies took place. In each of these studies an oil painting was created following an alternative approach, accredited by artists.
This paper reports on the design and implementation of BeThereNow, a public interactive information system in which users are depicted immersed in various sceneries. The work focuses on the domain of info-tainment in public spaces using large displays and aims at short-term usage. The implemented system employs a mixed-reality application through which users are informed about different sceneries and can also create personalized digital postcards. This process is accomplished using computer vision algorithms to depict users and objects while removing the background of the scene. Finally, the lessons learned from the long-term deployment of the system in the wild are presented, providing insight into users’ actions and reactions, and feedback on future research directions.
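The background-removal step that drives the postcard compositing can be sketched as a simple reference-frame difference: pixels that deviate sufficiently from an empty-scene reference are treated as foreground (the user) and pasted onto the chosen scenery. The function name and the fixed threshold below are illustrative assumptions; the paper's actual segmentation algorithm is not specified in the abstract.

```python
import numpy as np

def composite_postcard(frame, background, scenery, thresh=30):
    """Segment the user by differencing the live frame against an empty
    reference background, then composite the foreground onto a scenery image.
    All images are HxWx3 uint8 arrays of the same shape."""
    # Signed difference per channel; int16 avoids uint8 wrap-around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff.max(axis=-1) > thresh    # foreground where any channel differs
    out = scenery.copy()
    out[mask] = frame[mask]              # paste user pixels over the scenery
    return out, mask
```

A deployed system would normally replace the static reference with an adaptive background model to cope with lighting changes over a long-term installation.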
Natural interaction refers to people interacting with technology as they interact with the real world in everyday life, through gestures, expressions, movements, etc., and discovering the world by looking around and manipulating physical objects. In the domain of cultural heritage, research has been conducted in a number of directions, including (a) Personalised Information in Museums, (b) Interactive Exhibits, (c) Interactive Game Installations in Museums, (d) Museum Mobile Applications, (e) Museum Presence on the Web, and (f) Museum Social Applications. Most museums target family groups and organize family-oriented events in their programs, but how families choose to visit particular museums in response to their leisure needs has rarely been highlighted. This work exploits the possibility of extending the usage of AmI technology, and thus the user experience, within leisure spaces provided by museums, such as cafeterias. The Museum Coffee Table is an augmented physical surface where physical objects can be used for accessing information about artists and their creations. At the same time, entertainment for children is facilitated through the integration of popular games on the surface. As a result, the entire family can sit around the table, drink coffee, and complete their visit to the museum while acquiring additional knowledge and playing games.
This paper reports on the results of a user-based evaluation that was conducted on a 3D virtual environment that supports diverse interaction techniques. More specifically, the interaction techniques that were evaluated were touch, gestures (hands and legs), and the use of a smart object. The goal of the experiment was to assess the effectiveness of each interaction mode as a means for the user to complete common tasks within the application. A comparison is attempted in order to provide insight into the suitability of each technique and to direct future research in the area.
This poster describes the design and development of a comprehensive Museum Tour Guide mobile application that can be installed on user-owned devices. The purpose of the application is to provide museum visitors with a device that can improve their experience through optimised planning of their visit and an always-available stream of information regarding the museum and its exhibits. The main goals, the design, as well as the implementation of the application are described and the main functions of the application are presented. Finally, conclusions are drawn and further development ideas are discussed.