This paper presents a computer vision system that supports non-instrumented, location-based interaction of multiple users with digital representations of large-scale artifacts. The proposed system is based on a camera network that observes multiple humans in front of a very large display. The acquired views are used to volumetrically reconstruct and track the humans robustly and in real time, even in crowded scenes and challenging human configurations. Given the frequent and accurate monitoring of humans in space and time, a dynamic and personalized textual/graphical annotation of the display can be achieved based on the location and the walk-through trajectory of each visitor. The proposed system has been successfully deployed in an archaeological museum, offering its visitors the capability to interact with and explore a digital representation of an ancient wall painting. This installation permits an extensive evaluation of the proposed system in terms of tracking robustness, computational performance and usability. Furthermore, it proves that computer vision technology can be effectively used to support non-instrumented interaction of humans with their environments in realistic settings.
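As a minimal illustration of how tracked visitor positions might drive personalized annotation of the display, consider the sketch below. The zone layout, labels, and coordinates are hypothetical, introduced here only for illustration; the paper's actual annotation logic is not specified at this level of detail.

```python
# Hypothetical sketch: map each tracked visitor's floor position to the
# display region (annotation zone) they are standing in front of.

from dataclasses import dataclass

@dataclass
class Visitor:
    vid: int   # tracker-assigned visitor id
    x: float   # position along the display wall, in metres (illustrative)

# Illustrative zone layout: (start_x, end_x, annotation label)
ZONES = [
    (0.0, 2.0, "scene-left"),
    (2.0, 4.0, "scene-centre"),
    (4.0, 6.0, "scene-right"),
]

def annotation_for(visitor):
    """Return the label of the zone the visitor stands in, or None."""
    for x0, x1, label in ZONES:
        if x0 <= visitor.x < x1:
            return label
    return None

def annotate_all(visitors):
    """Personalized annotation: one zone label per tracked visitor."""
    return {v.vid: annotation_for(v) for v in visitors}
```

In an actual deployment the zone lookup would be driven by the volumetric tracker's output at frame rate, and could additionally consult each visitor's stored trajectory.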
This paper presents "Macedonia from fragments to pixels", an exhibition of prototypical interactive systems with subjects drawn from ancient Macedonia. Since 2010, the exhibition has been hosted by the Archaeological Museum of Thessaloniki and is open daily to the general public. To date, more than 165,000 people have visited it. The exhibition comprises seven interactive systems based on research outcomes of the Ambient Intelligence Programme of the Institute of Computer Science, Foundation for Research and Technology - Hellas. The digital content of these systems includes objects from the Museum’s permanent collection and from Macedonia.
A visual user interface is proposed that provides augmented, multitouch interaction upon a non-instrumented disk that can dynamically rotate about two axes. While the user manipulates the disk, the system uses a projector to render a display upon it. A depth camera is used to estimate the pose of the surface and to detect multiple simultaneous fingertip contacts upon it. The estimates are transformed into meaningful user input, exploiting both fingertip contact and disk pose information. Calibration and real-time implementation issues are studied and evaluated through extensive experimentation. We show that the outcome meets the accuracy and usability requirements for employing the approach in human-computer interaction.
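A core step in this kind of interface is deciding, from depth-camera data, whether a candidate fingertip is actually touching the disk surface. A minimal sketch of that test follows, assuming the disk plane has already been estimated as n·p = d with unit normal n; the distance threshold is illustrative and not taken from the paper.

```python
# Hypothetical sketch: classify candidate fingertip points (3D, from a
# depth camera) as in contact with a planar disk, given the estimated
# plane n . p = d with |n| = 1. The threshold is illustrative.

def point_plane_distance(p, n, d):
    """Signed distance of 3D point p from the plane n . p = d (|n| = 1)."""
    return sum(pi * ni for pi, ni in zip(p, n)) - d

def contacts(points, n, d, thresh=0.01):
    """Return the candidate points lying within `thresh` metres of the plane."""
    return [p for p in points if abs(point_plane_distance(p, n, d)) < thresh]
```

A real implementation would re-estimate (n, d) every frame as the disk rotates, and would combine this contact test with fingertip localization in the image.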
This paper presents an advergame installation for promoting the brand and products of a food company producing various types of traditional Cretan rusks. The paper first presents some background and related work. Then the requirements set towards creating the game are outlined, followed by concept creation and design decisions taken in order to meet these requirements, as well as a description of the user interface, gameplay and implementation characteristics of the resulting game. The game has already been installed with remarkable success in two different food exhibitions.
This paper presents an innovative advergame installation for promoting the brand and products of a company producing Cretan rusks. The paper first presents some background and related work. Then, the requirements set towards creating the game are outlined, followed by concept creation and design decisions taken to meet these requirements, as well as a description of the user interface, gameplay and technical characteristics of the resulting game. The game has been installed with remarkable success in two different food exhibitions in key locations in Athens, Greece, where it has been played by more than 500 people of ages ranging from 2 to 76 years old. A large variety of qualitative and quantitative data were collected. The paper presents several findings stemming from these data. Additionally, changes made to the game as a result of the findings are presented, along with lessons learnt from the acquired experience.
This paper presents the application of the PaperView system in the domain of cartographic heritage. PaperView is a multi-user augmented-reality system for supplementing physical surfaces with digital information, through the use of pieces of plain paper that act as personal, location-aware, interactive screens. By applying the proposed method of reality augmentation in the cartographic heritage domain, the system provides the capability of retrieving multimedia information about areas of interest, overlaying information on a 2D or 3D (i.e., scale model) map, as well as comparing different versions of a single map. The technologies employed are presented, along with the interactive behavior of the system, which was instantiated and tested in three setups: (i) a map of Macedonia, Greece, including ancient Greek cities of archaeological interest; (ii) a glass case containing a scale model; and (iii) a part of Rigas Velestinlis’ Charta. The first two systems are currently installed and available to the general public at the Archaeological Museum of Thessaloniki, Greece, as part of a permanent exhibition of interactive systems.
This paper describes the outcomes stemming from the work of a multidisciplinary R&D project of ICS-FORTH, aiming to explore and experiment with novel interactive museum exhibits, and to assess their utility, usability and potential impact. More specifically, the paper presents four interactive systems that have been integrated, tested and evaluated in a dedicated, appropriately designed laboratory space. The paper also discusses key issues stemming from experience and observations in the course of qualitative evaluation sessions with a large number of participants.
A frequent need of museums is to provide visitors with context-sensitive information about exhibits in the form of maps or scale models. This paper suggests an augmented-reality approach for supplementing physical surfaces with digital information, through the use of pieces of plain paper that act as personal, location-aware, interactive screens. The technologies employed are presented, along with the interactive behavior of the system, which was instantiated and tested in the form of two prototype setups: a wooden table covered with a printed map and a glass case containing a scale model. The paper also discusses key issues stemming from experience and observations in the course of qualitative evaluation sessions.
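To illustrate the "paper as a location-aware screen" idea, the sketch below selects content from the point of interest nearest to a detected sheet of paper on the map. All names, coordinates, and the distance threshold are hypothetical, chosen only to make the example concrete; the actual PaperView detection and content-selection pipeline is more involved.

```python
# Hypothetical sketch: a detected sheet of paper acts as a location-aware
# screen; content is chosen from the point of interest (POI) lying closest
# to the sheet's centre in map coordinates.

import math

# Illustrative POIs in (hypothetical) map coordinates.
POIS = {"Pella": (1.0, 2.0), "Vergina": (3.0, 1.0)}

def paper_centre(corners):
    """Centroid of the detected paper quad (list of (x, y) corners)."""
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def content_under(corners, pois=POIS, max_dist=1.0):
    """Name of the POI closest to the paper, if within max_dist; else None."""
    cx, cy = paper_centre(corners)
    name, (px, py) = min(
        pois.items(),
        key=lambda kv: math.hypot(kv[1][0] - cx, kv[1][1] - cy),
    )
    return name if math.hypot(px - cx, py - cy) <= max_dist else None
```

In the deployed setups, the paper's position and extent over the map surface would be detected by a camera each frame, and the associated multimedia content projected onto the sheet.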
This paper presents a system that supports the exploration of digital representations of large-scale museum artifacts through non-instrumented, location-based interaction. The system employs a state-of-the-art computer vision component, which localizes and tracks multiple visitors. The artifact is presented on a wall-sized projection screen and is visually annotated with text and images according to the locations and walk-through trajectories of the tracked visitors. The system is evaluated in terms of computational performance, localization accuracy, tracking robustness and usability.
In this paper, the design and implementation of a hardware/software platform for parallel and distributed multiview vision processing is presented. The platform targets the monitoring of human presence in indoor environments. Its architecture aims at increased throughput through process pipelining, as well as at reduced communication costs and hardware requirements. Using this platform, we present efficient implementations of basic visual processes such as person tracking, textured visual hull computation and head pose estimation. With the proposed platform, multiview visual operations can be combined and third-party ones integrated, ultimately facilitating the development of interactive applications that employ visual input. Computational performance is benchmarked against the state of the art, and the efficacy of the approach is qualitatively assessed in the context of already developed applications related to interactive environments.
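The throughput benefit of process pipelining mentioned above can be sketched with a small stage-chain: while one stage processes frame t, the next stage can already process frame t-1. The stage functions below are placeholders standing in for per-view processing and fusion; this is an illustrative sketch, not the platform's actual architecture.

```python
# Hypothetical sketch of stage pipelining for multiview processing:
# frames flow through a chain of stages connected by queues, each stage
# running in its own thread so stages overlap in time.

from queue import Queue
from threading import Thread

def _stage(fn, q_in, q_out):
    """Apply fn to items from q_in until a None sentinel, forwarding results."""
    while True:
        item = q_in.get()
        if item is None:
            q_out.put(None)  # propagate shutdown sentinel downstream
            break
        q_out.put(fn(item))

def run_pipeline(frames, stage_fns):
    """Push frames through a chain of pipelined stages; collect the outputs."""
    queues = [Queue() for _ in range(len(stage_fns) + 1)]
    threads = [
        Thread(target=_stage, args=(fn, queues[i], queues[i + 1]))
        for i, fn in enumerate(stage_fns)
    ]
    for t in threads:
        t.start()
    for f in frames:
        queues[0].put(f)
    queues[0].put(None)  # end-of-stream sentinel
    out = []
    while (item := queues[-1].get()) is not None:
        out.append(item)
    for t in threads:
        t.join()
    return out
```

With k stages of comparable cost, steady-state throughput approaches one frame per stage-time rather than one frame per k stage-times, which is the effect the platform's pipelined architecture exploits.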