In this paper, a steerable, interactive projection display shaped as a disk is presented. Interactivity is provided through sensitivity to the contact of multiple fingertips and is achieved through the use of an RGBD camera. The surface is mounted on two gimbals which provide two rotational degrees of freedom. Modulating the surface posture improves the ergonomics of the device, but can alternatively be used as a means of user-interface input. The geometry for mapping visual content and localizing fingertip contacts upon this steerable display is provided, along with pertinent calibration methods for the proposed system. An accurate technique for touch detection is proposed, and touch detection and projection accuracy are studied and evaluated through extensive experimentation. Most importantly, the usability of the system is thoroughly evaluated through a pilot application developed for this purpose. We show that the outcome meets the real-time performance, accuracy and usability requirements for employing the approach in human-computer interaction.
This paper reports on the results of a user-based evaluation that was conducted on a 3D virtual environment that supports diverse interaction techniques. More specifically, the interaction techniques that were evaluated were touch, gestures (hands and legs) and the use of a smart object. The goal of the experiment was to assess the effectiveness of each interaction mode as a means for the user to complete common tasks within the application. A comparison is attempted in order to provide insight into the suitability of each technique and direct future research in the area.
A visual user interface providing augmented, multitouch interaction upon a non-instrumented disk that can dynamically rotate about two axes is proposed. While the user manipulates the disk, the system uses a projector to visualize a display upon it. A depth camera is used to estimate the pose of the surface and multiple simultaneous fingertip contacts upon it. The estimates are transformed into meaningful user input, making both fingertip contact and disk pose information available. Calibration and real-time implementation issues are studied and evaluated through extensive experimentation. We show that the outcome meets accuracy and usability requirements for employing the approach in human-computer interaction.
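The depth-based touch estimation described above can be sketched in a few lines. The approach below is a minimal illustration, not the paper's actual method: it fits a plane to the depth points sampled on the disk surface and then classifies as touch candidates those points lying within a narrow band above that plane (the band bounds are illustrative values).

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns unit normal n and offset d with n.p + d = 0."""
    centroid = points.mean(axis=0)
    # The smallest singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n.dot(centroid)

def touch_candidates(points, n, d, low=0.005, high=0.025):
    """Points within a narrow band above the surface are touch candidates.
    The band (5-25 mm here, an assumed range) rejects both depth-sensor noise
    on the surface itself and hands hovering well above it."""
    dist = np.abs(points @ n + d)
    return points[(dist > low) & (dist < high)]
```

A fingertip resting on the surface produces depth points just above the fitted plane, so it falls inside the band, while the rest of the hand is filtered out by the upper bound.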
The theme of this paper is an exhibition of prototypical interactive systems with subjects drawn from ancient Macedonia, named "Macedonia from fragments to pixels". Since 2010, the exhibition has been hosted by the Archaeological Museum of Thessaloniki and is open daily to the general public. Up to now, more than 165,000 people have visited it. The exhibition comprises 7 interactive systems based on research outcomes of the Ambient Intelligence Programme of the Institute of Computer Science, Foundation for Research and Technology - Hellas. The digital content of these systems includes objects from the Museum's permanent collection and from Macedonia.
This paper presents a computer vision system that supports non-instrumented, location-based interaction of multiple users with digital representations of large-scale artifacts. The proposed system is based on a camera network that observes multiple humans in front of a very large display. The acquired views are used to volumetrically reconstruct and track the humans robustly and in real time, even in crowded scenes and challenging human configurations. Given the frequent and accurate monitoring of humans in space and time, a dynamic and personalized textual/graphical annotation of the display can be achieved based on the location and the walk-through trajectory of each visitor. The proposed system has been successfully deployed in an archaeological museum, offering its visitors the capability to interact with and explore a digital representation of an ancient wall painting. This installation permits an extensive evaluation of the proposed system in terms of tracking robustness, computational performance and usability. Furthermore, it proves that computer vision technology can be effectively used to support non-instrumented interaction of humans with their environments in realistic settings.
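The location-based annotation described above can be illustrated with a small sketch. The code below is an assumed simplification of that behavior: each annotation is tied to a horizontal region of the wall display, and the annotations shown are those whose region overlaps the span of wall directly in front of a tracked visitor (the field-of-view width is an illustrative parameter).

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    label: str
    x_min: float  # horizontal extent (metres) of the annotated region on the wall
    x_max: float

def annotations_for_visitor(visitor_x, annotations, span=1.5):
    """Return annotations whose wall region overlaps the `span`-wide stretch
    of wall centred on the tracked visitor's horizontal position."""
    lo, hi = visitor_x - span / 2, visitor_x + span / 2
    return [a for a in annotations if a.x_min < hi and a.x_max > lo]
```

As a visitor walks along the display, re-evaluating this query with the tracker's latest position yields the dynamic, per-visitor annotation behavior the abstract describes.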
This paper presents the application of the PaperView system in the domain of cartographic heritage. PaperView is a multi-user augmented-reality system for supplementing physical surfaces with digital information, through the use of pieces of plain paper that act as personal, location-aware, interactive screens. By applying the proposed method of reality augmentation in the cartographic heritage domain, the system provides the capability of retrieving multimedia information about areas of interest, overlaying information on a 2D or 3D (i.e., scale model) map, as well as comparing different versions of a single map. The technologies employed are presented, along with the interactive behavior of the system, which was instantiated and tested in three setups: (i) a map of Macedonia, Greece, including ancient Greek cities with archaeological interest; (ii) a glass case containing a scale model and (iii) a part of Rigas Velestinlis' Charta. The first two systems are currently installed and available to the general public at the Archaeological Museum of Thessaloniki, Greece, as part of a permanent exhibition of interactive systems.
This paper describes the outcomes of a multidisciplinary R&D project of ICS-FORTH, aiming to explore and experiment with novel interactive museum exhibits, and to assess their utility, usability and potential impact. More specifically, four interactive systems are presented in this paper which have been integrated, tested and evaluated in a dedicated, appropriately designed, laboratory space. The paper also discusses key issues stemming from experience and observations in the course of qualitative evaluation sessions with a large number of participants.
A frequent need of museums is to provide visitors with context-sensitive information about exhibits in the form of maps, or scale models. This paper suggests an augmented-reality approach for supplementing physical surfaces with digital information, through the use of pieces of plain paper that act as personal, location-aware, interactive screens. The technologies employed are presented, along with the interactive behavior of the system, which was instantiated and tested in the form of two prototype setups: a wooden table covered with a printed map and a glass case containing a scale model. The paper also discusses key issues stemming from experience and observations in the course of qualitative evaluation sessions.
In this paper, the design and implementation of a hardware/software platform for parallel and distributed multiview vision processing is presented. The platform is geared towards monitoring human presence in indoor environments. Its architecture is focused on increasing throughput through process pipelining, as well as on reducing communication costs and hardware requirements. Using this platform, we present efficient implementations of basic visual processes such as person tracking, textured visual hull computation and head pose estimation. Multiview visual operations can be combined and third-party ones integrated, to ultimately facilitate the development of interactive applications that employ visual input. Computational performance is benchmarked against the state of the art, and the efficacy of the approach is qualitatively assessed in the context of already developed applications related to interactive environments.
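The process-pipelining idea behind the platform's architecture can be sketched with a toy example. The code below is an assumed illustration, not the platform's implementation: each stage runs concurrently and passes results downstream through a queue, so a new frame can enter the pipeline while earlier frames are still in later stages. The stage functions (`segment`, `reconstruct`, `track`) are hypothetical stand-ins for the real visual processes.

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """Run one pipeline stage on its own thread: pull an item, process it,
    push the result downstream. A None item is the shutdown sentinel."""
    def loop():
        while True:
            item = inbox.get()
            if item is None:
                outbox.put(None)
                return
            outbox.put(fn(item))
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t

# Hypothetical stage functions standing in for the real visual processes.
def segment(frame):  return ('fg', frame)
def reconstruct(fg): return ('hull', fg)
def track(hull):     return ('track', hull)

def run_pipeline(frames):
    q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
    threads = [stage(segment, q0, q1),
               stage(reconstruct, q1, q2),
               stage(track, q2, q3)]
    for f in frames:
        q0.put(f)
    q0.put(None)
    results = []
    while (r := q3.get()) is not None:
        results.append(r)
    for t in threads:
        t.join()
    return results
```

In a real deployment the stages would be distributed across processes or hosts, which is where the platform's focus on reducing communication cost comes in; the queue-per-stage structure, however, captures why pipelining raises throughput without shortening per-frame latency.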
This paper presents a system that supports the exploration of digital representations of large-scale museum artifacts through non-instrumented, location-based interaction. The system employs a state-of-the-art computer vision component, which localizes and tracks multiple visitors. The artifact is presented on a wall-sized projection screen and is visually annotated with text and images according to the locations and walk-through trajectories of the tracked visitors. The system is evaluated in terms of computational performance, localization accuracy, tracking robustness and usability.