PaperView: augmenting physical surfaces with location-aware digital information

Grammenos, D., Michel, D., Zabulis, X., and Argyros, A. A. (2011) PaperView: augmenting physical surfaces with location-aware digital information. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '11). ACM, New York, NY, USA, 57-60.

Abstract

A frequent need of museums is to provide visitors with context-sensitive information about exhibits in the form of maps or scale models. This paper suggests an augmented-reality approach for supplementing physical surfaces with digital information, through the use of pieces of plain paper that act as personal, location-aware, interactive screens. The technologies employed are presented, along with the interactive behavior of the system, which was instantiated and tested in the form of two prototype setups: a wooden table covered with a printed map and a glass case containing a scale model. The paper also discusses key issues stemming from experience and observations in the course of qualitative evaluation sessions.
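To make the idea of paper sheets as location-aware screens more concrete, the sketch below shows one plausible way to locate a plain, bright sheet of paper over a printed surface from a camera image using OpenCV. The thresholds, camera setup, and function names are illustrative assumptions, not the pipeline described in the PaperView paper.

# Illustrative sketch only: locating a plain sheet of paper on a tabletop
# camera image with OpenCV. Thresholds and camera setup are assumptions,
# not the configuration used in the PaperView system.
import cv2
import numpy as np

def find_paper_quad(frame_bgr, min_area=5000):
    """Return the 4 corners of the largest bright quadrilateral, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # A bright, low-texture sheet stands out against a printed map or model.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and (best is None or
                                 cv2.contourArea(approx) > cv2.contourArea(best)):
            best = approx
    return None if best is None else best.reshape(4, 2)

The detected quad's position over the surface could then index location-dependent content, for example by warping a rendered image onto the paper with a homography.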

A platform for monitoring aspects of human presence in real-time

Zabulis, X., Sarmis, T., Tzevanidis, K., Koutlemanis, P., Grammenos, D., and Argyros, A. A. (2010) A platform for monitoring aspects of human presence in real-time. In International Symposium on Visual Computing, Las Vegas, Nevada, USA, November 29 - December 1, 2010.

Abstract

In this paper, the design and implementation of a hardware/software platform for parallel and distributed multiview vision processing is presented. The platform focuses on supporting the monitoring of human presence in indoor environments. Its architecture aims at increased throughput through process pipelining, as well as at reduced communication costs and hardware requirements. Using this platform, we present efficient implementations of basic visual processes such as person tracking, textured visual hull computation and head pose estimation. Using the proposed platform, multiview visual operations can be combined and third-party ones integrated, to ultimately facilitate the development of interactive applications that employ visual input. Computational performance is benchmarked against the state of the art, and the efficacy of the approach is qualitatively assessed in the context of already developed applications related to interactive environments.
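For readers unfamiliar with visual hull computation from multiple views, the following sketch shows a brute-force voxel-carving version from binary silhouettes and 3x4 projection matrices. The platform in the paper uses a parallel, pipelined implementation; the names, parameters, and single-threaded structure here are illustrative assumptions only.

# Illustrative sketch only: brute-force voxel-carving visual hull from binary
# silhouettes and 3x4 camera projection matrices (not the paper's pipelined,
# distributed implementation).
import numpy as np

def visual_hull(silhouettes, projections, voxel_centers, min_views=None):
    """silhouettes: list of HxW boolean masks; projections: list of 3x4 arrays;
    voxel_centers: (N, 3) array. Returns a boolean occupancy flag per voxel."""
    if min_views is None:
        min_views = len(silhouettes)      # require the voxel inside all silhouettes
    votes = np.zeros(len(voxel_centers), dtype=int)
    homogeneous = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for mask, P in zip(silhouettes, projections):
        h, w = mask.shape
        proj = homogeneous @ P.T          # (N, 3) homogeneous image coordinates
        u = proj[:, 0] / proj[:, 2]
        v = proj[:, 1] / proj[:, 2]
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.where(inside)[0]
        hits = mask[v[idx].astype(int), u[idx].astype(int)]
        votes[idx[hits]] += 1
    return votes >= min_views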

Exploration of large-scale museum artifacts through non-instrumented, location-based, multi-user interaction

Zabulis, X., Grammenos, D., Sarmis, T., Tzevanidis, K., and Argyros, A. A. (2010) Exploration of large-scale museum artifacts through non-instrumented, location-based, multi-user interaction. In Proceedings of the 11th VAST International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2010), Palais du Louvre, Paris, France, 21-24 September 2010, 155-162.

Abstract

This paper presents a system that supports the exploration of digital representations of large-scale museum artifacts through non-instrumented, location-based interaction. The system employs a state-of-the-art computer vision system, which localizes and tracks multiple visitors. The artifact is presented on a wall-sized projection screen and is visually annotated with text and images according to the locations and walkthrough trajectories of the tracked visitors. The system is evaluated in terms of computational performance, localization accuracy, tracking robustness and usability.
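The sketch below illustrates the general idea of location-based annotation: tracked floor positions are mapped to content zones in front of the projection. The zone layout, names, and coordinate conventions are hypothetical; the paper's system derives annotations from its own tracker and exhibit design.

# Illustrative sketch only: mapping tracked visitor floor positions to
# annotation zones in front of a wall-sized projection. Zone names and
# bounds are hypothetical.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str        # annotation shown when a visitor stands in this zone
    x_min: float     # zone bounds in floor coordinates (meters)
    x_max: float
    y_min: float
    y_max: float

ZONES = [
    Zone("west_facade", 0.0, 2.0, 0.0, 3.0),
    Zone("main_gate",   2.0, 4.0, 0.0, 3.0),
    Zone("east_tower",  4.0, 6.0, 0.0, 3.0),
]

def annotations_for(visitor_positions):
    """visitor_positions: iterable of (x, y) floor coordinates from the tracker.
    Returns the set of zone names whose annotations should be displayed."""
    active = set()
    for x, y in visitor_positions:
        for z in ZONES:
            if z.x_min <= x < z.x_max and z.y_min <= y < z.y_max:
                active.add(z.name)
    return active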

Building a multi-touch display based on computer vision techniques

Michel, D., Argyros, A. A., Grammenos, D., Zabulis, X., and Sarmis, T. (2009) Building a multi-touch display based on computer vision techniques. In Proceedings of the IAPR Conference on Machine Vision and Applications (MVA '09), Hiyoshi Campus, Keio University, Japan, 74-77.

Abstract

We present the development of a multi-touch display based on computer vision techniques. The developed system is built upon low-cost, off-the-shelf hardware components and a careful selection of computer vision techniques. The resulting system is capable of detecting and tracking several objects that may move freely on the surface of a wide projection screen. It also provides additional information regarding the detected and tracked objects, such as their orientation and their full contour. All of the above are achieved robustly, in real time, and regardless of the visual appearance of whatever may be independently projected on the screen. We also present indicative results from the exploitation of the developed system in three application scenarios and discuss directions for further research.
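As a rough illustration of vision-based touch detection of the kind the abstract describes, the sketch below extracts blob centroids, orientations, and contours by differencing a camera frame against a background image. The background-subtraction choice, thresholds, and the assumption of an IR camera are illustrative only and are not taken from the paper's pipeline.

# Illustrative sketch only: extracting touch blobs, contours and orientations
# from a camera view of the surface with OpenCV. Thresholds and the use of
# background differencing (e.g. with IR illumination) are assumptions.
import cv2

def detect_blobs(frame_gray, background_gray, diff_threshold=30, min_area=80):
    """Return a list of (centroid, orientation_deg, contour) for touching objects."""
    # Differencing against a background model keeps detection independent of
    # whatever visible content is being projected on the screen.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        (cx, cy), (w, h), angle = cv2.minAreaRect(c)   # oriented bounding box
        blobs.append(((cx, cy), angle, c))
    return blobs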
