This paper describes the outcomes of a multidisciplinary R&D project at ICS-FORTH that aimed to explore and experiment with novel interactive museum exhibits and to assess their utility, usability, and potential impact. More specifically, four interactive systems are presented, which have been integrated, tested, and evaluated in a dedicated, appropriately designed laboratory space. The paper also discusses key issues stemming from experience and observations gathered in the course of qualitative evaluation sessions with a large number of participants.
This paper introduces a suite of window managers designed for the technologically enhanced classroom. The overall objective is to establish a common look and feel across the various classroom artifacts, thus providing a unified working environment for students and teachers. To achieve optimal interaction and application display, the workspace for each artifact is designed with both the platform's characteristics and the users' requirements in mind. The usability evaluation of the developed system is also reported.
A method is proposed that visually estimates the 3D pose and endpoints of a thin cylindrical physical object, such as a wand, a baton, or a stylus, manipulated by a user. The method utilizes multiple synchronous images of the object to cover wide spatial ranges, increase accuracy, and handle occlusions. Experiments demonstrate that the method runs in real time on modest, conventional hardware and that its output is suitable for human-computer interaction purposes.
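The core geometric step behind such multi-view estimation can be illustrated with a small sketch: once each calibrated camera reduces a 2D endpoint observation to a 3D viewing ray, the endpoint is recovered as the midpoint of the shortest segment between the rays from two views. This is a generic illustration of the principle, not the paper's actual implementation; the function names and the two-camera setup are assumptions for the example.

```python
import math

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint triangulation: return the midpoint of the shortest
    segment between two 3D rays, each given as (origin, direction)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    w0 = tuple(p - q for p, q in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:                  # rays (nearly) parallel
        raise ValueError("rays are parallel")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(o + t1 * di for o, di in zip(o1, d1))
    p2 = tuple(o + t2 * di for o, di in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

def wand_pose(tip_rays, tail_rays):
    """Triangulate both endpoints of a wand from two views each and
    return (tip, tail, length)."""
    tip = closest_point_between_rays(*tip_rays[0], *tip_rays[1])
    tail = closest_point_between_rays(*tail_rays[0], *tail_rays[1])
    return tip, tail, math.dist(tip, tail)
```

With more than two cameras, the same idea generalizes to a least-squares intersection of all rays, which also provides redundancy against occlusions.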
A frequent need of museums is to provide visitors with context-sensitive information about exhibits in the form of maps or scale models. This paper proposes an augmented-reality approach for supplementing such physical surfaces with digital information, using pieces of plain paper that act as personal, location-aware, interactive screens. The technologies employed are presented, along with the interactive behavior of the system, which was instantiated and tested in two prototype setups: a wooden table covered with a printed map and a glass case containing a scale model. The paper also discusses key issues stemming from experience and observations gathered during qualitative evaluation sessions.
In this paper we describe a novel methodology for performing real-time analysis of localization data streams produced by sensors embedded in ambient intelligence (AmI) environments. The methodology aims to handle different types of real-time events, detect interesting behavior in sequences of such events, and calculate statistical information using a scalable stream-processing engine (SPE) that executes continuous queries expressed in a stream-oriented query language. Key contributions of our approach are the integration of the Borealis SPE into a large-scale interactive museum exhibit system that tracks visitor positions through a number of cameras; the extension and customization of Borealis to support the types of real-time analysis useful in the context of the museum exhibit as well as in other AmI applications; and the integration with a visualization component responsible for rendering events received by the SPE in a variety of human-readable forms.
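The paper's analysis runs as continuous queries inside the Borealis SPE; purely as an illustration of the kind of streaming operator involved, the following plain-Python sketch detects "dwell" behavior (a visitor lingering near one spot) in a time-ordered stream of localization events. The event layout, thresholds, and function name are assumptions for the example, not the system's actual query language.

```python
def detect_dwells(events, radius=0.5, min_duration=5.0):
    """Scan a time-ordered stream of (t, visitor_id, x, y) events and
    emit (visitor_id, t) whenever a visitor has stayed within `radius`
    metres of an anchor position for at least `min_duration` seconds."""
    anchor = {}      # visitor_id -> (t_start, x, y) of current stay
    firing = set()   # visitors whose current dwell was already reported
    dwells = []
    for t, vid, x, y in events:
        a = anchor.get(vid)
        if a is None or (x - a[1]) ** 2 + (y - a[2]) ** 2 > radius ** 2:
            anchor[vid] = (t, x, y)       # moved: restart the dwell window
            firing.discard(vid)
        elif t - a[0] >= min_duration and vid not in firing:
            dwells.append((vid, t))       # report each dwell only once
            firing.add(vid)
    return dwells
```

In an SPE such as Borealis, the equivalent logic would be expressed declaratively over windows of the stream rather than as an imperative loop, letting the engine handle scheduling and load.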
Taking into account the potential of ICT in education and recognizing the need for smart environments and artifacts, this paper presents Study-Buddy, a context-aware system aiming to augment the learning process. The system consists of an intelligent reading lamp that monitors students' interaction with reading material and provides appropriate information through any nearby computational device (e.g., tablet, notebook). Study-Buddy is accompanied by LexiMedia, an educational application targeted at language learning.
In this paper, the design and implementation of a hardware/software platform for parallel and distributed multiview vision processing is presented. The platform focuses on supporting the monitoring of human presence in indoor environments. Its architecture targets increased throughput through process pipelining, as well as reduced communication costs and hardware requirements. Using this platform, we present efficient implementations of basic visual processes such as person tracking, textured visual hull computation, and head pose estimation. With the proposed platform, multiview visual operations can be combined, and third-party operations integrated, ultimately facilitating the development of interactive applications that employ visual input. Computational performance is benchmarked against the state of the art, and the efficacy of the approach is qualitatively assessed in the context of already developed applications related to interactive environments.
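Process pipelining, as invoked above, means that stage i works on frame k while stage i+1 still processes frame k-1, so throughput is bounded by the slowest stage rather than the sum of all stages. As a minimal sketch of the idea only (the actual platform is a distributed hardware/software system, not this toy), here is a thread-per-stage pipeline connected by FIFO queues:

```python
import queue
import threading

def run_pipeline(frames, stages):
    """Push `frames` through a chain of stage functions, each stage
    running in its own thread, connected by FIFO queues."""
    queues = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(fn, inq, outq):
        while True:
            item = inq.get()
            if item is None:          # sentinel: propagate and stop
                outq.put(None)
                return
            outq.put(fn(item))

    threads = [threading.Thread(target=worker,
                                args=(fn, queues[i], queues[i + 1]))
               for i, fn in enumerate(stages)]
    for th in threads:
        th.start()
    for frame in frames:              # feed the first stage
        queues[0].put(frame)
    queues[0].put(None)
    results = []
    while (item := queues[-1].get()) is not None:
        results.append(item)          # drain the last stage in order
    for th in threads:
        th.join()
    return results
```

Because each stage has a single worker and queues are FIFO, frame order is preserved end to end; a real multiview platform would additionally synchronize streams from multiple cameras per stage.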
This paper discusses the opportunities and challenges of Ambient Intelligence (AmI) technologies in the context of classroom education, and presents the methodology and preliminary results of the development of an augmented school desk that integrates various AmI educational applications. The overall objective is to assess how AmI technologies can support common learning activities and enhance the learner's experience in the classroom. Young learners were involved from the earliest design phases of the desk and its applications, using scenario-based techniques.
This paper presents a system that supports the exploration of digital representations of large-scale museum artifacts through non-instrumented, location-based interaction. The system employs a state-of-the-art computer vision component, which localizes and tracks multiple visitors. The artifact is presented on a wall-sized projection screen and is visually annotated with text and images according to the locations and walkthrough trajectories of the tracked visitors. The system is evaluated in terms of computational performance, localization accuracy, tracking robustness, and usability.
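The mapping from a tracked visitor position to an on-screen annotation can be sketched as a simple nearest-hotspot lookup; this is an illustrative reduction of the idea, and the hotspot names, coordinates, and distance threshold below are invented for the example.

```python
import math

def active_annotation(x, y, hotspots, max_dist=1.0):
    """Return the name of the annotation hotspot nearest to the tracked
    visitor position (x, y), or None if no hotspot lies within
    `max_dist` metres."""
    if not hotspots:
        return None
    name, pos = min(hotspots.items(),
                    key=lambda kv: math.dist((x, y), kv[1]))
    return name if math.dist((x, y), pos) <= max_dist else None
```

In a live exhibit, this lookup would run per tracked visitor on every localization update, and the projection would fade annotations in and out as visitors enter and leave hotspot regions.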