Improved Model-Driven Engineering of User-Interfaces with Generative Macros

Savidis, A., Valsamakis, Y., & Lilis, Y. (2014). Improved Model-Driven Engineering of User-Interfaces with Generative Macros. In C. Stephanidis & M. Antona (Eds.), Universal Access in Human-Computer Interaction. Design and Development Methods for Universal Access - Volume 4 of the combined Proceedings of the 16th International Conference on Human-Computer Interaction (HCI International 2014), Crete, Greece, 22-27 June (pp. 137-148). Berlin Heidelberg: Lecture Notes in Computer Science Series of Springer (LNCS 8513, ISBN: 978-3-319-07436-8).

Abstract

Model-driven engineering entails various modeling, abstraction and specialization levels for user-interface development. We focus on model-driven tools generating user-interface code, either entire or partial, providing a tangible basis for programmers to introduce custom refinements and extensions. The latter introduces two maintenance issues: (i) once the generated code is modified, the source-to-model extraction path, if supported, is broken; and (ii) if the model is updated, code regeneration overwrites custom changes. To address these issues we propose an alternative path: (i) instead of directly generating code, the model-driven tool generates source fragments in the form of abstract syntax trees (ASTs) as XML files; (ii) the application deploys compile-time metaprogramming to manipulate, generate and insert code on-demand from such ASTs, using calls similar to macro invocations. The latter leads to improved separation of concerns: (a) the application programmer controls when and where interface source is generated and integrated in the application source; and (b) interface regeneration overwrites no source code, as it only produces ASTs that are manipulated (input) via generator macros.

Jigsaw together: a distributed collaborative game for players with diverse skills and preferences

Grammenos, D., & Chatziantoniou, A. (2014). Jigsaw together: a distributed collaborative game for players with diverse skills and preferences. In Proceedings of the 2014 Conference on Interaction Design and Children (IDC '14) (pp. 205-208).

Abstract

Presently it is very hard (or even impossible) for multiple players with highly diverse characteristics (including age, skills, and preferences) to collaboratively share and play a single jigsaw puzzle. Towards this end, the work presented in this paper aims to expand the capabilities of digital jigsaw puzzles in three directions: (a) multiplayability by a large number of players; (b) accessibility by people with hand-motor and visual impairments; and (c) concurrent playability by people with highly diverse characteristics. In this context, we present an electronic puzzle game which supports single-player as well as distributed multiplayer sessions by people with diverse characteristics. The paper introduces the background on which the work is based and describes the key design features of the resulting game's user interface and gameplay.

Public Systems Supporting Noninstrumented Body-Based Interaction

Grammenos, D., Drossis, G., & Zabulis, X. (2014). Public Systems Supporting Noninstrumented Body-Based Interaction. In A. Nijholt (Ed.), Playful User Interfaces (Interfaces that Invite Social and Physical Interaction: Gaming Media and Social Effects series) (pp. 25-45). Singapore: Springer.

Abstract

Body-based interaction constitutes a very intuitive way for humans to communicate with their environment but also among themselves. Nowadays, various technological solutions allow for fast and robust, noninstrumented body tracking at various levels of granularity and sophistication. This chapter studies three distinct cases showcasing different representative approaches of employing body-based interaction for the creation of public systems, in two application domains: culture and marketing. The first case is a room-sized exhibit at an archaeological museum, where multiple visitors concurrently interact with a large wall projection through their position in space, as well as through the path they follow. The second example is an ‘‘advergame’’ used as a means of enhancing the outdoor advertising campaign of a food company. In this case, players interact with the wall-projected game world through a virtual, two-dimensional shadow of their body. Finally, the third case presents a public system for exploring timelines in both two and three dimensions that supports detailed body tracking in combination with single-hand, two-hands, and leg gestures. Design considerations are provided for each case, including related benefits and shortcomings. Additionally, findings stemming from user-based evaluations and field observations on the actual use of these systems are presented, along with pointers to potential improvements and upcoming challenges.

Recognition of Simple Head Gestures Based on Head Pose Estimation Analysis

Galanakis, G., Katsifarakis, P., Zabulis, X., & Adami, I. (2014). Recognition of Simple Head Gestures Based on Head Pose Estimation Analysis. In M. Weyn & I. Evgeniev (Eds.), Proceedings of The Fourth International Conference on Ambient Computing, Applications, Services and Technologies (AMBIENT 2014), Rome, Italy, 24-28 August 2014 (pp. 88-96).

Abstract

A recognition method for simple gestures is proposed and evaluated. Such gestures are of interest as they are the primitive elements of more complex gestures utilized in natural communication and human-computer interaction. The input to the recognition method is obtained from a head tracker that is based on images acquired from a depth camera. Candidate gestures are detected within continuous head motion and recognized, acknowledging that head pose estimates might be inaccurate. The proposed method is evaluated within the context of human-computer dialog. The reported results show that the proposed approach yields recognition results competitive with state-of-the-art approaches.

Staged Model-Driven Generators – shifting responsibility for code emission to embedded metaprograms

Lilis, Y., Savidis, A., & Valsamakis, Y. (2014). Staged Model-Driven Generators – shifting responsibility for code emission to embedded metaprograms. In Proceedings of the 2nd International Conference on Model-Driven Engineering and Software Development (MODELSWARD 2014), Lisbon, Portugal, 7-9 January (pp. 509-521). Portugal: SCITEPRESS.

Abstract

We focus on MDE tools generating source code, entire or partial, providing a basis for programmers to introduce custom system refinements and extensions. The latter may introduce two maintenance issues once code is freely edited: (i) if source tags are affected, model reconstruction is broken; and (ii) code inserted without special tags is overwritten on regeneration. Additionally, little progress has been made in combining sources whose code originates from multiple generative tools. To address these issues we propose an alternative path. Instead of generating code, MDE tools generate source fragments as abstract syntax trees (ASTs). Then, programmers deploy metaprogramming to manipulate, combine and insert code on-demand from ASTs with calls resembling macro invocations. The latter shifts responsibility for source code emission from MDE tools to embedded metaprograms and enables programmers to control where the produced code is inserted and integrated. Moreover, it supports source regeneration and model reconstruction without causing maintenance issues, since MDE tools produce non-editable ASTs. We validate our proposition with case studies involving a user-interface builder and a general-purpose modeling tool.

Tracking persons using a network of RGBD cameras

Galanakis, G., Zabulis, X., Koutlemanis, P., Paparoulis, S., & Kouroumalis, V. (2014). Tracking persons using a network of RGBD cameras. In Proceedings of the 7th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA 2014), Rhodes, Greece, 27-30 May.

Abstract

A computer vision system that employs an RGBD camera network to track multiple humans is presented. The acquired views are used to volumetrically and photometrically reconstruct the humans robustly and in real time. Given this frequent and accurate monitoring of humans in space and time, their locations and walk-through trajectories can be robustly tracked in real time.

A Multimodal Ambient Intelligence Environment for Playful Learning

Papagiannakis, H., Antona, M., Ntoa, S., & Stephanidis, C. (2013). A Multimodal Ambient Intelligence Environment for Playful Learning. Journal of Universal Computer Science, special issue on “Towards Sustainable Computing through Ambient Intelligence”, 19(17), 2617-2636.

Abstract

This paper reports the design, development and evaluation of a technological framework for learning applications, named AmI Playfield, aimed at creating challenging learning conditions through play and entertainment. AmI Playfield is an educational Ambient Intelligence (AmI) environment which emphasizes the use of kinesthetic and collaborative technology in a natural, playful learning context and embodies performance measurement techniques. In order to test and assess AmI Playfield, the "Apple Hunt" application was developed, which engages (young) learners in arithmetic thinking through kinesthetic and collaborative play, observed by unobtrusive AmI technology behind the scenes. "Apple Hunt" has been evaluated according to a combination of methodologies suitable for young testers, whereas Children Committees are introduced as a promising approach to evaluation with children. The obtained results demonstrate the system's high potential to generate thinking and fun, deriving from the learners' full-body kinesthetic play and teamwork.

A Museum Guide Application for Deployment on User-Owned Mobile Devices

Kapnas, G., Leonidis, A., Korozi, M., Ntoa, S., Margetis, G., & Stephanidis, C. (2013). A Museum Guide Application for Deployment on User-Owned Mobile Devices. In C. Stephanidis (Ed.), HCI International 2013 - Posters' Extended Abstracts, Part II - Volume 29 of the combined Proceedings of HCI International 2013 (15th International Conference on Human-Computer Interaction), Las Vegas, Nevada, USA, 21-26 July (pp. 253-257). Berlin Heidelberg: Communications in Computer and Information Science (CCIS 374, ISBN: 978-3-642-39475-1).

Abstract

This poster describes the design and development of a comprehensive Museum Tour Guide mobile application that can be installed on user-owned devices. The purpose of the application is to provide museum visitors with a device that can improve their experience through optimised planning of their visit and an always-available stream of information regarding the museum and its exhibits. The main goals, the design, as well as the implementation of the application are described and the main functions of the application are presented. Finally, conclusions are drawn and further development ideas are discussed.

A Prototypical Interactive exhibition for the Archaeological Museum of Thessaloniki

Grammenos, D., Zabulis, X., Michel, D., Padeleris, P., Sarmis, T., Georgalis, G., Koutlemanis, P., Tzevanidis, K., Argyros, A. A., Sifakis, M., Adam-Veleni, P., & Stephanidis, C. (2013). A Prototypical Interactive Exhibition for the Archaeological Museum of Thessaloniki. International Journal of Heritage in the Digital Era, 2(1), 75-99.

Abstract

In 2010, the Institute of Computer Science of the Foundation for Research and Technology-Hellas (ICS-FORTH) and the Archaeological Museum of Thessaloniki (AMTh) collaborated towards the creation of a special exhibition of prototypical interactive systems with subjects drawn from ancient Macedonia, named “Macedonia from fragments to pixels”. The exhibition comprises seven interactive systems based on the research outcomes of ICS-FORTH's Ambient Intelligence Programme. Up to the summer of 2012, more than 165,000 people had visited it. The paper initially provides some background information, including related previous research work, and then illustrates and discusses the development process that was followed for creating the exhibition. Subsequently, the technological and interactive characteristics of the project's outcomes (i.e., the interactive systems) are analysed, and the complementary evaluation approaches followed are briefly described. Finally, some conclusions stemming from the project are highlighted.

A Steerable Multitouch Display for Surface Computing and its Evaluation

Koutlemanis, P., Ntelidakis, A., Zabulis, X., Grammenos, D., & Adami, I. (2013). A Steerable Multitouch Display for Surface Computing and its Evaluation. International Journal on Artificial Intelligence Tools, 22(6), 1360016. World Scientific Publishing Company.

Abstract

In this paper, a steerable, interactive projection display that has the shape of a disk is presented. Interactivity is provided through sensitivity to the contact of multiple fingertips and is achieved through the use of an RGBD camera. The surface is mounted on two gimbals which, in turn, provide two rotational degrees of freedom. Modulation of surface posture supports the ergonomics of the device but can, alternatively, be used as a means of user-interface input. The geometry for mapping visual content and localizing fingertip contacts upon this steerable display is provided, along with pertinent calibration methods for the proposed system. An accurate technique for touch detection is proposed, while touch detection and projection accuracy issues are studied and evaluated through extensive experimentation. Most importantly, the system is thoroughly evaluated as to its usability, through a pilot application that was developed for this purpose. We show that the outcome meets real-time performance, accuracy and usability requirements for employing the approach in human-computer interaction.
