Informatics, Volume 6, Issue 2 (June 2019) – 11 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
20 pages, 2595 KiB  
Article
Computational Thinking and Down Syndrome: An Exploratory Study Using the KIBO Robot
by Carina S. González-González, Erika Herrera-González, Lorenzo Moreno-Ruiz, Nuria Reyes-Alonso, Selene Hernández-Morales, María D. Guzmán-Franco and Alfonso Infante-Moro
Informatics 2019, 6(2), 25; https://doi.org/10.3390/informatics6020025 - 20 Jun 2019
Cited by 37 | Viewed by 8614
Abstract
Computational thinking and coding are key competencies in the 21st century. People with Down syndrome need to be part of this new literacy. For this reason, in this work, we present an exploratory study carried out with students with Down syndrome with cognitive ages of 3–6 years old, using a tangible robot (KIBO). We applied the observational method during the sessions to analyze the participants’ emotional states, engagement, and comprehension of the programming sequences. Results show that people with cognitive disabilities can acquire basic programming and computational skills using tangible robots such as KIBO.
Figures:
Figure 1. Observational instrument for evaluating emotions in educational contexts [55].
Figure 2. KIBO robot with sensors and light output attached.
Figure 3. Blocks for programming KIBO and a sample KIBO program. This program tells the robot to spin, shake, move backward, move forward, and turn a red light on.
Figure 4. Student with Down syndrome creating a basic program sequence.
Figure 5. Student with DS scanning a basic sequence to move KIBO.
Figure 6. Results of the behaviors observed in the first, introductory session to the KIBO robot.
Figure 7. Results of the behaviors observed in the second session, on programming KIBO.
36 pages, 1944 KiB  
Article
Improving the Translation Environment for Professional Translators
by Vincent Vandeghinste, Tom Vanallemeersch, Liesbeth Augustinus, Bram Bulté, Frank Van Eynde, Joris Pelemans, Lyan Verwimp, Patrick Wambacq, Geert Heyman, Marie-Francine Moens, Iulianna van der Lek-Ciudin, Frieda Steurs, Ayla Rigouts Terryn, Els Lefever, Arda Tezcan, Lieve Macken, Véronique Hoste, Joke Daems, Joost Buysschaert, Sven Coppers, Jan Van den Bergh and Kris Luyten
Informatics 2019, 6(2), 24; https://doi.org/10.3390/informatics6020024 - 20 Jun 2019
Cited by 7 | Viewed by 12248
Abstract
When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological side. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
(This article belongs to the Special Issue Advances in Computer-Aided Translation Technology)
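For readers unfamiliar with the fuzzy matching that translation memories rely on, the sketch below scores translation-memory entries against a new source sentence using a simple character-based similarity ratio. It is only a generic baseline, not SCATE's improved matcher, and the sentences and threshold are invented for the example.

```python
# Minimal sketch of baseline translation-memory fuzzy matching
# (illustrative only; not the SCATE matcher). Uses difflib's
# character-based similarity ratio; sentences are invented examples.
from difflib import SequenceMatcher

translation_memory = {
    "The printer is out of paper.": "De printer heeft geen papier meer.",
    "Press the power button to restart.": "Druk op de aan/uit-knop om opnieuw te starten.",
}

def fuzzy_matches(source, tm, threshold=0.6):
    """Return TM entries whose source side is similar enough to `source`."""
    scored = []
    for tm_source, tm_target in tm.items():
        score = SequenceMatcher(None, source.lower(), tm_source.lower()).ratio()
        if score >= threshold:
            scored.append((score, tm_source, tm_target))
    return sorted(scored, reverse=True)

print(fuzzy_matches("The printer is out of ink.", translation_memory))
```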
Figures:
Figure 1. An overview of the SCATE interface. Ⓐ The sentence to translate, Ⓑ the editing field, Ⓒ the hybrid MT that also includes pretranslations, Ⓓ a list of translation alternatives coming from the term base, TM and MT, Ⓔ fuzzy matches, Ⓕ suggestion from autocomplete, Ⓖ previous source sentences, Ⓗ upcoming source sentences and Ⓘ a progress bar.
Figure 2. Fuzzy matches (bottom right) and integrated TM-MT suggestion (middle) in the prototype.
Figure 3. Overview of BLEU scores for TM-MT integration and baselines [12].
Figure 4. An example node-aligned parallel tree (gloss of the Dutch sentence is "the men look at the show").
Figure 5. The SCATE MT error taxonomy.
Figure 6. Binary vector for zijn (are) consisting of 1s for its PoS, morphology and dependency features and 0s for the remaining items in the vocabulary.
Figure 7. The proposed neural network architecture for detecting fluency errors. While n represents a surface n-gram, sn_p, sn_s and sn_c represent syntactic n-grams obtained around the target word by considering its parents, siblings and children as context in a given dependency tree.
Figure 8. Quality estimation output in the SCATE user interface.
Figure 9. Most used online terminological resources.
Figure 10. Character-level representation in an LSTM framework.
Figure 11. The different punctuation prediction strategies in a translation context.
Figure 12. All translation suggestions are closely related to each other. When a translator types a character, (A) the auto-completion algorithm generates a suggestion. (B) The translator compares this prediction to other alternatives. (C) Interesting alternatives can be inspected in the context in which they have been used by other translators. (D) When a translator decides which alternative to use, it can be added to the translation by pressing ENTER.
Figure 13. Two configurations of the SCATE interface with the same functionality and suggestions.
Figure 14. Each transition in the workflow is optional and can be enforced by the translation environment.
Figure 15. Percentage of time devoted to keystrokes vs. mouse actions per translation environment per participant.
Figure 16. Total number of characters typed versus text deleted per translation environment per participant.
Figure 17. Example of how a translation is produced in SCATE.
16 pages, 1546 KiB  
Article
Frame-Based Elicitation of Mid-Air Gestures for a Smart Home Device Ecosystem
by Panagiotis Vogiatzidakis and Panayiotis Koutsabasis
Informatics 2019, 6(2), 23; https://doi.org/10.3390/informatics6020023 - 5 Jun 2019
Cited by 20 | Viewed by 6926
Abstract
If mid-air interaction is to be implemented in smart home environments, then the user would have to exercise in-air gestures to address and manipulate multiple devices. This paper investigates a user-defined gesture vocabulary for basic control of a smart home device ecosystem, consisting of 7 devices and a total of 55 referents (commands per device) that can be grouped into 14 commands (each referring to more than one device). The elicitation study was conducted in a frame (general scenario) of use of all devices to support contextual relevance; also, the referents were presented with minimal affordances to minimize widget-specific proposals. In addition to computing agreement rates for all referents, we also computed the internal consistency of user proposals (single-user agreement for multiple commands). In all, 1047 gestures from 18 participants were recorded, analyzed, and paired with think-aloud data. The study arrived at a mid-air gesture vocabulary for a smart-device ecosystem, which includes several gestures with very high, high and medium agreement rates. Furthermore, there was high consistency within most of the single-user gesture proposals, which reveals that each user developed and applied her/his own mental model of the whole set of interactions with the device ecosystem. Thus, we suggest that mid-air interaction support for smart homes should not only offer a built-in gesture set but also provide functions for identifying and defining personalized gesture assignments to basic user commands.
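For context, agreement over elicited gesture proposals is commonly quantified with Vatavu and Wobbrock's agreement rate; the sketch below shows that formulation for a single referent. This is a generic illustration (the paper may use a different variant), and the proposal groups are invented.

```python
# Sketch of a common agreement-rate computation from gesture-elicitation
# studies (Vatavu & Wobbrock's AR); illustrative only, not necessarily the
# exact variant used in the paper. Proposals below are invented.
from collections import Counter

def agreement_rate(proposals):
    """AR(r) for one referent, given one gesture label per participant."""
    n = len(proposals)
    if n < 2:
        return 1.0
    group_sizes = Counter(proposals).values()
    s = sum((size / n) ** 2 for size in group_sizes)
    return (n / (n - 1)) * s - 1 / (n - 1)

# Example: 18 participants proposing gestures for a hypothetical "volume up" referent.
proposals = ["swipe_up"] * 10 + ["raise_palm"] * 5 + ["circle_clockwise"] * 3
print(round(agreement_rate(proposals), 3))
```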
Figures:
Figure 1. Sample slides for the list of referents.
Figure 2. Comparison of agreement rate and consistency rate.
Figure 3. Proposed gestures with higher agreement rates (*: clockwise, **: counter-clockwise).
Figure 4. Gesture taxonomy for identified gestures.
Figure 5. Average consistency rate per user.
12 pages, 2024 KiB  
Article
Teaching HCI Skills in Higher Education through Game Design: A Study of Students’ Perceptions
by Pedro C. Santana-Mancilla, Miguel A. Rodriguez-Ortiz, Miguel A. Garcia-Ruiz, Laura S. Gaytan-Lugo, Silvia B. Fajardo-Flores and Juan Contreras-Castillo
Informatics 2019, 6(2), 22; https://doi.org/10.3390/informatics6020022 - 14 May 2019
Cited by 16 | Viewed by 9561
Abstract
Human-computer interaction (HCI) is an area with a wide range of concepts and knowledge. Therefore, there is a need to innovate in teaching-learning processes to achieve effective education. This article describes a proposal for teaching HCI through the development of projects that allow students to acquire higher education competencies through the design and evaluation of computer games. An empirical validation (questionnaires and a case study) with 40 undergraduate students (in their fifth semester of software engineering) was conducted at the end of the semester. The results indicated that this teaching method provides the students with the targeted HCI skills (psychology of everyday things, involving users, task-centered system design, models of human behavior, creativity and metaphors, and graphical screen design) and, more importantly, that they have a positive perception of the efficacy of using videogame design in a higher education course.
Figures:
Figure 1. The computer game Fallbox. The avatar must dodge the falling boxes.
Figure 2. The students create the interacting devices. The image illustrates a gesture interaction.
Figure 3. Human-computer interaction (HCI) as a usability engineering process.
Figure 4. Design of the game mechanism.
Figure 5. Design of the interaction with the game control device.
Figure 6. Head-tracking interaction device.
Figure 7. Number of students’ responses about acquiring the required HCI skills by gender.
14 pages, 1493 KiB  
Article
A New Co-Evolution Binary Particle Swarm Optimization with Multiple Inertia Weight Strategy for Feature Selection
by Jingwei Too, Abdul Rahim Abdullah and Norhashimah Mohd Saad
Informatics 2019, 6(2), 21; https://doi.org/10.3390/informatics6020021 - 8 May 2019
Cited by 70 | Viewed by 8454
Abstract
Feature selection is the task of choosing the combination of features that best describes the target concept during a classification process. However, selecting such relevant features becomes difficult when a large number of features is involved. Therefore, this study aims to solve the feature selection problem using binary particle swarm optimization (BPSO). Nevertheless, BPSO has limitations of premature convergence and the setting of the inertia weight. Hence, a new co-evolution binary particle swarm optimization with a multiple inertia weight strategy (CBPSO-MIWS) is proposed in this work. The proposed method is validated with ten benchmark datasets from the UCI machine learning repository. To examine the effectiveness of the proposed method, four recent and popular feature selection methods, namely BPSO, genetic algorithm (GA), binary gravitational search algorithm (BGSA) and competitive binary grey wolf optimizer (CBGWO), are used in a performance comparison. Our results show that CBPSO-MIWS can achieve competitive performance in feature selection, which is appropriate for application in engineering, rehabilitation and clinical areas.
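To make the wrapper-style setup concrete, here is a minimal baseline binary PSO feature-selection sketch with a sigmoid transfer function and a KNN-accuracy fitness. It is a generic BPSO, not the proposed CBPSO-MIWS (no co-evolution or multiple inertia weights), and the dataset and parameter values are illustrative choices.

```python
# Minimal baseline binary PSO for feature selection (illustrative sketch;
# not the proposed CBPSO-MIWS). Fitness = KNN accuracy on a validation split.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask):
    """Classification accuracy of KNN using only the selected features."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_tr[:, mask == 1], y_tr)
    return knn.score(X_va[:, mask == 1], y_va)

n_particles, n_dims, n_iters, w, c1, c2 = 20, X.shape[1], 30, 0.9, 2.0, 2.0
pos = rng.integers(0, 2, (n_particles, n_dims))
vel = rng.uniform(-1, 1, (n_particles, n_dims))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    # Sigmoid transfer function turns velocities into bit-flip probabilities.
    pos = (rng.random(vel.shape) < 1 / (1 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", int(gbest.sum()), "accuracy:", round(pbest_fit.max(), 3))
```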
Figures:
Figure 1. An example of the structure of the co-evolution binary particle swarm optimization with multiple inertia weight strategy (CBPSO-MIWS).
Figure 2. An example of a solution with 10 dimensions.
Figure 3. Overview of the proposed CBPSO-MIWS for feature selection and classification.
Figure 4. Convergence curves of five feature selection methods on datasets 1 to 6.
Figure 5. Convergence curves of five feature selection methods on datasets 7 to 10.
Figure 6. Mean accuracy of five different feature selection methods over 10 datasets.
23 pages, 7026 KiB  
Article
Conceptualization and Non-Relational Implementation of Ontological and Epistemic Vagueness of Information in Digital Humanities
by Patricia Martin-Rodilla and Cesar Gonzalez-Perez
Informatics 2019, 6(2), 20; https://doi.org/10.3390/informatics6020020 - 6 May 2019
Cited by 17 | Viewed by 7864
Abstract
Research in the digital humanities often involves vague information, either because our objects of study lack clearly defined boundaries, or because our knowledge about them is incomplete or hypothetical, which is especially true in disciplines about our past (such as history, archaeology, and classical studies). Most techniques used to represent data vagueness emerged from the natural sciences and lack the expressiveness that would be ideal for humanistic contexts. Building on previous work, we present here a conceptual framework based on the ConML modelling language for the expression of information vagueness in digital humanities. In addition, we propose an implementation on non-relational data stores, which are becoming popular within the digital humanities. Having clear implementation guidelines allows us to employ search engines or big data systems (commonly implemented using non-relational approaches) to handle the vague aspects of information. The proposed implementation guidelines have been validated in practice, and show how we can query a vagueness-aware system without a large penalty in analytical and processing power.
(This article belongs to the Collection Uncertainty in Digital Humanities)
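To give a flavour of what a vagueness-aware record in a document-oriented (non-relational) store might look like, the sketch below encodes a toponym with an unknown attribute value, a certainty qualifier, and a coarse time resolution as a nested Python dictionary. The field names and values are hypothetical and do not reproduce the actual DICTOMAGRED/ConML schema.

```python
# Hypothetical sketch of a vagueness-aware toponym record for a
# document-oriented store; field names and values are invented and
# do not reproduce the actual DICTOMAGRED/ConML schema.
import json

toponym_record = {
    "toponyms/tamdalt": {
        "CurrentName": {"value": None, "semantics": "unknown"},   # a value exists but is not known
        "Region": {"value": "southern Morocco", "certainty": "probable"},
        "DistanceToSijilmasa": {
            "value": 5, "unit": "marhala",                        # vague historical distance unit
            "certainty": "possible",
        },
        "UsedIn": {"from": "10th century", "to": "12th century",  # arbitrary time resolution
                   "resolution": "century"},
    }
}

print(json.dumps(toponym_record, indent=2, ensure_ascii=False))
```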
Figures:
Figure 1. ConML class model for Toponym Studies in the DICTOMAGRED project.
Figure 2. ConML model for the Sijilmasa, Tamdalt, and Aghmat Ourika toponym information in the DICTOMAGRED project. In grey, objects created to instantiate the class model, representing imprecise and uncertain information regarding toponym, ToponymDistance, and geographic area.
Figure 3. Firebase console showing the data node for defining null and unknown semantics.
Figure 4. Firebase console showing the data node for defining certainty qualifiers in the DICTOMAGRED implementation.
Figure 5. Firebase console showing the regions data node implementing the abstract enumerated items mechanism.
Figure 6. Firebase console showing the UsedIn attribute implementation according to the arbitrary time resolution mechanism.
Figure 7. Firebase console showing final implementation details. On the right, the values marhalas or parasangs (historical Iranian units of distance) as vague measurement units for distance in the DICTOMAGRED data model. On the left, the final values for the specific Tamdalt toponym supporting vague information.
Figure 8. Query A execution through the Algolia search engine. We have added two facets with the two requirements of the query, about the region and the certainty of the current name use of the toponyms.
Figure 9. Results for query A.
Figure 10. Query B execution using the Algolia search engine. We have added a custom expression in the Algolia console referring to the Sijilmasa internal code as the reference point for recovering distances to it.
Figure 11. Results for query B.
Figure 12. Query C execution through the Algolia search engine. We have added a custom expression with an OR clause in the Algolia console to execute it.
Figure 13. Results for query C.
21 pages, 1385 KiB  
Article
Improving Semantic Similarity with Cross-Lingual Resources: A Study in Bangla—A Low Resourced Language
by Rajat Pandit, Saptarshi Sengupta, Sudip Kumar Naskar, Niladri Sekhar Dash and Mohini Mohan Sardar
Informatics 2019, 6(2), 19; https://doi.org/10.3390/informatics6020019 - 5 May 2019
Cited by 12 | Viewed by 7169
Abstract
Semantic similarity is a long-standing problem in natural language processing (NLP). It is a topic of great interest as its understanding can provide a look into how human beings comprehend meaning and make associations between words. However, when this problem is looked at from the viewpoint of machine understanding, particularly for under-resourced languages, it poses a different problem altogether. In this paper, semantic similarity is explored in Bangla, a less resourced language. For ameliorating the situation in such languages, the most rudimentary method (path-based) and the latest state-of-the-art method (Word2Vec) for semantic similarity calculation were augmented using cross-lingual resources in English, and the results obtained are remarkable. Two semantic similarity approaches have been explored in Bangla, namely the path-based and distributional models, and their cross-lingual counterparts were synthesized in light of the English WordNet and corpora. The proposed methods were evaluated on a dataset comprising 162 Bangla word pairs, which were annotated by five expert raters. The correlation scores obtained between the four metrics and human evaluation scores demonstrate a marked enhancement that the cross-lingual approach brings into the process of semantic similarity calculation for Bangla.
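For readers unfamiliar with the two families of measures, the sketch below computes a path-based WordNet similarity and a Word2Vec cosine similarity for an English word pair. The paper's cross-lingual step (mapping Bangla words onto English resources) is not reproduced here, and the embedding file path is a hypothetical placeholder.

```python
# Illustrative path-based vs. distributional similarity (English stand-ins;
# the paper's Bangla-to-English cross-lingual mapping is not reproduced here).
from nltk.corpus import wordnet as wn          # requires: nltk.download("wordnet")
from gensim.models import KeyedVectors

def path_similarity(word1, word2):
    """Best path-based similarity over all synset pairs of the two words."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(word1) for s2 in wn.synsets(word2)]
    return max(scores, default=0.0)

print(path_similarity("car", "automobile"))

# Hypothetical pre-trained embedding file; any word2vec-format model works.
vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)
print(vectors.similarity("car", "automobile"))  # cosine similarity of the two word vectors
```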
Figures:
Figure 1. Schematic diagram for cross-lingual approaches.
Figure 2. A snapshot of the hypernym–hyponym relations in the Bangla WordNet.
Figure 3. Human Score (H)/System Score (S) vs. Pair Number for (a) SIM_PATH_BASED^BENG, (b) SIM_PATH_BASED^BENG→ENG, (c) SIM_WORD2VEC^BENG, (d) SIM_WORD2VEC-BNC^BENG→ENG, (e) SIM_WORD2VEC-Gigaword^BENG→ENG.
18 pages, 21322 KiB  
Article
The Effects of Motion Artifacts on Self-Avatar Agency
by Alexandros Koilias, Christos Mousas and Christos-Nikolaos Anagnostopoulos
Informatics 2019, 6(2), 18; https://doi.org/10.3390/informatics6020018 - 29 Apr 2019
Cited by 27 | Viewed by 7385
Abstract
One way of achieving self-agency in virtual environments is by using a motion capture system and retargeting the user’s motion to the virtual avatar. In this study, we investigated whether self-agency is affected when motion artifacts appear on top of the baseline motion capture data assigned to the self-avatar. For this experiment, we implemented four artifacts: noise, latency, motion jump, and offset rotation of joints. The data provided directly by the motion capture system formed the baseline of the study. We developed three observation tasks to assess self-agency: self-observation, observation through a virtual mirror, and observation during locomotion. A questionnaire was adopted and used to capture the self-agency of participants. We analyzed the collected responses to determine whether the motion artifacts significantly altered the participants’ sense of self-agency. The results indicated that participants are not always sensitive to the motion artifacts assigned to the self-avatar; rather, the sense of self-agency depends on the observation task they were asked to perform. Implications for further research are discussed.
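As a rough illustration of two of the four artifact types (noise and latency), the sketch below perturbs a stream of joint-rotation values. The signal, magnitudes, and frame delay are invented and are not the parameters used in the experiment.

```python
# Rough sketch of injecting two of the four artifact types (noise, latency)
# into a joint-rotation stream; signal and magnitudes are invented, not the
# parameters used in the experiment.
import numpy as np

rng = np.random.default_rng(42)
frames = np.linspace(0, 2 * np.pi, 240)          # ~4 s of capture at 60 fps
elbow_angle = 45 + 30 * np.sin(frames)           # baseline joint angle (degrees)

def add_noise(signal, std_deg=2.0):
    """Gaussian jitter on top of the captured angles."""
    return signal + rng.normal(0.0, std_deg, size=signal.shape)

def add_latency(signal, delay_frames=12):
    """Delay the stream by holding the first captured value."""
    return np.concatenate([np.full(delay_frames, signal[0]), signal[:-delay_frames]])

noisy = add_noise(elbow_angle)
delayed = add_latency(elbow_angle)
print(noisy[:3], delayed[:3])
```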
Figures:
Graphical abstract.
Figure 1. The different self-avatars designed and used in this experiment. Each participant was responsible for choosing the avatar that best represented him/her. The gender of the avatar was based on the gender identification part of the demographics form.
Figure 2. The virtual environment developed for this experiment. The mirror and the indicators on the ground were switched on or off by the experimenter depending on the task at different stages of the experiment.
Figure 3. A participant observing his motion retargeted to the self-avatar that embodies him for the three developed tasks during the baseline condition.
Figure 4. Participants’ responses to questions on body ownership for each of the five conditions of the experiment.
Figure 5. Participants’ responses to questions on self-agency for each task performed.
Figure 6. Participants’ responses for the self-agency and impact of the observation task from the conducted experiment. SO, self-observation; MO, observation through a virtual mirror; LO, observation during locomotion.
14 pages, 2289 KiB  
Article
The Effect of Evidence Transfer on Latent Feature Relevance for Clustering
by Athanasios Davvetas, Iraklis A. Klampanos, Spiros Skiadopoulos and Vangelis Karkaletsis
Informatics 2019, 6(2), 17; https://doi.org/10.3390/informatics6020017 - 25 Apr 2019
Cited by 1 | Viewed by 6226
Abstract
Evidence transfer for clustering is a deep learning method that manipulates the latent representations of an autoencoder according to external categorical evidence with the effect of improving a clustering outcome. Its application to clustering is designed to be robust when introduced to low-quality evidence, while increasing clustering accuracy when the corresponding evidence is relevant. We interpret the effects of evidence transfer on the latent representation of an autoencoder by comparing our method to the information bottleneck method. The information bottleneck is an optimisation problem of finding the best tradeoff between maximising the mutual information between data representations and a task outcome while, at the same time, effectively compressing the original data source. We posit that the evidence transfer method has essentially the same objective regarding the latent representations produced by an autoencoder. We verify our hypothesis using information-theoretic metrics from feature selection in order to perform an empirical analysis over the information that is carried through the bottleneck of the latent space. We use the relevance metric to compare the overall mutual information between the latent representations and the ground truth labels before and after their incremental manipulation, as well as to study the effects of evidence transfer on the significance of each latent feature.
(This article belongs to the Special Issue Feature Selection Meets Deep Learning)
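The relevance measurement described above amounts to estimating mutual information between each latent feature and the ground-truth labels; a minimal sketch with scikit-learn is shown below. It uses random stand-in arrays rather than actual autoencoder representations, so it illustrates only the metric, not the evidence-transfer method itself.

```python
# Minimal sketch of the relevance idea: mutual information between latent
# features and ground-truth labels, before vs. after manipulation. Uses
# random stand-in arrays, not actual autoencoder representations.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)            # ground-truth classes
latent_before = rng.normal(size=(1000, 10))        # pre-manipulation latent space
latent_after = latent_before.copy()
latent_after[:, 0] += labels                       # pretend one feature became informative

mi_before = mutual_info_classif(latent_before, labels, random_state=0)
mi_after = mutual_info_classif(latent_after, labels, random_state=0)
print("overall relevance before/after:", mi_before.sum().round(3), mi_after.sum().round(3))
print("per-feature ranking after:", np.argsort(-mi_after))
```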
Figures:
Figure 1. A comparison of the Relevance metric from feature selection with Unsupervised Accuracy (ACC) and Normalised Mutual Information used for clustering prediction evaluation. We compare both approaches of Relevance (Mutual Information and F-test). To interpret the effects of evidence transfer, we analyse the behaviour of these metrics in five different scenarios. Each subfigure represents a single scenario, and the comparison of the metrics is performed individually on each scenario. For visualisation purposes, we normalise all four metrics using the max norm. We observe similar trends in the behaviour of all these metrics for all predefined experiments and configurations. In subfigure (e), we plot the results of configurations involving three sources of evidence, hence the differences in shape compared to the other subfigures.
Figure 2. A comparison between the Rank Variation and Relevance metric (Mutual Information estimation). For visualisation purposes, we normalise all four metrics using the max norm. Individual latent feature significances in cases of real corresponding evidence seem to be correlated with the overall relevance measurements. In some cases of low-quality evidence, inconsistencies in the behaviour of the ranking variation at the individual latent feature level are observed, which mostly represent swapped rankings between two or four features due to incremental reconstruction training.
23 pages, 17081 KiB  
Article
RadViz++: Improvements on Radial-Based Visualizations
by Lucas de Carvalho Pagliosa and Alexandru C. Telea
Informatics 2019, 6(2), 16; https://doi.org/10.3390/informatics6020016 - 9 Apr 2019
Cited by 14 | Viewed by 7890
Abstract
RadViz is one of the few methods in Visual Analytics able to project high-dimensional data and explain the formed structures in terms of data variables. However, RadViz methods have several limitations in terms of scalability in the number of variables, ambiguities created in the projection by the placement of variables along the circular design space, and the ability to segregate similar instances into visual clusters. To address these limitations, we propose RadViz++, a set of techniques for interactive exploration of high-dimensional data using a RadViz-type metaphor. We demonstrate the added value of our method by comparing it with existing high-dimensional visualization methods, and also by analyzing a complex real-world dataset having over a hundred variables.
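The basic RadViz placement that the paper builds on (each instance is the anchor-weighted average of its normalized variable values) can be sketched in a few lines. This is the classic formulation only, not the RadViz++ extensions, and the sample data is invented.

```python
# Classic RadViz placement (the baseline the paper extends, not RadViz++):
# each instance is a weighted average of anchor positions on the unit circle,
# with weights given by its normalized variable values. Sample data invented.
import numpy as np

def radviz_project(X):
    """Project rows of X (instances x variables) onto the 2D RadViz disc."""
    X = np.asarray(X, dtype=float)
    # Normalize each variable to [0, 1] so weights are comparable.
    mins, maxs = X.min(axis=0), X.max(axis=0)
    X = (X - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    m = X.shape[1]
    angles = 2 * np.pi * np.arange(m) / m
    anchors = np.column_stack([np.cos(angles), np.sin(angles)])  # anchors on the unit circle
    weights = X / (X.sum(axis=1, keepdims=True) + 1e-12)
    return weights @ anchors                                     # each point pulled toward anchors

sample = np.array([[1.0, 0.2, 0.1], [0.1, 0.9, 0.8], [0.5, 0.5, 0.5]])
print(radviz_project(sample))
```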
Figures:
Figure 1. An instance is pulled towards the anchors proportionally to its normalized variable values.
Figure 2. (a) RadViz representation of a simple dataset showing clusters (red and blue) and one outlier (black). (b) RadViz Deluxe layout of the same data showing better cluster separation but poorer explanation of the outlier. (c) Differences highlighted between (a) and (b).
Figure 3. (a) RadViz with no variable ordering. (b) In RadViz++, anchors are rearranged in the circle according to their correlation coefficient. In our implementation, anchors are depicted by cells with the corresponding variable names above them, and points are colored based on their classes.
Figure 4. (a) Dendrogram built from variable correlation (Section 3.1). (b) Simplified dendrogram (Section 3.2.1).
Figure 5. (a) Circular icicle plot showing the full dendrogram (δ = 0%). (b) Plot of the simplified dendrogram (δ = 10%) leading to a more compact layout.
Figure 6. HEB bundles and variable histograms in RadViz++.
Figure 7. (a) Aggregation of several variables. (b) Refining the aggregation for the bottom (brown) cluster.
Figure 8. (a) Variables to filter away (white). (b) RadViz++ result after variable filtering (using 11 of the original 18 variables).
Figure 9. Animation of the RadViz scatterplot (left) towards the LAMP scatterplot (right) for the Segmentation dataset. Interpolation factors are 0.2, 0.4, 0.6, 0.8. While the LAMP plots offer better cluster segregation, the RadViz plot explains the points in terms of variables. Note how the icicle-plot background opacity changes to indicate the RadViz vs. LAMP mode of the scatterplot.
Figure 10. (a) LAMP scatterplot for the Segmentation dataset. (b) LAMP after the variable filtering shown in Figure 8, leading to better clustering, but using only 11 of the 18 variables.
Figure 11. Brush-and-link explanation of the (a) blue and (b) brown clusters. Although groups of points cannot be correlated to anchors in the LAMP scatterplot, it is still valid to explain them in terms of variable ranges.
Figure 12. (a) Attribute-based analysis of the 7 Gaussian clusters dataset [5]. The variable 'Dim i' maps to V_(i+1) in our notation. (b) RadViz++ leads to the same conclusions with a cleaner and simpler layout.
Figure 13. The brush-and-link tool helps explain clusters whose points overlap in the scatterplot, thereby decreasing ambiguity problems. For each selected cluster c_i, bundles show that its points have multiple values in at least one variable's bins.
Figure 14. Breast Cancer dataset analysis performed by Pagliosa et al. [5]. The variance of the involved variables is the main discriminative factor between the two clusters. All variables contribute quite similarly to discrimination, except Mitosis, which has a low overall variance.
Figure 15. Breast Cancer dataset analyzed using RadViz++ with force-based (a) and LAMP (b) projection.
Figure 16. Breast Cancer dataset, explaining the benign (a) and malignant (b) clusters by variables.
Figure 17. Corel dataset visualized using RadViz++.
Figure 18. Finding the most descriptive variables for the 10 clusters in the Corel dataset. Detailed description in the text.
Figure 19. Verifying the explanatory power of each variable set after selecting its respective anchor (a–c). Further aggregating these variables reduces cluster separation (d), so it should be avoided.
22 pages, 365 KiB  
Article
Understanding the EMR-Related Experiences of Pregnant Japanese Women to Redesign Antenatal Care EMR Systems
by Samar Helou, Victoria Abou-Khalil, Goshiro Yamamoto, Eiji Kondoh, Hiroshi Tamura, Shusuke Hiragi, Osamu Sugiyama, Kazuya Okamoto, Masayuki Nambu and Tomohiro Kuroda
Informatics 2019, 6(2), 15; https://doi.org/10.3390/informatics6020015 - 4 Apr 2019
Cited by 4 | Viewed by 8103
Abstract
Woman-centered antenatal care necessitates Electronic Medical Record (EMR) systems that respect women’s preferences. However, women’s preferences regarding EMR systems in antenatal care remain unknown. This work aims to understand the EMR-related experiences that pregnant Japanese women want. First, we conducted a field-based observational study at an antenatal care clinic at a Japanese university hospital. We analyzed the data following a thematic analysis approach and found multiple EMR-related experiences that pregnant women encounter during antenatal care. Based on the observations’ findings, we administered a web survey to 413 recently pregnant Japanese women to understand their attitudes regarding the EMR-related experiences. Our results show that pregnant Japanese women want accessible, exchangeable, and biopsychosocial EMRs. They also want EMR-enabled explanations and summaries. Interestingly, differences in their demographics and stages of pregnancy affected their attitudes towards some EMR-related experiences. To respect their preferences, we propose amplifying the roles of EMR systems as tools that promote communication and woman-centeredness in antenatal care. We also propose expanding the EMR design mindset from a biomedical to a biopsychosocial-oriented one. Finally, to accommodate the differences in individual needs and preferences, we propose the design of adaptable person-centered EMR systems.
(This article belongs to the Special Issue Data-Driven Healthcare Research)
Figures:
Figure 1. Do pregnant Japanese women want online access to their EMR? The numbers shown in the chart represent the percentage of women who chose that answer.
Figure 2. A heat map representing the importance of each EMR-related experience for pregnant Japanese women. The numbers shown in the map represent the percentage of women who chose that answer.
Figure A1. The survey questions and their corresponding answer sets.