Search Results (561)

Search Parameters:
Journal = MTI

Article
Can You Dance? A Study of Child–Robot Interaction and Emotional Response Using the NAO Robot
Multimodal Technol. Interact. 2023, 7(9), 85; https://doi.org/10.3390/mti7090085 - 30 Aug 2023
Abstract
This retrospective study presents and summarizes our long-term efforts in the popularization of robotics, engineering, and artificial intelligence (STEM) using the NAO humanoid robot. By a conservative estimate, over a span of 8 years, we engaged at least a couple of thousand participants: approximately 70% were preschool children, 15% were elementary school students, and 15% were teenagers and adults. We describe several robot applications that were developed specifically for this task and assess their qualitative performance outside a controlled research setting, catering to various demographics, including those with special needs (ASD, ADHD). Five groups of applications are presented: (1) motor development activities and games, (2) children’s games, (3) theatrical performances, (4) artificial intelligence applications, and (5) data harvesting applications. Different cases of human–robot interactions are considered and evaluated according to our experience, and we discuss their weak points and potential improvements. We examine the response of the audience when confronted with a humanoid robot featuring intelligent behavior, such as conversational intelligence and emotion recognition. We consider the importance of the robot’s physical appearance, the emotional dynamics of human–robot engagement across age groups, the relevance of non-verbal cues, and analyze drawings crafted by preschool children both before and after their interaction with the NAO robot.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction - 2nd Edition)

Article
Evaluation of the Road to Birth Software to Support Obstetric Problem-Based Learning Education with a Cohort of Pre-Clinical Medical Students
Multimodal Technol. Interact. 2023, 7(8), 84; https://doi.org/10.3390/mti7080084 - 21 Aug 2023
Abstract
Integration of technology within problem-based learning curricula is expanding; however, information regarding student experiences and attitudes about the integration of such technologies is limited. This study aimed to evaluate pre-clinical medical student perceptions and use patterns of the “Road to Birth” (RtB) software, a novel program designed to support human maternal anatomy and physiology education. Second-year medical students at a large midwestern American university participated in a prospective, mixed-methods study. The RtB software is available as a mobile smartphone/tablet application and in immersive virtual reality. The program was integrated into problem-based learning activities across a three-week obstetrics teaching period. Student visuospatial ability, weekly program usage, weekly user satisfaction, and end-of-course focus group interview data were obtained. Survey data were analyzed and summarized using descriptive statistics. Focus group interview data were analyzed using inductive thematic analysis. Of the eligible students, 66% (19/29) consented to participate in the study with 4 students contributing to the focus group interview. Students reported incremental knowledge increases on weekly surveys (69.2% week one, 71.4% week two, and 78.6% week three). Qualitative results indicated the RtB software was perceived as a useful educational resource; however, its interactive nature could have been further optimized. Students reported increased use of portable devices over time and preferred convenient options when using technology incorporated into the curriculum. This study identifies opportunities to better integrate technology into problem-based learning practices in medical education. Further empirical research is warranted with larger and more diverse student samples.

Article
Exploring a Novel Mexican Sign Language Lexicon Video Dataset
Multimodal Technol. Interact. 2023, 7(8), 83; https://doi.org/10.3390/mti7080083 - 19 Aug 2023
Abstract
This research explores a novel Mexican Sign Language (MSL) lexicon video dataset containing the dynamic gestures most frequently used in MSL. Each gesture is represented by a set of different video versions recorded under uncontrolled conditions. The MX-ITESO-100 dataset is composed of a lexicon of 100 gestures and 5000 videos from three participants, covering different grammatical elements. Additionally, the dataset is evaluated with a two-step neural network model, which achieves an accuracy greater than 99%, and thus serves as a benchmark for future training of machine learning models in computer vision systems. Finally, this research promotes a more inclusive environment within society and organizations, in particular for people with hearing impairments.

Article
Virtual Urban Field Studies: Evaluating Urban Interaction Design Using Context-Based Interface Prototypes
Multimodal Technol. Interact. 2023, 7(8), 82; https://doi.org/10.3390/mti7080082 - 18 Aug 2023
Abstract
In this study, we propose the use of virtual urban field studies (VUFS) through context-based interface prototypes for evaluating the interaction design of auditory interfaces. Virtual field tests use mixed-reality technologies to combine the fidelity of real-world testing with the affordability and speed of testing in the lab. In this paper, we apply this concept to rapidly test sound designs for autonomous vehicle (AV)–pedestrian interaction with a high degree of realism and fidelity. We also propose the use of psychometrically validated measures of presence in validating the verisimilitude of VUFS. Using mixed qualitative and quantitative methods, we analysed users’ perceptions of presence in our VUFS prototype and the relationship to our prototype’s effectiveness. We also examined the use of higher-order ambisonic spatialised audio and its impact on presence. Our results provide insights into how VUFS can be designed to facilitate presence as well as design guidelines for how this can be leveraged.

Article
Creative Use of OpenAI in Education: Case Studies from Game Development
Multimodal Technol. Interact. 2023, 7(8), 81; https://doi.org/10.3390/mti7080081 - 18 Aug 2023
Abstract
Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24 h conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design.

Systematic Review
“From Gamers into Environmental Citizens”: A Systematic Literature Review of Empirical Research on Behavior Change Games for Environmental Citizenship
Multimodal Technol. Interact. 2023, 7(8), 80; https://doi.org/10.3390/mti7080080 - 14 Aug 2023
Abstract
As the global environmental crisis intensifies, there has been significant interest in behavior change games (BCGs) as a viable avenue to empower players’ pro-environmentalism. This pro-environmental empowerment is well-aligned with the notion of environmental citizenship (EC), which aims at transforming citizens into “environmental agents of change” seeking to achieve more sustainable lifestyles. Despite these arguments, studies in this area are thinly spread and fragmented across various research domains. This article is grounded in a systematic review of empirical articles on BCGs for EC covering a time span of fifteen years and published in peer-reviewed journals and conference proceedings, in order to provide an understanding of the scope of empirical research in the field. In total, 44 articles were reviewed to shed light on their methodological underpinnings, the gaming elements and the persuasive strategies of the deployed BCGs, the EC actions facilitated by the BCGs, and the impact of BCGs on players’ EC competences. Our findings indicate that while BCGs seem to promote pro-environmental knowledge and attitudes, such an assertion is not fully warranted for pro-environmental behaviors. We reflect on our findings and provide future research directions to push forward the field of BCGs for EC.

Communication
Design and Research of a Sound-to-RGB Smart Acoustic Device
Multimodal Technol. Interact. 2023, 7(8), 79; https://doi.org/10.3390/mti7080079 - 13 Aug 2023
Abstract
This paper presents a device that converts sound wave frequencies into colors to assist people with hearing impairments by addressing accessibility and communication challenges in the hearing-impaired community. The device uses a precise mathematical apparatus and carefully selected hardware to achieve accurate conversion of sound to color, supported by specialized automatic processing software suitable for standardization. Experimental evaluation shows excellent performance for frequencies below 1000 Hz, although limitations are encountered at higher frequencies, requiring further investigation into advanced noise filtering and hardware optimization. The device shows promise for various applications, including education, art, and therapy. The study acknowledges its limitations and suggests future research to generalize the models for converting sound frequencies to color and to improve usability for a broader range of hearing impairments. Feedback from the hearing-impaired community will play a critical role in further developing the device for practical use. Overall, this innovative device for converting sound to color represents a significant step toward improving accessibility and communication for people with hearing challenges. Continued research offers the potential to overcome these challenges and extend the benefits of the device to a variety of areas, ultimately improving the quality of life for people with hearing impairments.
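Below is a minimal Python sketch of the kind of frequency-to-colour mapping the abstract describes. The paper's exact mathematical apparatus and hardware pipeline are not given here, so the FFT-based dominant-frequency estimate and the linear frequency-to-hue rule (with an assumed 20–1000 Hz working range, matching the reported performance band) are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: estimate the dominant frequency of a mono signal and map it
# to an RGB colour. The mapping rule below is an assumption for illustration.
import colorsys
import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Estimate the dominant frequency (Hz) of a mono signal via the FFT."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def frequency_to_rgb(freq_hz: float, f_min: float = 20.0, f_max: float = 1000.0):
    """Map a frequency in [f_min, f_max] Hz linearly onto the hue circle."""
    hue = (np.clip(freq_hz, f_min, f_max) - f_min) / (f_max - f_min)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

# Example: a 440 Hz test tone sampled at 8 kHz maps to a single colour.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(frequency_to_rgb(dominant_frequency(tone, sr)))
```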

Article
Multimodal Interaction for Cobot Using MQTT
Multimodal Technol. Interact. 2023, 7(8), 78; https://doi.org/10.3390/mti7080078 - 03 Aug 2023
Abstract
For greater efficiency, human–machine and human–robot interactions must be designed with the idea of multimodality in mind. To allow the use of several interaction modalities, such as voice, touch, and gaze tracking, on several different devices (computers, smartphones, tablets, etc.), and to integrate possible connected objects, it is necessary to have an effective and secure means of communication between the different parts of the system. This is even more important when a collaborative robot (cobot) shares the same space and works very close to the human during their tasks. This study presents research work in the field of multimodal interaction for a cobot using the MQTT protocol, in both virtual (Webots) and real-world (ESP microcontrollers, Arduino, IOT2040) settings. We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for several entities of the system, in order to interact with connected objects (like LEDs and conveyor belts), robotic arms (like the Ned Niryo), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules, with regard to health and industrial concerns (people with disabilities, and teleoperation).
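As a concrete illustration of the publish/subscribe mechanism mentioned in the abstract, here is a minimal sketch using the paho-mqtt Python client. The broker address and the topic names (cobot/commands, cobot/status) are hypothetical; the authors' actual topics, payloads, and device code (Webots, ESP, IOT2040) are not reproduced.

```python
# Hedged sketch of one MQTT participant in a multimodal cobot setup:
# it subscribes to a command topic and publishes status updates that other
# entities (LEDs, conveyor belt, GUI) can subscribe to. Topic names are
# hypothetical; callbacks use the paho-mqtt 1.x style.
import paho.mqtt.client as mqtt

BROKER = "localhost"  # assumed broker address

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established.
    client.subscribe("cobot/commands")

def on_message(client, userdata, msg):
    # A voice, touch, or gaze modality has published a command; react to it.
    command = msg.payload.decode()
    print(f"Received on {msg.topic}: {command}")
    if command == "pick":
        # Announce the new state so every subscribed entity stays in sync.
        client.publish("cobot/status", "picking")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()
```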

Article
Enhancing Object Detection for VIPs Using YOLOv4_Resnet101 and Text-to-Speech Conversion Model
Multimodal Technol. Interact. 2023, 7(8), 77; https://doi.org/10.3390/mti7080077 - 02 Aug 2023
Abstract
Vision impairment affects an individual’s quality of life, posing challenges for visually impaired people (VIPs) in various aspects such as object recognition and daily tasks. Previous research has focused on developing visual navigation systems to assist VIPs, but there is a need for further improvements in accuracy, speed, and inclusion of a wider range of object categories that may obstruct VIPs’ daily lives. This study presents a modified version of YOLOv4 with a ResNet-101 backbone (YOLOv4_Resnet101), trained on multiple object classes to assist VIPs in navigating their surroundings. In comparison to the Darknet backbone utilized in YOLOv4, the ResNet-101 backbone in YOLOv4_Resnet101 offers a deeper and more powerful feature extraction network. ResNet-101’s greater capacity enables better representation of complex visual patterns, which increases the accuracy of object detection. The proposed model is validated using the Microsoft Common Objects in Context (MS COCO) dataset. Image pre-processing techniques are employed to enhance the training process, and manual annotation ensures accurate labeling of all images. The module incorporates text-to-speech conversion, providing VIPs with auditory information to assist in obstacle recognition. The model achieves an accuracy of 96.34% on the test images obtained from the dataset after 4000 iterations of training, with a loss error rate of 0.073%.
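The detection-to-speech step described above can be sketched as follows. The trained YOLOv4_Resnet101 detector is not reproduced here; the label list stands in for the hypothetical output of one camera frame, and pyttsx3 is one possible offline text-to-speech backend, not necessarily the one the authors used.

```python
# Hedged sketch: speak the class labels produced by an object detector so a
# visually impaired user receives auditory feedback. The labels below are a
# hypothetical detector output; pyttsx3 is an assumed TTS backend.
import pyttsx3

def announce_detections(labels):
    """Speak each detected object class once."""
    engine = pyttsx3.init()
    for label in labels:
        engine.say(f"{label} ahead")
    engine.runAndWait()

announce_detections(["person", "chair", "door"])
```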

Systematic Review
How Is Privacy Behavior Formulated? A Review of Current Research and Synthesis of Information Privacy Behavioral Factors
Multimodal Technol. Interact. 2023, 7(8), 76; https://doi.org/10.3390/mti7080076 - 29 Jul 2023
Abstract
What influences Information Communications and Technology (ICT) users’ privacy behavior? Several studies have shown that users state that they care about their personal data. Despite this, however, they perform unsafe privacy actions, such as neglecting to configure privacy settings. In this research, we present the results of an in-depth literature review on the factors affecting privacy behavior. We seek to investigate the underlying factors that influence individuals’ privacy-conscious behavior in the digital domain, as well as effective interventions to promote such behavior. Privacy decisions regarding the disclosure of personal information may have negative consequences on individuals’ lives, such as becoming a victim of identity theft, impersonation, etc. Moreover, third parties may exploit this information for their own benefit, such as for targeted advertising practices. By identifying the factors that may affect social networking site (SNS) users’ privacy awareness, we can assist in creating methods for effective privacy protection and/or user-centered design. Examining the results of several research studies, we found evidence that privacy behavior is affected by a variety of factors, including individual ones (e.g., demographics) and contextual ones (e.g., financial exchanges). We synthesize a framework that aggregates the scattered factors that have been found in the literature to affect privacy behavior. Our framework can be beneficial to academics and practitioners in the private and public sectors. For example, academics can utilize our findings to create specialized information privacy courses and theoretical or laboratory modules.

Article
An Enhanced Diagnosis of Monkeypox Disease Using Deep Learning and a Novel Attention Model Senet on Diversified Dataset
Multimodal Technol. Interact. 2023, 7(8), 75; https://doi.org/10.3390/mti7080075 - 27 Jul 2023
Abstract
With the widespread occurrence of Monkeypox and the increase in the weekly reported number of cases, it is observed that this outbreak continues to put human beings at risk. The early detection and reporting of this disease will help in monitoring and controlling its spread and hence support international coordination. For this purpose, the aim of this paper is to classify three diseases, viz. Monkeypox, Chickenpox and Measles, based on the provided image dataset using trained standalone DL models (InceptionV3, EfficientNet, VGG16) and a Squeeze-and-Excitation Network (SENet) attention model. The first step to implement this approach is to search, collect and aggregate (if required) verified existing dataset(s). To the best of our knowledge, this is the first paper which proposes the use of SENet-based attention models in the classification task of Monkeypox and also aims to aggregate two different datasets from distinct sources in order to improve the performance parameters. The previously unexplored SENet attention architecture is incorporated with the trunk branch of InceptionV3 (SENet+InceptionV3), EfficientNet (SENet+EfficientNet) and VGG16 (SENet+VGG16), and these architectures improve the accuracy of the Monkeypox classification task significantly. Comprehensive experiments on three datasets show that the proposed work achieves considerably high results with regard to accuracy, precision, recall and F1-score, improving the overall performance of classification. Thus, the proposed research work is advantageous for the enhanced diagnosis and classification of Monkeypox and can be utilized further by healthcare experts and researchers to confront its spread.
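For readers unfamiliar with squeeze-and-excitation attention, the following PyTorch sketch shows a generic SE block of the kind the abstract attaches to the trunk branches of InceptionV3, EfficientNet, and VGG16. The channel count and reduction ratio are illustrative defaults, not values taken from the paper.

```python
# Hedged sketch of a squeeze-and-excitation (SE) block: global average pooling
# (squeeze), a bottleneck MLP with sigmoid gating (excitation), and channel-wise
# rescaling of the input feature map. Sizes are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(                 # excitation: per-channel weights in [0, 1]
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # recalibrate the feature maps

# Example: recalibrate a feature map from a CNN trunk branch.
features = torch.randn(4, 256, 14, 14)
print(SEBlock(256)(features).shape)              # torch.Size([4, 256, 14, 14])
```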

Article
The Impact of Mobile Learning on Students’ Attitudes towards Learning in an Educational Technology Course
Multimodal Technol. Interact. 2023, 7(7), 74; https://doi.org/10.3390/mti7070074 - 20 Jul 2023
Abstract
As technology has explosively and globally revolutionized the teaching and learning processes at educational institutions, enormous and innovative technological developments, along with their tools and applications, have recently invaded the education system. Mobile learning (m-learning) employs wireless technologies for thinking, communicating, learning, and sharing to disseminate and exchange knowledge. Consequently, assessing the learning attitudes of students toward mobile learning is crucial, as learning attitudes impact their motivation, performance, and beliefs about mobile learning. However, mobile learning seems under-researched and may require additional efforts from researchers, especially in the context of the Middle East. Hence, this study’s contribution is enhancing our knowledge about students’ attitudes towards mobile-based learning. Therefore, the study goal was to investigate m-learning’s effect on the learning attitudes among technology education students. An explanatory sequential mixed approach was utilized to examine the attitudes of 50 students who took an educational technology class. A quasi-experiment was conducted and a phenomenological approach was adopted. Data from the experimental group and the control group were gathered. Focus group discussions with three groups and 25 semi-structured interviews were performed with students who experienced m-learning in their course. ANCOVA was conducted and revealed the impact of m-learning on the attitudes and their components. An inductive and deductive content analysis was conducted. Eleven subthemes emerged from three main themes; these included personalized learning, visualization of learning motivation, less learning frustration, enhanced participation, learning on familiar devices, and social interaction. The researchers recommended that higher education institutions adhere to a set of guiding principles when creating m-learning policies. Additionally, they should customize the m-learning environment with higher levels of interactivity to meet students’ needs and learning styles to improve their attitudes towards m-learning.
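As an aside for readers unfamiliar with the analysis named in the abstract, the sketch below shows how an ANCOVA of post-test attitude scores by group, controlling for pre-test scores, could be run with statsmodels. The data frame is entirely hypothetical and is not the study's data.

```python
# Hedged sketch of an ANCOVA: post-test attitude scores modelled by group
# (experimental vs. control) with pre-test scores as a covariate.
# The numbers are made up for illustration only.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group": ["experimental"] * 4 + ["control"] * 4,
    "pre":  [3.1, 2.8, 3.4, 3.0, 3.0, 2.9, 3.2, 3.1],
    "post": [4.2, 4.0, 4.5, 4.1, 3.1, 3.0, 3.3, 3.2],
})

model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(anova_lm(model, typ=2))   # F-test for the group effect, adjusted for pre-test
```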

Review
Encoding Variables, Evaluation Criteria, and Evaluation Methods for Data Physicalisations: A Review
Multimodal Technol. Interact. 2023, 7(7), 73; https://doi.org/10.3390/mti7070073 - 18 Jul 2023
Abstract
Data physicalisations, or physical visualisations, represent data physically, using variable properties of physical media. As an emerging area, data physicalisation research needs conceptual foundations to support thinking about, designing, and evaluating new physical representations of data. Yet, it remains unclear at the moment (i) what encoding variables are at the designer’s disposal during the creation of physicalisations, (ii) what evaluation criteria could be useful, and (iii) what methods can be used to evaluate physicalisations. This article addresses these three questions through a narrative review and a systematic review. The narrative review draws on the literature from Information Visualisation, HCI and Cartography to provide a holistic view of encoding variables for data. The systematic review looks closely into the evaluation criteria and methods that can be used to evaluate data physicalisations. Both reviews offer a conceptual framework for researchers and designers interested in designing and evaluating data physicalisations. The framework can be used as a common vocabulary to describe physicalisations and to identify design opportunities. We also propose a seven-stage model for designing and evaluating physical data representations. The model can be used to guide the design of physicalisations and to ideate along the stages identified. The evaluation criteria and methods extracted during this work can inform the assessment of existing and future data physicalisation artefacts.

Article
Experiencing Authenticity of the House Museums in Hybrid Environments
Multimodal Technol. Interact. 2023, 7(7), 72; https://doi.org/10.3390/mti7070072 - 18 Jul 2023
Abstract
The paper presents an existing scenario related to the advanced integration of digital technologies in the field of house museums, based on the critical literature and applied experimentation. House museums are a particular type of heritage site, in which the tension between the evocative capacity of the spaces and the requirements for preservation is highlighted. In this dimension, the use of a seamless approach amplifies the atmospheric component of the space, superimposing, through hybrid digital technologies, an interactive, context-driven layer in an open dialogue between digital and physical. The methodology draws on the one hand from the literature review, framing the macro themes of the research, and on the other from an overview of case studies, selected on the basis of the experiential value of the space. The analysis of the selected cases followed these criteria: the formal dimension of the technology; the narrative plot, as storytelling of the socio-cultural atmosphere or identification within the intimate story; and the involvement of visitors as individual immersion or collective rituality. The paper aims to outline a developmental panorama in which the integration of hybrid technologies points to a new seamless awareness within application scenarios, understood as continuous, work-in-progress challenges.
(This article belongs to the Special Issue Critical Reflections on Digital Humanities and Cultural Heritage)

Article
Would You Hold My Hand? Exploring External Observers’ Perception of Artificial Hands
Multimodal Technol. Interact. 2023, 7(7), 71; https://doi.org/10.3390/mti7070071 - 17 Jul 2023
Abstract
Recent technological advances have enabled the development of sophisticated prosthetic hands, which can help their users compensate for lost motor functions. While research and development has mostly addressed the functional requirements and needs of users of these prostheses, their broader societal perception (e.g., by external observers not affected by limb loss themselves) has not yet been thoroughly explored. To fill this gap, we investigated how the physical design of artificial hands influences their perception by external observers. First, we conducted an online study (n = 42) to explore the emotional response of observers toward three different types of artificial hands. Then, we conducted a lab study (n = 14) to examine the influence of design factors and depth of interaction on perceived trust and usability. Our findings indicate that some design factors directly impact the trust individuals place in the system’s capabilities. Furthermore, engaging in deeper physical interactions leads to a more profound understanding of the underlying technology. Thus, our study shows the crucial role of design features and interaction in shaping the emotions around, trust in, and perceived usability of artificial hands. These factors ultimately impact the overall perception of prosthetic systems and, hence, the acceptance of these technologies in society.
(This article belongs to the Special Issue Challenges in Human-Centered Robotics)
