Welcome to the November 2020 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

AI teaching assistants can help ease a teacher’s workload.

AI Teachers Must Be Effective, Communicate Well to Be Accepted, Study Finds
UCF Today
Robert Wells
October 30, 2020


A study by University of Central Florida (UCF) researchers found that artificial intelligence (AI)-based teaching assistants must communicate effectively with students to be accepted by them. UCF's Jihyun Kim said, "To use machine teachers effectively, we need to understand students' perceptions toward machine teachers, their learning experiences with them and more." The team asked students to read a news article about an AI teaching assistant used in higher education, then polled their perceptions of the technology. The survey determined that students were most likely to accept an assistant that was easy to communicate with and useful.

Full Article
Users Don't Understand Computer Explanations for Image Labeling Errors
Penn State News
Jessica Hallman
October 26, 2020


Researchers at Pennsylvania State University (Penn State) investigated whether access to a saliency map could help users understand why image classification errors occur when uploading images to online platforms. Human participants were shown images and their correct labels, and half also were shown five saliency maps generated by different algorithms for each image. The participants were then asked to identify the incorrect label the computer had predicted. The researchers found that displaying the saliency maps increased average guessing accuracy by about 10%. Said Penn State's Ting-Hao Huang, "It doesn't always help. Actually, it might even hurt user experience or hurt users' ability to reason about your system's errors."

Full Article

Engineers wearing Hyundai Chairless Exoskeletons work on a Hyundai Medical Exoskeleton wearable robot.

Exoskeleton Suits Turn Car Factory Workers Into Human Robots
Bloomberg
Kyunghee Park; Chunying Zhang
October 16, 2020


Automakers including Hyundai Motor, Ford Motor, and General Motors are using high-tech exoskeleton suits to supplement the capabilities of factory workers. The technology eases fatigue and helps prevent injury, and is especially useful for tasks that cannot be automated by robots. China’s ULS Robotics is developing three battery-powered wearable exoskeletons to help workers hold and lift heavy equipment; one unit is worn on the upper body, the second goes around the waist, and the third focuses on the lower extremities. Two of the exoskeletons enable wearers to lift an extra 20 kilograms (44 pounds). ULS investor Huang Mingming said the exoskeletons can address a global decline in experienced factory workers needed for product assembly.

Full Article
Gesture Recognition Technology Shrinks to Micro Size
Aalto University
October 27, 2020


Researchers at Finland's Aalto University and HitSeed have developed gesture recognition technology that can run on fingertip-sized microcontrollers embedded in smart gloves. Said Aalto's Yu Xiao, "Usually, sensor data collected by gloves needs to be sent over a network to a computer that processes it and sends the information back. The deep learning-based gesture recognition algorithms we have developed are so lightweight that they can do the same locally in an embedded system like smart gloves." The researchers used HitSeed's fingertip-sized Sensor Computer to run convolutional neural networks on smart gloves. Said HitSeed's Pertti Kasanen, "We can apply the developed technology for several measurement types, like keeping separate counts for multiple gestures, for measuring motion improvements in physiotherapy, or for detecting the state of multiple machines running based on a vibration or sound spectrum."

Full Article

A subject wears eyeglass frames supplemented with sensors.

Electronic Design Tool Morphs Interactive Objects
MIT News
Rachel Gordon
October 22, 2020


Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) have engineered a three-dimensional (3D) design environment that allows users to change object shapes and electronic functions in one cohesive space, in order to add sensors to prototypes. MorphSensor automatically renders electronic designs as 3D models, then allows users to modify the geometry and manipulate active sensing parts. Once a design is finalized, the "morphed sensor" can be fabricated with an inkjet printer and conductive tape, to be affixed to the object. CSAIL's Junyi Zhu said, "MorphSensor fits into my long-term vision of something called 'rapid function prototyping,' with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users."

Full Article
Research Shows the Value of Multi-User VR Remote Psychotherapy for Eating Disorders
News-Medical
Emily Henderson
October 21, 2020


A study by the U.K.'s University of Kent, the Research center on Interactive Media and Smart systems and Emerging technologies (RISE Ltd.), and the University of Cyprus found that virtual reality (VR) was useful in remotely treating those with eating disorders. Research participants and therapists were fitted with head-mounted displays and interacted with each other through avatars tailored to users' appearances. Users were "teleported" to two virtual environment interventions for discussions, with participants eventually confronting their own images in a mirror. Maria Matsangidou of RISE said, “Multi-user virtual reality is an innovative medium for psychotherapeutic interventions that allows for the physical separation of therapist and patient, providing thus more 'comfortable' openness by the patients.”

Full Article

Waiting for the bus.

Want to Wait Less at the Bus Stop? Beware Real-Time Updates
Ohio State News
Jeff Grabmeier
October 13, 2020


A study by Ohio State University (OSU) researchers suggests smartphone applications that tell commuters when a bus will arrive do not reduce waiting times compared to simply using official route schedules. The investigators analyzed bus traffic from May 2018 to May 2019 on one route of the Central Ohio Transit Authority, and used the real-time data employed by publicly available apps to tell riders where buses are and their likely arrival times. Many transit apps' "greedy tactic" of having commuters time their arrival at a bus stop to when an app says the bus will arrive is the worst strategy, causing users to wait about 12.5 minutes, about three times longer than they would have if they had followed the schedule.

Full Article
ML Predicts How Long Museum Visitors Will Engage with Exhibits
NC State University News
Matt Shipman
October 13, 2020


Education and artificial intelligence researchers at North Carolina State University (NC State) have demonstrated a proof-of-concept machine learning (ML) model that predicts how long individual museum visitors will engage with a specific exhibit. The researchers monitored 85 museum visitors as they engaged with an interactive exhibit on environmental science, collecting data on their facial expressions, posture, where they looked on the exhibit's screen, and which screen sections they touched. Said NC State’s Jonathan Rowe, “The amount of time people spend engaging with an exhibit is used as a proxy for engagement and helps us assess the quality of learning experiences in a museum setting. It’s not like school – you can’t make visitors take a test.” NC State's Andrew Emerson said the findings could be used "to develop and implement adaptive exhibits that respond to user behavior in order to keep visitors engaged."

Full Article
Role-Playing Computer Game Helps Players Understand How Vaccines Work on a Global Scale
University of Oxford (U.K.)
October 8, 2020


Researchers at the University of Oxford's MRC Weatherall Institute of Molecular Medicine and Goldsmiths, University of London in the U.K. have created a computer game in which players must determine how to distribute limited doses of a vaccine to most effectively control a disease modeled on influenza. The Vaccination Game is based on mathematical models of how a virus spreads and the potential impact of a vaccine. Players must determine who to vaccinate in each of 99 cities across the globe with a vaccine available in limited doses per week. They receive a report after the campaign that details how many lives were saved by the vaccine. MRC's Hal Drakesmith said the game "illustrates how vaccines can work on a global scale, and shows that precisely how a vaccine is deployed across populations can be crucial to its effectiveness."

Full Article

Advances in robotic automation allow for more precise, intuitive movement.

Assistive Feeding: AI Improves Control of Robot Arms
Stanford Institute for Human-Centered Artificial Intelligence
Katharine Miller
October 20, 2020


Researchers at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) have developed a more intuitive and faster artificial intelligence (AI)-powered method for controlling assistive robotic arms. The controller incorporates two AI algorithms, with one facilitating two-dimensional joystick control without mode-switching, using contextual cues to determine whether a user is reaching for an object. As the arm approaches its destination, the second algorithm enables more precise movements, with control split between human and robot. HAI's Dorsa Sadigh said the two algorithms were trained together by incorporating a shared autonomy algorithm, adding that the research's ultimate goal is to use AI-based assistive robotics to improve the lives of people with disabilities.

Full Article
Calendar of Events

VRST '20: 25th ACM Symposium on Virtual Reality Software and Technology
Nov. 1-4
Ottawa, Canada

CHIPLAY '20: The Annual Symposium on Computer-Human Interaction in Play
Nov. 1-4
Ottawa, Canada

ISS '20: ACM International Conference on Interactive Surfaces and Spaces
Nov. 8-11
Lisbon, Portugal

TEI ’21: 15th International Conference on Tangible, Embedded, and Embodied Interaction
Feb. 14-17
Salzburg, Austria

HRI ’21: ACM/IEEE International Conference on Human-Robot Interaction
Mar. 8-11
Boulder, CO

IUI ’21: 26th International Conference on Intelligent User Interfaces
Apr. 13-17
College Station, TX

CHI ’21: ACM CHI Conference on Human Factors in Computing Systems
May 8-13
Yokohama, Japan

EICS ’21: ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Jun. 8-11
Eindhoven, Netherlands

C&C ’21: Creativity and Cognition
Jun. 21-24
Venice, Italy

MobileHCI ’21: 23rd International Conference on Human-Computer Interaction with Mobile Devices and Services
Sep. 27-30
Toulouse, France

RecSys ’21: 15th ACM Conference on Recommender Systems
Sep. 27-Oct. 1
Amsterdam, Netherlands

ICMI ’21: 23rd ACM International Conference on Multimodal Interaction
Oct. 18-22
Montreal, Canada

CSCW ’21: 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing
Nov. 3-7
Toronto, Canada


About SIGCHI

SIGCHI is the premier international society for professionals, academics, and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, websites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops, and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through local SIGCHI chapters. SIGCHI is also involved in public policy.



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

Association for Computing Machinery
1601 Broadway, 10th Floor
New York, NY 10019-7434
Phone: 1-800-342-6626
(U.S./Canada)

To submit feedback about ACM TechNews, contact: [email protected]

Unsubscribe

Copyright © ACM, Inc.