Welcome to the May 2022 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week on Mondays, Wednesdays, and Fridays to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

Said Gerber, “I am truly honored to receive this award on behalf of my colleagues, students, and community whose creativity, curiosity, and commitment made this work possible.”
Elizabeth Gerber Receives SIGCHI Social Impact Award
Northwestern University McCormick School of Engineering
Brian Sandalow
April 19, 2022


Northwestern University's Elizabeth Gerber, a professor of mechanical engineering in the university’s McCormick School of Engineering and of communication studies in its School of Communication, is the recipient of the 2022 ACM Special Interest Group on Computer–Human Interaction (SIGCHI) Social Impact Award. Gerber was recognized for the design of "socio-technical systems to broaden participation in the discovery and building of solutions to expand the breadth of challenges addressed and increase the speed of discovery and quality of the solutions." She also is the faculty founder of Design for America (DFA), a student group that aims to solve community challenges using human-centered design; co-founder of the Segal Design Institute's Delta Lab; and co-director of Northwestern's Center for Human-Computer Interaction + Design.

Full Article

NASA flight surgeon Dr. Josef Schmid gives a space greeting on Oct. 8, 2021, as he is holoported onto the International Space Station.
3D Telemedicine Helps Keep Astronauts Healthy
NASA
April 8, 2022


The first use of holoportation for telemedicine on the International Space Station took place in October, when NASA flight surgeon Dr. Josef Schmid, Aexa Aerospace's Fernando De La Pena Llaca, and their teams “holoported” into a two-way conversation with European Space Agency astronaut Thomas Pesquet. Holoportation involves the reconstruction, compression, and live transmission of high-quality three-dimensional models of people in real time in any location. In combination with the Microsoft HoloLens Kinect camera, a personal computer, and custom Aexa software, the parties were able to interact as if they were in the same physical space. Said Schmid, "We'll use this for our private medical conferences, private psychiatric conferences, private family conferences, and to bring VIPs onto the space station to visit with astronauts."
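For readers curious how such a pipeline fits together, the sketch below shows the capture-compress-transmit loop in miniature. The capture and reconstruction steps are placeholders and the networking is a plain TCP stream; this is an illustrative assumption, not NASA's or Aexa's actual software.

```python
# A minimal, hypothetical sketch of a holoportation-style pipeline:
# reconstruct a 3D model, compress it, and transmit it live to a remote peer.
import pickle
import socket
import zlib

def capture_mesh():
    # Stand-in for depth-camera capture + 3D reconstruction of the person.
    return {"vertices": [(0.0, 0.0, 0.0)], "faces": []}

def stream_holoportation(host="127.0.0.1", port=9000, frames=30):
    with socket.create_connection((host, port)) as conn:
        for _ in range(frames):
            mesh = capture_mesh()
            payload = zlib.compress(pickle.dumps(mesh))    # compression
            conn.sendall(len(payload).to_bytes(4, "big"))  # length-prefixed framing
            conn.sendall(payload)                          # live transmission
```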

Full Article

Doctoral student David Goedicke sits behind the wheel of the Fiat virtual simulation vehicle inside the Tata Center at Cornell Tech.
Mixed-Reality Driving Simulator a Low-Cost Alternative
Cornell University Chronicle
Tom Fleischman
April 28, 2022


A mixed-reality driving simulator developed by researchers at Cornell University and the Toyota Research Institute could reduce the costs of vehicle system and interface testing. The researchers built upon VR-OOM, a virtual reality on-road driving simulation program developed at Cornell, by combining real-time, real-world video with virtual objects. Using the Varjo XR-1 Mixed Reality headset and the Unity simulation environment, the researchers superimposed virtual objects and events into the view of participants operating unmodified vehicles in low-speed test areas. Participants were tested under three conditions (no headset, headset with video pass-through only, and headset with video pass-through and virtual objects), and most completed all cockpit tasks successfully.
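The core mixed-reality trick, compositing virtual objects onto live video pass-through, can be sketched with a webcam and OpenCV. The toy example below is only an illustration of the idea, not the VR-OOM or Cornell code; it simply draws a fixed "virtual obstacle" over each frame.

```python
# Toy sketch: live video pass-through with a superimposed virtual object.
import cv2

def run_passthrough_with_overlay(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Hypothetical virtual object rendered at a fixed position in the view;
        # a real system would anchor it to the road scene in 3D.
        cv2.circle(frame, (w // 2, int(h * 0.6)), 40, (0, 0, 255), thickness=3)
        cv2.putText(frame, "virtual obstacle", (w // 2 - 90, int(h * 0.6) - 55),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        cv2.imshow("mixed-reality pass-through (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```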

Full Article
Innovative Israeli Technology Will Use Smart Sensors to Ensure Vaccine Safety
The Jerusalem Post (Israel)
Judy Siegel-Itzkovich
April 11, 2022


Researchers at Israel's Tel Aviv University (TAU) demonstrated that smart sensors can be used to provide an objective assessment of physiological changes in vaccine trial participants. The goal is to better assess vaccine safety without requiring participants to self-report side effects. The researchers placed sensors from Israel's Biobeat on 160 participants over age 18 to monitor how the second dose of Pfizer's COVID-19 vaccine affected them. The sensors monitored 13 variables, including heart and breathing rates, blood oxygen saturation, temperature, and blood pressure, from one day before vaccination to three days after. Even in participants reporting no reaction, the researchers observed changes in almost all objective measures after vaccination. They also determined that side effects increased during the first 48 hours after vaccination before returning to pre-vaccination levels.
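The comparison described above, each participant's pre-vaccination baseline against post-dose readings for every monitored variable, might look roughly like the following sketch. The file name and column layout are assumptions for illustration, not the TAU team's analysis code.

```python
# Illustrative sketch: compare pre-vaccination baselines with post-dose readings.
import pandas as pd

# Assumed long-format table: one row per participant, variable, and reading.
# Columns: participant_id, variable (e.g., "heart_rate"), hours_from_dose, value
readings = pd.read_csv("biobeat_readings.csv")  # hypothetical file name

baseline = (readings[readings.hours_from_dose < 0]
            .groupby(["participant_id", "variable"])["value"].mean()
            .rename("baseline"))
post48h = (readings[(readings.hours_from_dose >= 0) & (readings.hours_from_dose <= 48)]
           .groupby(["participant_id", "variable"])["value"].mean()
           .rename("post_48h"))

deltas = pd.concat([baseline, post48h], axis=1)
deltas["change"] = deltas["post_48h"] - deltas["baseline"]
print(deltas.groupby("variable")["change"].describe())  # per-variable change summary
```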

Full Article
Adding AI to Museum Exhibits Increases Learning, Keeps Kids Engaged Longer
Carnegie Mellon University Human-Computer Interaction Institute
April 5, 2022


Researchers from Carnegie Mellon University's Human-Computer Interaction Institute (HCII), the University of Pittsburgh, and the Georgia Institute of Technology leveraged artificial intelligence (AI) to develop interactive hands-on museum exhibits that include a virtual assistant that can interact with visitors. An intelligent earthquake exhibit built by the researchers featured a camera, touchscreen, large display, and a virtual gorilla named NoRilla who guides participants through challenges and asks them questions. In tests involving elementary school students at the Carnegie Science Center, pre- and post-test surveys showed that the children learned more from the intelligent exhibit and improved their building and engineering skills compared to the traditional exhibit. Said HCII's Ken Koedinger, "The kids not only get it, they also have more fun than with usual exhibits even though more thinking is required."

Full Article
Players with Disabilities Score in Video Game World
Yahoo! News
Daniel Hoffman
April 3, 2022


Accessibility is being pushed to the forefront of the gaming industry, with Microsoft estimating about 400 million players worldwide have disabilities. NetherRealm Studios, maker of "Mortal Kombat," incorporated suggestions from Carlos Vasquez, who is blind, by adding audio cues to help blind gamers identify objects with which they can interact. Ubisoft's David Tisserand said, "The approach we have is to try to make accessibility part of the DNA of everyone in the company." Among the games honored at the second annual Video Game Accessibility Awards in March was "Forza Horizon 5," a car racing game that supports American and British sign language. However, deaf players are calling for larger type in games’ subtitles, on-screen visual cues, and captions for conversations in online multiplayer games.

Full Article

The exoskeleton helps a test subject stand.
Exoskeleton Uses Machine Learning to Help Users Stand Up
RIKEN (Japan)
April 1, 2022


An exoskeleton robot developed by researchers at Japan's RIKEN combines lightweight carbon fiber with artificial intelligence (AI) to help people with mobility impairments. The exoskeleton, attached to the wearer's thighs and lower legs, uses muscle activity measurements to predict how they want to move. The researchers leveraged PU (positive and unlabeled) learning, a classification method that combines positively labeled data known to be correct with unlabeled data, to enable the AI to learn from ambiguous data. Using machine learning, the exoskeleton was able to recognize when study participants were trying to stand up and assist their movements. Said RIKEN's Jun Morimoto, "The key element of our research is that when controlling a robot to assist human movement, it is important to develop it based on the assumption that humans will behave in ways that are not in the learning data."
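PU learning can be illustrated with the classic two-step recipe: train a classifier to separate labeled positives from unlabeled examples, then rescale its scores. The sketch below uses scikit-learn and assumed muscle-activity feature windows; it is a generic textbook illustration, not RIKEN's training code.

```python
# Simplified PU (positive and unlabeled) learning sketch (Elkan-Noto style).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_pu_classifier(X_pos, X_unlabeled):
    """X_pos: muscle-activity windows known to precede standing (positives).
    X_unlabeled: windows with no label (may or may not precede standing)."""
    X = np.vstack([X_pos, X_unlabeled])
    s = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlabeled))])  # "labeled" flag

    X_tr, X_hold, s_tr, s_hold = train_test_split(X, s, test_size=0.2, stratify=s)
    g = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)

    # Estimate c = P(labeled | positive) from held-out known positives,
    # then rescale so the model approximates P(stand-up intent | x).
    c = g.predict_proba(X_hold[s_hold == 1])[:, 1].mean()

    def predict_intent_proba(X_new):
        return np.clip(g.predict_proba(X_new)[:, 1] / c, 0.0, 1.0)

    return predict_intent_proba
```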

Full Article
Mind-Controlled Prosthetic Hands Grasp New Feats
IEEE Spectrum
Michelle Hampson
April 6, 2022


Researchers at the University of Minnesota and the University of Texas Southwestern Medical Center have combined a peripheral nerve interface with artificial intelligence (AI) to allow users to control an artificial limb by thinking of the movement they want to make. The interface, which involves implants within the nerves at the end of the amputated limb, detects signals from the user's brain. These signals are decoded by an AI algorithm, and the command is sent to the artificial limb. In tests of users controlling a prosthetic hand via natural thought, the AI decoder interpreted nerve signals with 97% to 98% accuracy. Minnesota's Zhi Yang said, "It is the only technology today that allows amputees to control individual finger movements."
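At its core, the decoder is a classifier from windows of nerve-signal features to intended movements. The tiny sketch below, using random stand-in data rather than real recordings, only shows the shape of that problem; the published system uses implanted electrodes and a far more sophisticated neural-network decoder.

```python
# Illustrative sketch only: decode nerve-signal feature windows into finger movements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assumed layout: each row is a feature vector from one window of nerve recordings;
# labels are intended movements (e.g., which finger to flex).
X = rng.normal(size=(1000, 64))      # stand-in features
y = rng.integers(0, 5, size=1000)    # stand-in labels: 5 finger movements

decoder = RandomForestClassifier(n_estimators=200)
scores = cross_val_score(decoder, X, y, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2%}")
# Random features decode at chance; the reported 97-98% comes from real nerve data.
```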

Full Article
System Helps Motor-Impaired Individuals Type More Quickly, Accurately
MIT News
Adam Zewe
April 5, 2022


Researchers at the Massachusetts Institute of Technology and Michigan Technological University have developed a system that helps severely motor-impaired persons type faster and more accurately. The Nomon interface incorporates probabilistic reasoning to determine how users select keys for typing, adjusting itself to enhance their speed and accuracy. Nomon places a small analog clock next to every option the user can choose onscreen; the user selects an option by clicking their switch when that clock's hand passes a red "noon" line. After each click, the system adjusts the clocks' phases to separate the most likely next targets, and the user keeps clicking until their target is chosen. In a keyboard arrangement, Nomon's machine learning algorithms predict the next word based on the previous words, and the next letter after each selection.
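The probabilistic reasoning can be pictured as a Bayesian update over the on-screen options based on click timing: the closer a clock's hand was to noon at the moment of the click, the more likely that option was the target. The sketch below is a simplified, hypothetical model of that idea, not the Nomon implementation.

```python
# Simplified Bayesian update for a Nomon-style clock interface.
import numpy as np

def update_posterior(prior, phases, click_time, period=2.0, sigma=0.15):
    """prior: current probability per option; phases: each clock's offset (s);
    click_time: time of the click (s); sigma: assumed click-timing noise (s)."""
    # Offset of each clock's hand from "noon" at the moment of the click.
    offset = (click_time - phases) % period
    offset = np.minimum(offset, period - offset)       # wrap-around distance
    likelihood = np.exp(-0.5 * (offset / sigma) ** 2)  # Gaussian timing model
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Example: 6 options with evenly spread phases; the user aims for option 2.
prior = np.full(6, 1 / 6)
phases = np.linspace(0.0, 2.0, 6, endpoint=False)
posterior = update_posterior(prior, phases, click_time=phases[2] + 0.05)
print(posterior.round(3))  # option 2 becomes the most probable target
```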

Full Article

A robotic excavator follows the motions of a remote user to execute digging.
Controlling Robotic Excavator Like Playing Videogame
Popular Science
Shi En Kim
April 11, 2022


Reuben Brewer at nonprofit research institute SRI International led a team of scientists to develop a smart excavator that can be operated remotely using augmented reality. Operators can run the digger from anywhere in the world via what Brewer calls a robotification kit. The kit can be mated to any existing manual excavator; the researchers motorized the digger's levers and pedals and synchronized them with a handheld controller connected to the Internet. Six cameras mounted on the excavator's hood capture a 360-degree view, which an Oculus headset streams to the operator. Software tracks the controller's position in real time and synchronizes its movements with the excavator arm.
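Conceptually, the teleoperation loop streams the tracked controller pose over the network and turns it into actuator commands on the machine. The sketch below is a hypothetical illustration; the message format, the pose-to-lever mapping, and the function names are assumptions, not SRI's kit.

```python
# Hypothetical teleoperation loop: controller pose -> lever/pedal commands over UDP.
import json
import socket
import time

def read_controller_pose():
    # Stand-in for the tracked 6-DoF pose of the handheld controller.
    return {"x": 0.0, "y": 0.0, "z": 0.0, "roll": 0.0, "pitch": 0.0, "yaw": 0.0}

def pose_to_lever_commands(pose):
    # Toy mapping: boom follows pitch, swing follows yaw, bucket follows roll.
    # A real kit would calibrate and rate-limit these mappings for safety.
    return {"boom": pose["pitch"], "swing": pose["yaw"], "bucket": pose["roll"]}

def teleoperate(excavator_host, port=5005, rate_hz=30):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        cmd = pose_to_lever_commands(read_controller_pose())
        sock.sendto(json.dumps(cmd).encode(), (excavator_host, port))
        time.sleep(1.0 / rate_hz)  # real-time synchronization loop
```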

Full Article

DALL-E generated these images by following a command for “a teapot in the shape of an avocado.”
Meet DALL-E, the AI That Draws Anything at Your Command
The New York Times
Cade Metz
April 6, 2022


Researchers at artificial intelligence (AI) research laboratory OpenAI have created DALL-E, a neural network that produces digital images in response to typed commands. DALL-E searches for patterns as it analyzes millions of digital images and text captions that describe what each image portrays, learning to identify connections between images and words. When someone describes an image, DALL-E produces a set of key features that this image might depict, then a diffusion model creates the image and generates the pixels needed to add these features. The AI can manipulate language and images, and in some cases it can grasp how they relate to each other.
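In drastically simplified form, a text-conditioned diffusion model starts from pure noise and repeatedly denoises while conditioning on the text. The stand-in functions below only gesture at that loop; they are illustrative placeholders and in no way OpenAI's model.

```python
# Highly simplified sketch of text-conditioned diffusion sampling.
import numpy as np

def embed_text(prompt: str) -> np.ndarray:
    # Stand-in for a learned text encoder mapping a caption to features.
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.normal(size=128)

def denoise_step(image, text_features, t):
    # Stand-in for a trained denoiser: predict noise and remove a bit of it,
    # guided (in a real model) by the text features and timestep t.
    predicted_noise = 0.1 * image
    return image - predicted_noise

def generate_image(prompt, size=(64, 64, 3), steps=50):
    text_features = embed_text(prompt)
    image = np.random.default_rng(0).normal(size=size)  # start from pure noise
    for t in reversed(range(steps)):                     # iterative refinement
        image = denoise_step(image, text_features, t)
    return image

img = generate_image("a teapot in the shape of an avocado")
```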

Full Article
Clinical Trial Shows Benefits of Automated VR Treatment for Severe Psychological Problems
University of Oxford (U.K.)
April 6, 2022


A major clinical study led by researchers at the U.K.'s University of Oxford demonstrated the effectiveness of the gameChange automated virtual reality (VR) program in treating severe agoraphobia. Oxford's Daniel Freeman said gameChange features in-built therapy that can be supervised by a range of staff and delivered in settings that include patients' homes. In automated VR, a virtual coach helps patients try out psychological interventions. Oxford's Felicity Waite said gameChange "helps patients learn by doing, practicing real-life activities such as buying a coffee or getting on a bus, which helps them develop the confidence to take on real-world challenges."

Full Article

Sports offerings via live streaming promote activity and well-being during pandemic lockdowns.
Digital Training at Home to Beat Lockdown Frustration
Goethe University (Germany)
April 6, 2022


An international team of researchers led by Germany's Goethe University Frankfurt found home interactive exercise programs can mitigate lockdown burdens and improve well-being. Half of a group of 763 healthy subjects from nine countries trained for four weeks on a live-streamed program that let them choose from a number of daily workouts; the other half of the group served as controls. After four weeks, physical activity was up to 65% higher on average in the online group than in the control cohort; motivation to engage in sports, psychological well-being, and sleep also improved, while anxiety levels declined.

Full Article
Calendar of Events

CI ’22: Collective Intelligence
Jun. 6 - 9
Santa Fe, NM

ETRA ’22: 2022 Symposium on Eye Tracking Research and Applications
Jun. 8 - 11
Seattle, WA

DIS ’22: Designing Interactive Systems
Jun. 13 - 17
Virtual

C&C ’22: Creativity and Cognition
Jun. 20 - 23
Venice, Italy

EICS ’22: ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Jun. 21 - 24
Sophia Antipolis, France

IMX ’22: ACM International Conference on Interactive Media Experiences
Jun. 22 - 24
Aveiro, Portugal

AutomotiveUI ’22: 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
Sep. 9 - 14
Seoul, South Korea

RecSys ’22: 16th ACM Conference on Recommender Systems
Sep. 18 - 23
Seattle, WA

MobileHCI ’22: 24th International Conference on Mobile Human-Computer Interaction
Sep. 23 - Oct. 1
Vancouver, Canada

UbiComp ’22: The 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Oct. 8 - 13
Cancun, Mexico

UIST ’22: The 35th Annual ACM Symposium on User Interface Software and Technology
Oct. 16 - 19
Bend, OR

ICMI ’22: International Conference on Multimodal Interaction
Nov. 7 - 11
Bangalore, India

CSCW ’22: Computer Supported Cooperative Work
Nov. 12 - 16
Taipei, Taiwan


About SIGCHI

SIGCHI is the premier international society for professionals, academics and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, web sites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through Local SIGCHI chapters. SIGCHI is also involved in public policy.



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

Association for Computing Machinery
1601 Broadway, 10th Floor
New York, NY 10019-7434
Phone: 1-800-342-6626
(U.S./Canada)

To submit feedback about ACM TechNews, contact: [email protected]

Unsubscribe
