Welcome to the July 2020 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in human-computer interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
U-M Research Reveals Racism Challenges in Human-Computer Interaction
University of Michigan News
June 11, 2020


A study by researchers at the University of Michigan (U-M), Northwestern University, and Carnegie Mellon University probed the application of critical race theory to human-computer interaction (HCI), using personal narratives to delve into the experiences of nonwhite participants in the field. One study author, U-M's Ihudiya Finda Ogbonnaya-Ogburu, cited the example of filter bubbles, in which online users encounter information and opinions that reinforce their own beliefs based on their search history, location, and click behavior. This can blind them to acts of racism witnessed by millions of Americans. Ogbonnaya-Ogburu recommended that the HCI field value the voices of marginalized members in order to combat racism. She said, "The HCI community can start by taking the time to listen, act, and regularly evaluate their anti-racist initiatives."

Full Article

Children With ADHD Can Now Be Prescribed a Video Game
CNN
Naomi Thomas; Amy Woodyatt
June 16, 2020


The U.S. Food and Drug Administration (FDA) has approved the first video game-based treatment for attention deficit hyperactivity disorder (ADHD). It also is the first game-based treatment approved for any condition. The prescription-only EndeavorRx is geared toward eight- to 12-year-olds with certain types of ADHD and can be downloaded as a mobile app. FDA's Dr. Jeffrey Shuren called the game "an important example of the growing field of digital therapy and digital therapeutics." The game's creator, Akili, said children should interact with the game for 30 minutes per day, five days a week, during a one-month treatment cycle. The game, which allows children to steer an avatar through a course dotted with obstacles and collect targets to earn rewards, was shown to improve attention function.

Full Article

Wearable-Tech Glove Translates Sign Language Into Speech in Real Time
UCLA Samueli Newsroom
June 29, 2020


Bioengineers at the University of California, Los Angeles and China's Chongqing University have designed a glove-like device that translates American Sign Language (ASL) into English in real time through a smartphone application. The system features gloves with thin, stretchable sensors fashioned from conductive yarns that run the length of each finger, sensing hand motions and finger placements that represent individual letters, numbers, words, and phrases. The device converts finger movements into electrical signals transmitted to a dollar-coin-sized circuit board on the user's wrist. The board wirelessly broadcasts the signals to a smartphone that translates them into spoken words via a custom machine learning algorithm, at the rate of about a word per second. The researchers also placed adhesive sensors on testers' faces to record expressions used in ASL.

Full Article
Bristol Innovation Challenges Regular Touchscreens with Spray-On Technique
University of Bristol News
June 24, 2020


Researchers at the University of Bristol in the U.K. and the Massachusetts Institute of Technology have developed an interactive sprayable display. The ProtoSpray technique employs conductive plastic and electroluminescent paint in conjunction with three-dimensional (3D) printing, enabling anyone to produce touch-sensitive displays in any shape. Said Bristol's Ollie Hanton, "We have liberated displays from their 2D rectangular casings by developing a process so people can build interactive objects of any shape. The process is very accessible: it allows end-users to create objects with conductive plastic and electroluminescent paint even if they don’t have expertise in these materials."

Full Article
Cuddling Robot Baby Seal Paro Proven to Make Life Less Painful
IEEE Spectrum
Evan Ackerman
June 30, 2020


A study by researchers at Israel's Ben-Gurion University of the Negev assessed how Paro, the robot baby harp seal, can alleviate the perception of pain. The team subjected 83 participants to varying levels of pain, with some allowed to see or touch the cuddly, wriggling robot, originally designed for hospitals and nursing homes. The study found that touching the soft Paro robot reduced pain perception most noticeably when the subject felt "strong" pain; on a scale of one to 10, the robot reduced the perceived amount of pain from about five to about three. The researchers suggested the robot is effective because "touching Paro enabled participants to form an emotional connection with it." Meanwhile, levels of the hormone oxytocin, which in higher concentrations has been shown to lower pain, decreased in people who got to touch Paro, suggesting the robot reduced stress in those who formed a social bond with it.

Full Article
AI Reduces 'Communication Gap' for Nonverbal People by as Much as Half
University of Cambridge (UK)
June 15, 2020


Researchers at the universities of Cambridge and Dundee in the U.K. have created a context-aware artificial intelligence (AI) technique that narrows the communication gap for nonverbal people by removing 50% to 96% of the keystrokes a person must type to communicate. The system uses AI to help suggest sentences previously typed by the user that are of the greatest relevance to a specific situation. As the user types, algorithms automatically retrieve the most relevant previous sentences based on the text and the context of the conversation the person is engaged in, using context "clues" like the user's location, time of day, or identity of the user's speaking partner. The system identifies the speaking partner via a computer-vision algorithm trained to recognize human faces from a front-mounted camera. Said Cambridge’s Per Ola Kristensson, “We’ve shown it’s possible to reduce the opportunity cost of not doing innovative research with AI-infused user interfaces that challenge traditional user interface design mantra and processes.”
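The release does not spell out the retrieval algorithm itself; as a rough, hypothetical sketch of the idea, previously typed sentences can be ranked against the current typed prefix plus context clues using a simple bag-of-words similarity. All names, data, and the similarity measure below are illustrative, not the researchers' actual model.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(history, typed_prefix, context_tags, top_k=3):
    """Rank previously typed sentences by similarity to the typed
    prefix combined with context clues (e.g. location, partner)."""
    query = Counter(typed_prefix.lower().split()) + Counter(context_tags)
    return sorted(history,
                  key=lambda s: cosine(Counter(s.lower().split()), query),
                  reverse=True)[:top_k]

history = [
    "I would like a coffee with milk please",
    "Can you open the window",
    "The meeting is at three o'clock",
]
print(suggest(history, "I would", ["cafe"], top_k=1))
```

A real system of this kind would use far richer context models, but the keystroke saving comes from the same shape of interaction: the user accepts a retrieved sentence instead of typing it out.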

Full Article
Platform Empowers Users to Control Their Personal Data
Cornell Chronicle
Melanie Lefkowitz
June 11, 2020


Cornell University researchers have developed and tested a platform that enables users to restrict the types of personal data they release, and to whom. Ancile was tested at Cornell's Ithaca and Cornell Tech campuses, using four applications that accessed participants' location data for practical purposes, but with limits over how precisely or widely that data could be shared. Ancile lets users specify privacy guidelines that the system incorporates reactively rather than passively, since the manner in which information is used changes over time. Cornell's Nate Foster said Ancile's development did not incorporate judgments by researchers as to whether certain data uses were positive or negative. Said Foster, "What counts as a useful system is a very subjective decision. This is a happy medium where you can still take advantage of data, but in a way that is not infringing [on] the privacy of individuals."
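The article does not reproduce Ancile's policy language; the following hypothetical sketch only illustrates the general idea of a policy gate that limits how precisely location data is shared with each application. The policy table, function names, and precision values are invented for illustration and are not Ancile's API.

```python
# Per-application privacy policies (illustrative): how many decimal
# places of latitude/longitude each app may receive.
POLICIES = {
    "campus_shuttle": {"max_precision": 3},  # roughly a 110 m grid
    "weather_widget": {"max_precision": 1},  # roughly an 11 km grid
}

def release_location(app: str, lat: float, lon: float):
    """Round coordinates to the coarsest precision the user's policy
    allows before handing them to the requesting application."""
    policy = POLICIES.get(app)
    if policy is None:
        raise PermissionError(f"no policy grants {app} location access")
    p = policy["max_precision"]
    return round(lat, p), round(lon, p)

print(release_location("weather_widget", 42.44780, -76.48514))  # (42.4, -76.5)
```

Because policies are data rather than code baked into each app, they can be updated as the user's comfort with a given data use changes over time, matching the article's point that data use is a moving target.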

Full Article
Dartmouth Research Brings Tech Tutorials to People with Visual Impairments
Dartmouth News
June 22, 2020


A study by Dartmouth College researchers detailed a three-dimensionally-printed interactive tool designed to help people with visual impairments learn computer circuit design, using audio feedback in response to being touched. TangibleCircuits employs an inexpensive practice circuit board designed to widen the inclusivity and accessibility of maker spaces and engineering classrooms by enabling instructors to develop affordable, portable, and easy-to-use tutorials. Dartmouth's Xing-Dong Yang said, "Through innovations like this, we hope that visually impaired people will no longer miss out on education opportunities and high-tech careers."

Full Article

Brainsourcing Automatically Identifies Human Preferences
University of Helsinki
Aino Pekkarinen
June 17, 2020


Researchers at the University of Helsinki in Finland used artificial intelligence (AI) to analyze opinions and infer conclusions from electroencephalogram (EEG) readings of groups of people, a technique called brainsourcing. The investigators displayed 30 images of human faces to volunteers, who were directed to label the faces in their mind based on what was depicted in the images, without any additional information inputted by mouse or keyboard, as an EEG recorded brain activity. The AI algorithm learned to recognize and interpret images relevant to the task from the EEG measurements. This method can be applied to classify images or recommend content, and the researchers believe lightweight and user-friendly EEG equipment is required to achieve this.
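The aggregation step that makes brainsourcing "crowd"-like can be sketched very simply: each viewer's EEG yields a (noisy) predicted label for an image, and the group's labels are pooled. This is a hypothetical illustration of that pooling idea using a majority vote; the function name and sample data are invented, and the actual Helsinki classifier is more sophisticated.

```python
from collections import Counter

def brainsource(predictions_per_user):
    """Combine each user's EEG-derived label for an image into a
    single crowd label by majority vote."""
    votes = Counter(predictions_per_user)
    return votes.most_common(1)[0][0]

# three viewers' (hypothetical) EEG-derived labels for one face image
print(brainsource(["blond", "blond", "dark"]))  # 'blond'
```

Pooling across people is what makes the approach robust: any single EEG reading is noisy, but errors tend to cancel across a crowd.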

Full Article
Deep Learning E-Skin Can Decode Complex Human Motion
Interesting Engineering
Chris Young
June 19, 2020


At South Korea's Korea Advanced Institute of Science and Technology (KAIST), researchers have developed a deep learning electronic skin sensor that records human motion from a distance. When placed on a person's wrist, the single strain sensor can decode complex five-finger motions in real time with a virtual three-dimensional (3D) hand that mirrors the original movements. The deep neural network also uses rapid situation learning (RSL) to guarantee stable operation, irrespective of the sensor's position on the surface of the skin. The system extracts signals corresponding to multiple finger motions by inducing cracks in metal nanoparticle films via laser technology; RSL facilitates whole-body tracking, enabling indirect remote measurement of human movements, which the researchers said is applicable for advanced virtual reality and augmented reality systems.

Full Article

Smart Textiles Powered by Soft Transmission Lines
EPFL (Switzerland)
Valerie Geneux
June 1, 2020


Researchers at the Swiss Federal Institute of Technology, Lausanne (EPFL) School of Engineering have developed smart textiles that can measure bodily movement and other indicators using soft transmission lines. The textiles incorporate a soft, fiber-shaped sensor that can detect different kinds of fabric deformation. The transmission fibers were created by applying optical fiber fabrication to materials such as elastomers or liquid metals that function as conductors. The system measures the time between a signal's transmission and reception, using that data to ascertain the exact location, type, and intensity of deformation to the fabric. EPFL's Fabien Sorin said, "The trick was to create transmission lines made entirely of soft materials, using a simple method that can be scaled up."
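The timing-based localization the article describes is, in spirit, a reflectometry calculation: a pulse sent down the soft fiber partially reflects at a deformation, and the round-trip delay gives the deformation's distance from the connector. The arithmetic below is a generic worked example under assumed numbers, not EPFL's implementation.

```python
def deformation_position(delay_s: float, velocity_m_per_s: float) -> float:
    """Distance to the deformation along the fiber: the pulse travels
    there and back, so the one-way distance is velocity * delay / 2."""
    return velocity_m_per_s * delay_s / 2

# e.g. a signal propagating at an assumed 2e8 m/s, echo returning
# after a 10-nanosecond round trip:
print(deformation_position(10e-9, 2e8))  # 1.0 metre down the fiber
```

Signal intensity and shape at the receiver would then carry the remaining information the article mentions: the type and intensity of the deformation.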

Full Article

What Jumps Out in a Photo Changes the Longer We Look
MIT News
Kim Martineau
June 17, 2020


Massachusetts Institute of Technology (MIT) researchers demonstrated that humans' attention shifts distinctively the longer they stare at an image, and these patterns are reproducible in artificial intelligence (AI) models. The team used a user interface to display photos to participants for three durations and gridded maps to indicate areas where participants last looked before the images disappeared. The researchers determined that gaze shifted over time, from faces or visually dominant objects to action-oriented features, then back to the main subject or to suggestive details. The team fed this data to a deep learning model to predict focal points of images it had never seen before, at different durations; this model outperformed state-of-the-art AI at predicting saliency across durations. MIT's Aude Oliva said, "The more we learn about how humans see and understand the world, the more we can build these insights into our AI tools to make them more useful."

Full Article
Calendar of Events

DIS '20: ACM Designing Interactive Systems 2020
July 6-10
Eindhoven, The Netherlands

UMAP '20: 28th ACM Conference on User Modeling, Adaptation and Personalization
July 14-17
Genoa, Italy

UbiComp '20: 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Sep. 12-16
Cancun, Mexico

AutomotiveUI '20: 12th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications
Sep. 20-22
Washington, DC

RecSys '20: 14th ACM Conference on Recommender Systems
Sep. 22-26
Rio de Janeiro, Brazil

MobileHCI '20: 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services
Oct. 5-8
Oldenburg, Germany

CSCW '20: 23rd ACM Conference on Computer-Supported Cooperative Work and Social Computing
Oct. 17-21
Minneapolis, MN

UIST '20: 33rd ACM User Interface Software and Technology Symposium
Oct. 20-23
Minneapolis, MN

ICMI '20: 22nd ACM International Conference on Multimodal Interaction
Oct. 25-29
Utrecht, The Netherlands

SUI '20: 8th ACM Symposium on Spatial User Interaction
Oct. 31 – Nov. 1
Ottawa, Canada

VRST '20: 25th ACM Symposium on Virtual Reality Software and Technology
Nov. 1-4
Ottawa, Canada

CHIPLAY '20: The Annual Symposium on Computer-Human Interaction in Play
Nov. 1-4
Ottawa, Canada

ISS '20: ACM International Conference on Interactive Surfaces and Spaces
Nov. 8-11
Lisbon, Portugal


About SIGCHI

SIGCHI is the premier international society for professionals, academics and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, web sites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through Local SIGCHI chapters. SIGCHI is also involved in public policy.



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

Association for Computing Machinery
1601 Broadway, 10th Floor
New York, NY 10019-7434
Phone: 1-800-342-6626
(U.S./Canada)
To submit feedback about ACM TechNews, contact: [email protected]

Unsubscribe

About ACM | Contact us | Boards & Committees | Press Room | Membership | Privacy Policy | Code of Ethics | System Availability | Copyright © 2024, ACM, Inc.