Welcome to the May 2023 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
A New Form of Human-Computer Interaction
ETH Zurich (Switzerland)
Anna Janka
April 26, 2023


The Language Model Query Language (LMQL), created by Luca Beurer-Kellner, Marc Fischer, and Martin Vechev at Switzerland's ETH Zurich, enables easier, safer, and more affordable interaction with large language models. The programming language and open-source platform combines the capabilities of natural and programming languages to facilitate a new form of human-computer interaction (HCI) by allowing users to communicate directly with and control computers. Vechev explained, "Decreasing the necessary exchanges with the language model also reduces the costs of interacting with the model, which can be quite expensive. Using LMQL increases the chances of getting the desired output." Beurer-Kellner said users can restrict their language model to a specifically designed framework via LMQL to better control the model's behavior.
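
For illustration, the minimal Python sketch below shows the general idea of constraint-guided decoding that LMQL builds on: tokens that would violate a declared constraint are masked out at each step, so a valid answer comes back from a single pass rather than repeated, costly retries. This is not LMQL syntax, and the model call and vocabulary are hypothetical stand-ins; see the LMQL documentation for the real query language.

    from typing import Callable, Dict

    def next_token_scores(prefix: str) -> Dict[str, float]:
        """Hypothetical language-model call: candidate next tokens with scores."""
        raise NotImplementedError

    def decode_with_constraint(prompt: str,
                               is_valid: Callable[[str], bool],
                               max_tokens: int = 20) -> str:
        """Greedy decoding that only ever extends the output in valid ways."""
        output = ""
        for _ in range(max_tokens):
            scores = next_token_scores(prompt + output)
            # Mask out any token whose addition would violate the constraint.
            allowed = {t: s for t, s in scores.items() if is_valid(output + t)}
            if not allowed:
                break
            output += max(allowed, key=allowed.get)
        return output

    # Example: restrict the answer to digits only (e.g., a year).
    # year = decode_with_constraint("The first CHI conference was held in ", str.isdigit)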

Full Article

Decoder Uses Brain Scans to Know What You Mean—Mostly
NPR
Jon Hamilton
May 1, 2023


University of Texas at Austin scientists combined functional magnetic resonance imaging (fMRI) scans and artificial intelligence to decode the meaning of what a person hears or imagines from their brain activity. Participants each spent up to 16 hours in an fMRI scanner while listening to podcast audio on headphones. A computer processed the scan data to match specific brain-activity patterns with certain streams of words, then attempted to reconstruct new audio from participants' brain activity. An early version of the language model behind ChatGPT helped the system assemble intelligible sentences, yielding a paraphrased version of what participants had heard.
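
As a heavily simplified illustration of the decode-by-comparison idea (a conceptual sketch, not the Austin team's code), candidate word sequences proposed by a language model can be scored by how well an encoding model's predicted brain response matches the observed scan, keeping the best-matching candidates at each step. Both model calls below are hypothetical stand-ins.

    import numpy as np

    def propose_continuations(text: str, k: int) -> list[str]:
        """Hypothetical language model: k plausible next words for the text."""
        raise NotImplementedError

    def predict_brain_response(text: str) -> np.ndarray:
        """Hypothetical encoding model: text -> predicted voxel-activity vector."""
        raise NotImplementedError

    def decode_step(candidates: list[str], observed: np.ndarray, beam: int = 5) -> list[str]:
        """Extend each candidate phrase and keep those whose predicted
        response correlates best with the observed fMRI data."""
        scored = []
        for text in candidates:
            for word in propose_continuations(text, k=3):
                extended = f"{text} {word}"
                score = np.corrcoef(predict_brain_response(extended), observed)[0, 1]
                scored.append((score, extended))
        scored.sort(reverse=True)
        return [text for _, text in scored[:beam]]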

Full Article
Reviving Office Chatter
Carnegie Mellon University School of Computer Science
Kayla Papakie
April 24, 2023


A Slack application developed by Carnegie Mellon University (CMU) researchers aims to spark casual conversations and create affinity groups among remote workers. Nooks lets users submit a topic of interest anonymously, permits other users to indicate their interest in that topic, and creates a private channel for the new affinity group. The app batches notifications to prevent disruption and keeps users from seeing who is already a member of a particular Nook. Said CMU's Shreya Bali, "Anyone interested can hop into a nook and break the ice without any preconceived notion of who is in the group. This helps to avoid social anxiety of, say, not knowing anyone in the Nook or feeling intimidated if you see it includes colleagues of a different team or higher level."
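
For a flavor of the Slack plumbing involved, the sketch below uses the slack_sdk Web API client to open a private channel for a matched interest group and invite its members; the function name, channel naming scheme, and matching logic are illustrative assumptions, not the CMU team's implementation.

    import os
    from slack_sdk import WebClient

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])  # bot token assumed to be set

    def open_nook(topic_slug: str, interested_user_ids: list[str]) -> str:
        """Create a private channel for an affinity group and invite its members."""
        # A private channel keeps the group contained; everyone who anonymously
        # expressed interest in the topic is invited in one step.
        resp = client.conversations_create(name=f"nook-{topic_slug}", is_private=True)
        channel_id = resp["channel"]["id"]
        client.conversations_invite(channel=channel_id, users=",".join(interested_user_ids))
        client.chat_postMessage(channel=channel_id, text=f"Welcome! Today's nook: {topic_slug}")
        return channel_id

    # Example call with hypothetical Slack user IDs:
    # open_nook("weekend-hiking", ["U01AAAAAA", "U02BBBBBB"])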

Full Article

Haptic System Creates Finger-Touch Sensations Hardware-Free
IEEE Spectrum
Evan Ackerman
April 22, 2023


A wearable haptic system developed by University of Chicago researchers can produce tactile sensations on the underside of the user's fingers and palm without placing any hardware there, letting users feel both real and virtual objects. With an electrode on top of the finger and a ground electrode near the wrist capable of stimulating individual parts of each finger, the system offers 11 separately controllable tactile zones spanning the five fingers and the palm. Although the electrodes sit on the dorsal (back) side of the hand, most study participants reported feeling more than 90% of the tactile sensation on the palmar side. University of Chicago's Pedro Lopes said, "With this new level of hands-free haptics, we think we can unleash new use cases for haptics that go beyond [virtual reality/augmented reality]."

Full Article

How Is TikTok Affecting Our Mental Health?
University of Minnesota
April 18, 2023


A study of the mental health impacts of TikTok by computer scientists at the University of Minnesota (U of M) found that the social media platform effectively displays content that fits users' interests but can make it difficult for users seeking mental health information to escape negative content. TikTok uses a recommender system algorithm that shows videos it thinks users will like, rather than posts from accounts they follow. U of M's Ashlee Milton said, "At some point, because of the way the feed works, it's just going to keep giving you more and more of the same content. And that's when it can go from being helpful to being distressing and triggering." Study participants indicated the content shown in their feeds did not change despite hitting the "not interested" button.

Full Article

Sensor-Filled 'Smart' Classroom Designed to Improve Literacy
Fast Company
Elissaveta M. Brandon
April 20, 2023


Brooklyn's Academy for College Preparation & Career Exploration partnered with architect Danish Kurani on the "Literacy Lounge," a small library with soft seating, study carrels, two wall-mounted sensors, and a wall-mounted tablet dashboard. The sensors track how often students speak, how many of them speak, the quality of their conversations, and the words they use. The goal is for educators to be able to make informed decisions about teaching based on real-time literacy data. The Organisation for Economic Co-operation and Development's Andreas Schleicher said it's important that "teachers are not the slaves of those algorithms but are the designers." Said Schleicher, "The future teacher needs to be not only a great instructor and a great coach and a great mentor, but also a good data scientist."

Full Article
Machine Learning Can Help Flag Risky Messages on Instagram
Drexel News
April 17, 2023


Researchers at Drexel University, Boston University, the Georgia Institute of Technology, and Vanderbilt University have developed a method of using machine learning to flag risky Instagram conversations while maintaining participants’ privacy. The researchers determined metadata characteristics can help identify risky conversations. They used machine learning algorithms in a layered approach to develop a metadata profile of a risky conversation. Their system was 87% accurate in detecting risky conversations using sparse and anonymous details, such as conversation length, level of participant engagement, and whether images or links are sent. Said the researchers, "Our research also paves the way for more proactive approaches to risk detection which are likely to be more translatable in the real world given their rich ecological validity."
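
The sketch below shows the general shape of such a metadata-only classifier in scikit-learn, trained on synthetic placeholder features (conversation length, reply timing, participants, images, links). The features and model here are stand-ins for illustration, not the published system; with random labels the score hovers near chance, and the point is the pipeline, not the number.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000

    # Synthetic conversation metadata only; no message content is used.
    X = np.column_stack([
        rng.integers(2, 300, n),    # number of messages (conversation length)
        rng.uniform(0.5, 720, n),   # average minutes between replies (engagement)
        rng.integers(2, 6, n),      # number of participants
        rng.integers(0, 10, n),     # images sent
        rng.integers(0, 10, n),     # links sent
    ])
    y = rng.integers(0, 2, n)       # 1 = conversation labeled risky (synthetic labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy (synthetic data):", accuracy_score(y_test, model.predict(X_test)))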

Full Article

OLED Touchscreen Has Pop-Up Buttons Your Fingers Can Feel
Gizmodo
Andrew Liszewski
April 26, 2023


Carnegie Mellon University scientists have created organic light-emitting diode (OLED) touchscreens whose buttons provide tactile feedback when touched. The Flat Panel Haptics system generates solid-feeling pop-up buttons via embedded electroosmotic pumps 1.5 millimeters (0.05 inches) thick, coupled to a liquid reservoir beneath and a flexible surface structure above. This produces a bump pronounced enough for the user's fingers to distinguish between the onscreen keyboard's keys. Although the shape and size of the buttons are currently predetermined, buttons of any size and shape could be produced on demand once the pumps can be made as small as the pixels of an OLED display.

Full Article
Eye-Tracking Research Peeks into Future Mobile Device Interaction
University of Glasgow (U.K.)
April 3, 2023


Experts from universities in Scotland, Germany, and Portugal recommended approaches for integrating eye tracking into mobile device interaction. The researchers evaluated participants' use of gaze interaction as they walked or sat, selecting targets from a grid of white shapes on a cellphone screen as the shapes shifted from white to black. The gaze interaction methods included Dwell Time, in which users select an item by fixating on a target for 800 milliseconds; Pursuits, in which they follow and select an object spinning around the target; and Gaze Gestures, in which they look off-screen to the left or right to narrow the set of targets until they reach the desired item. Users favored Pursuits when seated and Dwell Time when walking, and these were faster than the other methods in their respective settings. The researchers recommended Pursuits and Dwell Time for controlling devices when seated, while Gaze Gestures offered greater accuracy both when seated and when moving.
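
As a concrete example of the simplest of these techniques, the Python sketch below implements dwell-time selection: a target is chosen once gaze samples remain on it continuously for 800 milliseconds. The sample format is an assumption for illustration; the study's own implementation may differ.

    DWELL_MS = 800  # fixation time required to select a target

    def dwell_select(gaze_samples):
        """gaze_samples: iterable of (timestamp_ms, target_id or None).
        Returns the first target fixated continuously for DWELL_MS, else None."""
        current, started = None, None
        for t_ms, target in gaze_samples:
            if target != current:            # gaze moved to a new target or off-target
                current, started = target, t_ms
            if current is not None and t_ms - started >= DWELL_MS:
                return current
        return None

    # Example: gaze settles on target "B" from 400 ms onward and selects it at 1300 ms.
    samples = [(0, "A"), (200, "A"), (400, "B"), (700, "B"), (1000, "B"), (1300, "B")]
    print(dwell_select(samples))  # -> B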

Full Article

Researchers Build Sonar Glasses for Communication Without Words
Interesting Engineering
Abdul-Rahman Oladimeji Bello
April 7, 2023


Researchers at Cornell University have developed EchoSpeech, sonar-equipped glasses fitted with tiny microphones and speakers that facilitate silent communication. The glasses use sonar to sense the wearer's mouth movements and a deep learning algorithm to analyze the resulting echo profiles in real time, and they can learn a user's speech patterns with only a few minutes of training. Data processing is handled wirelessly via the user's smartphone. The low-power glasses could be used to operate music playback controls, dictate messages, or speak words aloud for people with speech disabilities via a voice synthesizer. Cornell's Cheng Zhang said, "Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible. We're moving sonar onto the body."
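
As a toy illustration of the acoustic side (not the Cornell pipeline), an echo profile can be computed as the cross-correlation between the emitted signal and what the microphones pick up; peaks mark reflections whose delays shift as nearby surfaces, such as the lips, move, and a classifier then maps those changes to mouth movements.

    import numpy as np

    def echo_profile(emitted: np.ndarray, received: np.ndarray) -> np.ndarray:
        """Cross-correlate the emitted signal with the received one; peak
        positions correspond to echo delays."""
        return np.correlate(received, emitted, mode="valid")

    # Toy demo: the "echo" is the emitted tone delayed by 30 samples plus noise.
    rng = np.random.default_rng(1)
    emitted = np.sin(np.linspace(0, 40 * np.pi, 200))            # stand-in for a chirp
    received = np.concatenate([np.zeros(30), emitted, np.zeros(70)])
    received = received + 0.05 * rng.standard_normal(received.size)
    print("estimated echo delay (samples):", int(np.argmax(echo_profile(emitted, received))))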

Full Article
Robots Predict Human Intention for Faster Builds
USC Viterbi School of Engineering
Caitlin Dawson
April 3, 2023


University of Southern California (USC) researchers have trained robots to predict human preferences in assembly tasks. The researchers exploited similarities in how an individual builds different products, designing a small "canonical" assembly task in which participants constructed parts of a model airplane while a robot observed via camera. AprilTags, which resemble QR codes, affixed to the parts enabled the system to detect which components humans handled; machine learning then inferred a person's preferences from their sequence of actions in the canonical task. The system predicted human actions with approximately 82% accuracy. USC's Heramb Nemlekar said, "By helping each person in their preferred way, robots can reduce their work, save time, and even build trust with them."
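
A minimal flavor of preference prediction from a short observation, using a simple frequency count rather than USC's learning method (part names and logic here are made up for illustration): record which part the person reaches for after each part during the canonical task, then predict accordingly during the larger build.

    from collections import Counter, defaultdict

    def learn_transitions(observed_actions):
        """Count which part the person picks up after each part."""
        counts = defaultdict(Counter)
        for prev_part, next_part in zip(observed_actions, observed_actions[1:]):
            counts[prev_part][next_part] += 1
        return counts

    def predict_next(counts, last_part):
        """Predict the most likely next part, or None if the part was never observed."""
        return counts[last_part].most_common(1)[0][0] if counts[last_part] else None

    # Observed canonical-task sequence (hypothetical part names):
    canonical = ["wing", "body", "propeller", "wing", "body", "tail"]
    counts = learn_transitions(canonical)
    print(predict_next(counts, "wing"))  # -> body: the robot could fetch it in advance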

Full Article

Social Media Platforms Letting Down Autistic Users
Queen Mary University of London (U.K.)
April 26, 2023


U.K.-based researchers found that social media platforms need to be improved to support the inclusion of autistic users, and that features intended to enhance interactions often had the opposite effect for them. Queen Mary University of London's Nelya Koteyko said, "Our participatory research aims to show how looking at social media from an autistic perspective can help identify [non-autistic] biases and work towards addressing them." The researchers' recommendations include having platform designers create more powerful tools for interest-based sociality and for building connections founded on mutual interests; enhancing filters to remove upsetting, impolite, negative, and confrontational content; offering more configuration and customization options for algorithmic content feeds; and producing multimodal tools that help autistic adults understand tone and meaning in other users' content.

Full Article

'Alexa, Set the Alarm for Me to Take My Medication'
University of Michigan News
April 21, 2023


Researchers from the University of Michigan (U-M), Cornell University, and the University of Maryland reviewed older adults' long-term use of voice assistant devices to supplement their daily routines. The researchers focused on residents of a long-term care community who had used Alexa devices for at least one year after learning to use them through a training program. Training flyers positioned near the devices encouraged participants to try new skills relevant to their daily lives, and all participants used the voice assistant at least twice daily. Alexa was found to complement participants' daily routines, enhance their moods, engage them in cognitively stimulating activities, and help them socialize with others.

Full Article
Prostheses in Lab Sense Objects, Allow Users to Feel
EE Times
Sunny Bains
April 6, 2023


Johns Hopkins University researchers were named recipients of the Mahowald Prize for neuromorphic engineering for their development of low-power electronic skins that enable users to perceive objects tactilely. To prevent damage, the neuromorphic prostheses incorporate a circuit with three neurons that determines whether a stimulus is likely to be harmful. The neurons measure force, area, and pressure, respectively, rapidly transmitting spiking signals to the spinal cord to impel a reflex action. The researchers have been using targeted transcutaneous electrical nerve stimulation to channel spikes from the prosthesis into the brain via an amputee's skin. They demonstrated that the brain can use this information when the stimulation matches what the brain would expect from the missing hand, which "leads to faster information transfer and increased number of functional connection paths among somatosensory, motor, and multisensory processing systems."
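
A toy numeric illustration of that reflex logic, with arbitrary thresholds chosen for this sketch rather than taken from the Johns Hopkins design: three "neurons" respond to force, contact area, and the resulting pressure, and a reflex fires when the stimulus looks likely to be harmful.

    def reflex_triggered(force_n: float, area_cm2: float,
                         force_thresh: float = 15.0,
                         pressure_thresh: float = 50.0) -> bool:
        """Toy three-'neuron' check: one neuron responds to force, one to contact
        area, one to pressure (force / area). A high-pressure stimulus combined
        with high force or a very small (sharp) contact fires the reflex."""
        pressure = force_n / max(area_cm2, 1e-6)   # N per square centimeter
        force_spike = force_n > force_thresh
        area_spike = area_cm2 < 0.1                # very small contact area, e.g. a pin
        pressure_spike = pressure > pressure_thresh
        return pressure_spike and (force_spike or area_spike)

    print(reflex_triggered(force_n=5.0, area_cm2=0.05))   # sharp pin press -> True
    print(reflex_triggered(force_n=20.0, area_cm2=10.0))  # broad flat push -> False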

Full Article
Calendar of Events

ETRA ’23: ACM Symposium on Eye Tracking Research & Applications
May 30 – Jun. 2
Tübingen, Germany

IMX ’23: ACM International Conference on Interactive Media Experiences
Jun. 13 – 15
Nantes, France

C&C ’23: Creativity and Cognition
Jun. 19 – 21
Online

IDC ’23: Interaction Design and Children
Jun. 19 – 23
Evanston, IL

UMAP ’23: 31st ACM Conference on User Modeling, Adaptation and Personalization
Jun. 26 – 29
Limassol, Cyprus

EICS ’23: ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Jun. 27 – 30
Swansea, UK

DIS ’23: ACM Designing Interactive Systems Conference
Jul. 10 – 14
Pittsburgh, PA

COMPASS ’23: ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies
Aug. 16 – 19
Cape Town, South Africa

AutomotiveUI ’23: 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
Sep. 18 – 21
Ingolstadt, Germany

RecSys ’23: 17th ACM Conference on Recommender Systems
Sep. 18 – 22
Singapore

MobileHCI ’23: 25th International Conference on Mobile Human-Computer Interaction
Sep. 26 – 29
Athens, Greece

UbiComp ’23: Ubiquitous Computing
Oct. 8 – 12
Cancún, Mexico

ICMI ’23: 25th ACM International Conference on Multimodal Interaction
Oct. 9 – 13
Paris, France

VRST ’23: The ACM Symposium on Virtual Reality Software and Technology
Oct. 9 – 11
Christchurch, New Zealand

CHI PLAY ’23: The Annual Symposium on Computer-Human Interaction in Play
Oct. 10 – 13
Stratford, Ontario

CSCW ’23: The 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing
Oct. 14 – 18
Minneapolis, MN

SUI ’23: ACM Spatial User Interaction
Oct. 13 – 15
Sydney, NSW, Australia

UIST ’23: The 36th Annual ACM Symposium on User Interface Software and Technology
Oct. 29 – Nov. 1
San Francisco, CA

CI ’23: ACM Collective Intelligence Conference
Nov. 6 – 10
Delft, Netherlands

ISS ’23: Interactive Surfaces and Spaces
Nov. 5 – 8
Pittsburgh, PA


About SIGCHI

SIGCHI is the premier international society for professionals, academics, and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, websites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops, and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through Local SIGCHI chapters. SIGCHI is also involved in public policy.



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

Association for Computing Machinery
1601 Broadway, 10th Floor
New York, NY 10019-7434
Phone: 1-800-342-6626
(U.S./Canada)

To submit feedback about ACM TechNews, contact: [email protected]

Unsubscribe

About ACM | Contact us | Boards & Committees | Press Room | Membership | Privacy Policy | Code of Ethics | System Availability | Copyright © 2024, ACM, Inc.