Welcome to the April 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Beyond Screens: What's Next in Voice, Gesture, and Haptics
Wareable (03/22/16) Sophie Charara

South by Southwest (SXSW) 2016 showcased numerous user interfaces illustrating both how interface technology has progressed in recent years and how far it has to go. In the domain of voice-controlled wearables, highlights included the new Viv virtual assistant, which promises to be smarter and more helpful than any other voice-controlled artificial intelligence by gaining knowledge from any third party that wants to educate it about the world. Sony's Future Lab demonstrated its Concept N wearable neckband and open-ear earbuds. Sony's Naoya Okamoto says, "Audio seems to have more possibility to change the situation because you don't need to focus on the screen; it can be hands-free or even eyes-free. The one missing piece is how to make it ears-free--and our open earphones are the answer." Haptic interface concepts on display included Disney Research's work on using haptics to augment children's stories. "We found that the story comprehension as well as memory was improved when the haptic feedback was provided to kids who had difficulty listening or reading stories," says Disney researcher Ali Israr. Meanwhile, the University of Sussex's Marianna Obrist is working on manipulating emotions via haptics by stimulating different areas of the hand with tactile sensations. A third interface area showcased at SXSW was gesture control, with examples including smartwatches, gloves, and tabletop projectors; the key challenge in this area is pairing human and machine precision.


Smartwatches Can Now Track Your Finger in Mid-Air Using Sonar
UW Today (03/15/16) Jennifer Langston

The FingerIO method developed by University of Washington (UW) researchers uses sonar to enable users to interact with mobile devices by writing or gesturing on any nearby surface. The technique tracks fine finger movements by transforming a smartphone or smartwatch into an active sonar system using the device's own microphones and speakers, which are triggered to emit an inaudible sound wave. The signal bounces off the finger, and such "echoes" are captured by the device's microphones and used to calculate the finger's spatial location. The researchers say FingerIO can precisely track two-dimensional finger movements to within 8 mm. "Acoustic signals are great--because sound waves travel much slower than the radio waves used in radar, you don't need as much processing bandwidth so everything is simpler," notes UW professor Shyam Gollakota. The UW researchers utilized Orthogonal Frequency Division Multiplexing (OFDM) signals to facilitate high-resolution finger tracking with sound. The team next plans to show how FingerIO can track multiple fingers moving simultaneously, and to extend its tracking into three dimensions by equipping devices with additional microphones. The researchers will demonstrate FingerIO at the ACM CHI 2016 conference in San Jose, CA, in May.
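
The published FingerIO design is built on OFDM symbols; the Python sketch below, with hypothetical names and toy parameters, illustrates only the simpler pulse-echo principle behind any active sonar: emit a known inaudible signal, locate its echo by correlation, and convert the delay to distance.

```python
import numpy as np

FS = 48_000             # audio sample rate (Hz); typical for phone hardware
SPEED_OF_SOUND = 343.0  # meters/second in room-temperature air

def make_probe(f0=18_000, f1=20_000, duration=0.005):
    """A near-inaudible linear chirp standing in for FingerIO's OFDM symbol."""
    t = np.arange(int(FS * duration)) / FS
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * duration)))

def echo_distance(recording, probe):
    """Estimate one-way distance from the strongest echo of the probe."""
    corr = np.correlate(recording, probe, mode="valid")
    delay_s = int(np.argmax(np.abs(corr))) / FS
    return SPEED_OF_SOUND * delay_s / 2  # halve the round trip

# Toy usage: simulate a finger about 0.15 m away and recover the distance.
probe = make_probe()
delay = int(2 * 0.15 / SPEED_OF_SOUND * FS)           # round-trip delay
recording = np.concatenate([np.zeros(delay), 0.2 * probe, np.zeros(100)])
print(f"estimated distance: {echo_distance(recording, probe):.3f} m")
```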


A Sensitive Subject
The UCSB Current (03/28/16) Sonia Fernandez

University of California, Santa Barbara (UCSB) researchers are gaining deep insights into the mechanisms of tactile sensation by cataloging for the first time patterns of vibration in the skin of the human hand. For example, UCSB professor Yon Visell notes people whose fingers have been anesthetized can still feel fine surface detail. "The way they seem to be able to do this is by using...vibrations, that travel beyond the fingers, farther up the arm," Visell says. "The hand has specialized sensory end organs distributed widely in it that can capture such mechanical vibrations at a distance." Using an array of minuscule accelerometers worn on the sides and backs of the fingers and hands, the researchers captured, cataloged, and analyzed vibration patterns in the skin of the entire hand generated during active touch. "It is possible that the hand, like the ear, is able to use vibrations produced through contact in order to infer what is being touched, and how the hand is touching it," Visell says. The study found vibrations produced via touch, and their distribution in the hand, depend closely on the type of action and the object being handled. Potential applications for this research include virtual-reality technologies such as wearables that let users feel objects in virtual environments.


New Scrolling Technique Accelerates Skim Reading
Aalto University (03/29/16)

Aalto University researchers say they have devised a prototype scrolling method that better supports data processing in three distinct ways. "Browsing of long texts speeds up by 60 percent and less than half as much time is spent locating the desired locations in the text," reports postdoctoral researcher Byungjoo Lee. "In addition, the probability of noticing points of interest in the text is increased by 210 percent compared to normal scrolling technique." The researchers named the new method Spotlights after the spotlight metaphor of human visual attention. "The new technique locates on each Web page, whether it is a PDF document, video, or Web document, the visually important elements and presents them using a transparent layer that appears on top of the text," Lee notes. "The elements can be, for example, pictures, tables, or headlines. It chooses what you should focus [on] and allows you enough time to do that." Spotlights co-developer and Aalto professor Antti Oulasvirta says empirical assessment found people can scroll through up to 20 pages a second and still retain the browsed information. "Our technique is the first to try to maximize the amount of the information on the screen for human visual attention," he says.
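
The press release does not describe Spotlights' internals, but its behavior can be loosely sketched in Python under assumed thresholds: when scrolling outpaces reading, pin the detected salient elements to a transparent overlay long enough to be noticed.

```python
MIN_DWELL_S = 0.5         # assumed minimum time an element stays visible
FAST_SCROLL_PX_S = 3000   # assumed speed threshold for skim mode

def overlay_elements(salient, viewport_top, viewport_h, speed, now, pinned):
    """Pin salient elements entering the viewport while scrolling fast."""
    if abs(speed) < FAST_SCROLL_PX_S:
        return {}  # normal scrolling: the overlay stays empty
    for elem_id, y in salient.items():
        if viewport_top <= y < viewport_top + viewport_h and elem_id not in pinned:
            pinned[elem_id] = now  # start this element's dwell clock
    # keep each element on the overlay until its dwell time has elapsed
    return {e: t for e, t in pinned.items() if now - t < MIN_DWELL_S}

# Toy usage: a headline at y=5000 px is pinned while the user skims past it.
pinned = overlay_elements({"headline": 5000}, viewport_top=4800,
                          viewport_h=900, speed=4000, now=10.0, pinned={})
print(pinned)  # {'headline': 10.0} -> shown on the overlay until ~10.5 s
```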


Gearing Up for Ambient Intelligence
InformationWeek (03/14/16) Lisa Morgan

Some efforts to develop user-interface technologies based on ambient intelligence (AmI) center on environments that adapt themselves to individuals contextually, while others resemble virtual personal assistants that interface with an interconnected world of things. Innovators are tackling the AmI challenge using various approaches, ranging from artificial intelligence (AI) to the Internet of Things to the cloud to natural language processing (NLP). The classic AmI use case is the smart home, with developers aiming to create systems and appliances that proactively communicate with users, manufacturers, and each other in a post-app ecosystem. If deployed as planned, AmI-enabled systems will be so transparent that the services they supply will become a routine expectation of daily life. However, this requires replacing rule-based systems with adaptive systems that leverage AI, machine learning, NLP, and other technologies. Onstream and various manufacturers are collaborating on smart commercial and industrial buildings, and Onstream CEO Craig Macy anticipates demand for such buildings will be fueled by employee expectations and competitiveness. Meanwhile, Nara Logics CEO Jana Eggers envisions seamless, adaptable AmI experiences as people move from their work lives to their home lives and vice versa.


New Discoveries and Enhanced Visual Experiences Through Gaze-Contingent Displays
CORDIS News (03/09/16)

Researchers with the European Union-funded DEEPVIEW project are developing enhanced displays that use gaze-tracking technology and augmented depth perception to let users explore information in new ways and perform other actions. The first software app from DEEPVIEW is GAZER, which works in tandem with eye-tracking devices so photographers taking pictures with light-field cameras can explore images by automatically focusing on objects using their eyes. "Instead of moving a cursor around to focus, the gaze-contingent display [GCD] does it automatically through the position of the user's gaze," notes DEEPVIEW coordinator Miguel Nacenta. "This creates a sensation of depth...a richer, more salient and natural way of seeing that is meant to enhance the viewer's experience." A GCD uses the gaze information collected by the eye tracker to holistically change what the display presents, without the user perceiving the system's reactions to their gaze. DEEPVIEW also is exploring how to use GCDs to enhance color and contrast perception, as well as for multimedia applications. "Using gaze-perception technology promises to be less disruptive than pointing and clicking on a cursor," Nacenta says. "You can take advantage of the natural behavior of people looking at things, rather than asking them to interact explicitly with the system."
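
As a rough illustration of gaze-contingent refocusing (not DEEPVIEW's code; the class and data layout are assumptions), a viewer can map the gaze point to scene depth and display the light-field slice focused at that depth:

```python
import numpy as np

class GazeContingentViewer:
    """Hypothetical viewer: picks a light-field focal slice from gaze depth."""

    def __init__(self, focal_stack, depth_map, depths):
        self.focal_stack = focal_stack    # one pre-rendered image per depth
        self.depth_map = depth_map        # per-pixel scene depth estimate
        self.depths = np.asarray(depths)  # focal depth of each stack slice

    def frame_for_gaze(self, gx, gy):
        """Return the slice focused nearest the depth under the gaze point."""
        target = self.depth_map[gy, gx]
        return self.focal_stack[int(np.argmin(np.abs(self.depths - target)))]

# Toy usage: depth grows left to right, so gazing right selects the far focus.
h, w = 100, 100
depth_map = np.tile(np.linspace(0.5, 3.0, w), (h, 1))
stack = [np.full((h, w), d) for d in (0.5, 1.5, 3.0)]  # stand-in images
viewer = GazeContingentViewer(stack, depth_map, [0.5, 1.5, 3.0])
print("selected focal depth:", viewer.frame_for_gaze(gx=90, gy=50)[0, 0])
```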


Stanford Scholar Explores the Changing Gestures of the Digital Age
Stanford Report (03/25/16) Tom Winterbottom

Stanford University researcher Vanessa Chang says the transition from pen and paper and other traditional tools for expression to digital technologies is reshaping gestures and their roles in interactions with objects, especially in the artistic and aesthetic domain. "I am interested in what a gesture contains and what it involves, as well as how those movements mediate the interaction between people and things, between subjects and machines," she says. Chang notes a person's interactions with digital devices are facilitated by a new series of gestures that are becoming basic to modern existence. "Along with...traditional forms of artistic expression, a new poetics of gestures and movement creates a sense of reinvented authenticity in the digital age, in which technology plays a defining role," Chang says. She cites musical performances as representing a unique and liberating new paradigm for expression enabled by technological advances. "In recent years, there has been a decoupling of human bodies and traditional instruments in many emerging music performances," Chang points out. "Electronic instruments, laptops, and nascent technologies have quickly become very prominent in the live setting." She says this has fostered a redefinition of gestures, and notes musicians are using these new gestures as a result of their move to digitized instruments.


Jokebox Aims for Eye Contact in a City Full of Screens
Alphr (03/24/16) Thomas McMullan

In collaboration with British and Mexican researchers, Ideas for Change research director Mara Balestrini is developing the Jokebox, an installation in which separate wooden boxes outfitted with speakers, sensors, and buttons tell a joke when two people activate them at the same time. "The Jokebox is an ice-breaker, an excuse to get strangers to talk to each other or to share a laugh in public spaces," Balestrini says. "It is also a technology prototype that can help us understand how to design novel interfaces to foster social connectedness in urban settings by encouraging eye contact and cooperation between strangers." The researchers installed Jokeboxes in a park, a bus stop, and a shopping center in a Mexican city, and found people tended to react differently according to the environment. For example, children and parents were more likely to play with the boxes in the park, while teenagers tended to gravitate to the boxes at the shopping mall. Balestrini says future cities will integrate a range of technologies, from those that pursue complete automation to those that attempt to cultivate shared social encounters. She notes her research is not intended to undermine smart devices. Still, "enabling social interactions is as important as making city processes efficient and...opportunities for strengthening the social fabric should not be neglected," Balestrini says.
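
The coordination logic is simple to sketch, assuming each box reports button presses with timestamps over a shared channel; the names and coincidence window below are illustrative, not details of the prototype:

```python
import time

COINCIDENCE_WINDOW_S = 1.0  # assumed; the real window is not reported

class JokeboxPair:
    """Fires the joke only when both boxes are pressed close together."""

    def __init__(self):
        self.last_press = {"A": None, "B": None}

    def press(self, box_id):
        """Record a press; return True if the other box fired recently."""
        now = time.monotonic()
        self.last_press[box_id] = now
        other = self.last_press["B" if box_id == "A" else "A"]
        return other is not None and now - other <= COINCIDENCE_WINDOW_S

pair = JokeboxPair()
pair.press("A")
time.sleep(0.3)                 # the second stranger reacts a moment later
if pair.press("B"):             # both presses fall inside the window
    print("playing joke on both boxes...")
```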


How Eye Tracking Gives Players a New Experience in Video Games
The Conversation (03/29/16) Eduardo Velloso

A team at the University of Melbourne's Microsoft Research Center for Social Natural User Interfaces is exploring the application of eye tracking to gaming, which they envision as a compelling use case for the technology. With a combination of cameras, light-emitting diodes, and algorithms, eye-tracking devices can supply gaze mechanics that are already being incorporated within certain games to enhance play. A particularly interesting aspect of gaze input is its ability to extend far beyond issuing explicit commands to games. For example, the eyes' role in social interactions can be fed into games by provoking specific reactions from game characters when players look away from them while they are talking, or triggering a conversation when players first notice them. In addition, the game's artificial intelligence can read a player's eyes to interpret their cognitive state, and adapt gameplay accordingly. Moreover, eye trackers offer players an analytics tool for them to assess and modify their performance. In one example, players can record their eye movements during the game session and later observe the video to see whether they were paying attention to the right areas at the right times. A heatmap of gaze points also can provide players with a picture of their visual focus and call attention to areas requiring more concentration.
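
Such gaze analytics reduce to binning gaze samples in screen space; the sketch below is a minimal illustration with assumed screen and grid sizes:

```python
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080
GRID_W, GRID_H = 48, 27  # coarse bins keep the heatmap readable

def accumulate_heatmap(gaze_samples):
    """Bin (x, y) pixel samples into a GRID_H x GRID_W count map in [0, 1]."""
    heat = np.zeros((GRID_H, GRID_W))
    for x, y in gaze_samples:
        gx = int(np.clip(x / SCREEN_W * GRID_W, 0, GRID_W - 1))
        gy = int(np.clip(y / SCREEN_H * GRID_H, 0, GRID_H - 1))
        heat[gy, gx] += 1
    return heat / max(heat.max(), 1)  # normalize for display

# Toy usage: simulated fixations clustered near the center of the screen.
rng = np.random.default_rng(0)
samples = rng.normal([960, 540], [120, 80], size=(1000, 2))
heat = accumulate_heatmap(samples)
print("hottest bin (row, col):", np.unravel_index(heat.argmax(), heat.shape))
```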


A Robotic Home That Knows When You're Hungover
Technology Review (03/10/16) Will Knight

The Brain of Things startup recently announced it is developing "robot homes" at three California sites, equipped with sensors, automated appliances, and the ability to learn and adapt to the habits and preferences of residents. A robot home's ability to adapt is enabled by computer servers that cull data and construct behavioral models with machine-learning algorithms. "The house knows the context, whether [its occupants] were watching a movie, or sleeping, or whatever," notes Brain of Things founder and Stanford University fellow Ashutosh Saxena. "As they are walking around the house, our house follows how they are acting, and it can know a lot." Saxena's research focus involves how robots can learn and share information, and he thinks automated homes could be even more valuable to people than automated cars. The homes Brain of Things is developing are outfitted with about 20 motion sensors, and the lights, heating, plumbing, entertainment systems, and air conditioning are linked and automated. Residents can operate the house's systems by switch or via voice commands or a smartphone app. Saxena says over time, the homes will learn a person's preferences and behavior and attempt to anticipate them.


This Necklace 'Hears' What You Eat
University at Buffalo News (03/16/16) Cory Nealon

University at Buffalo professor Wenyao Xu is compiling a library that catalogs the unique sounds foods make as people eat them, as part of a software program that supports a food-tracking necklace Xu and researchers at China's Northeastern University are developing. The wearable AutoDietary device tracks caloric intake, wrapping around the back of the neck like a choker. A small high-fidelity microphone records the sounds made during chewing and swallowing, and that data is sent to a smartphone via Bluetooth, where food types are identified. AutoDietary can accurately recognize what food and drink the wearer is consuming 85 percent of the time, according to a study involving 12 test subjects who were drinking water and eating six types of food. Xu thinks the necklace could one day help people suffering from diabetes, obesity, bowel disorders, and other ailments by enabling them to better monitor their food intake and improve their dietary management. His future plans include refining the algorithms used to distinguish between foods to enhance AutoDietary's ability to recognize what is being consumed. The device cannot differentiate similar foods or the ingredients of complex foods, and Xu aims to resolve this problem with a complementary biomonitoring device that determines the food's nutritional value via blood sugar levels and other measurements.
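
The article does not detail Xu's algorithms; the sketch below shows, with toy signals and an assumed nearest-centroid classifier, the general shape of such an audio pipeline: extract spectral features from a chewing clip, then label it against known food classes.

```python
import numpy as np

FS = 16_000  # assumed microphone sample rate

def spectral_features(clip, n_bands=8):
    """Average log-magnitude in n_bands equal slices of the spectrum."""
    bands = np.array_split(np.abs(np.fft.rfft(clip)), n_bands)
    return np.array([np.log1p(b.mean()) for b in bands])

def classify(clip, centroids):
    """Label the clip by the nearest food-class feature centroid."""
    feats = spectral_features(clip)
    return min(centroids, key=lambda food: np.linalg.norm(feats - centroids[food]))

# Toy usage: crunchy foods carry more high-frequency energy than soft ones.
t = np.arange(FS) / FS
crunchy = np.sin(2 * np.pi * 3000 * t) + 0.1 * np.random.default_rng(1).normal(size=FS)
soft = np.sin(2 * np.pi * 300 * t)
centroids = {"apple": spectral_features(crunchy), "banana": spectral_features(soft)}
print(classify(soft + 0.05 * np.random.default_rng(2).normal(size=FS), centroids))
```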


What Is a Robot?
The Atlantic (03/22/16) Adrienne LaFrance

Answering the question of what defines a robot is critical to the progression and proliferation of ubiquitous computing and automation and the evolving human-machine relationship. "As we design these machines, what does it do to the human if we have a class of slaves which are not human but that we treat as human?" asks New York Times reporter John Markoff. "We're creating this world in which most of our interactions are with anthropomorphized proxies." Roboticists must contend with the culturally pervasive, media-espoused perception of robots as adversaries, and Carnegie Mellon University professor Christopher Atkeson says this is an unfair representation. But the application of the term "robot" is very broad today, and used to erroneously identify various automated tasks in computing, according to experts. Many roboticists only recognize robots as embodied machines, with Cornell University professor Hadas Kress-Gazit defining them as things that can create physical motion in their environment. People's views and treatment of robots will likely be shaped by the tension between the machines' ability to improve quality of life and their competition with and replacement of humans in various industries. Making robots likable or empathetic is seen by some as negative, as it could give rise to the misguided notion they have free will; this leads to a debate about whether humans may ultimately lose control over robots and the processes they increasingly govern.


Abstract News © Copyright 2016 INFORMATION, INC.

