ACM TechNews SIGCHI Edition
Welcome to the September 2016 SIGCHI edition of ACM TechNews.

ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in human-computer interaction (HCI). The service helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.


Smartphone Speech Recognition Can Write Text Messages Three Times Faster Than Human Typing
Stanford News (08/24/16) Bjorn Carey

Stanford University researchers conducted an experiment whose results suggest smartphone speech-recognition software can be used to compose text messages with greater speed and accuracy than manual typing. "We were noticing that in the past two to three years, speech recognition was...benefiting from big data and deep learning to train its neural networks to produce faster, more accurate results," says Stanford professor James Landay. In the experiment, researchers from Stanford, Baidu, and the University of Washington compared the performance of Baidu's Deep Speech 2 cloud-based speech-recognition software to that of 32 seasoned texters using an Apple iPhone's built-in keyboard. Participants took turns typing or speaking about 100 phrases sourced from a standard library of everyday phrases used in text-based research while the testing app recorded their times and accuracy rates. Fifty percent of the subjects performed the task in English with the QWERTY keyboard, and the rest executed the test in their native Mandarin using iOS' Pinyin keyboard. Speech recognition was three times faster than typing for English, and the error rate was 20.4 percent lower. For Mandarin, speech was 2.8 times faster, with an error rate 63.4 percent lower than typing. Landay says these results should encourage engineers to design user interfaces that better leverage speech recognition's capabilities. "You could imagine an interface where you use speech to start and then it switches to a graphical interface that you can touch and control with your finger," he says.

Friends Are No Better Than Strangers in Accurately Identifying Emotion in Emails
ScienceDaily (08/30/16)

Researchers at Chatham University conducted three studies to determine whether friends are any better than total strangers at correctly interpreting the emotional intent of email messages. In the first two studies, participants wrote two emails signaling the presence or absence of eight distinct emotions, with one message based on a predetermined scenario and the other freely written; strangers then read and rated each email for those same eight emotions. The third study was relationship-based: people wrote two emails following the same criteria, then sent them to both friends and strangers for ratings and written responses. The studies found writers have greater confidence that friends, rather than strangers, can correctly interpret their emails, and readers are more confident interpreting emails from friends than from strangers. However, the researchers say this conviction bore no relationship to actual accuracy--nor did verbal and nonverbal cues have a positive impact on accuracy. "As email, text messaging, and other forms of computer-mediated communication become more dominant forms of interaction, the communication of affect becomes more difficult, primarily because facial expressions, gestures, vocal intonation, and other forms of expressing emotion are lost," says Chatham professor Monica A. Riordan. "It is clear from this study that readers can determine that we are angry, but cannot determine HOW angry."

The Science of Waiting...and Waiting...for Your Page to Load
Wired (08/25/16) Bryan Gardiner

Researchers are exploring the perceived passage of time while waiting for Web pages to load, and experimenting with methods designed to manipulate that perception so the wait does not seem as long or as stressful. For example, Chris Harrison at Carnegie Mellon University's Human-Computer Interaction Institute found progress bars enhanced with animation can alter perceived wait times. He and his colleagues found bars with backwards-flowing ribbing made waiting durations seem 11 percent faster than they actually were. "I was always fascinated by this general hypothetical question: would you rather have a computer that feels fast, but is actually slow in reality," Harrison says. "Or a computer that just feels slow, but is actually very fast? Luckily, with computing, we can have both: a computer that is fast, but feels even faster with good design." Other examples Harrison cites of visual expression speeding up wait times are the dynamic content placeholders used by Google and Facebook. However, he says new research suggests that at faster loading times, typically five seconds or less, visual content can make wait times appear longer.
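
A terminal-friendly sketch of the animation idea: the bar's filled region carries a cycling ribbed texture, and decrementing the animation frame each tick makes the ribbing appear to flow backwards against the direction of fill. This is only loosely in the spirit of the bars Harrison tested; the rendering details are ours.

```python
# Sketch: a text progress bar whose fill shows a moving "ribbing" texture.
# The 4-character pattern and bar width are illustrative choices.

def render_bar(progress: float, frame: int, width: int = 30) -> str:
    """Draw a bar; the filled region cycles a ribbed pattern.
    Decreasing `frame` each tick makes the ribbing flow backwards."""
    filled = int(progress * width)
    pattern = "=-~-"
    texture = "".join(pattern[(i + frame) % len(pattern)] for i in range(filled))
    return "[" + texture + " " * (width - filled) + "]"

# Animate: ribbing flows right-to-left while the bar fills left-to-right.
for tick in range(5):
    print(render_bar(tick / 4, -tick))
```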

What If Smart Homes Were Designed for Seniors, Instead?
Co.Design (08/23/16) John Brownlee

Smart home technologies may be more beneficial to seniors than millennials overall, according to Kevin Gaunt with the Umeå Institute of Design's Interaction Design Program. As part of his graduate project, Gaunt conceived of smart home bots to help the elderly. "That led me to think about what if a future smart home had multiple [assistants] that each focused on a narrow set of tasks, like online shopping, managing the daily budget, or spying on the neighbors' whereabouts," he says. Gaunt envisions a kit of bots, each denoted by basic symbols, which can network their functionality to make seniors' lives less dull. Among the bots he describes is a surprise bot that keeps track of the household budget, and orders the senior a surprise gift within their budget when they are feeling low. The second bot is a neighborhood-surveillance program that watches for nosy neighbors and warns the resident when they are at the door. The third bot mimics the behavior patterns of a deceased spouse, playing their music, ordering their food, and even cracking some of their jokes. "As these bots ultimately try and quite possibly sometimes fail to do the 'right' thing, I see our relationship with this technology changing to something more alike having pets at home," Gaunt says.

Designing Better Ways to Let Go of Digital Memories Than 'Delete'
Lancaster University (08/18/16)

A paper by researchers at Britain's Lancaster University, published in ACM Transactions on Computer-Human Interaction, investigates how to help grieving people let go of emotionally charged digital memories of departed loved ones or ended relationships without simply deleting them. Through interviews with psychotherapists, the researchers observed three distinct themes of letting go--dynamic disposal, or breakage of artifacts; a gentler open disposal to communicate sadness or regret; and covert disposal, in which the artifact is slowly transformed by dissolution or decomposition. Lancaster researcher Corina Sas says the traditional function of digital storage is the retention of content, but new ways of releasing it could be designed in keeping with the above rituals of letting go. "Containers could display digital possessions such as text or images or sounds one at a time before they appear to drift away, never to be found or seen again," she notes. "We could also explore ways to encourage destruction and transformation of digital possessions through shaking, breaking, or throwing so that the fragmentation of the content can be seen but take longer to disappear than by pressing delete." The researchers also suggest deliberately designing fragile storage devices from self-dissolving or biodegradable electronics, or slowly fragmenting abstract representations of content over time.

Ehsan Hoque: MIT Technology Review 'Innovator Under 35'
University of Rochester NewsCenter (08/23/16) Peter Iglinski

The editors of Massachusetts Institute of Technology (MIT) Technology Review have named University of Rochester professor Ehsan Hoque one of 2016's "innovators under 35" for his research in human-computer interaction. "Much of my work involves developing systems that allow computers to engage in communication in a very natural way," Hoque says. He developed the My Automated Conversation coach and the Live Interactive Social Skills Assistant to provide real-time feedback on a person's social speaking skills. "Taking care of my [non-verbal] brother gave me an insight that individuals with special needs do not necessarily have to become nondisabled," Hoque says. "Instead, effort and awareness should go in helping them to live effectively as people with needs." Hoque also devised ROCSpeak, a social training module accessible via a Web browser, which analyzes word choice and body language. In addition, his team developed Rhema, an intelligent user interface for "smart glasses" that provides real-time feedback to the speaker; Rhema records a speaker, relays the audio to a server that analyzes the volume and speaking rate, and then displays the data to the speaker in real time. Hoque also is spearheading a $2.5-million U.S. Department of Defense-funded project to tap insights from human-to-human communication to make computers genuine collaborators.
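
The Rhema pipeline described above boils down to two simple measurements on the audio stream. A minimal sketch, with our own function names and data shapes (the article specifies only that volume and speaking rate are analyzed):

```python
import math

# Sketch of the two per-window measurements: loudness via root-mean-square
# amplitude (standard signal processing), and speaking rate as words per
# minute over a sliding window.

def rms_volume(samples):
    """Root-mean-square amplitude of one audio frame (values in -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def speaking_rate(word_count, window_seconds):
    """Words per minute over the analysis window."""
    return word_count * 60.0 / window_seconds

print(rms_volume([0.5, -0.5, 0.5, -0.5]))  # 0.5
print(speaking_rate(12, 5.0))              # 144.0
```

The display side then only needs to compare each value against comfortable ranges and flash a cue (e.g., "louder" or "slower") on the glasses.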

MIT and Microsoft Create Tattoos That Turn Your Skin Into a UI for Your Phone
TechRepublic (08/15/16) Alison DeNisco

Researchers from the Massachusetts Institute of Technology's Media Lab and Microsoft Research have developed temporary tattoos that people can use to control mobile devices, display information, and store data on their skin. Conductive DuoSkin tattoos use gold metal leaf as the basis for three types of on-skin interfaces--sensing touch input, displaying output, and wireless communication. The gold leaf layer provides conductivity to stencils, which in combination with surface-mount electronics form the DuoSkin. "We believe that skin serves as the bridge between the physical and digital realms, enabling users to leverage the personal aesthetic principle that is often missing in today's wearable tech," says the research team. A study on DuoSkin's applications found participants could control music on a smartphone with near-field communication via a chip on the tattoos. Researchers also designed an app called Couple Harmony, in which one person wears a thermochromic fire tattoo that shows white when the other person signals they are angry by pressing a "mood button" on their forearm. Tufts University professor Rob Jacob believes a temporary tattoo user interface could be especially useful as devices shrink. "I think human-computer interaction is all about improving the communication bandwidth between the user and the computer, and this provides a new way to add more room for communication," he says.

Virtual Peer Pressure Works Just as Well as the Real Thing
New York University (08/25/16)

Virtual pressure from a computer-simulated peer is just as motivating as real peer pressure, according to researchers at New York University's Tandon School of Engineering. Moreover, the researchers say "fake" competition can be used for the good of science. The team formulated a mathematical model of human behavior that successfully predicted group responses across conditions. The researchers then designed an experiment to test whether virtual peer pressure could boost individual participation in a citizen science project. They reworked the interface of a citizen science project in which users view and tag images, adding an indicator bar at the top of the screen to show the number of times another participant had tagged the same image. This indicator represented the performance of a virtual peer, for which the researchers developed five different scenarios. They say the highest-performing group of real participants were those who saw a virtual peer that consistently outperformed them. Conversely, the group who saw a virtual peer that underperformed them contributed less than any other group. In addition, the group whose virtual peer matched their own level of activity also outperformed the control group, potentially indicating the mere presence of a peer leads to increased performance.
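
As a hedged sketch of the manipulation described above, the indicator bar's value can be driven directly from the participant's own tag count. The three policies mirror the article's outperform, match, and underperform conditions; the multipliers are illustrative, not the study's parameters.

```python
# Sketch: generate the simulated peer's displayed tag count as a function
# of the real participant's current count under one of three policies.

def peer_tag_count(user_tags: int, policy: str) -> int:
    """Return the tag count shown on the virtual peer's indicator bar."""
    factors = {"outperform": 1.5, "match": 1.0, "underperform": 0.5}
    return round(user_tags * factors[policy])

print(peer_tag_count(10, "outperform"))  # 15
```

Tying the peer's output to the user's own behavior keeps the comparison credible at every activity level, which a fixed script could not guarantee.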

'Tech for Others'
BC News (08/30/16) Rosanne Pellegrini

Boston College's (BC) free Camera Mouse software, designed to help disabled people use computers, has been downloaded more than 3 million times since it became available nine years ago. Camera Mouse enables people with little hand control to use their head movements to manipulate a mouse pointer on a Windows computer screen. The technology was created as a spinoff of the Boston College EagleEyes Project, which employs strategically placed electrodes to relay eye movement to a computer, which interprets the input as it would the movement of a mouse. BC professor James Gips says EagleEyes is designed for "people who are completely locked in except for voluntary eye movement," while Camera Mouse benefits a wider population and is easy for people to understand, download, install, and use. "We are helping people who are complete strangers, people who are in difficult life circumstances, people who have cerebral palsy or ALS or have suffered a stroke," Gips says. "To hear from these people and their...caregivers that Camera Mouse is helping them, even changing their lives, is gratifying beyond words."
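
Head-tracking mouse control of the kind Camera Mouse provides amounts to mapping a tracked head position in the camera frame to screen coordinates. A minimal sketch, assuming a simple gain-and-clamp mapping (Camera Mouse's actual smoothing and calibration are more involved):

```python
# Sketch: map a tracked facial feature's position in the camera frame to
# a screen cursor position. The gain amplifies small head movements so the
# user can reach screen edges without large motions; output is clamped.

def cursor_from_head(face_x: float, face_y: float,
                     frame_w: float, frame_h: float,
                     screen_w: float = 1920, screen_h: float = 1080,
                     gain: float = 2.0):
    """Return (x, y) screen coordinates for the cursor."""
    cx = (face_x / frame_w - 0.5) * gain + 0.5   # centered, amplified
    cy = (face_y / frame_h - 0.5) * gain + 0.5
    return (min(max(cx, 0.0), 1.0) * screen_w,
            min(max(cy, 0.0), 1.0) * screen_h)

# A head centered in a 640x480 frame puts the cursor at screen center.
print(cursor_from_head(320, 240, 640, 480))  # (960.0, 540.0)
```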

New Methods Make Smartwatches Easier to Use
University of St. Andrews (08/12/16)

Researchers at Britain's University of St. Andrews have developed WatchMI, which they say is an easier-to-use interface for smartwatches and fitness bands. The WatchMI interface enables wearers to access watch functions using a broader range of actions, such as twisting the watch face to raise the volume, applying pressure to the screen to select text characters, or panning the watch right or left to scroll between menus or across maps. "Direct input with our smartphones or smartwatches allows many forms of interaction, however with small diminutive devices our fingers and hands get in the way, blocking our view of what is happening," says St. Andrews professor Aaron Quigley. "WatchMI overcomes this problem and allows us to wear and interact with all the pixels on our body-worn devices, not just the ones our fingers aren't blocking." Quigley says WatchMI relies on the built-in gyroscope and accelerometer found in most smartwatches to facilitate its functions. "I believe this could transform the way smartwatches are viewed and used because our technique could be applied to most of the smartwatches and fitness trackers in the market without adding to the cost," notes St. Andrews postgraduate researcher Hui-Shyong Yeo.
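
Distinguishing the gestures described above from raw inertial data can be sketched as a threshold test on angular velocities. The axis conventions and threshold below are assumptions for illustration, not WatchMI's published classifier:

```python
# Sketch: classify a twist vs. a left/right pan from gyroscope readings.
# Assumed convention: z is the axis normal to the watch face, so a twist
# of the face registers mainly on gyro_z, while a pan of the wrist
# registers mainly on gyro_y. Units are rad/s; the threshold rejects noise.

def classify_gesture(gyro_x: float, gyro_y: float, gyro_z: float,
                     threshold: float = 0.5) -> str:
    if abs(gyro_z) > max(abs(gyro_x), abs(gyro_y), threshold):
        return "twist"   # rotation about the face's normal (e.g., volume)
    if abs(gyro_y) > max(abs(gyro_x), threshold):
        return "pan"     # wrist rotation left/right (e.g., menu scroll)
    return "none"        # below threshold: ignore as sensor noise

print(classify_gesture(0.1, 0.0, 2.0))  # twist
```

A real system would integrate these rates over time to recover the amount of twist or pan, letting a continuous gesture map onto a continuous control like volume.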

Now You Can Play Angry Birds Using a Touch-Sensitive Second Skin
New Scientist (08/11/16) Paul Marks

Researchers at South Korea's Seoul National University (SNU) have developed a transparent and stretchable touchpad, designed to be worn on the forearm, that enables users to control apps and games on a separate screen. The device is made from a water-rich polyacrylamide hydrogel and lithium chloride, and the researchers applied a voltage across the device and designed a circuit that could tell precisely where the surface is being touched. The touchpad is biocompatible, which means it can be worn snugly against the skin for long periods of time without a toxic reaction. In demonstrations, the team used the device to draw pictures, write, play music, and play games. The material also can stretch to 10 times its original surface area without affecting its function. The researchers plan to add multi-touch capabilities, which would let users complete "pinch-to-zoom" actions that are common on smartphones, says SNU's Jeong-Yun Sun. Separately, a team at the University of Tokyo is developing a paper-thin polymer skin that can turn the back of a human hand into a digital display. Meanwhile, other researchers are developing devices to be worn on the body or wrist that project an interface onto the skin.
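
The sensing principle described, a voltage applied across the sheet plus a circuit that locates the touch, resembles standard surface-capacitive position sensing, where the ratio of the small currents drawn through each edge electrode encodes the touch location. A hedged one-axis sketch (the real device is two-dimensional, and the linear formula here is illustrative):

```python
# Sketch: estimate touch position along one axis of a conductive strip.
# A touch near the left edge draws more of its current through the left
# electrode, so the right-current fraction grows with distance from left.

def touch_position(current_left: float, current_right: float,
                   length_mm: float = 100.0) -> float:
    """Estimated distance (mm) of the touch from the left electrode."""
    return length_mm * current_right / (current_left + current_right)

print(touch_position(1.0, 1.0))  # 50.0 (centre of a 100 mm strip)
```

A two-dimensional pad would repeat the same ratio computation along the second axis.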

Breaking the Fourth Wall in Human-Computer Interaction: Really Talking to Each Other
The Conversation (08/15/16) Ivan Gris

University of Texas at El Paso postdoctoral fellow Ivan Gris and colleagues seek to break the fourth wall in human-computer interfaces. "Our goal was to help people build rapport with virtual characters and analyze the importance of 'natural interaction'--without controllers, keyboard, mouse, text, or additional screens," Gris notes. The researchers combined IBM's Watson artificial intelligence system and their own software to create a Harry Potter "clone," which users can question via a microphone. Gris says the clone can answer any question as long as there is a reference for it in one of the Harry Potter books. He notes his group's most advanced development is a survival scenario in which users must converse, gesture, and interact with a virtual character to survive on a deserted island. "While people interact, we analyze how they behave, and look for different reactions to controlled characters' personality changes, gestures, speech tones, and rhythms, and even small things like breathing, blinking, and gaze movement," Gris says. "The next steps are clearly bringing these characters outside of their flat screens and virtual worlds, either to have people join them in their virtual environments through virtual reality, or to have the characters appear present in the real world through augmented reality."

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.