ACM TechNews SIGCHI Edition
Welcome to the December 2016 SIGCHI edition of ACM TechNews.

ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.


Prototype Smart Cane Could Transform Lives of the Blind and Visually Impaired
University of Manchester (11/23/16) Joe Paxton

Doctoral student Vasileios Tsormpatzoudis at the U.K.'s University of Manchester has embedded a low-cost computer in a prototype white cane as a tool to help the visually impaired better navigate their surroundings. "MySmartCane allows visually impaired people to sense their environment beyond the physical length of their cane," Tsormpatzoudis says. "The user is alerted to approaching objects using gentle audio, rather than waiting for the cane to physically bump into the object. Navigation is therefore easier and much faster." Tsormpatzoudis says he used three-dimensional printing and inexpensive sensors to produce an ultrasonic ball that attaches to the bottom of mySmartCane. The ball measures the distance to approaching objects and turns this data into an audio signal, enabling users to gauge object distance from the frequency of the sound, which they receive either via a single headphone or a pair of bone-conducting headphones. "I want to add an additional sensor to detect overhead obstacles such as sign-posts or doorways, which could cause injury and are impossible to detect with a normal white cane," Tsormpatzoudis says. "Another innovation could be using vibration rather than sound."
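The distance-to-pitch mapping the article describes can be sketched as a simple function. The sensor range, frequency band, and linear interpolation below are illustrative assumptions, not details of the mySmartCane implementation.

```python
def distance_to_pitch(distance_m, max_range_m=3.0,
                      min_hz=220.0, max_hz=1760.0):
    """Map an ultrasonic distance reading (metres) to a tone frequency (Hz).

    Closer objects produce higher pitches, so urgency rises as an
    obstacle approaches. All constants are illustrative assumptions.
    """
    # Clamp the reading to the sensor's usable range.
    d = max(0.0, min(distance_m, max_range_m))
    # Linear interpolation: at max range -> min_hz, at 0 m -> max_hz.
    proximity = 1.0 - d / max_range_m
    return min_hz + proximity * (max_hz - min_hz)
```

The resulting frequency could then drive a tone generator feeding the single headphone or bone-conducting headphones mentioned above.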

UCF Research Finds Google Glass Technology May Slow Down Response Time
UCF Today (11/22/16) Rachel Williams

Research by University of Central Florida (UCF) doctoral student Joanna Lewis indicates heads-up displays such as Google Glass could slow response times in critical situations, such as driving. "What our data suggests is secondary information presented on a heads-up display is likely to interfere, and if that happens while driving, it may be distracting and dangerous," says UCF professor Mark Neider. The findings are the result of a three-year project conducted by Lewis at UCF's Applied Cognition and Aging Laboratory. In one experiment, Lewis had 363 student participants complete a "Where's Waldo?"-like primary task on a computer while either wearing or not wearing a Google Glass display. Some students were told to heed the display's secondary instructions while others were told to disregard them. All participants wearing the display took longer to complete their primary task. Lewis theorizes this resulted from two visual stimuli vying for the visual-processing capabilities in the students' brains. "The goal here is to make the case that we should be careful, and just because we can [integrate heads-up display technology into everyday activities] doesn't mean we should," Lewis says.

Grant for Tool to Help Children With Autism Communicate
Scoop (New Zealand) (11/18/16)

The KiwiNet Emerging Innovator Fund has awarded a $25,000 grant to a New Zealand scientist to build on the development of a software-based communication training platform for autistic children. Callaghan Innovation researcher Swati Gupta's Talk With Me platform helps children with autism learn cooperative skills via turn-taking dialogues based on visual symbols, text, and sound. "Unlocking the world of a child with special needs is profoundly exciting," Gupta says. "Seeing children who have difficulty in social interaction experience the joy of connecting with others is an absolute highlight of my work." Talk With Me trains children to follow fundamental rules of social interaction and communicate with each other. Caregivers can use a customizable interface to create conversations they want children to practice. Gupta says the interface can be optimized for an individual or group's level, age, condition, culture, and language. "The Emerging Innovator funding is being used to help understand the market and further development of the current [Talk With Me] prototype," she notes. Hillmorton High School speech language therapist Sarah Powell says the platform enabled students to communicate for 10 to 15 minutes without the intervention of teacher aides. "And these were children who, previously, if left alone without any teacher aide, wouldn't communicate at all," Powell notes.

The Scent of an Apple Could Replace Your Smartphone's LOL Face
Quartz (11/17/16) Livia Albeck-Ripka

A recent study explored whether scent could be used to express emotions during digital communication, a concept described by researchers as "odor emoticon." Separate cohorts of participants selected, reviewed, and corresponded with one another via scents. The researchers found smell-enhanced chatting was intuitive, encouraged more communication, and helped participants sense and articulate their feelings. Pamela Dalton at the Monell Chemical Senses Center says incorporating smells into mainstream technology is tricky, given the effort needed to reproduce a vast array of odors. However, Massachusetts Institute of Technology (MIT) professor Michael Bove says the main challenge is that olfaction has been "less researched in human-computer interaction than it deserves." The study, a collaboration between researchers from MIT and China's Zhejiang University, devised nine odor emoticons, such as "apple" for "happiness," "vinegar" for "envy," and "rose" for "love." Fifty-four participants heard pre-recorded voice mails while a prototype "olfaction machine" emitted pre-selected odors from the Demeter Fragrance Library. The participants then were asked to textually communicate on pre-selected topics, and could click a button with a listed emotion that triggered the machine to release the matching scent; most subjects said they found the method easy to incorporate into dialogues.
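The button-to-scent interaction above amounts to a lookup table plus a trigger. A minimal sketch follows; the article names only three of the nine pairings, and the `machine.emit` interface is an illustrative assumption, not the study's actual API.

```python
# Emotion-to-scent pairs named in the article (three of the nine).
ODOR_EMOTICONS = {
    "happiness": "apple",
    "envy": "vinegar",
    "love": "rose",
}

def release_scent(emotion, machine):
    """Look up the scent for an emotion and ask the olfaction
    machine to emit it; returns the scent name released."""
    scent = ODOR_EMOTICONS.get(emotion)
    if scent is None:
        raise KeyError(f"no odor emoticon defined for {emotion!r}")
    machine.emit(scent)
    return scent
```

In the study's setup, clicking an emotion button would play the role of calling `release_scent`.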

Website Design Potentially as Influential as the Message: Study
SFU News (11/09/16)

A new study from Canada's Simon Fraser University (SFU) demonstrates the design of a website may wield as much influence on a user group's opinions as its messaging. The findings "could have significant implications for sites related to politics, charity, sustainability, or e-commerce," says SFU professor Dianne Cyr. The study team asked nearly 400 volunteers to complete a survey about the Keystone XL pipeline project's website design while also assessing their knowledge of the project before they visited the site. The researchers evaluated three design factors--image appeal, site navigation, and connectedness, or the extent to which other visitors offered opinions about the site's subject matter. The study found that less-informed users were more likely to have their opinion of Keystone XL shaped by the site's design, while users with preexisting knowledge were less likely to be influenced by it. By accounting for these factors when designing a website, Cyr says it could be possible to influence visitors in alignment with the site owner's goals. "Offering a platform for users to share their opinions, for example, gives them a sense of connectedness that makes it more convincing to go along with the topic of persuasion," she notes.

A&M Group Seeks New Avenues for Anti-Cyberbullying Research
The Battalion (TX) (11/29/16) Pranav Kannan

The U.S. National Science Foundation has awarded an Early-concept Grant for Exploratory Research (EAGER) to a team at Texas A&M University's Sketch Recognition Lab to further their anti-cyberbullying research. The researchers are using the grant to develop KidGab, a social network for children that enables scientists to glean data, design safe online environments for children, and build artificial intelligence-based protection systems. "Children are particularly interesting, and the way that they text is very different from the way adults text or chat online," says A&M professor Tracy Hammond, director of the Sketch Recognition Lab. "That was a very intriguing problem for me, and I thought it aligns very well with our lab." Hammond says the goals of the grant are to develop a sustainable network for engaging children so researchers can study their interactions, while training the children to practice safe social network behavior. Hammond notes the EAGER grant partly seeks to acquire preliminary data on how conformity operates in children under 13. The team aims to gain more insights on bullying by observing interaction between children. "A lot of cyberbullying is about...groupthink and following the leader," Hammond says.

Massive VR System Links Merced to the World
UC Merced News (11/23/16) James Leonard

Researchers at the University of California, Merced (UC Merced) are using the Wide-Area Visualization Environment (WAVE) system to immerse themselves in virtual reality (VR) displayed in a half-pipe of 20 three-dimensional monitors. "The WAVE can reduce the gap that still exists between the processes of data collection and interpretation, and the dissemination of that data to researchers and the public," says UC Merced professor Nicola Lercari. The WAVE supports 10 to 20 times the resolution of an IMAX theater thanks to its 4K screens and custom-built personal computers with gaming graphics cards. The modular, upgradeable system uses mainly open source software, and because it is built from mass-produced components it is relatively inexpensive to integrate, enabling designers to add content-creation tools that support a broad user base. The VR environment forms part of the Pacific Research Platform, which links the 10 UC campuses with other research institutions into a high-speed network for collaboration. UC Merced professor Michael Spivey says the WAVE will be used for cognitive science research in such fields as human-computer and human-robot interaction. Jeffrey Weekley, director of Cyberinfrastructure and Research Computing in the Office of Information Technology, says the WAVE "is unprecedented in its scale and resolution, but there are many more like it to come, and we will be connected to many of them."

Scientists Believe They've Nailed the Combination That Could Help Robots Feel Love
Quartz (11/16/16) Jacek Krywko

A "lovotics" robot developed by Hooman Samani from the Artificial Intelligence and Robotics Technology Laboratory at Taiwan's National Taipei University is capable of responding emotionally to how people treat it. The furry, wheeled robot is equipped with cameras, microphones, tactile sensors, speakers, and colorful diodes. Samani says the robot also is outfitted with digital hormones to mimic the human endocrine system and give it an internal experience of love and affection. The digital equivalent of the biological hormone ghrelin, which provokes hunger, triggers the robot's desire to charge its battery, for example. At the same time, the robot uses simulated oxytocin, an emotional hormone that, along with the digital ghrelin, is structured into dynamic Bayesian networks. Samani says the machine processes visual, auditory, and tactile input to determine a user's attitude toward it, and attempts to categorize those attitudes into behaviors that psychologists have labeled as signifying love or the lack of it--proximity, attachment, repeated exposure, and mirroring. The robot then releases the optimal hormone combination to adjust its internal state in response, which Samani says incentivizes it to behave in a certain way. People who interacted with the robot found their engagement most often reflected "ludus love," or a relationship fostered by having fun together.

Tiny Fingertip Camera Helps Blind People Read Without Braille
New Scientist (11/09/16) Aviva Rutkin

A miniature camera worn on the fingertip enables vision-impaired users to read text without braille. The HandSight device developed by Jon Froehlich and colleagues at the University of Maryland employs a one-millimeter-long camera that sits on the tip of the finger while the rest of the device grips the finger and wrist. As the user follows a line of text, a computer reads aloud, and the user can navigate through the text via audio cues or haptic buzzes. A recent study found HandSight enabled 19 blind users to read between 63 and 81 words a minute. The median reading speed for an expert braille reader is about 90 to 115 words a minute, while that of a sighted individual is about 200 words a minute. The American Foundation for the Blind's Matthew Janusauskas thinks the technology could help the blind read printed material where layout impacts understanding, such as a page with multiple columns of text. The HandSight team envisions a smartwatch-like device that can discern colors and patterns in addition to reading text. Froehlich notes his group wants to "augment the fingers with vision to allow blind or severely visually-impaired users to get a sense of the non-tactile world."

How Cybathletes Are Racing Past the Peak of Human Performance
(11/09/16) Greg Williams

Athletes with bionic augmentations are using advanced prosthetics to outperform their able-bodied peers. Switzerland's ETH Zurich in October hosted the first Cybathlon to highlight the potential for bionic enhancement in six disciplines. A Cybathlon rehearsal featured participant Kevin Evison demonstrating the manual dexterity of his prosthetic hand by beating able-bodied users in guiding a loop of wire around another piece without touching it. "Prosthetic designers are starting to face questions like, is it OK to have a hand that can spin, and spin and spin?" Evison notes. The Cybathlon's brain-computer interface (BCI) race had paralyzed participants direct a thought-controlled avatar around a course using four separate controls for running, jumping, and steering. A new BCI system from Imperial College London professor Aldo Faisal is designed to plot the location of the user's gaze onto a landscape, and guide their powered wheelchair in that direction with a relatively inexpensive combination of a front-facing eye-tracking camera and a forward-facing Kinect device. "If a device can control a wheelchair, then it should be able to help me drive a car," Faisal says. He imagines a future in which disabled as well as able-bodied people can use noninvasive prosthetics to gain superhuman abilities.
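Gaze-driven steering of the kind Faisal describes ultimately reduces to projecting a gaze target into a heading and speed command. The sketch below shows one simple way to do that; the coordinate convention, dead zone, and speed rule are illustrative assumptions, not details of the Imperial College system.

```python
import math

def gaze_to_command(lateral_m, forward_m, deadzone_m=0.5, max_speed=1.0):
    """Convert a gaze target (lateral and forward offsets in metres,
    e.g. from a depth camera) into a (heading_rad, speed) command
    for a powered wheelchair. 0 rad means straight ahead.
    """
    heading = math.atan2(lateral_m, forward_m)
    distance = math.hypot(lateral_m, forward_m)
    if distance < deadzone_m:
        # Target too close: turn in place rather than drive forward.
        return heading, 0.0
    # Slow down for sharp turns; full speed when looking straight ahead.
    speed = max_speed * max(0.0, math.cos(heading))
    return heading, speed
```

A real controller would also fuse obstacle data from the depth camera before issuing the command, which is part of what the Kinect device provides in Faisal's setup.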

The Human Side of Technology
Queen's University Journal (11/04/16) Mikayla Wronko

Researchers in the Human Media Lab (HML) at Canada's Queen's University have been focused on the development of human-computer interaction technologies since 2000. HML's initial milestones included PaperPhone, the first computer with an interactive flexible electronic display, and ReFlex, a flexible smartphone integrating multi-touch with bend inputs. "By having a single theme and by basically advising all students to work within that theme...we as a group can own that theme and can do more work than anyone else on that one theme," says HML founder and director Roel Vertegaal. The lab's first theme was attentive user interfaces, which Vertegaal says entailed striking a balance concerning how a user's cognitive resources are utilized. The lab then concentrated on the theme of non-flat or organic user interfaces. "We have only used two dimensions in human-computer input and human-computer output," Vertegaal notes. "There is a lot of opportunity there." Vertegaal says HML's emphasis on addressing obvious challenges with simple solutions sets it apart from other labs, and his group's latest theme encompasses three-dimensional (3D) technology such as holograms. In May, the lab unveiled HoloFlex, a flexible smartphone that can engage with 3D video without additional hardware.

Making Sure Siri Ages Gracefully
UBC Science (11/07/16) Silvia Moreno-Garcia

In an interview, University of British Columbia (UBC) professor Joanna McGrenere, co-founder of HCI@UBC, discusses the evolution of human-computer interaction (HCI) with the adoption of devices by a broader range of users. She notes in the 1980s such devices were mainly geared for office workers, "but now toddlers are using iPads and 95-year-olds are using smartphones. Our definition of who a user is has expanded dramatically." McGrenere anticipates the eventual incorporation of HCI technology into virtually all devices, which means "the need to understand and design effective technology is more important than ever." McGrenere says HCI@UBC seeks to address this challenge by fostering collaboration between researchers in multiple disciplines. She also says some of her research emphasizes the older adult population, because the contention that older people will become more computer-savvy over time often overlooks the fact that HCI can be impacted by physical aging and the deterioration of motor abilities and short-term memory. Among the research challenges McGrenere cites is finding participants for studies concerning specific user populations, such as people suffering from aphasia. "Even with the small study we were running it was hard to get the people we needed," she notes.

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.