ACM TechNews SIGCHI Edition
Welcome to the June 2016 SIGCHI edition of ACM TechNews.

ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and on joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.


Design Ethnography Pioneer Brigitte Jordan Dies at 78
Society for Applied Anthropology (06/06/16) Melissa Cefkin

Pioneering anthropologist Brigitte (Gitti) Jordan, who blazed a trail in the field of design ethnography, passed away on May 24 at the age of 78. Design ethnography is a strategy for basing design, particularly technical design, on an understanding of people's everyday landscapes, needs, values, behaviors, and surroundings. Jordan's 1971 master's thesis at Sacramento State College investigated how computer models might be better leveraged by anthropologists, and her postdoctoral work at the University of California, Irvine involved deep engagement with ethnomethodology and conversation analysis, as well as emerging thinking in cognition, learning theory, and more. Jordan later was a researcher at the Xerox Palo Alto Research Center, working with other trailblazers to further the anthropological and ethnographic study of complex technology. As a senior scientist at the Institute for Research on Learning, Jordan made key contributions to establishing the institute's depth of focus and understanding of social learning processes. She also played a crucial role in institutionalizing the business of corporate anthropology. In her final years, Jordan was a consultant to the Nissan Research Center in Silicon Valley, where artificial intelligence experts and roboticists are striving to create autonomous vehicles. In this capacity, Jordan pushed for researchers to devote attention to the technology's human ramifications.

8 Incredible Prototypes That Show the Future of Human-Computer Interaction
Co.Design (05/13/16) John Brownlee

ACM's CHI 2016 conference in San Jose, CA, showcased many prototypes for future human-computer interaction, including a "haptic retargeting" system from Microsoft Research to enhance virtual reality (VR) with tactile sensations by enabling a small number of physical props to stand in for an almost limitless number of virtual objects. Another Microsoft invention was "pre-touch sensing," a smartphone with the potential for various user-interface advances via its ability to detect how it is being grasped and when a finger is nearing the screen. Also highlighted were Dexta Robotics' haptic gloves, which simulate varying levels of force feedback in VR. The University of Washington, Disney Research, and Carnegie Mellon University displayed PaperID, a technology with a printable antenna designed to make paper an interactive touchscreen-like interface with environmental awareness and gestural responsiveness. Meanwhile, SkinTrack from the Human-Computer Interaction Institute's Future Interfaces Group can expand a smartwatch's tactile interface over the wearer's hand and arm. The Massachusetts Institute of Technology presented Materiable, a physical interface that mimics the tactile characteristics of various materials on a shapeshifting display. Other standout prototypes included Holoflex, a flexible, bendable smartphone enabling holographic displays with proper perspective, and Microsoft's SparseLight, a technology for eliminating the tunnel-vision effect in VR and augmented reality by using light-emitting diodes in the user's peripheral vision.

'On-the-Fly Print' Lets CAD Designers Modify in Progress
Cornell Chronicle (05/26/16) Bill Steele

Cornell University researchers have developed the On-the-Fly-Print system, an interactive three-dimensional (3D) prototyping device that prints designs as they are being designed. Using an augmented iteration of the WirePrint printer developed in a Cornell/Hasso Plattner Institute project, the printer extrudes a rope of quick-hardening plastic to produce a wire frame representing the surface of the solid object defined in a computer-aided design (CAD) file, while the designer can make changes as printing proceeds. Although the print nozzle only works vertically, the printer's stage can be rotated so that any face of the model faces up. A cutter also can be used to remove parts of the model, and the nozzle has been extended so it can reach through the wire mesh to make changes from within. A removable base aligned by magnets lets the operator take the model out of the printer to measure it or test whether it fits where it is supposed to go, then replace it in its original position to resume printing. The wire frame is designed by software, which sends instructions to the printer and permits interruptions. The researchers presented the On-the-Fly-Print system last month at the ACM CHI 2016 conference in San Jose, CA.

Ultra-High-Resolution Walls for Visualizing Very Large Datasets
SPIE (05/28/16) Emmanuel Pietriga; Fernando del Campo; Amanda Ibsen; et al.

An exploration by researchers into the design, engineering, and evaluation of ultra-high-resolution wall displays concentrates on developing and empirically assessing unique interaction methods specifically designed for wall-display environments. For example, the researchers devised high-precision remote-pointing techniques enabling users to interact with the wall even when they are out of reach of the display, as well as panning and zooming techniques for navigating maps, images, and datasets that exceed the walls' display range. One lab installation is equipped with a touch-sensitive frame and narrower screen bezels to enable data representation with a high level of detail and simultaneous context retention, so users can switch from an overview of the data to a detailed view by moving in front of the display. The difficulties wall displays usually have with data sharing and graphics rendering stem from such operations being handled by computer clusters, and the researchers' displays utilize software toolkits to simplify rapid prototyping and the development of advanced interactive visualizations operating on cluster-driven display surfaces. The latest wall display facilitates astronomers' visualization of and interaction with large Flexible Image Transport System (FITS) images and image collections, including database querying and in-situ visualization of query results. Interaction methods include direct manipulation and gestures on the display's surface or on portable tablets. The researchers note fields in which such advances are applicable include crisis management and scientific data analysis.

Why Siri Won't Listen to Millions of People With Disabilities
Scientific American (05/27/16) Emily Mullin

Siri and similar voice interfaces are unusable by more than 9 million people in the U.S. who have vocal handicaps and other conditions the technology is incapable of understanding. In addition, despite the growing accuracy of voice recognition, it still cannot identify with sufficient precision many atypical voices or speech patterns, such as stuttering, according to experts. "Speech recognizers are targeted at the vast majority of people at that center point on a bell curve," says Sensory CEO Todd Mozer. "Everyone else is on the edges." The National Institute on Deafness and Other Communication Disorders estimates the people beyond the help of current voice-recognition technology include the roughly 4 percent of the U.S. population that had difficulty using their voices for one week or longer during the past year due to a speech, language, or vocal problem. A 2011 study that used a conventional automatic speech-recognition system to compare accuracy on normal voices and on voices with six different vocal disorders found the product was 100-percent accurate at recognizing the normal subjects' speech, but only 56-percent to 82.5-percent accurate for patients with different types of voice ailments. A key problem is that speech and vocal impediments by their nature produce irregular, unpredictable voices in which voice-recognition systems cannot find patterns on which to train. Mozer thinks better-designed neural networks could help address the problem once enough data is collected.
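Recognition accuracy figures like those cited above are typically reported via word error rate (WER): the word-level edit distance between what was said and what the recognizer produced, normalized by the length of the reference transcript. As a minimal illustration (not drawn from the study itself), WER can be computed with a standard dynamic program:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("turn on the lights", "turn on the lights"))  # 0.0
print(word_error_rate("turn on the lights", "turn on delights"))    # 0.5
```

A recognizer that is "100-percent accurate" has a WER of zero; the disordered-speech results above correspond to substantially higher error rates.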

Making Cities Smarter
MIT News (06/01/16) Jennifer Formichelli

Smart cities are anchored by various interconnected social networks and systems, an understanding of which enables data-driven urban planning that promises to greatly enhance the quality of urban life. "When data is made comprehensible to a large number of people, it is well-positioned to drive social change," says Massachusetts Institute of Technology (MIT) professor Sarah Williams at the Institute for Data, Systems, and Society (IDSS). "Creating tools that synthesize and collect data transforms how we see the world, at one time showing us the effects of policies while also providing essential information to develop new urban strategies." IDSS researchers are developing smart technologies to power future cities, including autonomous vehicles and smart energy meters, by following a systems approach to build effective solutions. IDSS' Emilio Frazzoli currently is working on street-ready self-driving vehicle technology. He also has contributed to a mathematical model for an autonomous, or slot-based, intersection that makes traffic lights redundant via communication between autonomous cars. Meanwhile, MIT professors Jessika Trancik and Moshe Ben-Akiva are leading the development of Mobility Electronic Market for Optimized Travel, which rewards people for more efficient transportation behavior by providing them with real-time information and feedback. Another project from IDSS director Munther Dahleh and colleagues applies control theory to a feedback loop for energy pricing that would enable consumer pricing response while ameliorating excessive fluctuations in demand.

Force-Feeling Phone: Software Lets Mobile Devices Sense Pressure
University of Michigan News Service (05/26/16) Nicole Casal Moore; Steve Crang

University of Michigan (U-M) researchers have developed ForcePhone, software that could imbue any smartphone with the ability to sense force or pressure on its body or screen and execute associated instructions. The software has the phone's speaker emit an inaudible tone at a frequency higher than 18 kHz, and the device's microphone detects the vibration caused by the sound. The act of pressing on the screen or squeezing the phone's body causes the tone to change, which the software converts into commands. "We've augmented the user interface without requiring any special built-in sensors," notes U-M professor Kang Shin. "ForcePhone increases the vocabulary between the phone and the user." ForcePhone co-developer Yu-Chih Tung says the software enables a natural interface, and he describes it as "the next step forward from a basic touch interface and it can complement other gestured communication channels and voice." Shin and Tung were partly inspired to create ForcePhone by the 2008 Batman movie "The Dark Knight," in which the hero uses smartphones as a sonar system to track down his enemy. The researchers will present ForcePhone this month at the ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2016) in Singapore.
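The article does not detail ForcePhone's signal processing, but the basic idea of emitting a near-ultrasonic pilot tone and watching how a press perturbs it can be sketched with simple FFT peak-picking. Everything below is an illustrative assumption, not the U-M implementation: the 18.5-kHz tone choice, and the toy model of a squeeze as slightly damping and detuning the received tone.

```python
import numpy as np

RATE = 44100     # common smartphone audio sampling rate
TONE_HZ = 18500  # assumed pilot tone, above 18 kHz per the article

def dominant_frequency(samples, rate=RATE):
    """Return the strongest frequency component via an FFT peak pick."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

# 100 ms analysis window of the tone as heard by the microphone
t = np.arange(0, 0.1, 1.0 / RATE)
idle = np.sin(2 * np.pi * TONE_HZ * t)
# Toy model: pressing the body attenuates and slightly detunes the tone.
pressed = 0.6 * np.sin(2 * np.pi * (TONE_HZ - 40) * t)

shift = dominant_frequency(idle) - dominant_frequency(pressed)
print(f"detected shift: {shift:.0f} Hz")  # a nonzero shift signals pressure
```

With a 100 ms window the FFT bins are 10 Hz apart, so even a small pressure-induced change in the received tone is distinguishable from the idle baseline.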

Using Virtual Users to Develop Accessible ICT-Based Applications
Technical University of Madrid (Spain) (05/26/16)

Researchers in the Technical University of Madrid's Life Supporting Technologies group have produced parametric cognitive virtual models of disabled users, which can be used to model user interaction with information and communications technology (ICT) applications. The researchers initially identified the primary cognitive functions and their corresponding parameters that relate to each type of cognitive impairment of interest, which included Alzheimer's disease, Parkinson's disease, and visual, hearing, and speech impairments. The data then was classified using the ACT-R cognitive architecture to interrelate how people execute tasks with the parameters defining their cognitive functions. The models incorporated end users' specific needs and recommendations, and the researchers validated the models via the accessibility evaluation of an actual health-monitoring system. Three potential user groups--those with age-related cognitive decline, those with visual impairments, and those with motor disabilities--were modeled. Outcomes demonstrated the parametric cognitive virtual users closely represent real users affected by functional limitations, age-related cognitive decline, and disabilities. The models also were shown to help assess ICT applications' usability across the development cycle, ensuring the maximum degree of accessibility and interaction, and to enable improvements before testing with real and potential users in real or simulated settings.

MIT's New Shapeshifting Interface Can Mimic the Behavior of Water
Wired (05/25/16) Liz Stinson

Researchers at the Massachusetts Institute of Technology (MIT) Media Lab's Tangible Media Group are developing a project called Materiable. The project involves a three-dimensional display of white plastic pins that are programmed to mimic flexibility, elasticity, and viscosity so they can replicate the characteristics of shape-shifting materials such as water, rubber, and clay. Underneath the pins are sensors that register the amount of pressure being applied to each pin or pixel, while actuators inside the pins control their response to that pressure. The researchers say these mechanisms create a "pseudo-haptic effect" combining a simultaneous visual and tactile experience when a person interacts with the display. MIT researcher Luke Vink sees technologies such as Materiable exploiting the properties of materiality to create a tangible link between digital and physical interfaces. "There's the computer approach to the way we do things and this physical one," he says. "When you start to combine those two worlds you start to get in some ways very confused. It's a very complex problem." Vink notes the metaphors that work in the digital world do not always translate to the physical world. Materiable is similar to InFORM and Transform, MIT projects in which pins rendered information in three-dimensional shapes and reacted to human gestures, respectively.
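As a rough illustration of the pseudo-haptic idea, each pin of such a display can be treated as a damped spring whose stiffness and damping determine whether it feels like rubber (pushes back) or clay (holds its deformed shape). The class and parameters below are invented for the sketch; this is not Materiable's actual control software.

```python
class Pin:
    """One pixel of a hypothetical shape display, modeled as a damped spring.

    Illustrative parameters: high stiffness feels rubber-like (the pin
    springs back when released); low stiffness with high damping feels
    clay-like (the pin stays roughly where it was pushed).
    """
    def __init__(self, stiffness, damping, dt=0.01):
        self.stiffness, self.damping, self.dt = stiffness, damping, dt
        self.height, self.velocity = 0.0, 0.0  # displacement from rest

    def step(self, pressure):
        # Net force = applied pressure - spring restoring force - viscous drag
        force = (pressure - self.stiffness * self.height
                 - self.damping * self.velocity)
        self.velocity += force * self.dt   # semi-implicit Euler update
        self.height += self.velocity * self.dt
        return self.height

rubber = Pin(stiffness=50.0, damping=2.0)
clay = Pin(stiffness=0.5, damping=8.0)
for _ in range(300):       # press both pins down for 3 simulated seconds
    rubber.step(-1.0)
    clay.step(-1.0)
for _ in range(300):       # release for 3 simulated seconds
    rubber.step(0.0)
    clay.step(0.0)
# rubber returns near its rest height; clay largely keeps its deformation
```

The same loop, driven by real pressure-sensor readings and actuator commands instead of a simulation, is the kind of feedback that lets a pin array mimic different materials.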

In Automaton We Trust
Harvard University (05/25/16) Adam Zewe

For her senior thesis project, Harvard University's Serena Booth studied the concept of over-trusting robotic systems by performing a human-robot interaction study on the Harvard campus. Booth set up a wheeled robot outside dormitories and controlled it remotely, monitoring its interactions with individuals and groups of students as it asked to be admitted into the dorms. People the robot approached individually helped it enter the building in 19 percent of trials. When Booth placed the robot within the building, and it approached individuals requesting help exiting, they agreed to help 40 percent of the time. The significance is that people may feel safety in numbers when engaging with robots, as the device was admitted into the building in 71 percent of cases when it approached groups. However, individuals were far more likely to let the machine inside when it was disguised as a deliverer of cookies for a fictional startup. Booth assumed individuals perceiving the robot as a threat would not let it inside, but they were just as likely to admit it; she is worried only one person stopped to consider whether the robot had authorization to enter the dorm. "We are putting ourselves in a position where, as we allow robots to become more integrated into society, we could be setting ourselves up for some really bad outcomes," Booth says.

Will This Augmented Reality Machine Really Replace Your PC?
Bloomberg (05/24/16) Selina Wang

Meta CEO Meron Gribetz says his company has developed augmented reality technology that could eventually replace personal computers. "My vision is to build an [operating system] that's 100 times easier to use than a Macintosh," he says. "We're excited to remove the start menu--all of these metaphors and buttons and icons that take your brain extra steps to decode, and that are making my grandmother's job of using computers much harder." The Meta 2 headset enables the wearer to control three-dimensional content by hand using a multi-holographic screen overlay, and all of the startup's employees will soon use the headset exclusively for work. Stefano Baldassi, Meta's user research director, says the interface design is a collaborative effort between neuroscientists and engineers to ensure the human brain does not reject the interface. "There are intimate and nontrivial connections between the human senses and this computer," Baldassi notes. "They need to be studied in incredible depth for the product to succeed and to scale to the masses." Among the challenges researchers must contend with is maximizing the area in front of the user for containing virtual objects. Gribetz expects Meta will be able to streamline the headset into a strip of glass that projects holograms on the wearer's eye within five years.

Assistive Tech to Tackle Dementia Isolation
Manchester Metropolitan University News (05/19/16) Chris Morris

Researchers at Manchester Metropolitan University (MMU), in association with Stockport Memory Clinic and KMS Solutions, are exploring assistive technologies to support people suffering from dementia and their caregivers. The technology options include wearable technology, satellite tracking, and mobile phone applications. "The project will analyze the potential of these technologies to reduce social isolation and improve health outcomes," says MMU professor Josie Tetley. KMS Solutions managing director John Hearns says the technologies developed by his organization "can support independent living in the community by enabling the person living with dementia to move independently in safe areas, the carer to locate them using [global-positioning system] tracking, and the person with dementia or their carer to contact each other in case of an emergency." The project is part of an array of initiatives at the university designed to mitigate loneliness and isolation with new technologies. Stockport Memory Clinic's Carol Rushton says memory issues and confusion in dementia sufferers can lead to people becoming lost or disoriented even when taking a simple walk. "These distressing experiences for some can result in reduced activity, increased social isolation, and increased carer stress, so any form of technology that can support people [to] get out and about more safely and confidently would be a great help," she says.

'Virtual Partner' Elicits Emotional Responses From a Human
Florida Atlantic University (05/17/16) Gisele Galoustian

Researchers at Florida Atlantic University (FAU) sought to investigate the feelings people experience when interacting behaviorally with a machine by developing a "virtual partner." The researchers designed the virtual partner to be governed by mathematical models of human-to-human interactions in a manner that enables humans to engage with the mathematical description of their social personae. "Humans exhibited greater emotional arousal when they thought the virtual partner was a human and not a machine, even though in all cases, it was a machine that they were interacting with," notes postdoctoral researcher Mengsen Zhang in FAU's Center for Complex Systems and Brain Sciences (CCSBS). "Maybe we can think of intelligence in terms of coordinated motion within and between brains." FAU's Human Dynamic Clamp human-machine interface technology employs a virtual partner capable of eliciting real-time emotional responses from its human partner. Upon receiving input from human movement, the model drives an image of a moving hand displayed onscreen, while a subject observes and coordinates with the image as if it were a person monitored via a video circuit. The "surrogate" can be finely tuned and controlled, both by the experimenter and by input from the subject. CCSBS founder J.A. Scott Kelso says the Human Dynamic Clamp represents progress in research into complex social behavior that traditional designs cannot encompass.

Abstract News © Copyright 2016 INFORMATION, INC.