Welcome to the July 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Social Interaction Drives Language Learning Game
Cornell Chronicle (06/21/16) Bill Steele

Crystallize, a language-learning role-playing game developed by researchers at Cornell University, uses social interaction to boost players' knowledge and enjoyment. Crystallize requires players to guide avatars through a virtual environment in which all of the characters speak the target language, learning to communicate in order to make friends and secure employment. Players are sent on "quests" to learn new words by watching game characters converse. The prototype game teaches Japanese, but its creators say versions could be made for any language. The researchers enlisted 48 students to play the game in a lab and assigned them to work with partners with whom they communicated via a chat interface. Before-and-after tests indicated players in a "high interdependence" group learned more words. The researchers suggest immersion in a virtual world offers visual and situational context, while gaming elements supply motivation to learn. The designers say their future plans will focus on encouraging long-term interaction and integrating Crystallize with other language-learning software. "We hope to design effective experiences that clearly demonstrate to learners not just how to say things in a foreign language, but when and why they should say them," they say. The researchers presented their study at the ACM CHI 2016 conference in San Jose, CA.


New Tool for Virtual and Augmented Reality Uses 'Deep Learning'
Purdue University News (06/22/16) Emil Venere

Researchers at Purdue University's C Design Lab have developed DeepHand, a virtual and augmented reality system that uses deep learning to comprehend the hand's almost infinite variety of joint angles and contortions. The researchers say the technology is an essential element for future systems that enable human engagement with virtual environments. "We figure out where your hands are and where your fingers are and all the motions of the hands and fingers in real time," says C Design Lab director Karthik Ramani. He says DeepHand employs a depth-sensing camera to capture the motion of the user's hand, and algorithms then interpret hand movements. "It's called a spatial user interface because you are interfacing with the computer in space instead of on a touchscreen or keyboard," Ramani notes. "Say the user wants to pick up items from a virtual desktop, drive a virtual car, or produce virtual pottery. The hands are obviously key." DeepHand was trained on a database of several million hand poses and configurations, with the positions of finger joints assigned specific "feature vectors" that can be rapidly retrieved. "We identify key angles in the hand, and we look at how these angles change, and these configurations are represented by a set of numbers," says Purdue doctoral student Ayan Sinha. DeepHand then selects from the database the pose configurations that best fit what the camera observes.
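As summarized, DeepHand's final step works like nearest-neighbor retrieval: the descriptor computed from the camera frame is matched against the stored pose descriptors. Here is a minimal sketch of that idea in Python, with randomly generated stand-ins for the depth-camera features and pose database (both are assumptions for illustration, not DeepHand's actual data or network):

```python
import numpy as np

# Stand-in pose database: one descriptor ("key angles" encoded as a set of
# numbers, per the article) for each stored hand configuration.
rng = np.random.default_rng(0)
pose_db = rng.random((100_000, 32)).astype(np.float32)  # 100k poses, 32-D each

def nearest_pose(query: np.ndarray, db: np.ndarray) -> int:
    """Return the index of the stored pose whose descriptor best fits the query."""
    dists = np.linalg.norm(db - query, axis=1)  # distance to every stored pose
    return int(np.argmin(dists))                # best match wins

# In the real system the query descriptor would come from the depth camera
# and the trained network; here it is random.
query = rng.random(32).astype(np.float32)
print("best-matching pose index:", nearest_pose(query, pose_db))
```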


Tasting Victory: Why Gamers Are Hacking Taste and Smell
Technology Review (06/28/16) Christina Couch

Developers and researchers are seeking to bring more senses, including taste and smell, safely and practically into games in order to create a more immersive player experience. One example is Planet Licker, which incorporates differently colored and flavored ice pops into a game controller so players command the movements of an onscreen character with their tongue. Other innovations in a similar vein include the National University of Singapore's "electronic lollipop," which transmits signals to the user's tongue to evoke various taste sensations. Carnegie Mellon University professor Heather Kelley envisions such technologies being applied beyond games to health and rehabilitation. "If we only think of 'games' as software you play at home on a screen, you ignore a huge swath of the creative and commercial opportunity here," she says. For example, researchers at Sweden's Malmö University are using Nosewire, a scent-based game, to see whether cognitive degeneration can be slowed by stimulating a person's olfactory senses and the memories associated with certain smells. Kelley says advances such as Planet Licker are positive, but it will be some time before designers determine how to use taste and smell to advance game narratives as well as visuals and sounds.


Head-Up Displays, Haptics Top List of Future Interfaces
Design News (06/23/16) Charles Murray

Design engineers should ready themselves for a time when multiple sense-based user interfaces will be incorporated into consumer electronic products, according to a new Lux Research study. "There's not going to be just one way that a device communicates with you and there's not going to be one way that you communicate with your device," predicts Lux Research analyst Jon Melnick. The study predicts touch and speech interfaces will be particularly desired, as well as gesture control and haptics. Given consumers increasingly want to use all five senses when they communicate with their devices, haptic technology could experience the most growth in the consumer space over the next several decades, according to Melnick. He also expects a surge in the development of head-up displays in the automotive industry, fueled by the need to address driver distraction. Meanwhile, smart glasses could find wide use on factory floors in the next few years, thanks to their ability to enable workers to perform tasks hands-free. "In the consumer space, there was this weird creepy thing, where you might be using smart glasses to videotape someone," Melnick notes. "So it fell off the public's radar screen, but it's been doing quite well in manufacturing."


Next-Gen Light Bulb Could Turn Any Surface Into a Computer Screen
TechRepublic (06/20/16) Teena Maddox

Carnegie Mellon University professor Chris Harrison has developed a light bulb that emits information, turning any physical surface into an app-enabled desktop computer. Harrison says the bulb would be usable in any fixture to expand the immediate area into a computer. "It's a projector and a computer," he says. "It's not unlike a smartphone, but we've taken out the screen. We don't even need a battery." Harrison's concept is to screw the "info bulb" into a convenient light socket, where photons can strike a target surface. "We can project interactive applications onto your tabletop and it can read the information on it, and find relevant keywords," he says. "It can do a Google search, it can tag it. Even a written note, it can superimpose on the note to check your spelling, check your math, and augment the world." Harrison says the ubiquity of light bulbs in residences and offices benefits the idea of using bulbs as extensions of the computer, and it enables users to share their computer with a group without detracting from the social experience. "You can share your laptop screen and people can sit around and interact simultaneously," he says.


Computer Sketches Set to Make Online Shopping Much Easier
Queen Mary, University of London (06/24/16) Neha Okhandiar

Researchers at Queen Mary University of London (QMUL) say they have developed software that recognizes sketches and could help consumers shop more efficiently. They say the proliferation of touchscreens enables more users to sketch accurate depictions of objects, a search technique that could be more effective than text-based or photo searches. "What's great about our system is that the user doesn't have to be an artist for the sketch to be accurate, yet is able to retrieve images in a more precise manner than text," says QMUL researcher Yi-Zhe Song. The program, called fine-grained sketch-based image retrieval (SBIR), overcomes the problems of using text to describe visual objects, especially precise details, and of using photos, which can make a search far too narrow. "For the first time users are presented with a system that can perform fine-grained retrieval of images--this leap forward unlocks the commercial adaptation of image search for companies," says QMUL's Timothy Hospedales. SBIR is designed to mimic the human brain's processing through arrays of simulated neurons. The system was trained to match sketches to photos based on about 30,000 sketch-photo comparisons, learning how to interpret the details of photos and how people try to depict them in hand drawings.
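Retrieval systems of this kind typically embed the sketch and every catalog photo into a shared vector space and rank the photos by similarity. A schematic sketch follows, in which random vectors stand in for the embeddings the trained networks would produce (the actual SBIR architecture is not detailed in this summary):

```python
import numpy as np

def cosine_rank(sketch_vec: np.ndarray, photo_vecs: np.ndarray, top_k: int = 5):
    """Rank catalog photos by cosine similarity to the sketch embedding."""
    s = sketch_vec / np.linalg.norm(sketch_vec)
    p = photo_vecs / np.linalg.norm(photo_vecs, axis=1, keepdims=True)
    sims = p @ s                      # cosine similarity to every photo
    return np.argsort(-sims)[:top_k]  # indices of the closest matches

# Stand-ins for network outputs: 30,000 photo embeddings, one sketch embedding.
rng = np.random.default_rng(1)
photo_embeddings = rng.random((30_000, 64))
sketch_embedding = rng.random(64)
print("top matches:", cosine_rank(sketch_embedding, photo_embeddings))
```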


Can Google Glass Help Autistic Children Read Faces?
Associated Press (06/23/16) Terence Chea

Stanford University's "autism glass" is facial-recognition software that runs on the Google Glass headset, recording and analyzing faces in real time and alerting the wearer to the emotions they are expressing. The device is being tested with about 100 autistic children to determine whether it can improve their ability to read facial expressions. "The autism glass program is meant to teach children with autism how to understand what a face is telling them. And we believe that when that happens they will become more socially engaged," says Stanford professor Dennis Wall. When the autism glass' camera detects an emotion such as happiness or sadness, the word "happy" or "sad"--or a corresponding emoji--flashes on the glass display. Wall says if the experiment yields positive outcomes, the autism glass technology could become commercially available within a few years. The potential of such technologies excites autism advocates. "Glass and wearable technology are the future," says Autism Speaks' Robert Ring. "They're going to play a pivotal role in how we understand, manage, and diagnose disorders like autism." Stanford student Catalin Voss and researcher Nick Haber developed the technology to track faces and detect emotions in a wide range of people and settings. "We had the idea of basically creating a behavioral aide that would recognize the expressions and faces for you and then give you social cues according to those," Voss says.
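The feedback loop described, a detected emotion triggering a word or emoji on the display, reduces to a small mapping from classifier output to an on-screen cue. A toy sketch, with a hypothetical detect_emotion stub standing in for the real recognizer:

```python
# Toy sketch of the described feedback loop: classifier label -> displayed cue.
EMOJI = {"happy": "\U0001F600", "sad": "\U0001F622"}

def cue_for(emotion: str) -> str:
    """Map a detected emotion to the word and emoji flashed on the display."""
    return f"{emotion} {EMOJI.get(emotion, '')}".strip()

def detect_emotion(frame) -> str:
    """Hypothetical stub; the real system classifies faces in camera frames."""
    return "happy"

print(cue_for(detect_emotion(frame=None)))  # -> "happy" plus its emoji
```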


Need Hair? Press 'Print'
MIT News (06/16/16) Jennifer Chu

Researchers in the Massachusetts Institute of Technology (MIT) Media Lab have developed a technique for rapidly and efficiently modeling and three-dimensionally (3D) printing thousands of hair-like structures, skipping the step of using computer-aided design software. With the "Cillia" software platform, users can define the hairs' angle, thickness, density, and height in minutes by changing pixel configurations in the cone representing a single hair. The researchers have designed and printed arrays of hair-like structures with a resolution of 50 microns. They also have produced hair arrays of varying textures on flat and curved surfaces with a conventional 3D printer. The researchers say the goal of the experiments is to explore how 3D-printed hair could perform tasks such as sensing, adhesion, and actuation. Examples of their work include a printed furry rabbit figure with light-emitting diodes that illuminate when someone strokes the toy, and an actuating mechanism whose hairs move and sort coins in response to vibrations. MIT graduate student Jifei Ou says the research was inspired by hair-like structures in nature. "We're just trying to think how can we fully utilize the potential of 3D printing, and create new functional materials whose properties are easily tunable and controllable," Ou says. The researchers presented their work at the ACM CHI 2016 conference in San Jose, CA.
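The pixel-based workflow described can be pictured as reading one hair's geometry out of each pixel of a user-edited bitmap. The following is a speculative sketch with invented parameter ranges (Cillia's actual encoding is not specified in this summary):

```python
import numpy as np

# Stand-in for a user-painted grayscale bitmap, one pixel per hair.
rng = np.random.default_rng(2)
bitmap = rng.integers(0, 256, size=(100, 100))

def hair_params(pixel: int) -> dict:
    """Invented mapping from a 0-255 pixel value to one hair's geometry."""
    return {
        "height_mm": 0.5 + 4.5 * pixel / 255,  # brighter pixel -> taller hair
        "thickness_um": 50,                    # matches the 50-micron resolution
        "angle_deg": 90,                       # straight up by default
    }

hairs = [hair_params(int(p)) for p in bitmap.ravel()]
print(len(hairs), "hairs; first:", hairs[0])
```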


The Advent of Virtual Humans
CNET (06/29/16) Jon Skillings

Advances in artificial intelligence (AI) are bringing the concept of virtual humans that develop a rapport with real people closer to reality. One iteration of this technology is the "socially aware robot assistant" (Sara) developed at Carnegie Mellon University. Sara engages with humans via a conversational interface while appearing as a life-size animated face and torso on screen. "In general, AI is moving into more artificial social intelligence," says Jonathan Gratch at the University of Southern California's (USC) Institute for Creative Technologies. He describes social intelligence as the ability "to understand people, how they think, how to communicate with them, what their emotional state is." More advanced and autonomous systems capable of self-directed action could read humans' emotions from their gaze or other body language and respond to their needs accordingly. An example is USC's SimSensei, a program that records, measures, and analyzes a person's behavior and gets to know them better through dialogue. Critical to the success of virtual humans is AI's ability to continuously learn from its interactions, from both the right and the wrong actions it takes. "In that sense, the challenges are just as much social as they are technical," says Microsoft Research's Peter Lee.


New App Is Mapping the Accessible City
Next City (06/20/16) Josh Cohen

Digitally mapping cities to help mobility-challenged people is the goal of AccessMap Seattle, a website/mobile accessibility app currently undergoing beta testing. AccessMap grew out of a hackathon held in March 2015 challenging developers to produce data-driven transportation software. The app displays color-coded street steepness, curb ramp locations, elevators, bus stops, and construction sites. Beta testers are using a version that features end-to-end route finding, with an option for users to report problem areas such as broken sidewalks or missing curb ramps. A key challenge for AccessMap is data accuracy, given deficits in the Seattle Department of Transportation's information on sidewalks. "It's really hard for the city to collect data on temporary problems such as construction blocking the sidewalk more than it's supposed to or bad pavement quality," notes AccessMap co-developer Nick Bolten. "We want to be able to crowdsource that information, similar to how Waze does it." Alliance of People with disAbilities board president Steve Lewis says AccessMap could be very useful for navigation, and it could help highlight the everyday obstacles mobility-challenged people encounter. The developers next plan to extend AccessMap to Denver and Savannah, GA, and eventually want it to become a nationwide resource.
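Accessible route finding of this kind is essentially a shortest-path search in which sidewalk segments that are too steep or blocked are penalized or excluded. Below is a minimal sketch over a made-up sidewalk graph (AccessMap's actual data model and routing engine are assumptions here):

```python
import heapq

# Made-up sidewalk graph: node -> [(neighbor, length_m, grade)], grade = rise/run.
GRAPH = {
    "A": [("B", 100, 0.02), ("C", 80, 0.09)],
    "B": [("D", 120, 0.01)],
    "C": [("D", 60, 0.10)],
    "D": [],
}

def accessible_route(start, goal, max_grade=0.083):
    """Dijkstra over sidewalk segments, skipping any steeper than max_grade
    (0.083 ~ the 1:12 ADA ramp slope, used here as an illustrative default)."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, length, grade in GRAPH.get(node, []):
            if grade <= max_grade:  # drop segments too steep for this user
                heapq.heappush(pq, (cost + length, nbr, path + [nbr]))
    return None  # no accessible route found

print(accessible_route("A", "D"))  # -> (220.0, ['A', 'B', 'D'])
```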


How Do You Teach Human Interaction to a Robot? Lots of TV
Associated Press (06/23/16) William J. Kole

Researchers at the Massachusetts Institute of Technology (MIT) believe next-generation artificial intelligence could be advanced by a breakthrough system that learns to predict human behavior by binge-watching YouTube videos and TV shows. "It could help a robot move more fluidly through your living space," says lead MIT researcher Carl Vondrick. "The robot won't want to start pouring milk if it thinks you're about to pull the glass away." The research is the result of a two-year effort at MIT's Computer Science and Artificial Intelligence Laboratory to develop an algorithm capable of mimicking human intuition in predicting what will happen next after two people meet. The researchers showed the machine-learning system 600 hours of random YouTube videos of humans greeting one another. The videos were downloaded and rendered into visual representations the algorithm could read and mine for complex patterns, and the computer was then shown clips from TV sitcoms it had never previously viewed. The system predicted human interactions between sitcom characters with 43-percent accuracy, and that precision will likely improve with more binge-watching, according to the researchers. "To have robots interact with humans seamlessly, the robot should be able to reason about the immediate future of our actions," says former MIT researcher Hamed Pirsiavash.
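The prediction step described, mapping a learned representation of the moments before two people meet to the likeliest next action, can be sketched as a simple linear classifier over feature vectors. Random features and weights stand in for the trained deep network below, and the candidate action list is an assumption for illustration:

```python
import numpy as np

ACTIONS = ["hug", "handshake", "high-five", "kiss"]  # assumed candidate greetings

def predict_action(features: np.ndarray, weights: np.ndarray) -> str:
    """Score each candidate action and return the likeliest one."""
    scores = weights @ features  # one score per action
    return ACTIONS[int(np.argmax(scores))]

# Stand-ins: real features would come from video frames, and the weights
# from training on the 600 hours of footage described above.
rng = np.random.default_rng(3)
features = rng.random(128)
weights = rng.random((len(ACTIONS), 128))
print("predicted next interaction:", predict_action(features, weights))
```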


Large Screen vs. Compact Device: Bendable Phones Could Be the Answer
San Francisco Chronicle (06/20/16) Wendy Lee

Foldable or bendable phones and tablets could become part of the next wave of electronics once such products' scientific challenges are overcome. Lenovo says it is tackling the challenge with two concepts, one being an Android smartphone called the CPlus, whose more than 20 bend points enable it to be folded into a bracelet. The second concept, the Folio, is a slightly larger Android phone that unfolds into a 7.8-inch tablet. Lenovo courted consumer feedback during the bendable phone project's development phase, noting people desired a sufficiently large display with a pocket-sized form factor. Among the technical details that need to be ironed out before such products are commercially ready is engineering them so they do not break as a user checks the phone and repeatedly folds it into a bracelet. Analysts also cite the difficulty of finding flexible processors for the phone's motherboard. IDC's Ramon Llamas notes the products will be much more attractive to consumers with the delivery of compelling applications and appealing features. However, Queen's University School of Computing professor Roel Vertegaal thinks there already is demand for flexible devices. He says the next step likely will be a smartwatch that can make calls without the need for a smartphone, and suggests people eventually may choose the type of smartphone they use depending on their activities.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.



