Welcome to the September 2013 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Chatterbot: A Computer That Teaches People to Be Social
The New Yorker (08/29/13) Betsy Morais

Massachusetts Institute of Technology (MIT) scientists have created My Automated Conversation coacH (MACH), a program that helps users improve their social skills. MACH uses a virtual coach to simulate conversation while scanning the user's facial expressions, listening to patterns of speech, and interpreting behavioral cues. The program tracks measures such as speech disfluency and smile intensity, offering feedback and playing back a video of the conversation accompanied by charts detailing intonation, head movements, and other gestures. The virtual coach's animation uses arm and posture movements, facial expressions, gaze behavior, and lip synchronization to achieve a lifelike effect. The concept for MACH originated at an Asperger’s Association of New England workshop, at which MIT's M. Ehsan Hoque, who led the MACH team, and colleagues were asked about potential technological assistance with social difficulties. Specifically, attendees were interested in a tool that would enable them to privately practice interaction skills, without the pressure of a real-world social situation. Hoque's team tested the system in mock job interviews with 90 MIT students. "Everybody hated watching their videos," he notes. However, Hoque says an objective, computerized interviewer is useful and less stress-inducing than an actual person. Although MACH's primary goal is to help people with autism, the program also could have applications in other areas, such as public speaking or dating.
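One of the measures described above, speech disfluency, can be approximated very simply as the rate of filler words in a transcript. The sketch below is an illustrative toy, not MACH's actual algorithm; the filler-word list and function name are assumptions.

```python
# Toy approximation of one metric MACH tracks: speech disfluency,
# estimated here as the fraction of spoken words that are fillers.
# The filler set is an illustrative assumption, not MACH's real list.

FILLERS = {"um", "uh", "er", "like"}

def disfluency_rate(transcript):
    """Return the share of words in the transcript that are fillers."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    fillers = sum(1 for w in words if w in FILLERS)
    return fillers / len(words)

rate = disfluency_rate("Um I think uh this is like fine")
print(rate)
```

A real system would work from a speech recognizer's output and combine this with prosodic features such as intonation and smile intensity, but the feedback idea is the same: turn raw behavior into a number the user can watch improve.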


This Augmented-Reality Sandbox Turns Dirt Into a UI
Wired News (08/30/13) Kyle Vanhemert

Researchers at the University of California, Davis have developed an augmented-reality sandbox in which users shape terrain in sand that is then transformed into a digital topographic map. A Kinect camera captures physical activity in the display, while a projector superimposes a map over the sandbox, updating contour lines and elevation colors in real time. In addition, the projector provides a virtual rainstorm that shows runoff and watershed on the landscape. "There's just no better way to teach how topographic contour lines work, or how water flows over a landscape, than building whatever terrain you can imagine, and then seeing the contours and the water react in real time to any changes you make," says Oliver Kreylos, one of the project's lead researchers. Augmented sandboxes have been installed at ECHO Lake Aquarium and Science Center in Vermont and the Tahoe Environmental Research Center at Davis, and another is being installed this month at the Lawrence Hall of Science in Berkeley. The Kinect camera can pick up any object in the sandbox, and Kreylos has experimented with physical additions such as blocks used as levees. Projection-mapped tables could be modified to simulate animal ecosystems, demonstrate the spread of disease, or teach architecture or urban planning by visualizing concepts such as structural integrity or traffic flow.
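The contour-line step of the pipeline above can be sketched in a few lines: the depth camera yields a height map, and contour cells are those where an elevation-band boundary passes between a cell and its right or down neighbor. The grid values, band size, and function names below are illustrative assumptions, not the project's code.

```python
# Toy sketch of the sandbox pipeline: quantize a height map into
# elevation bands, then mark cells where the band changes between
# horizontal or vertical neighbors -- those cells lie on a contour.

def elevation_band(height, band_size):
    """Map a height value to a discrete elevation band."""
    return int(height // band_size)

def contour_cells(height_map, band_size=10):
    """Return the set of (row, col) cells lying on a contour line."""
    rows, cols = len(height_map), len(height_map[0])
    cells = set()
    for r in range(rows):
        for c in range(cols):
            band = elevation_band(height_map[r][c], band_size)
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbors
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    if elevation_band(height_map[nr][nc], band_size) != band:
                        cells.add((r, c))
                        cells.add((nr, nc))
    return cells

heights = [
    [2, 4, 6, 8],
    [4, 8, 12, 16],
    [6, 12, 18, 24],
]
print(sorted(contour_cells(heights)))
```

In the installed exhibits this computation runs over the Kinect's live depth frames, and the projector colors each band and draws the contour cells back onto the sand in real time.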


Researcher Controls Colleague's Motions in 1st Human Brain-to-Brain Interface
UW News (WA) (08/27/13) Doree Armstrong; Michelle Ma

University of Washington (UW) researchers have demonstrated what they believe is the first noninvasive instance of human-to-human brain interfacing, in which one researcher transmitted brain signals over the Internet to control another researcher's hand motions. UW professor Rajesh Rao sent a brain signal to UW professor Andrea Stocco on the other side of the campus, which caused Stocco’s finger to move involuntarily. "The Internet was a way to connect computers, and now it can be a way to connect brains," Stocco says. "We want to take the knowledge of a brain and transmit it directly from brain to brain." During the demonstration, Rao remained in his lab wearing a cap with electrodes connected to an electroencephalography machine to read his brain's electrical activity. Stocco sat in his lab across campus with a transcranial magnetic stimulation coil placed over the left motor cortex of his brain, which controls hand movement. Rao played a video game using only his mind, and imagined moving his right hand to hit the fire button when he was supposed to fire a cannon at a target. At nearly the same moment, Stocco, who was not looking at a computer screen, involuntarily moved his right index finger as if firing the cannon. "This was basically a one-way flow of information from my brain to his," Rao says. "The next step is having a more equitable two-way conversation directly between the two brains."
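The one-way flow Rao describes can be summarized as a simple trigger pipeline: a detector on the sender's side emits a trigger when EEG activity crosses a threshold, and the receiver's side fires the stimulator. The threshold, band-power values, and function names below are assumptions for illustration only.

```python
# Minimal sketch of the one-way brain-to-brain pipeline: motor-imagery
# detection on the sender's EEG stream triggers a stimulator pulse on
# the receiver's side. All numbers and names here are illustrative.

def detect_imagined_movement(band_power, threshold=0.7):
    """Sender side: decide whether this EEG sample shows motor imagery."""
    return band_power > threshold

def receive_trigger(fire_stimulator):
    """Receiver side: on a trigger, pulse the stimulator (stub callback)."""
    return fire_stimulator()

eeg_samples = [0.2, 0.3, 0.9, 0.4]  # band-power estimates over time
pulses = []
for power in eeg_samples:
    if detect_imagined_movement(power):
        receive_trigger(lambda: pulses.append("pulse"))
print(pulses)
```

In the actual demonstration the trigger traveled over the Internet between the two labs, and the "stimulator" was a transcranial magnetic stimulation coil positioned over the receiver's motor cortex.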


Intel Readying Computers to Read Our Faces--and Hands
Times of Israel (08/28/13) David Shamah

Intel hopes to transform human-computer interaction with its Perceptual Computing (Per-C) project by enabling users to communicate with computers through gestures, voice, and eye tracking. Per-C aims to create computers and devices "that are natural, intuitive, and immersive," says Intel's Mooly Eden. Gestures, voice, and facial expressions are intuitive, says Eden, adding that Intel intends to integrate those capabilities into computing. Intel Israel recently held its first Hackathon, at which more than a dozen programmers offered a glimpse into Per-C's future. Within a 48-hour timeframe, teams created a Per-C application using Intel's Per-C toolkit, 3D cameras, and microphones for voice commands. "People may not realize how close Intel is to this revolution, but some of the applications that were on display at the Hackathon are good examples of what we will soon see as everyday uses for this technology," says an Intel spokesperson. One team created a dating game that uses the camera to gauge a person's feelings, based on measures such as smiling, interpersonal distance, and nodding. Another app enables people to log into their devices using gesture combinations as a password. The winning app, Hand Your Music, uses hand movements to create a sound and light show, raising music volume and intensity based on gesture intensity, direction, and type.


Our Friends Electric
The Economist (09/07/13)

Technologists are developing collaborative robots that are better able to interact with people and be more useful in work, school, and home settings. In work environments, robots need to interact with human workers to improve productivity, combining human dexterity, flexibility, and problem-solving with the strength, endurance, and precision of robots. Currently, industrial robots typically operate in a space separate from workers as a safety precaution, which restricts the tasks robots can perform. For example, final assembly in car factories is mostly performed by human workers at a high cost. In an effort to integrate robot and human workers, BMW in December introduced a slow-moving collaborative robot at its Spartanburg, S.C., factory to help human workers insulate and water-seal vehicle doors. The robot spreads and glues material while a human worker holds it down. Without robot assistance, the factory has to shift workers off this task after only an hour or two to prevent elbow strain. BMW plans to introduce more robots to its factories, with a large rollout of the technology slated for 2014 in Germany. As factories assign more tasks to robots, the machines will need improved skills, such as handing over objects smoothly to human workers. Experts say the new collaborative robots will move more slowly, lift less, and be less precise, which will enable them to work more effectively with humans and bring robots into the mainstream.


Haptography: A Digital Future With Feeling
Humans Invent (08/20/13) Leo Kent

University of Pennsylvania professor Katherine Kuchenbecker is experimenting with haptic photography, or haptography, that records the sensation of touch. Touch includes not only tactility, but also kinaesthesia, which is "the whole pose of your body and how hard you are having to work your muscles to hold your body in position or push on an object," Kuchenbecker says. She created a haptic tool resembling a pen with sensors that gather data about movement when moved over an object or material. The strength with which the user pushes the tool is measured with a force sensor, while a motion-tracking sensor tracks the device's precise path and a vibration sensor and accelerometer detect the tool shaking back and forth. The data is recorded and programmed into a tablet computer, so that a user later moving a stylus over the screen will feel vibrations similar to those felt when moving the tool across the surface of the real-life object. Haptography could have a wide range of applications, including in online shopping to enable users to feel fabrics, for example. The tool could also enable dental students to learn by "feeling" teeth in various conditions. However, Kuchenbecker believes haptics will first gain widespread use in medicine.
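The record-and-replay idea at the heart of haptography can be sketched as follows: store the vibration (accelerometer) samples captured while dragging the tool across a surface, then replay them scaled by how hard the user is pressing the stylus. The sample data, scaling rule, and names are illustrative assumptions, not Kuchenbecker's actual model.

```python
# Hedged sketch of haptography's capture-and-playback loop: vibration
# samples recorded from the tool's accelerometer are replayed on the
# tablet, scaled by the user's stylus force relative to the force
# applied during recording. All values here are made up for illustration.

RECORDED_VIBRATION = [0.0, 0.8, -0.6, 0.4, -0.3, 0.1]  # captured samples
RECORDED_FORCE = 1.0  # force applied while capturing, in arbitrary units

def playback(samples, press_force, recorded_force=RECORDED_FORCE):
    """Scale recorded vibration samples by the ratio of the user's
    current stylus force to the force used during recording."""
    scale = press_force / recorded_force
    return [s * scale for s in samples]

light_touch = playback(RECORDED_VIBRATION, press_force=0.5)
firm_touch = playback(RECORDED_VIBRATION, press_force=2.0)
print(light_touch, firm_touch)
```

A pressing-harder user thus feels proportionally stronger vibrations, which is what makes the replayed texture track the user's own motion rather than feeling like a canned recording.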


Apple Envisions Way to Control 3D Objects Using 3D Gestures
CNet (08/20/13) Lance Whitney

Apple has filed a patent called "Working with 3D objects" that describes how to manipulate touch-screen objects in three dimensions through gestures. Apple's technology could enable users to move objects on a phone or tablet by moving their fingers above the surface of the device. For example, a user could touch an object on the screen and then use gestures, such as pinching, to control it in two dimensions. By touching the object in three places and moving their fingers off the screen, users could make an object three-dimensional. To work with the object in 3D, a user could then move their fingers above the surface of the device. Apple believes its gesture-based technology could have significant impacts on CAD programs that require users to design and adjust 3D objects. The 3D display would provide images with different polarizations for the left and right eyes, requiring users to wear polarized 3D glasses.


Exploring Google Glass Through Eyes of Early Users
Associated Press (08/27/13) Michael Liedtke

Google Glass has made its way to roughly 10,000 people who are testing an early version of the device, most of whom were winners of Google's "If I had Glass" contest. Glass' hands-free camera, which uses voice commands, is the most popular feature among beta-testers. In addition, early testers enjoyed being able to connect to the Internet by tapping on the right frame of Glass and swiping along the same side to scroll through a menu. The menu enables users, for example, to get directions or search for information on Google's search engine, which is displayed on a thumbnail-sized transparent screen above the user's right eye. Glass' short battery life, particularly when a lot of video is being taken, was one of the largest flaws cited by testers. Glass purportedly lasts for an entire day on a single battery charge for the typical user, but one user said she sometimes ran out of power after 90 minutes when recording a lot of video. In addition, testers said the Glass speaker is not loud enough, especially on the street or in other loud environments. Some analysts question whether Glass will appeal to mainstream consumers, with some saying smart watches might prove more popular. Nevertheless, many beta-testers were enthusiastic about Glass. "This is like having the Internet in your eye socket," said one user. "But it's less intrusive than I thought it would be."


Tomorrow's Cities: Just How Smart Is Songdo?
BBC News (09/01/13) Lucy Williamson

Songdo, South Korea, is designed around smart technologies, and many are looking to the new city as a prototype of smart cities of the future. Songdo offers futuristic hardware such as sensors to monitor temperature, energy use, and traffic flow, which can alert citizens when their bus is due or inform local authorities of problems. In addition, the city has charging stations for electric cars and a water-recycling system that prevents clean drinking water from being used to flush office toilets. Household waste is sucked directly from individual kitchens through underground tunnels to waste-processing centers that sort and treat the refuse to make it more environmentally sound. Some of this waste will eventually be used to create renewable energy, but the system is not yet fully operational. Developed by U.S. developer Gale International, Songdo currently is less than half full and less than 20 percent of the commercial office space is occupied. Although families and young residents are moving to Songdo, businesses have thus far been slower to make the move. "You're trying to create a diversity and a vitality that organic development creates, in and of itself, so it's a challenge to try and replicate that in a masterplan setting," says Gale International's Jonathan Thorpe. "At the same time, with a masterplan you have the ability to size the infrastructure to make sure the city works--now and in 50 years' time."


Assistive Technology for the Blind Goes Mobile and User-Friendly
Government Technology (09/04/13) Sarah Rich

Smartphones and other mobile devices are bringing assistive tools for the visually impaired into the mainstream, making the technology less expensive and more accessible. The Idaho Commission for the Blind and Visually Impaired uses assistive technology to help prepare the blind to return to the workforce, re-enter school, and learn other vocational skills. Laine Amoureux, an assistive technologist at the commission, uses a smartphone and a mobile Braille display device to perform work and to communicate with blind people. Mobile devices that understand Braille are critical to Amoureux's work outside the commission's main Boise office. The latest smartphones have built-in text-to-speech capabilities, and mobile Braille displays such as BrailleNote can serve as a personal digital assistant for Braille users. BrailleNote devices have six specialized keys that correspond to the Braille system, and can store email as well as documents and dictate text back to the user. In addition, Braille display devices and smartphones can be linked to share information using Bluetooth. Former STAR Center assistive technology specialist Allison Shipp notes that the technology has come a long way since the mid-1990s, when the Windows-based JAWS screen-reading program was the most common tool. "The good thing now is that a lot of stuff is coming mainstream," Shipp says. "The accessibility is becoming more mainstream."


Interacting With Robots
ScienceAlert (Australia) (08/29/13)

People prefer to interact with robots that resemble humans, according to a recent University of Auckland study. A robot with a human-like 3D virtual face was favored by 60 percent of study participants, while only 30 percent preferred a robot with no face and 10 percent preferred a model with silver-colored, simplified human features. "It's important for robot designers to know how to make robots that interact effectively with humans, so that people feel comfortable interacting with the robots," says study leader Elizabeth Broadbent. "One key dimension is robot appearance and how humanlike the robot should be." The study tested the impact of the robot's appearance on perceptions of its personality and mind, including the level of uneasiness that people felt when interacting with the robot. Participants' perception of the robot as unsociable and unfriendly correlated to the level of uneasiness they felt with interactions. Socially-assistive robots are being created to fill healthcare roles, helping elderly people and assisting at medical centers. "In the future this research will help us design robots to help people continue living independently in their own homes when they need some help," says professor Bruce MacDonald.


Intel Bringing Vision, 3D to Laptop and Tablet Cameras
IDG News Service (08/26/13) Agam Shah

Intel is developing a depth-sensing camera that will "bridge the gap between the real and virtual world," says Intel director of perceptual products and solutions Anil Nanduri. The enhanced webcam will enable computers to better understand people, allow greater interactivity with 3D games, and transform webconferencing, Nanduri says. "You'll add the ability to sense your excitement, emotion--whether you are happy or smiling. The algorithms and technologies are there, but they are getting more refined, and as they get more robust, you'll see them," he says. With its ability to sense distance, size, depth, color, and the contours of structures, the camera could help shape emerging 3D printing technology by enabling users to obtain a depth-sensing picture of a model tailored to a specific, printable design. In addition, the camera's eye-tracking functionality could monitor a person's reading, for example to gauge whether a child is stuck on a word and to track the amount read. Intel says the technology will appear in external webcams in the next few quarters, and will be integrated into laptops and ultrabooks in the second half of 2014.


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.



