Welcome to the June 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week on Mondays, Wednesdays, and Fridays to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Learn a Language While You Text
MIT News (05/14/15) Adam Conner-Simons

Researchers led by Massachusetts Institute of Technology graduate student Carrie Cai at the Computer Science and Artificial Intelligence Laboratory (CSAIL) are exploring how the time people usually waste waiting for replies to text and instant messages can be put to more constructive use. The WaitChatter Google Chat extension the team has developed delivers foreign-language vocabulary quizzes directly to a person's chatbox any time the system detects that the person is waiting for an instant message. "This integrated approach, which we call 'wait-learning,' is far less likely to be perceived as time-consuming or intrusive compared to using a separate learning app," Cai reports. The WaitChatter system takes words from a built-in list and the user's ongoing chat conversations, and the vocabulary quizzes are scheduled dynamically so that words the user has difficulty with appear more often. WaitChatter users in a pilot study learned an average of about four words a day over a two-week period. Although tested in Spanish and French, WaitChatter can accommodate any alphabet-based language available on Google Translate. A paper on the WaitChatter project was presented at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2015) in Seoul, South Korea, in April.
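The dynamic scheduling described above can be sketched in a few lines. This is a hypothetical illustration in the spirit of WaitChatter, not its actual algorithm: all class and method names here are invented, and the weighting scheme (halve on success, double on failure) is an assumption.

```python
import random

class QuizScheduler:
    """Toy wait-learning scheduler: hard words surface more often."""

    def __init__(self, vocabulary):
        # Every word starts with the same selection weight.
        self.weights = {word: 1.0 for word in vocabulary}

    def next_word(self):
        # Sample a word with probability proportional to its weight.
        words = list(self.weights)
        return random.choices(words, weights=[self.weights[w] for w in words])[0]

    def record_result(self, word, correct):
        # Ease off words the user knows; boost words they miss.
        if correct:
            self.weights[word] = max(0.25, self.weights[word] * 0.5)
        else:
            self.weights[word] = min(8.0, self.weights[word] * 2.0)

scheduler = QuizScheduler(["gato", "perro", "casa", "libro"])
scheduler.record_result("gato", correct=False)  # missed: quizzed more often
scheduler.record_result("casa", correct=True)   # known: quizzed less often
```

The clamping bounds keep any single word from monopolizing the quiz stream or vanishing entirely, which matches the article's description of difficult words simply appearing "more often."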


Emoticons May Signal Better Customer Service ;)
Penn State News (05/21/15) Matt Swayne

Online customer service agents who use emoticons during business-related text chats tend to have more satisfied customers, according to a study led by Pennsylvania State University professor S. Shyam Sundar. This was reflected by the higher scores given to emoticon-using agents by study participants, who also rated such agents as more personal than those who used only a profile picture with their responses. "The fact that the emoticon came within the message and that this person is conveying some type of emotion to customers makes customers feel like the agent has an emotional presence," Sundar says. Sungkyunkwan University researcher and study co-author Eun Kyung Park also reports emoticons can effectively convey empathy to customers, which adds to their preference for agents who employ emoticons. Another factor that fed into higher customer ratings for agents was faster response times during chats, with Park noting "feelings of co-presence, constructed by the agent's promptness, might lead customers to be loyal to the company by creating a favorable service experience." Sundar observes emoticons helped customers feel emotionally connected to the agent, while fast conversations evoked feelings of close physical proximity. "What this shows is that if a conversation can't happen in the same place, at least it can happen at the same time, which leads to positive evaluations," he says.


Smart Cities Show Progress at Events
EE Times (05/15/15) Rick Merritt

About 40 smart city projects worldwide will be represented at a June event in Washington, D.C., for the purpose of sharing best technological practices in an attempt to rectify the fragmentation of such initiatives and hopefully create standards and frameworks, according to the U.S. National Institute of Standards and Technology's (NIST) Sokwoo Rhee. "In the last year or so the federal government has been paying more attention to smart cities because they realize by supporting these projects they can increase everyone's quality of life including those in rural areas," he notes. About 65 teams with members from more than 200 organizations working on smart city projects will be hosted at the event. Projects run the gamut from metropolitan hubs such as Chicago, New York City, and San Francisco to a gunshot-detection system used in some Idaho schools to a joint AT&T/IBM water management system expected to be used in three cities. Intel also will demonstrate a new air quality monitor, Rhee says. "One thing that's changing the game right now are air quality sensors that cost a few hundred or a thousand dollars and used to be $60,000 per station for professional quality," he points out. Following the event, NIST will concentrate on programs designed to foster collaborations between cities.


Controlling Swarms of Robots With a Finger
Georgia Tech News Center (05/12/15)

A fleet of robots that can be controlled with the swipe of a finger on a tablet has been created by Georgia Institute of Technology researchers. "It's not possible for a person to control a thousand or a million robots by individually programming each one where to go," says Georgia Tech professor Magnus Egerstedt. "Instead, the operator controls an area that needs to be explored. Then the robots work together to determine the best ways to accomplish the job." By tapping the tablet, the operator directs where a beam of light is projected on the floor, causing the swarm robots to roll toward the light; dragging the light across the floor causes the swarm to follow it, while placing two fingers in different locations divides the swarm into teams. The Georgia Tech robotic coverage algorithm gives the robots the flexibility to change their minds should conditions warrant it. Each robot in the swarm continuously measures how much light is in its local vicinity and communicates with its neighbors. "The robots are working together to make sure that each one has the same amount of light in its own area," Egerstedt notes. Among the applications he envisages for the system is search-and-rescue missions in catastrophe-ravaged locations.
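The light-balancing behavior Egerstedt describes is a form of coverage control. The following is a toy one-dimensional sketch, not the Georgia Tech team's actual algorithm: each robot repeatedly moves to the light-weighted center of the stretch of floor nearest to it (a Lloyd-style update), so the swarm concentrates where the projected light is brightest. The light field, positions, and step counts are all invented for illustration.

```python
def light_at(x, source=5.0):
    # Illustrative light field: brightest under the projected spot at x = 5.
    return 1.0 / (1.0 + (x - source) ** 2)

def lloyd_step(positions, lo=0.0, hi=10.0, samples=200):
    """One coverage-control update: move each robot to the light-weighted
    centroid of its region (bounded by midpoints to its neighbors)."""
    positions = sorted(positions)
    bounds = [lo] + [(a + b) / 2 for a, b in zip(positions, positions[1:])] + [hi]
    new = []
    for i in range(len(positions)):
        # Numerically integrate light over this robot's region.
        xs = [bounds[i] + (bounds[i + 1] - bounds[i]) * k / samples
              for k in range(samples + 1)]
        mass = sum(light_at(x) for x in xs)
        centroid = sum(x * light_at(x) for x in xs) / mass
        new.append(centroid)
    return new

positions = [0.5, 2.0, 8.0, 9.5]
for _ in range(50):
    positions = lloyd_step(positions)
# After repeated steps the robots cluster around the light source,
# each "owning" a region with a comparable share of the light.
```

Dragging the light (changing `source`) simply shifts the equilibrium, which is why the swarm appears to follow the operator's finger.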


Why Shaking Hands Matters (Even When It's Virtual)
University of Bath (05/11/15)

Shaking hands and similar social niceties could have a significant impact on future robotics, according to researchers at the University of Bath. They suggest virtual meetings through a humanoid robot will offer much more compelling two-way interaction, giving individuals a physical presence in a remote locale. Potential applications could include conducting business meetings and enabling people with severely limited mobility to engage with the world. The researchers staged mock negotiations between two participants, each randomly assigned to be either the buyer or seller in a fictitious real-estate scenario; by representing one negotiator using a humanoid robot, researchers produced a system that let individuals shake hands before negotiations, even though physically apart. The researchers used touch-sensitive sensors in the robot's hand to design a virtual handshake that transmitted a signal when the hand was grasped, making a controller in the negotiator's hand concurrently vibrate and creating a sense of connectedness between the two. Bath researcher Chris Bevan says the trial demonstrated the symbolic value of shaking hands in how people deem others as trustworthy and cooperative. "This effect holds true even when a person cannot see the face of their counterpart," he notes. The research was presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction in Portland, OR, in March.


Researchers Create New Form of Smart Device Communication
Dartmouth Now (05/18/15) John Cramer

Dartmouth College researchers led by professor Xia Zhou have created a type of real-time communication that enables screens and cameras to talk to each other. Zhou says the system uses off-the-shelf smart devices to support an unobtrusive, flexible, and lightweight communication channel between screens of electronic devices such as TVs, laptops, cameras, and smartphones. Called HiLight, the system will enable new context-aware applications for smart devices, such as smart eyeglasses communicating with screens to realize augmented reality or acquiring personalized information without affecting the content that users are currently viewing. The findings, which were presented at the ACM MobiSys'15 conference last week in Florence, Italy, may have implications for new security and graphics applications in particular. In the HiLight system, screens display content as they normally do, while devices can "communicate with one another without sacrificing their original functionality," says Zhou, co-director of the DartNets (Dartmouth Networking and Ubiquitous Systems) Lab. HiLight leverages the alpha channel, a well-known concept in computer graphics, to encode bits into the pixel translucency change. Zhou says HiLight overcomes the key bottleneck of existing designs by removing the need to directly modify pixel color values as well as decoupling communication and screen content image layers. "Our work advances the state-of-the-art by pushing screen-camera communication to the maximal flexibility," she says.
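The alpha-channel idea above can be illustrated with a greatly simplified sketch. This is not HiLight's actual protocol: the constants, function names, and one-bit-per-frame scheme below are assumptions chosen to show how data can ride on tiny translucency changes without altering the displayed content itself.

```python
BASE_ALPHA = 1.00   # fully opaque screen content
DELTA = 0.02        # a translucency dip small enough to be imperceptible

def encode(bits):
    # One frame per bit: dim the content slightly for a 1, leave it alone for a 0.
    return [BASE_ALPHA - DELTA if b else BASE_ALPHA for b in bits]

def perceived_brightness(pixel_value, alpha):
    # What the receiving camera measures: content modulated by translucency.
    return pixel_value * alpha

def decode(frames, pixel_value=200):
    bits = []
    for alpha in frames:
        seen = perceived_brightness(pixel_value, alpha)
        # A brightness dip below the halfway threshold means "1".
        bits.append(1 if seen < pixel_value * (BASE_ALPHA - DELTA / 2) else 0)
    return bits

message = [1, 0, 1, 1, 0]
recovered = decode(encode(message))  # camera side recovers the bit stream
```

Because the decoder only compares each frame against a reference brightness, the screen can keep showing arbitrary content, which is the decoupling of the communication and content layers that Zhou describes.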


Holographic Computing: Are You Seeing What Microsoft Sees?
Computerworld New Zealand (05/20/15) James Henderson

Microsoft's HoloLens augmented reality headset was showcased at the recent Microsoft Build 2015 event in San Francisco, where users tried out the device, consisting of a headband equipped with optics, audio, and an embedded computer to generate holograms. Ovum analyst Michael Azoff notes the HoloLens differs from virtual reality (VR) systems by having a see-through visor that integrates real space with holograms. "VR head gear also needs to be tethered to a computer, whereas the HoloLens is a self-contained computer with CPU and GPU embedded within the headband," he observes. "It also contains an additional chip, a new Microsoft invention, which is a holographic processing unit [HPU] designed to support holographic processing requirements." Azoff expects most initial HoloLens application development to emphasize the business and professional aspects of the device. "The user interacts with hologram applications by tapping the air [in the augmented reality or virtual space they are tapping holographic controls] and moving parts of the hologram," he points out. Azoff thinks the range of potential HoloLens applications should widen as the device engages with the Internet of Things, with the expectation that later HoloLens generations will expand the holographic viewing area.


Study: 44 Percent of Parents Struggle to Limit Cell Phone Use at Playgrounds
UW Today (05/18/15) Jennifer Langston

A University of Washington (UW) study presented at the ACM CHI 2015 conference in Seoul, South Korea, found 44 percent of polled parents felt they should limit their cell phone use while watching their children at playgrounds, but were wracked with guilt for not adhering to that practice. Moreover, adult caregivers absorbed in their phones paid less attention to children's requests than when they were conversing with friends or caring for other children. In addition, the study found boredom often overtook guilt or fear of being judged and was the single biggest driver prompting people to use cell phones. Respondents were less likely to agree that phone use makes it harder for children to capture their attention. The study also found adults often overestimated their responsiveness to children's requests while using their phones. "Phones do distract us and that's something to be aware of, but I think it's not nearly as bad as some people have made things out to be," says UW professor Julie Kientz. "Plenty of people are being really attentive parents and thinking deeply about these issues." The researchers note that because the largest group of caregivers had reservations about smartphone absorption while parenting, phone and app designers might consider incorporating functions to ease disengagement, such as a "parenting mode."


Autism Conference Gathers Computer Scientists, Clinicians
KSL.com (UT) (05/17/15) Allison Oligschlaeger

Nearly 2,000 medical professionals and computer scientists from more than 40 countries recently convened to highlight their work at the 11th Annual International Meeting for Autism Research in Salt Lake City, UT. "It's all about stimulating these dual communities to work together," notes conference committee member Matthew Goodwin. "A lot of computer scientists are approaching autism as a problem to be solved, but they really don't have the content domain expertise. We're hoping to expose both communities to each other to try to facilitate more collaborative work." Goodwin says computers have enabled the collection of quantitative data, making clinicians less reliant on clinical observations. He also observes digital technologies appeal strongly to autistic individuals. One example of the innovations on display at the meeting was a video game from La Jolla, CA-based West Health Institute, which uses Microsoft's Kinect technology to enhance players' social skills by generating an immersive virtual environment where players can practice basic social tasks. Meanwhile, University of Kentucky students are developing a Google Glass program to help autistic job-seekers struggling to interpret social and nonverbal signals in a minimally socially invasive fashion; the program employs vivid and colorful text to indicate appropriate speaking volume and a smiley face to encourage eye contact.


You Sound Sad, Human
Pacific Standard (05/15) Nathan Collins

Computers that monitor for the right signals can identify six basic emotions with 91-percent accuracy, according to a new study conducted by Northeastern University researchers Reza Asadi and Harriet Fell. The research involved examining three groups of features in human speech: mel-frequency cepstrum coefficients (MFCCs) that filter out the effects of the throat, tongue, and lips; Teager energy operators (TEOs) that capture the flow of air through the vocal tract; and acoustic landmarks, or transition spots in speech that are heard as the start of a word or the end of a sentence, for example. The researchers initially extracted MFCCs, TEOs, and landmarks from a series of short audio clips in which actors spoke in a variety of emotional states. They used a portion of that data to train a commercially available algorithm to distinguish between anger, fear, disgust, sadness, joy, and neutrality. The approach correctly identified the emotion in 91 percent of test clips, an improvement of 9 percent over earlier experiments. Acoustic landmarks were best at identifying sadness and joy, and TEOs were especially helpful for identifying anger and fear.
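The pipeline described above (extract features per clip, train a classifier, then label unseen clips) can be sketched as follows. The study used a commercially available algorithm; a simple nearest-centroid classifier stands in here, and the two-dimensional "feature vectors" are fabricated placeholders for real MFCC, TEO, and landmark features.

```python
def train(clips):
    """clips: list of (feature_vector, emotion_label) pairs.
    Returns the mean feature vector (centroid) per emotion."""
    sums, counts = {}, {}
    for features, label in clips:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    # Label a clip with the emotion whose centroid is nearest.
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Invented training data for illustration only.
training = [
    ([0.9, 0.1], "anger"), ([0.8, 0.2], "anger"),
    ([0.1, 0.9], "sadness"), ([0.2, 0.8], "sadness"),
]
centroids = train(training)
```

A held-out clip is then labeled by `classify(centroids, features)`; accuracy figures like the study's 91 percent come from scoring such predictions against the actors' intended emotions on a reserved test set.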


Could Your Next Running Partner Be a Drone?
Runner's World (05/18/15) Alison Wade

Researchers in Australia conducted a study in which runners ran accompanied by an aerial drone to see how such technology could enhance the running experience. Florian Mueller and Matthew Muirhead at RMIT University's Exertion Games Lab enlisted 13 recreational runners for the study, and the participants appreciated the quadcopter drone's ability to help them maintain a constant pace, although they desired more control over the route the drone took and the speed at which it traveled. "This companion was interacted with not just like we do with a machine, but also like a toy, animal, and even other human beings: people were saying the quadcopter appeared to have a 'character,'" notes Mueller. "This resulted in a very different experience compared to experiences with other interactive systems, like jogging apps on a mobile phone. They are more like training tools, whereas the quadcopter was treated like a companion." Mueller thinks the use of personal drones by runners could increase in the future, if the drones are well-designed. He and Muirhead also hope their work will inspire other designers, resulting in systems that help boost physical activity. The study was presented at the ACM CHI 2015 conference in Seoul, South Korea.


Transit Guide-Bots for Blind Passengers?
Route Fifty (05/17/15) Bill Lucia

Researchers at Carnegie Mellon University's Robotics Institute are striving to incorporate robots, smartphones, mobile applications, and crowdsourced data into a system to help sight-impaired people navigate transit stations and other unfamiliar urban environments. "Ideally you'd like these kinds of systems to help people learn a new route, get through an unfamiliar place that they may only go once in their life," says Carnegie Mellon professor Aaron Steinfeld. In one application, a trusted source could mark up a route through a transit station via a smartphone app, which would be shared not only with blind and visually impaired app users to help them navigate, but also with the guide robots, which could incorporate it into the guidance they offer. Steinfeld says the goal is to design a system of robots and smartphone technology that provides "deep local knowledge of what's going on in the station and in the surrounding areas around the building, so that you're getting appropriate information for the time and place." In addition to developing a guide-bot, Steinfeld's team is experimenting with an assistive humanoid robot that could function as a station agent for blind travelers. A U.S. National Science Foundation grant is underwriting the assistive robotics research.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.



