Welcome to the January 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). The service is a resource that helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI, and it is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


What the Future of Robots Could Look Like
CNN (12/27/14) Chris Atkeson

Carnegie Mellon University Human-Computer Interaction Institute professor Chris Atkeson envisions a future in which robots function as personal health care companions to the elderly and infirm. "Current technology already offers useful sensing, diagnosis, and cognitive assistance for adults, and we are close to making useful robot servants with traditional metal robotics that can help older adults and people with disabilities take more control over their lives," he notes. Atkeson cites a need for research to yield robot companions that can touch and physically engage with people in need of physical care, and he visualizes machines that reduce the chance of injuring patients by being very lightweight and likely inflatable. In addition, he says a care robot should be able to deliver monitoring and diagnostic aid, and while wearable technology is helping in this capacity, it will likely soon be supplemented by orally ingested or implantable devices. Atkeson also says users should be given ownership of the data caregiver robots collect to avoid privacy infringement, and he speculates that such technology could provide "new and easier opportunities for regular screening for diseases such as cancer and dementia, and drug efficacy, side-effects and interactions." Atkeson says the most formidable challenge is constructing a brain that can enable helpful human-robot interaction, and a possible basis for this brain could be question-answering agents such as Siri.


Soon Your Tech Will Talk to You Through Your Skin
Wired News (12/22/14) Clive Thompson

Haptic technologies are becoming increasingly popular and diverse, with examples including drivers' seats that vibrate in the direction of an impending collision, and an Apple smartwatch that delivers taps of varying intensity to the wearer's wrist to communicate messages. Safety expert Raymond Kiefer describes skin as an "underused channel" that can "cut through the visual and auditory clutter." Alerts may not be the most interesting application for haptics, which has the potential to enable an entirely new mode of communication. One challenge with haptics is avoiding overuse and overreliance, but experimental results indicate progress is being made in generating an easy-to-understand haptics vernacular. For example, Google designer Seungyon Claire Lee tested BuzzWear, a wristband that vibrated three small buzzers in 24 distinctive patterns; testing revealed subjects were able to differentiate among the patterns with 99-percent accuracy within 40 minutes of training. Another study by the University of British Columbia's Karon MacLean involved playing patterns onto people's fingertips using a smartphone game, which determined test subjects could recall them weeks later. "It was like learning new words, like learning verbal language," MacLean notes. Simple buzzer patterns are likely to cede to more complex and granular signals, and Lee envisages using inexpensive conductive threads that relay tiny electrical bursts to stitch hundreds or thousands of haptic pixels into apparel that could "draw" a picture onto the user's skin.
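
The article does not detail BuzzWear's encoding, but a back-of-the-envelope sketch shows why three buzzers go a long way: even a few on/off time slices yield hundreds of candidate patterns, from which a distinguishable subset such as Lee's 24 can be drawn. The frame count and timing below are illustrative assumptions, not the actual BuzzWear design.

```python
from itertools import product

# Hypothetical illustration: 3 buzzers driven over 3 short time slices.
NUM_BUZZERS = 3
FRAMES = 3

def all_patterns():
    """Enumerate every pattern: each frame activates a subset of buzzers."""
    frame_states = list(product([0, 1], repeat=NUM_BUZZERS))  # 8 states per frame
    return list(product(frame_states, repeat=FRAMES))         # 8^3 = 512 patterns

def play(pattern, frame_ms=150):
    """Print a pattern as a timeline (stand-in for driving the motors)."""
    for t, frame in enumerate(pattern):
        active = [i for i, on in enumerate(frame) if on]
        print(f"{t * frame_ms:4d} ms: buzzers {active or 'none'}")

patterns = all_patterns()
print(f"{len(patterns)} candidate patterns in this space")
play(patterns[42])
```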


Virtual Reality Comes to the Web--Maybe for Real This Time
Scientific American (12/29/14) Susan Kuchinskas

A new Web-delivered form of virtual reality (VR) that Google and Mozilla are developing is designed to let users experience three-dimensional digital environments via head-mounted displays linked to various browser-enabled devices. Google and Mozilla's forthcoming updates of their respective Chrome and Firefox browsers will support Web VR, while other indications of the technology's move toward the mainstream include Facebook's acquisition of VR headset manufacturer Oculus VR and the release of relatively inexpensive headsets from Samsung and Google. The success of Web VR will hinge mainly on content availability, and businesses have started accepting the notion of immersive advertising, largely thanks to the Facebook/Oculus merger. The latest wave of Web VR projects is app-based, in that users must find and individually download immersive apps to their computers or phones, and most are accessible from only one kind of headset. Headsets are likely to proliferate as prices decline, with Mozilla's Josh Carpenter noting "as we continue to see economies of scale, virtual reality gets smaller, faster, and cheaper." Web VR also cannot succeed without broad browser support and the ability to find compelling content, and Mozilla recently launched a VR website directory with the latter goal in mind. Meanwhile, Google's Brandon Jones is working to add Web VR interfaces to Chrome, collaborating with Mozilla to ensure the consistency of VR content in both browsers.


Speech Recognition Better Than a Human's Exists. You Just Can't Use It Yet
Bloomberg (12/23/14) Jack Clark

Google engineer Johan Schalkwyk says recent milestones in speech recognition and artificial intelligence will dramatically upgrade gadgets' ability to understand humans, and he predicts such context- and nuance-comprehending innovations will be released within the next two years. One possible advancement is a system that adds sophisticated voice commands to mobile apps, which might enable users to enter a store and ask their phone what aisle specific items are located in. "You're going to see speech-recognition systems that have human or better-than-human accuracy become commercialized," predicts Expect Labs founder Tim Tuttle. At the root of such innovations is an influential paper published several years ago by Google and University of Toronto researchers on using deep neural networks to model speech in computers, and a follow-up paper produced in collaboration with IBM and Microsoft researchers. Neural networks only recently became viable as a result of a massive acceleration in computer processing and the advent of new software strategies. Google's lab project is based on this work, and its transition from feed-forward neural networks to recurrent neural networks enables more information storage and longer and more complex sequence processing. The Google system employs context, physical location, and other knowledge about the speaker to make assumptions on where a conversation is going and what it means. The new network technology should be able to process larger amounts of data than before and answer more complex requests.
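
To illustrate the feed-forward-to-recurrent shift the summary mentions, here is a minimal vanilla-RNN sketch (not Google's system; the dimensions and random features are assumptions). Unlike a feed-forward network, which maps each audio frame to an output in isolation, the recurrent hidden state carries information across frames, which is what permits longer and more complex sequence processing.

```python
import numpy as np

rng = np.random.default_rng(0)
FRAME_DIM, HIDDEN_DIM = 40, 128          # e.g., 40 acoustic features per frame
W_xh = rng.normal(0, 0.1, (HIDDEN_DIM, FRAME_DIM))
W_hh = rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
b_h = np.zeros(HIDDEN_DIM)

def rnn_encode(frames):
    """Run a vanilla RNN over a sequence of acoustic frames."""
    h = np.zeros(HIDDEN_DIM)
    states = []
    for x in frames:
        # The hidden state mixes the new frame with everything heard so far.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

utterance = rng.normal(size=(100, FRAME_DIM))  # 100 frames of fake audio features
states = rnn_encode(utterance)
print(states.shape)  # (100, 128): one context-aware vector per frame
```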


Technology to Help People With Disabilities to Learn and Communicate
Polytechnic University of Catalonia (Spain) (12/19/14)

Polytechnic University of Catalonia (UPC) researchers have designed the Easy Communicator (ECO) application to help autistic children, teenagers with cerebral palsy, and senior citizens with cognitive difficulties communicate and learn more easily. "The development focused on two aspects: firstly, programming the computer application, and secondly, defining the communication elements that the application uses," says UPC professor Daniel Guasch. The app's communications components integrate photos, pictograms, video, text, and voice so messages are available in alternative and complementary formats. "To indicate the concept of a school, for example, you can use a customized combination of a photograph of the user's school, a generic pictogram, a video of the word school in sign language, the spoken word, or the text," Guasch notes. In addition, ECO enables the incorporation of external resources and resources found by the user. The app's interactivity with the user, designed as a game, differentiates it from similar apps. The researchers note the free open source platform also enables users to produce and share tailored content in a flexible manner.
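
The core idea of pairing one concept with many interchangeable representations can be sketched as a simple data structure. The names and fields below are our illustration, not ECO's actual schema, which is not described in the release.

```python
from dataclasses import dataclass

@dataclass
class CommunicationElement:
    """One concept, available in alternative and complementary formats."""
    concept: str
    photo: str | None = None      # e.g., a photo of the user's own school
    pictogram: str | None = None  # a generic symbol
    video: str | None = None      # e.g., the word in sign language
    text: str | None = None
    audio: str | None = None      # the recorded spoken word

    def available_formats(self):
        """List the formats in which this concept can be presented."""
        return [f for f in ("photo", "pictogram", "video", "text", "audio")
                if getattr(self, f) is not None]

school = CommunicationElement(
    concept="school",
    photo="media/my_school.jpg",
    pictogram="media/school_pictogram.png",
    video="media/school_sign_language.mp4",
    text="school",
    audio="media/school_spoken.mp3",
)
print(school.available_formats())
```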


Boston Researcher Cynthia Breazeal Is Ready to Bring Robots Into the Home. Are You?
Re/code (12/12/14) James Temple

The field of social robotics merges psychology, computer science, and engineering to develop robots capable of more natural interaction with humans, and Massachusetts Institute of Technology (MIT) roboticist Cynthia Breazeal's team at the MIT Media Lab's Personal Robots Group has a mission to bring such machines to the real world as personal assistants, teachers' aides, and caretakers. Social robotics is founded on human-computer interaction for delivering instructions and receiving information in the most effective manner, while also accounting for the nuances of human communication. Breazeal found that people learn and improve at higher rates when they receive signals indicative of emotional support, and that benefit can carry over to technology that mimics those behaviors. "If you design them according to these principles of how people interact...it turns out the human mind is so attuned to that, that it naturally responds and benefits from that sort of interaction," she says. Realizing social robots' potential requires tackling challenges associated with privacy, security, jobs, digital manipulation, and the appropriate boundaries between humans and robots. "I get very concerned about...the potential for deception, who's responsible if something goes wrong, and what kind of harms [robots] might cause intentionally or unintentionally," says Yale University scholar Wendell Wallach. Breazeal says social robots will give people more freedom to pursue creative and fulfilling lives "by enhancing and supporting people in what we care about."


This Brain Hat Helps the Paralyzed Make Music
CNN (12/23/14) Phoebe Parke

Plymouth University researcher and musician Eduardo Miranda is developing technology to help paralyzed people compose music, and his latest creation is the brain computer music interface (BCMI), which enables music composition via users' eye movements. The system links electrodes to the back of the user's head to monitor neural signals and determine gaze direction. The user can select specific music snippets represented by flashing onscreen icons by staring at them, and then a musician plays the score generated by those selections in real time. "You don't need musical talent to use the system, but the more you understand music the easier it is for you--if you understand, for example, that crotchets are quicker than semi-quavers, you have an idea of what you're selecting--but after 10 seconds you hear the snippet of music you chose being played by the string quartet, so you learn quickly," Miranda says. He currently is augmenting the BCMI so it is more resilient and user-friendly, but the system's high hardware cost (around $15,600) is one factor that limits its commercialization. "We are working on a project, which will allow the public to use the system in a 'brain booth' and download the music they create from the Internet afterward," Miranda notes.
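
The combination of flashing icons and electrodes over the visual cortex suggests a steady-state visual evoked potential (SSVEP) style of selection, though the article does not name the technique. Below is a minimal sketch under that assumption: each icon flickers at its own frequency, and staring at one makes that frequency dominate the recorded signal.

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz
ICON_FREQS = {"snippet_A": 8.0, "snippet_B": 10.0, "snippet_C": 12.0}

def detect_selection(eeg, fs=FS):
    """Pick the icon whose flicker frequency carries the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    def power_near(f, bw=0.5):
        band = (freqs > f - bw) & (freqs < f + bw)
        return spectrum[band].sum()
    return max(ICON_FREQS, key=lambda icon: power_near(ICON_FREQS[icon]))

# Simulate 10 seconds of noisy EEG while the user stares at the 10 Hz icon.
t = np.arange(10 * FS) / FS
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.8 * np.random.default_rng(1).normal(size=t.size)
print(detect_selection(eeg))  # -> "snippet_B"
```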


Study Evaluates Child Creativity With Technology
The Battalion (TX) (12/14/11) Amanda Talbot

Texas A&M University researchers are assessing how young children interact with iPad tablets via art "stampies" and how these tools affect their creativity. Stampies are the drawing tool the research subjects use on the tablets, with Texas A&M professor Jinsil Hwaryoung Seo noting they consist "of tangible objects made out of different materials--wood, felt, silicone, plastic--and an iPad drawing application." The children, ages 4-6, engage with the stampies for about 60 minutes in the study and employ the tool as they would a pencil. "We are going to study how children choose a different stampie based on artistic choices--color, brush, shape, etc.--and how the materiality of each stampie affects their digital art creation," Seo notes. She says the purpose of the study is to determine whether a physical object holds an intrinsic meaning for a child and whether this association can help the child better interact with a digital application. Seo says they have so far uncovered an association of wood with musical instruments and felt with clothing. "There is great potential to enhance the transparency of interfaces and to guide user actions by tapping into the power of materiality in systems for young adults," she says. Fellow researcher Janelle Arita says the data collected by the study will be offered to the human-computer interaction community, to "help to further knowledge on how to benefit tangible interaction design for children."


A Facebook Application Knows If You Are Having a Bad Day and Tells Your Teacher
Plataforma SINC (Spain) (12/18/14)

The SentBuk application developed by Autonomous University of Madrid (UAM) researchers can deduce Facebook users' emotional stress via message analysis. "The tool is based on two algorithms: the first calculates the emotional load of each message and classifies it as positive, negative, or neutral," says UAM's Alvaro Ortigosa. "The second deduces emotional state by comparing it with the emotional load of recent messages." Ortigosa notes the app employs a natural-language processing method to identify significant words with emotional load, as well as an automatic, machine-learning-based classification system. SentBuk's creators think the app could help online educators by feeding them similar information to that acquired by in-person teachers who study students' faces. "The information obtained via SentBuk, with the approval of the user, will be able to be used to avoid recommending especially complex pieces of work at times when it detects that the student is in a negative state of mind or one that is less positive than usual," Ortigosa says. He says SentBuk alerts teachers when a significant number of students are in a negative emotional state, noting "these messages are analyzed in context. Although there may be many reasons for the emotional state, the hypothesis is that these negative emotions should be uniformly distributed across time."
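
SentBuk's two-algorithm design can be illustrated with a toy sketch: score each message's emotional load, then compare a new message against the recent baseline. The lexicon, thresholds, and function names below are invented for illustration; the real system uses machine-learned classification rather than word lists.

```python
POSITIVE = {"great", "happy", "love", "excited"}
NEGATIVE = {"sad", "awful", "tired", "stressed"}

def message_load(text):
    """Algorithm 1 (toy version): score one message's emotional load in [-1, 1]."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def emotional_state(recent_loads, new_load, window=10):
    """Algorithm 2 (toy version): compare a new message against recent history."""
    history = recent_loads[-window:]
    baseline = sum(history) / len(history) if history else 0.0
    if new_load < baseline - 0.3:
        return "more negative than usual"
    if new_load > baseline + 0.3:
        return "more positive than usual"
    return "in line with recent messages"

loads = [message_load(m) for m in ["love this course", "great day", "so happy"]]
print(emotional_state(loads, message_load("feeling sad and stressed today")))
# -> "more negative than usual"
```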


Eye Tracking in Google Glass: A Window Into the Soul?
Scientific American (12/18/14) Julia Calderone

Google recently filed patents that could lead to embedding eye-tracking technology into Google Glass, which some experts say may have privacy ramifications. For example, Lancaster University's Melodie Vidal cites controlled experiments showing eye tracking can reveal whether a subject recognizes a face, whether the subject is tired, whether someone is reading or driving, and whether they are in a social context. Sex and age inference also is within the technology's capabilities, although identification would require iris recognition, Vidal says. Meanwhile, Technical University of Munich's Michael Dorr cites the potential of gaze tracking, noting, "gaze is tightly linked to what visual information you might be processing in detail. And once you have a head-mounted [outward-facing] camera, you have a detailed representation of what people are looking at. You'd be able to find what they think is interesting in that scene." Access to eye-tracking data would depend on whether the server is public or private, and Vidal says, "if we integrated eye tracking into, say, a mobile device, then an app would ask for authority to access your eye-tracking data." Whether such integration constitutes an invasion of privacy depends on how Google uses the data, according to Vidal. She says such a product could benefit users by enabling more natural interaction, while Dorr thinks it could be useful as a research tool and an aid for people with multiple disabilities.


Your Smartphone Could Soon Listen for Sleep Disorders
Technology Review (12/12/14) Rachel Metz

Researchers at the Stevens Institute of Technology and Florida State University are developing technology designed to monitor sleep disorders in a more convenient way by positioning a smartphone and a pair of microphone-outfitted earphones in close proximity to the sleeper. The research team spent six months studying such a system as it recorded sounds while six people slept, and reports that even with the earphones on a table next to the bed, the microphone tracked subjects' breathing to within half a breath per minute of readings from a chest-worn respiration monitor and a microphone affixed to participants' collars. The researchers assessed the sensitivity of the microphone on the earbuds for recording breathing rate, and tweaked the earbuds to function as additional microphones that recorded in mono and stereo. They filtered out ambient noise to concentrate on breathing and other actions that occurred during sleep. Stevens professor Yingying Chen says the researchers intend to issue a related smartphone app in 2015, which could make it easier and less expensive to accurately monitor sleep quality than sensor-equipped wristbands or devices that sit on or under the mattress. Chen also hopes the research can help diagnose sleep apnea and similar disorders, noting it can be difficult for physicians to capture irregular patterns in a hospital environment.
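
One plausible way to extract a breathing rate from microphone audio, sketched below, is to take the slow amplitude envelope of the recording and find its dominant frequency. This is our illustration of the general technique; the researchers' actual pipeline is not described in the article.

```python
import numpy as np

FS = 8000  # assumed audio sampling rate in Hz

def breathing_rate_bpm(audio, fs=FS):
    """Estimate breaths per minute from a mono audio recording."""
    envelope = np.abs(audio)                      # crude amplitude envelope
    hop = fs // 10                                # downsample envelope to 10 Hz
    env = envelope[: len(envelope) // hop * hop].reshape(-1, hop).mean(axis=1)
    env -= env.mean()
    spectrum = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=0.1)      # envelope sampled at 10 Hz
    # Breathing sits roughly in 0.1-0.5 Hz (6-30 breaths per minute).
    band = (freqs >= 0.1) & (freqs <= 0.5)
    f_breath = freqs[band][np.argmax(spectrum[band])]
    return f_breath * 60

# Simulate 60 s of audio whose loudness swells 15 times a minute (0.25 Hz).
t = np.arange(60 * FS) / FS
noise = np.random.default_rng(2).normal(size=t.size)
audio = (0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t)) * noise
print(round(breathing_rate_bpm(audio)))  # ~15
```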


'Draw Me a Picture,' Say Scientists; Computer May Respond
UIC News (12/17/14) Jeanne Galatzer-Levy

Researchers at the University of Illinois at Chicago (UIC) and the University of Hawaii have received a $300,000 grant from the U.S. National Science Foundation to develop a computer capable of extracting meaningful visualizations from data based on natural language requests, as well as common gestures. "Today, with big data, you really need to be using visualizations to help you figure out what it is you're looking at," notes UIC's Andrew Johnson. "Visualization should be interactive; a dynamic process. We want scientists to be able to get ideas out there quickly." UIC professor Barbara Di Eugenio says interaction with the system would be designed to be more akin to a dialogue with a person in the room, and the computer would not require explicit queries to produce an answer. "The goal is for the computer to be able to interpret even indirect questions, like 'It would be better to show salinity only at 10 meters,' or even statements that hint at something, and put together the visualization," she says. Johnson says Microsoft Excel is currently the most popular graphing tool for researchers, but the data explosion is quickly outpacing Excel's abilities. "Imagine if you had the computer behind you, helping you to see your data--it could really push scientific discovery," he notes.
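
Di Eugenio's example can be made concrete with a toy sketch of the kind of mapping the project aims for: turning an indirect conversational request into a chart specification. Everything here is illustrative; the grant work is just beginning and no implementation details are given.

```python
import re

def interpret(request):
    """Turn an indirect natural-language request into a chart spec (toy version)."""
    req = request.lower()
    spec = {"chart": "overview", "variable": None, "filter": {}}
    for var in ("salinity", "temperature", "pressure"):
        if var in req:
            spec["variable"] = var
    depth = re.search(r"(\d+)\s*meters?", req)
    if depth:
        spec["filter"]["depth_m"] = int(depth.group(1))
    if "only" in req or "better to show" in req:
        spec["chart"] = "single_series"   # narrow the view to one variable
    return spec

print(interpret("It would be better to show salinity only at 10 meters"))
# -> {'chart': 'single_series', 'variable': 'salinity', 'filter': {'depth_m': 10}}
```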


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.



