Welcome to the December 2013 SIGCHI edition of ACM TechNews.
ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). This service is a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Wednesday of every month.
ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.
HEADLINES AT A GLANCE
Expert Analyzes New Healthcare Website
The Observer (11/15/13) Henry Gens
David Mitropoulos-Rundus, director of the annual Internet User Experience conference, detailed at the event some of the shortcomings of the U.S. government's healthcare.gov website, while emphasizing precepts of good user design that could be applied to improve it. His talk concentrated on the human issues rather than the technical issues with the site. Mitropoulos-Rundus said healthcare.gov's usability goal was to enable up to 7 million visitors to register through the website by March 31, 2014. He stressed the difficulty of building a usable website for a large number of people from various backgrounds who face the onerous task of finding a healthcare coverage plan. "We need to be really careful about how we word things, organize things, and present things because we are at risk of very quickly overwhelming people," Mitropoulos-Rundus noted. "Healthcare coverage, especially for people that have had jobs at companies that offered them one or two options, is very complex." Mitropoulos-Rundus cited confusing and unnecessary icons on many pages that complicated user understanding, and pointed out that the account creation process was more convoluted and counter-intuitive than it should have been. Another key usability obstacle was the website's failed attempt to brand features and introduce acronyms, Mitropoulos-Rundus noted. "The lesson here is that if you're going to brand something and label something, commit to it and be strong about it," he said. "This was weak and it fell apart, like the process."
The Future of Mobile: Smart Watches and Sweet Smelling Mobiles
City University London (11/21/13) Adrian David Cheok
City University London professor Adrian David Cheok says mobile devices will be transformed over the next five years by technologies such as haptics, as well as phones capable of transmitting fragrances. Cheok says haptic devices will enable users to communicate over the Internet using touch. He has developed hugging suits, for example, to allow geographically separated parents to hug their children. "In this way we are bringing emotional and touch communication to mobile devices," Cheok says. "Touch, taste, and smell are senses that are directly connected to parts of our brain controlling emotions and memory and we can therefore produce new forms of emotional communication through technology." Although smell and taste are the least commonly used senses online, Cheok and his colleagues have created a device that enables users to receive text messages and generate a scent. "We want people to not only send a fragrance with text but to also convey an emotion," Cheok says. "It is a truism that smell directly affects one's moods at a subconscious level so this kind of technology is important for bringing emotional communication to the Internet." He says the technology will "bring new kinds of entertainment, advertising, and communication via the Internet and mobile phone."
The Key to Better—and Safer—Robots Is Teaching Them About Human Interaction, Researchers Say
National Post (Canada) (11/22/13) Tristin Hopper
Human-robot interaction specialists around the world are working to make robots safer and more welcome in human environments by teaching them human gestures, mannerisms, and ethics. For example, the University of British Columbia's Collaborative Advanced Robotics and Intelligent Systems Laboratory (CARIS) is working to create a new generation of robots that are "as easy to interact with as other humans," says Ph.D. student Matthew Pan. CARIS wants to develop robots that know how to appropriately interrupt, share, communicate, understand simple hand gestures, and respond. By recreating human mannerisms, CARIS researchers hope to weave the "interactive fabric of a robot living in your life," says CARIS founder Elizabeth Croft. Students have studied videos of humans carrying out simple tasks to learn the mechanics of subtle, non-verbal cues. In addition, students attached biological sensors to people to determine which robot behaviors made them the most uncomfortable, and learned that rapid trajectories and unnatural contortions in robots made people nervous. Although labs are finding that humans do not treat robots as people, they do treat them as entirely new entities to which human qualities might apply. "We are, as a community of researchers, beginning to realize that robots are another category of being," Croft says.
'Wise Chisels': Art, Craftsmanship, and Power Tools
MIT News (11/22/13) David L. Chandler
A Massachusetts Institute of Technology (MIT) project seeks to marry handmade craft with computerized control systems. One product generated by this effort is a handheld carving tool designed by the MIT Media Lab's Amit Zoran that enables the user to control the carving process while assisted by a computer guidance system programmed with the desired three-dimensional shape. Whenever the user's motions extend into the region of the desired final form, the device supplies physical feedback that slows the motion. If the carving changes the shape to the degree that it would compromise the object's structural integrity, the computerized system can make real-time adjustments. The principles underlying such smart tools could be applied to physical safety; for example, a tool could sense that its sharp blade is getting too close to a user's fingers and automatically deflect its path to avoid injury. Zoran says the project is motivated by his search "for this human quality, for ways to translate the long heritage of craft and creativity" into the digital era. "We're developing tools that don't have a direct physical, craft heritage, but are entirely new," he says. "Creativity is all about error. We're looking for creativity, for something that surprises us." The project was detailed at the recent ACM Symposium on User Interface Software and Technology.
Yummy Nigella—How We Could Soon Sample TV Chefs' Food From Our TVs
Irish Independent (11/22/13)
New simulator technology that can electronically replicate tastes by stimulating the tongue's taste buds via electrodes could make it possible for TV watchers to sample food displayed on their screens. The Digital Taste Interface also could be employed in computer games or to let people share meals online, says National University of Singapore engineer Nimesha Ranasinghe. The technology can alter the taste experience by making nuanced temperature adjustments through thermal stimulation. "By manipulating the magnitude of current, frequency, and temperature--both heating and cooling--thus far salty, sour, and bitter sensations have been successfully generated," Ranasinghe reports. Earlier, unsuccessful "taste TV" experiments focused on using computer-emitted chemicals, while the use of electrodes permits the digital transmission of tastes without requiring messy and costly chemical interfaces. "To simulate flavors we need to go beyond taste and incorporate smell, texture, colors, and other modalities, because flavor is a cross-sensory experience with multiple senses," Ranasinghe notes. "At the moment, we are expanding our technology to add the sense of smell into the experience, with the hope that by doing so we can expand the varieties of flavor sensations we can generate digitally." Ranasinghe also sees healthcare applications for the technology, such as giving diabetics the taste sensation of sweet foods without affecting their blood sugar levels.
Google Patents Robot Help for Social Media Burnout
BBC News (11/22/13)
Google has patented plans for software that gradually learns how a person reacts on social networks so that it can emulate that person's usual responses to messages and updates from friends and relatives, helping to manage the daily inundation of data. In addition, the software analyzes ongoing interactions and flags messages requiring a more personal response. Google software engineer Ashish Bhatia envisions a refined system that gathers information about the various social networks someone has joined, logs what they do, and notes how they respond to the different types of messages, notifications, status changes, videos, images, and links sent to them. Analyzing these responses helps the system formulate its own suggestions, which should ideally be indistinguishable from those of an actual person. The proposed system also should have the flexibility to deal with many different types of events, employ data gleaned from other interactions with a person, and tailor the responses to match different social networks' required styles. The software would produce suggested responses, which a person could simply approve to be sent on their behalf. Social media technologist Hadley Beeman cautions that the fine nuances of human interaction might undermine the system's ability to choose what matters most and flag it appropriately. "The problem is that the 'important stuff' [or the trivial] depends on what our relationship is," she points out.
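The patent's suggestion mechanism can be caricatured with a tiny frequency-based sketch. Everything below is invented for illustration -- the message categories, the replies, and the frequency heuristic are not drawn from the patent, which would involve far richer models:

```python
from collections import Counter

# Hypothetical sketch of suggesting a reply by mimicking a user's most
# frequent past response to each message type, loosely in the spirit of
# the patent described above. Categories and replies are invented.

def learn_habits(history):
    """history: list of (message_type, reply) pairs the user actually sent."""
    by_type = {}
    for msg_type, reply in history:
        by_type.setdefault(msg_type, Counter())[reply] += 1
    return by_type

def suggest_reply(habits, msg_type):
    """Return the user's most common reply for this message type, if any."""
    counts = habits.get(msg_type)
    if not counts:
        return None  # no habit learned; flag for a personal response
    return counts.most_common(1)[0][0]
```

Returning `None` for an unseen message type mirrors the patent's flagging behavior: messages the system cannot confidently emulate are handed back to the person.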
Researcher Working on Computer-Based Learning System to Gauge Kids' Emotions
Times of India (11/15/13) Hemali Chhapia
IITB-Monash Research Academy researcher Ramkumar Rajendran is developing a mathematical model for an intelligent tutoring system (ITS) that would identify and respond to a student's emotions during learning. An ITS modifies learning content based on factors such as student performance and existing knowledge, enabling users to learn at their own pace. Adding the ability to determine emotions would enable an ITS to minimize negative states such as frustration, thereby increasing student retention. The research proposes a model to predict and address frustration in real time, based on data from the student's interaction with the computer. A wide range of emotion-prediction methods have been used with ITS technology, including human observation, learners' self-reported data, mining the system's log data, face-based emotion-recognition systems, analyzing data from physical sensors such as a posture-analysis seat, and analyzing data from physiological sensors such as electrocardiogram, electromyography, and galvanic skin response. However, in real-world settings, data-mining approaches that use the log files generated by the system are currently the most feasible for commercial ITS implementation. "I personally believe that, in a decade or less, the way of interacting with computer-based technology will be entirely different from how we interact with systems now," Rajendran says.
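The log-mining approach described above can be illustrated with a minimal, hypothetical sketch. The event names, weights, and threshold below are invented for illustration and are not drawn from Rajendran's actual model, which would learn such parameters from labeled data:

```python
# Hypothetical illustration of predicting learner frustration from
# interaction-log features, in the spirit of the log-mining approach
# described above. Event names, weights, and threshold are invented.

def frustration_score(log_events):
    """Score in [0, 1] from a list of (event_type, seconds_since_last) pairs."""
    if not log_events:
        return 0.0
    rapid_retries = sum(1 for ev, dt in log_events
                        if ev == "wrong_answer" and dt < 5)
    hint_requests = sum(1 for ev, _ in log_events if ev == "hint")
    # Crude hand-set linear score, capped at 1.0; a real ITS would fit
    # these weights to observed, labeled student sessions.
    return min(0.2 * rapid_retries + 0.1 * hint_requests, 1.0)

def should_intervene(log_events, threshold=0.5):
    """Trigger a supportive response when the score crosses a threshold."""
    return frustration_score(log_events) >= threshold
```

The appeal of this style of detector, as the summary notes, is that it needs nothing beyond the log file the system already records -- no cameras or physiological sensors.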
The People’s Panopticon
The Economist (11/16/13)
Inexpensive and wearable cameras are enabling people to record both their own lives and the lives of others at an unprecedented level. About a billion inexpensive cameras shipped with mobile phones and tablets in 2012, according to research firm ABI. Wearable cameras can help professionals with legal liabilities, such as law enforcement officers, to avoid or reduce the cost of lawsuits. In addition, research shows that patients with impaired memories can benefit from reviewing recordings of their lives, and such approaches could improve symptoms of dementia and Alzheimer's disease. "Life loggers" record nearly every moment of their lives, and image-scanning software can help make the recorded events searchable. Google is aiming to bring wearable cameras into the mainstream with Glass. Although not designed for life-logging, Glass will enable more people to record and rewatch more of their daily lives, and the lives of others whose paths they cross. Glass aims to provide the hands-free functionality of a smartphone, blurring the line between online data and the real world by immediately displaying information in a user's field of vision. The technology could display the names of songs as they play, or information about plants and animals as the user encounters them. If people perceive benefits in increased recording of their daily lives and privacy is no longer "a social norm," as Facebook CEO Mark Zuckerberg has suggested, the technology is likely to progress unchecked.
Neural Engineering With Brain-Machine Interfaces Holds Much Promise
Science World Report (11/18/13) Amber Harmon
There is considerable potential in advanced brain-computer or brain-machine interfaces to restore motor function and communication to paralyzed people by digitizing the brain's electrical activity and translating it into action. For example, Duke University researcher Bryan Howell is leading a deep brain stimulation (DBS) project for treating Parkinson's disease. "DBS involves implanting an electrode within a target region of the brain and using generated electrical fields to modulate abnormal activity of neurons associated with symptoms," Howell notes. "The electrode is sending current pulses at a constant frequency that can transform someone from uncontrollable tremor to fluid movement. Within a couple of days, 90 to 95 percent of visible symptoms are eliminated." Duke neural engineers are striving to enhance DBS power efficiency and selectivity by optimizing elements of the device such as the electrode used to deliver current, and the waveform and pattern of the current pulses. "Instead of just implanting these devices and hoping for the best, we can model the device and the brain regions in which it is used," Howell says. He also cites the importance of distributed computing to DBS research. "With a combination of campus grid and Open Science Grid resources, simulations that would take on the order of months to years to solve using a single desktop computer can be solved in a matter of days," he says.
MIT Group's Shape Display Steps to New Realm in Interaction Future
Phys.Org (11/14/13) Nancy Owano
Massachusetts Institute of Technology (MIT) Media Lab researchers in the Tangible Media Group are developing the inFORM shape-shifting surface, which enables users to physically interact with digital matter. The researchers define inFORM as a "Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way." Furthermore, inFORM can interact with the physical world, for example, by moving objects on the table's surface, and can physically show remote participants in a video conference, "allowing for a strong sense of presence and the ability to interact physically at a distance," the team says. Factors such as scale and expense currently restrict shape displays, but the researchers say, "this work is an exploration of the interaction capabilities and is meant to inspire further research in this area." The team believes shape-changing interfaces will become increasingly available in the future, and their work will develop a vocabulary and design space for more general-purpose interaction for shape displays, including content and user interface elements. The researchers say the inFORM display could have applications in surgical simulations as well as geospatial data, including maps, geographical information systems, and terrain and architectural models.
Hacked Google Glass Recognizes Finger Gestures
New Scientist (11/13/13) Hal Hodson
Researchers at the Massachusetts Institute of Technology (MIT) and other institutions are working to refine gestural interfaces for Google Glass and other wearable computing devices. MIT Media Lab's David Way is working to modify Glass to recognize gestures, using a depth camera strapped to a user's wrist that detects three-dimensional (3D) gestures. The system creates a personalized model of how users perform movements such as a finger lift, and maps each one to a function such as typing a specific letter. However, gesture-based typing is not Way's goal, because speech recognition is already sufficiently effective to replace most keyboard functions. Rather, Way hopes to identify gestures that users can perform easily, to create an interface that learns and adapts to a user's personal preferences. Startup company 3dim also is working on gesture interfaces through a prototype that began at MIT as a hack of Google Glass. The 3dim system attaches an infrared LED and photodiode sensors to the Glass headset, and the sensors detect light reflected from the user's hand as it moves. The 3dim researchers say the components are inexpensive, use less battery power than a depth camera, and can be integrated directly into Glass. Although not precise enough for typing, the 3dim system identifies larger hand and arm motions that could be used, for example, to swipe away notifications or navigate menus.
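The reflected-light sensing behind the 3dim prototype can be sketched in miniature. The sample signals and thresholds below are invented; a real device would fuse several photodiode channels and use learned models rather than this single-channel heuristic:

```python
# Hypothetical single-channel sketch of detecting a hand swipe from a
# photodiode's reflected-infrared readings, in the spirit of the 3dim
# prototype described above. Signal values and thresholds are invented.

def detect_swipe(samples, threshold=0.6, min_len=3):
    """Return True if reflectance rises above threshold for a short run
    and then falls again -- a hand passing through the LED's beam."""
    run = 0
    for s in samples:
        if s > threshold:
            run += 1
        else:
            if run >= min_len:
                return True  # hand entered the beam, then left: a swipe
            run = 0
    return False  # the run must end (hand leaves) to count as a swipe

# Invented readings: a hand passing in front of the sensor produces a
# low -> high -> low reflectance pattern; an empty scene stays low.
swipe_signal = [0.1, 0.2, 0.8, 0.9, 0.85, 0.2, 0.1]
steady_signal = [0.1, 0.15, 0.1, 0.12]
```

The simplicity is the point the researchers make: thresholding a photodiode costs far less power and money than running a depth camera, at the price of only resolving coarse motions.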
Carnegie Mellon and Temple Researchers Offer Fresh Perspective to Improve Learning by Taming Instructional Complexity
Carnegie Mellon News (PA) (11/21/13) Shilo Rea
Researchers at Carnegie Mellon University and Temple University have determined that available instructional options number more than 205 trillion. "Part of the instructional complexity challenge is that education is not 'one size fits all,' and optimal forms of instruction depend on details, such as how much a learner already knows and whether a fact, concept, or thinking skill is being targeted," notes Carnegie Mellon professor Ken Koedinger. The researchers probed existing education research to demonstrate that the space of possibilities is too immense for simple studies to determine which methods will work for which students at different learning stages. "Much of the work on these learning principles has been conducted in laboratory settings," says Temple professor Julie Booth. "We need to shift our focus to determine when and for whom these techniques work in real-world classrooms." The researchers offer several recommendations to rein in instructional complexity and maximize the potential of enhancing the research behind educational practice and student learning. Among their recommendations are focusing research on how different forms of instruction fulfill different functional needs, conducting more experiments to ascertain how different instructional techniques augment different learning functions, and exploiting educational technology, including massive online studies, to further understand how people learn and which instructional dimensions can or cannot be applied separately.
Can Computers Negotiate? Win-Win Negotiations in a Virtual World
Knowledge INSEAD (11/13/13) Horacio Falcao
Computers are not only capable of negotiating, but also of using win-win moves to help their human counterparts generate more value for both parties, according to a recent paper by INSEAD in collaboration with A*STAR's Yinping Yang, Pluris' Nuno Delicado, and Northwestern University's Andrew Ortony. The researchers determined that trust can be cultivated between humans and computers with the addition of a simple dynamic: taking the initiative of placing one priority on the table, explaining the motivation for doing so, and inviting the counterparty to reciprocate. The experiment involved a multi-issue negotiation in which a computer agent was the seller and humans were the purchasers of laptops, with the issues the computer could disclose including price, quantity, service level, and delivery terms. The researchers found that distrustful humans were willing to deal once the machine presented one of its issues, shared its intention to collaborate, and invited the counterparty to do the same. The researchers said their experiment demonstrates that if you take the correct actions and share information that can improve both parties' positions throughout the negotiation, you can win over even distrustful counterparts. They also say their finding has significant ramifications for companies applying software to negotiations, and reveals the potential for further research on human-to-human and human-to-computer collaboration.
Abstract News © Copyright 2013 INFORMATION, INC.