Welcome to the August 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). The service is a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Your Third Hand: Why Tongues Have the Mouse Licked
New Scientist (07/27/14) Rachel E. Gross

Replacing the computer mouse with a device that uses the tongue is a goal of a long-term project led by Georgia Institute of Technology researcher Maysam Ghovanloo. The Tongue Drive is currently used to control phones and drive wheelchairs, but Ghovanloo envisions the device enabling the tongue to function as an additional limb to manipulate games, gadgets, or even a variant of Google Glass when the user's hands are occupied. "You have as many degrees of freedom in your tongue as you do in your hands and fingers," he says. The Tongue Drive uses a patented control system in which the movement of a magnet on the tongue is read by sensors that translate the data into wheelchair actions. The device currently has only six simple commands in its repertoire, and Ghovanloo is planning outside trials for the Tongue Drive to assess the threshold of its safe operation and to obtain regulatory approval. Applications being explored for the Tongue Drive include a project at Georgia State University that focuses on enabling paralysis victims to control robotic limbs. Meanwhile, the Shepherd Center's Kimberly Wilson hopes the technology can aid in the rehabilitation of soldiers with brain injuries by helping them learn to talk again. She thinks the Tongue Drive will let her visualize tongue movements so she can spot and correct errors.
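As a rough illustration of that control scheme, the Python sketch below maps a reading from a set of external magnetometers to one of six discrete commands by nearest-centroid matching against per-user calibration data. The sensor count, calibration values, command names, and threshold are invented for illustration; this is not Ghovanloo's actual algorithm.

import numpy as np

# Hypothetical calibration: mean sensor readings recorded while the user
# holds the tongue magnet at each of six trained positions.
# Rows are commands; columns are readings from four external magnetometers.
COMMANDS = ["forward", "backward", "left", "right", "click", "neutral"]
CENTROIDS = np.array([
    [0.9, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.1, 0.1],
    [0.1, 0.1, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.9],
    [0.5, 0.5, 0.5, 0.5],
    [0.1, 0.1, 0.1, 0.1],
])

def classify(reading, threshold=0.3):
    """Map a four-sensor reading to the nearest trained command.

    Falls back to "neutral" when no centroid is close enough, so stray
    tongue movements (e.g., during speech) are not issued as commands.
    """
    dists = np.linalg.norm(CENTROIDS - np.asarray(reading), axis=1)
    best = int(np.argmin(dists))
    return COMMANDS[best] if dists[best] < threshold else "neutral"

print(classify([0.85, 0.12, 0.08, 0.11]))  # -> "forward"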


New Gadget Helps the Vision Impaired to Read Graphs
Curtin News (07/29/14)

Curtin University researchers have developed a digital reading system for vision-impaired people so they can read graphical material. The system merges pattern-recognition technologies into a single platform and enables the extraction and description of mathematics and graphical material without sighted intervention for the first time. The system reads a document and identifies blocks of text or pictures and then segments them into related blocks configured in the proper reading order. Blocks are then categorized as images, graphs, math, or text, and recognized through optical character recognition or the Mathspeak utility. The content is then converted into audio format with navigation markup. "We hope this device will open up new opportunities for people with vision impairment--it's a matter of providing more independence, and not having to rely on sighted assistance to be able to read graphical and mathematical material," says co-developer and lecturer Iain Murray. He also notes many people should be able to afford the device thanks to its operation on very inexpensive platforms and an expected per-unit manufacturing cost of about $100. "Our system is easily operated by people of all ages and abilities and it is open source, meaning anyone with the skill can use and modify the software to suit their application," Murray says.
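As a rough sketch of that pipeline, the Python below routes each classified block to a recognizer and stitches the results into a narration script with navigation markers. The block types, handler names, and placeholder recognizers are assumptions for illustration; the actual Curtin components are not described in the article.

from dataclasses import dataclass

@dataclass
class Block:
    kind: str      # "text", "math", "image", or "graph"
    content: str   # recognized text, math notation, or a caption

# Placeholder recognizers standing in for OCR, MathSpeak, and a
# graphics-description component (all hypothetical stubs).
def ocr(b):        return b.content
def mathspeak(b):  return "equation: " + b.content
def describe(b):   return "figure: " + b.content

HANDLERS = {"text": ocr, "math": mathspeak, "image": describe, "graph": describe}

def to_audio_script(blocks):
    """Turn classified blocks, already in reading order, into a narration
    script with navigation markers a listener can jump between."""
    return "\n".join(f"[section {i}] {HANDLERS[b.kind](b)}"
                     for i, b in enumerate(blocks, 1))

page = [Block("text", "Figure 1 shows sales by quarter."),
        Block("graph", "bar chart of sales in dollars versus quarter")]
print(to_audio_script(page))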


Wearables Sing in Smart Clothes
EE Times (07/28/14) Jessica Lipsky

Smart apparel's success may hinge on the incorporation of music and audio once chipmakers adopt cloth-friendly materials to facilitate such a breakthrough, said researcher Sabine Seymour at the Wearable Tech Expo. Her collaboration with a European fashion studio and Vienna's Museum of Applied Arts has yielded sonic fabric, applied to a poncho with various closures, each of which produces a different sound, while the buttons function as speakers. "Every single closure is creating a part of a soundtrack," Seymour notes. "Depending on which closure, you can control your own soundscape." The sonic fabric also has found use in an art installation that generated an interactive soundscape from pieces of clothing and accessories. By walking through a curtain in the exhibit, people could control its volume and sound. Seymour says the potential of such products cannot be realized until processors are seamlessly integrated into a garment and charging infrastructure is improved. She also says the management of the large data sets provided by the wearables will require higher standards for broadband and wireless carriers. Seymour and her associates have founded a startup committed to making "soft hardware" for smart fabrics, which can be more easily integrated into garment manufacturing processes. Seymour's research into computational cellulose at Aalto University is part of this initiative.
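Seymour's description suggests a simple layered-mixing model: each fastened closure enables one layer of the overall soundscape. The Python sketch below uses invented closure names and sound layers purely to illustrate the idea; it is not based on the actual garment's design.

# Hypothetical mapping from garment closures to soundtrack layers.
SOUND_LAYERS = {
    "collar_snap": "drone",
    "left_button": "percussion",
    "right_button": "melody",
    "zipper": "bassline",
}

def active_soundscape(fastened):
    """Return the sound layers to play given the set of fastened closures."""
    return sorted(SOUND_LAYERS[c] for c in fastened if c in SOUND_LAYERS)

print(active_soundscape({"zipper", "collar_snap"}))  # ['bassline', 'drone']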


How Emotion-Sensing Movie Technology Could Save Lives of Elite Rescue Workers
The Conversation (07/22/14) Ian Greatbatch; Andrea Kleinsmith

Real-world applications of affective computing (AC) and human-computer interaction technology used in films could include life-saving measures for elite rescue workers, who face tremendous physical and psychological stress in their jobs, write Kingston University London senior lecturer Ian Greatbatch and University of Florida postdoctoral associate Andrea Kleinsmith. They say a method for unobtrusively monitoring Urban Search and Rescue workers' emotional states to determine whether they are approaching physical exhaustion or a damaging level of psychological trauma could be extremely beneficial. AC technology could address many of the challenges of building a system that alerts a crew manager when a firefighter is reaching the limit of his endurance, at which point a rotation system could be applied to protect the firefighter's long-term welfare. By wearing motion sensors in their uniforms or gear, firefighters could be represented on a screen at the control point as stick figures showing their posture at that moment. This representation could signal that a firefighter may be trapped or unconscious if he is prone for too long, for example. Or if rescue personnel were using a tool in a way that could lead to long-term injury, a warning icon could flash over their avatar, or a timer could start, prompting the crew manager to rotate staff. AC technology also could be used to interpret firefighter movement to identify fatigue, inebriation, or injury, and be applied in smart machines so they would not activate for operators in such conditions.
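A rule like the prone-too-long alert is easy to state in code. The Python sketch below assumes a posture vocabulary and a 60-second threshold purely for illustration; no such specification appears in the article.

PRONE_LIMIT_S = 60.0  # assumed seconds prone before the crew manager is alerted

class PostureMonitor:
    def __init__(self):
        self.prone_since = None

    def update(self, posture, now):
        """Feed one posture sample (e.g., derived from worn motion sensors).

        Returns an alert string when the firefighter has been prone longer
        than the limit, else None.
        """
        if posture == "prone":
            if self.prone_since is None:
                self.prone_since = now
            elif now - self.prone_since > PRONE_LIMIT_S:
                return "ALERT: firefighter may be trapped or unconscious"
        else:
            self.prone_since = None  # reset the timer on any other posture
        return None

m = PostureMonitor()
for t, p in [(0, "upright"), (10, "prone"), (80, "prone")]:
    alert = m.update(p, t)
    if alert:
        print(f"t={t}s: {alert}")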


Professor Seth Teller Dies at Age 50
MIT News (07/02/14)

Massachusetts Institute of Technology (MIT) professor Seth Teller has passed away, leaving behind a legacy of research to advance human-robot interaction technology as leader of the Computer Science and Artificial Intelligence Laboratory's Robotics, Vision, and Sensor Networks group. The group's mission is to instill environmental awareness within machines and make them capable of natural engagement with people in healthcare, military, civilian, and disaster-relief settings. Teller's recent projects included wearable devices that supply information about nearby terrain, objects, text, and people to the blind, a voice-commanded robot wheelchair, and a humanoid machine that performs hazardous tasks with limited human intervention. Teller was leader of the MIT team that will participate in the final round of the U.S. Defense Advanced Research Projects Agency's (DARPA) Robotics Challenge to promote robot technology for disaster response. He also led a team that secured fourth place in a previous DARPA contest to create a self-driving car. Teller's group focused on planning algorithms designed to make robots move more smoothly and less obtrusively, as part of the professor's interest in machines that could blend seamlessly into human surroundings. The robotic wheelchair controlled by vocal commands reflects Teller's interest in assistive technologies, an area his group concentrated on in recent years. Teller also worked on efforts to employ robotic sensor systems to enhance human capabilities, a recent example being a project to develop a system to help police officers, who frequently stop on the shoulders of busy roadways, avoid injury.


Stanford Scientists Identify Body Language Tied to Creativity, Learning
Stanford Report (CA) (07/24/14) Bjorn Carey

Stanford University researchers have learned that the observation of subtle changes in torso and head movements can predict creative output or learning ability. The research utilized the video-game cameras at Stanford's Virtual Human Interaction Lab to measure the movements of participants' bodies, limbs, and heads. One hundred subjects were studied and recorded at 30 frames per second, and the data produced was parsed with a machine-learning model trained to objectively identify patterns that might be missed by the human eye. The method was then applied to an experiment showing how body language impacts how effectively one person can teach another. "For our sample and our task, students with very extreme movements with their upper body tended to learn worse than others," notes Stanford professor Jeremy Bailenson. Although such outcomes offer almost no clues into cause and effect, Bailenson says "regardless of whether we know the cause, we can detect whether people are about to learn or not. This gives us the opportunity to devise ways to adjust in real time to improve learning." The second experiment involved two people who were monitored as they brainstormed water conservation methods, with creativity measured by the number of ideas they generated. It was discovered the more synchronous the subjects' head movements were, the more creative ideas they produced.
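One simple way to quantify head-movement synchrony is to correlate two participants' frame-by-frame head speeds. The Python sketch below adopts the article's 30 frames-per-second sampling, but the Pearson-correlation measure and synthetic data are illustrative assumptions, not the study's actual method.

import numpy as np

def head_speed(positions):
    """Per-frame head speed from (x, y, z) positions sampled at 30 fps."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1)

def synchrony(pos_a, pos_b):
    """Pearson correlation of two speakers' head-speed traces;
    values near 1 indicate highly synchronous movement."""
    a, b = head_speed(pos_a), head_speed(pos_b)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
base = rng.normal(size=(300, 3)).cumsum(axis=0)    # 10 s of tracked motion
noisy_copy = base + rng.normal(scale=0.05, size=base.shape)
print(round(synchrony(base, noisy_copy), 2))       # close to 1.0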


eBay Psychologist Works on Human Aspects of Ecommerce
eCommerce Bytes (07/30/14) Ina Steiner

Psychologist and research scientist Elizabeth Churchill has spent the last two years at eBay Research Labs exploring ways to optimize user e-commerce experiences as director of human-computer interaction research. Churchill says her work verifies that "when people feel good, they linger way longer than it takes to efficiently get something done." She describes her Putting the Person into Personalization project as a program for investigating tailored process personalization, while the Vintage Values project focuses on which fashion and accessory brands hold value over time and attempts to develop a deeper understanding of people's values around conservation and preservation as part of their interest in vintage fashion and accessories. Churchill says by accompanying people on shopping trips to vintage and thrift stores, and interviewing them about buying and selling vintage items on eBay, she has learned they "have a strong sense of personal identity that rubs off in how they value eBay as an enabler for their passion. We are carrying out analyses right now to identify the processes and patterns in how people attach value to items and give them new life." The psychology inherent in gaming is another research focus of Churchill's. "We are asking: can we identify who, when, and why a game-like experience will engage people more versus when it is a turn-off, an irritation, and a negative experience," she notes.


A Quick Reminder That Technology Can Be Wonderful
Slate (07/22/14) Jon Kelvey

State-of-the-art telepresence technologies offer severely disabled people the chance to engage with the outside world. One such person is Henry Evans, a mute quadriplegic whose head movements enable him to access the Internet, use a voice synthesizer, and communicate via email using a special interface. Evans also has been piloting several telepresence robots, which have enabled him to take remote museum tours. Such robots might help usher in a new accessibility standard, according to Disability Rights Education and Defense Fund policy analyst Marilyn Golden. She notes a regulation of the Americans With Disabilities Act was updated in 2010 to account for the inclusion of new technologies in the definition of auxiliary aids and services. Golden says this could serve as a model for a future update that might include telepresence robots as an auxiliary aid, mandating they be made available unless they are financially or administratively burdensome to an organization. Although other revolutionary accessibility technologies exist, telepresence robots have the advantage of commercial availability and wide user appeal. "Because of [their] general-purpose capability, [robots] can be commoditized and you can benefit from economies of scale," notes the Georgia Institute of Technology's Charlie Kemp. "There will be lower costs, they will be more widespread." Still, some are concerned an overemphasis on remote access might dilute the basic visitor experience at art centers and similar venues.


Giving Emotions to Virtual Characters
Autonomous University of the State of Mexico (UAEM) (07/31/14)

Autonomous University of the State of Mexico (UAEM) researchers have successfully modeled human facial expressions in virtual characters and used them to enrich virtual communication environments. The project's purpose is to generate expressions and emotions based on actual people by modeling the 43 muscles involved in facial behavior as dictated by the psychological setting. Tactile sensors affixed to human models discharge tiny electrical pulses to induce different gestures, which a three-dimensional camera captures. The captured traits are converted into numerical data and fed into UAEM's kinesic model, which sorts them and generates animations of the virtual characters' expressions and gestures to evoke happiness, sadness, surprise, fear, anger, and disgust. The simulated characters were incorporated into a "serious game" project whose goal is to support educational, scientific, or civic strategies, according to UAEM's Marco Antonio Ramos Corchado. The project aims to cultivate attitudes of self-improvement, use contextual dynamics to improve the learning process, and promote collaborative environments and communication to address problems and riddles. UAEM partnered with CINVESTAV GDL and the University of Guadalajara, whose students served as models for acquiring the virtual characters' physical and psychological traits.
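One common way to formalize a mapping from facial-muscle activity to those six basic emotions is through FACS-style facial action units. The Python sketch below scores a set of active action units against textbook rules for the basic emotions; it is an illustrative stand-in for UAEM's kinesic model, not its implementation.

# Rule-of-thumb action-unit combinations drawn from common FACS summaries.
EMOTION_RULES = {
    "happiness": {6, 12},           # cheek raiser + lip-corner puller
    "sadness":   {1, 4, 15},        # inner brow raiser + brow lowerer + lip-corner depressor
    "surprise":  {1, 2, 5, 26},     # raised brows + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20},  # raised and lowered brows + stretched lips
    "anger":     {4, 5, 7, 23},     # brow lowerer + lid and lip tighteners
    "disgust":   {9, 15},           # nose wrinkler + lip-corner depressor
}

def classify_emotion(active_aus):
    """Pick the emotion whose rule best overlaps the active action units."""
    def score(emotion):
        rule = EMOTION_RULES[emotion]
        return len(rule & active_aus) / len(rule)
    return max(EMOTION_RULES, key=score)

print(classify_emotion({6, 12}))        # -> "happiness"
print(classify_emotion({1, 2, 5, 26}))  # -> "surprise"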


An Innovative System Anticipates Driver Fatigue to Prevent Accidents
RUVID Association (07/22/14)

Researchers at the Biomechanics Institute of Valencia, Spain (IBV) have worked on the European HARKEN project to develop a device, built from smart materials, that can detect driver fatigue and prevent motorists from falling asleep and causing accidents. The sensor system, which measures heart and breathing rates, is integrated into the vehicle's seat cover and seat belt. "When people go into a state of fatigue or drowsiness, modifications appear in their breathing and heart rate; HARKEN can monitor those variables and therefore warn the driver before the symptoms (of fatigue) appear," says IBV's Jose Solaz. He notes the HARKEN device can quantify both heart rate and respiration in situations affected by vibrations and user movements. It "detects the mechanical effect of the heart beat and the respiratory activity, filtering and canceling the noise produced by the moving vehicle elements (vibrations and body movements), calculating the relevant parameters that will be integrated into future fatigue or somnolence detectors," Solaz says. The project has yielded a fully functional prototype that can facilitate the anticipation of fatigue symptoms associated with cardiac and respiratory rhythms and monitor such activity to prevent accidents. In addition to the seat cover and seat belt sensors, the device features a signal-processing unit that processes sensor data in real time.
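The filtering step Solaz describes can be approximated by band-pass filtering the seat-sensor trace to isolate the heartbeat from road vibration, then counting beats. The Python sketch below assumes a sampling rate and cutoff band purely for illustration; HARKEN's actual parameters are not published in the article.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 250  # Hz, assumed sensor sampling rate

def heart_rate_bpm(signal):
    # Keep roughly 0.8-3 Hz (about 48-180 bpm); reject slow road vibration.
    b, a = butter(3, [0.8, 3.0], btype="bandpass", fs=FS)
    clean = filtfilt(b, a, signal)  # zero-phase filtering preserves beat timing
    peaks, _ = find_peaks(clean, distance=FS * 0.4)  # >= 0.4 s between beats
    duration_s = len(signal) / FS
    return 60.0 * len(peaks) / duration_s

# Synthetic test: a 1.2 Hz "heartbeat" buried in low-frequency vibration.
t = np.arange(0, 30, 1 / FS)
sig = 0.3 * np.sin(2 * np.pi * 1.2 * t) + 2.0 * np.sin(2 * np.pi * 0.2 * t)
print(round(heart_rate_bpm(sig)))  # ~72 bpm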


Patients Tell More Secrets to Virtual Humans
Futurity.org (07/21/14) Tanya Abrams

Researchers at the University of Southern California's Institute for Creative Technologies (ICT) have discovered that patients are more willing to disclose personal information to virtual humans than to actual ones, probably because computers, unlike people, are not judgmental or critical. The researchers employed a virtual human medical screener named Ellie to gauge 239 patients' honesty when the avatar interviewed them in a private laboratory setting. Patients were interviewed as part of an assessment of SimSensei, a virtual human application that can be used to spot signs of depression and other mental health issues via real-time sensing and recognition of nonverbal behaviors. Some patients were informed the virtual human's responses were fully automated while others were told the responses were remotely controlled by a person, but regardless of what they were told, all participants were randomly assigned a fully or semi-automated virtual human. The subjects were more open about their symptoms, no matter how potentially embarrassing, when they thought a human observer was not privy to the conversation. An analysis of participants' facial expressions also confirmed they were more likely to express sadness more intensely if they thought only the avatar was present. The ICT study provides the first empirical evidence that virtual humans can boost a patient's willingness to reveal personal information in a clinical environment, and also presents a strong case for doctors to start using virtual humans as medical screeners.


U.S. Brain-Machine Interface Expert to Direct New Wyss Center in Geneva
Science Insider (07/18/14) Emily Underwood

Brain-machine interface expert and Brown University neuroscientist John Donoghue will head the new Wyss Center for Bio- and Neuro-Engineering in Geneva. In an interview, Donoghue says the new facility's purpose is to design practical neuroprosthetics. He attributes his close alignment with the Wyss Center's mission to his work with BrainGate, a system developed at Brown to help paralysis victims control prosthetic limbs by thought. "We have been working for now a solid decade in the human realm trying to get a product that is really able to be used by people every day, and we're not there yet," Donoghue notes. "Wyss has really quite substantial resources to provide a stable base for projects like that." Donoghue says although the United States is home to many outstanding scientists and neuroscience projects, he is somewhat disappointed that U.S. investment is not as aggressive as that of other nations. "If you look at countries that are investing heavily in industry, education, and science, it's Germany and Switzerland," Donoghue observes. "Now tell me the two strongest economies in the world? Germany and Switzerland."


Abstract News © Copyright 2014 INFORMATION, INC.



