Welcome to the July 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). The service is a resource to help ACM SIGCHI members keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

HEADLINES AT A GLANCE


Science Fiction Come True: Moving a Paralyzed Hand With the Power of Thought
The Washington Post (06/24/14) Jim Tankersley

Researchers at Battelle have developed brain-machine interface technology designed to help paralysis victims move their hands by thought. The process first involves installing an electrode-studded chip into the patient's brain so it can read commands from the brain region that controls hand movement. The chip is wired to a port in the patient's skull, which in turn links to a cable that carries the signals from the chip to a computer. An algorithm then decodes the brain's commands and adds the commands that would normally originate from the spinal cord. The computer connects to an electrode sleeve wrapped around the patient's arm, which fires in a sequence to stimulate muscles and trigger the movements the subject is thinking. Patients prepare to operate the interface through intense practice drilling, a process during which they watch a digital hand move on a screen and think about moving their hand the same way, with their thoughts translated into the movements of a second, animated hand. Doctors and engineers who worked on the project say a recent successful test of the technology has ushered in a new future of bionic enhancement. They say future enhancements could include sleeves that would slide over paralyzed limbs, headbands that would replace implanted brain chips, and cellphones that would function as the central computing device.
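
The summary above describes a pipeline in which a decoding algorithm turns chip readings into a firing sequence for the electrode sleeve. The sketch below is a hypothetical illustration of such a decode-and-stimulate loop, not Battelle's software; the linear decoder, channel counts, and simulated firing rates are all assumptions.

```python
import numpy as np

N_CHANNELS = 96          # assumed number of electrodes on the implanted chip
N_MUSCLE_SITES = 8       # assumed number of stimulation sites in the forearm sleeve

# Assumed linear decoder mapping neural firing rates to muscle activation levels.
# In practice such a mapping would be learned during the practice drills the
# article describes (the patient imagines moving the hand shown on screen).
decoder_weights = np.random.default_rng(0).normal(size=(N_MUSCLE_SITES, N_CHANNELS))

def decode(firing_rates: np.ndarray) -> np.ndarray:
    """Translate one window of neural activity into sleeve activation levels."""
    activations = decoder_weights @ firing_rates
    return np.clip(activations, 0.0, 1.0)      # stimulation intensity in [0, 1]

def stimulate(activations: np.ndarray) -> None:
    """Stand-in for driving the electrode sleeve; here we just print the pattern."""
    pattern = ", ".join(f"site {i}: {a:.2f}" for i, a in enumerate(activations))
    print(pattern)

# Simulated acquisition loop: one short window of firing rates per iteration.
for _ in range(3):
    firing_rates = np.random.default_rng().poisson(lam=5.0, size=N_CHANNELS)
    stimulate(decode(firing_rates.astype(float)))
```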


New Sensors Will Scoop Up 'Big Data' on Chicago
Chicago Tribune (06/23/14) David Heinzmann

The "Array of Things" smart cities project aims to install a system of data-collection sensors throughout Chicago to measure air quality, light intensity, sound volume, heat, precipitation, and wind, as well as counting people by measuring wireless signals on mobile devices. "Our intention is to understand cities better," says Charlie Catlett, director of the Urban Center for Computation and Data, a joint initiative between the University of Chicago and Argonne National Laboratory. "Part of the goal is to make these things essentially a public utility." Catlett says Chicago researchers are hoping the Array of Things will make the city a leader in research about how modern cities operate. The researchers also say the uniqueness of a permanent data collection infrastructure may give Chicago a competitive edge in drawing technological research. Planners foresee a permanent system of data-collection boxes used by researchers who lack the resources to build their own testing infrastructure. Both researchers and city officials are united in their desire to promote "Chicago as a test bed of urban analytical research," notes Chicago commissioner of information and technology Brenna Berman. She also notes the city will have final say on what kind of personal data the system collects because it will be deployed on city property. "We've been extremely sensitive to the security and the privacy of residents' data," Berman says.


Robots Learn From (Even Bad) Human Language
Cornell Chronicle (06/24/14) Bill Steele

Cornell University professor Ashutosh Saxena is leading a research effort to train robots to understand natural-language instructions from various speakers, account for missing data, and adjust to their current environment. Saxena's robot uses a three-dimensional camera to scan its surroundings and identify objects, using computer-vision software developed in Saxena's Robot Learning Lab. The robot has been taught to draw links between objects and their capabilities, so knowing that a pan can be poured into or poured from, for example, enables it to identify the pan, pinpoint the water faucet, and incorporate that data into its procedure. Saxena's team employs machine-learning methods to teach the robot's computer brain to associate whole commands with flexibly characterized actions. Animated video simulations of the action to be performed, along with recorded voice commands from several speakers, are fed to the computer, which stores the combination of numerous similar commands as a flexible template that can match many variations. When these capabilities were tested, the robot correctly followed commands up to 64 percent of the time even when the instructions were varied or the environment was changed, and it was able to fill in missing steps. The researchers also have created a website to develop a crowdsourced library of instructions for the Cornell robots, with plans to scale the library to include millions of examples. "With crowdsourcing at such a scale, robots will learn at a much faster rate," Saxena says.
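
The toy sketch below illustrates the general idea of matching a spoken command to a flexible action template and using object affordances to keep (or fill in) implied steps; the affordance table, templates, and matching rule are invented for illustration and are not the lab's actual representation.

```python
# Commands are matched to action templates, and object affordances let the
# planner retain steps the speaker left out (e.g., filling the pan with water).

AFFORDANCES = {
    "pan":    {"pour-into", "pour-from", "heat"},
    "faucet": {"dispense-water"},
    "stove":  {"heat-source"},
}

# Each template: trigger phrase -> ordered steps of (action, object, required affordance).
TEMPLATES = {
    "boil water": [
        ("fill", "pan", "pour-into"),        # implied step even if unstated
        ("place-on", "stove", "heat-source"),
        ("heat", "pan", "heat"),
    ],
}

def plan(command: str) -> list[tuple[str, str]]:
    """Return (action, object) steps for the best-matching template, keeping
    only steps whose target object actually affords the required action."""
    for trigger, steps in TEMPLATES.items():
        if trigger in command.lower():
            return [(act, obj) for act, obj, need in steps
                    if need in AFFORDANCES.get(obj, set())]
    return []

print(plan("Please boil water for the noodles"))
# -> [('fill', 'pan'), ('place-on', 'stove'), ('heat', 'pan')]
```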


How a Sensor-Filled World Will Change Human Consciousness
Scientific American (07/14) Vol. 311, No. 1, P. 36 Gershon Dublon; Joseph A. Paradiso

Most of the data generated by current network-connected sensors is hidden from view, siloed for use by specific applications. The ubiquitous computing era will only arrive when those silos are removed, enabling sensor data to be used by any network-linked device, write Massachusetts Institute of Technology Media Lab professor Joseph A. Paradiso and Ph.D. student Gershon Dublon. The free availability of ubiquitous sensor data across devices should lead to an explosion of innovation. One likely effect of ubiquitous computing is that environment-embedded sensors will serve as extensions of the human nervous system, so wearable computing devices could be transformed into sensory prosthetics. Obtaining a better understanding of the wearer's state of attention could be key to unlocking such prosthetics' potential. Media Lab researchers are performing experiments to determine whether wearable computers can tie into the brain's innate ability to concentrate on tasks while maintaining a preattentive link to the environment. Researchers also are focused on translating sensor data into the language of human perception via innovations such as DoppelLab sensor-browsing software. DoppelLab renders streams of sensor data into graphic form, overlaying them on a computer-aided design model of a building environment; the app also collects sounds through microphones to produce a visual sonic environment. Sensors and computers could enable virtual travel to remote environments, which would profoundly change people's concepts of physical presence and privacy.
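
The following minimal sketch illustrates the kind of mapping DoppelLab-style software performs, attaching each sensor's latest reading to a position in a building model and converting it to a visual property; the room coordinates, value ranges, and color scale are assumptions, not the actual DoppelLab pipeline.

```python
# Each sensor has a position in the building model, and its latest reading is
# mapped to a visual property (here, a color) for overlay on that model.

SENSORS = {
    "lab-101-temp": {"pos": (4.0, 2.5, 1.2), "value": 22.8},   # deg C
    "lobby-temp":   {"pos": (0.0, 0.0, 1.2), "value": 27.4},
    "roof-temp":    {"pos": (2.0, 1.0, 9.0), "value": 31.0},
}

def temp_to_color(celsius: float, lo: float = 18.0, hi: float = 32.0) -> tuple[int, int, int]:
    """Map a temperature to an RGB color from blue (cool) to red (warm)."""
    t = max(0.0, min(1.0, (celsius - lo) / (hi - lo)))
    return (int(255 * t), 0, int(255 * (1 - t)))

overlay = [
    {"sensor": name, "pos": s["pos"], "rgb": temp_to_color(s["value"])}
    for name, s in SENSORS.items()
]
for item in overlay:
    print(item)
```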


Woman or Machine? New Robots Look Creepily Human
Associated Press (06/24/14) Yuri Kageyama

Remote-controlled robots that closely resemble humans have been developed by Osaka University professor Hiroshi Ishiguro and installed as guides at a Tokyo museum. Ishiguro says he created the robots, which stay seated but can move their hands, to study human-robot interaction and the differences between people and machines. "Making androids is about exploring what it means to be human, examining the question of what is emotion, what is awareness, what is thinking," Ishiguro says. In a demonstration, the robots moved their pink lips in time to a voice-over, twitched their eyebrows, blinked, and moved their heads from side to side. The Kodomoroid device is designed to look like a girl, and it has a repertoire of voices to express emotions, while speech can be input by text to give the machine flawless articulation, Ishiguro notes. Kodomoroid has a companion robot announcer called Otonaroid that closely resembles a woman, while a third robot, Telenoid, is more cuddly, with pointed arms and a hairless head. Ishiguro has specialized in creating robots that approximate the human appearance, and has sent them to deliver lectures abroad. He says robots could soon become a part of everyday life. "Robots are now becoming affordable--no different from owning a laptop," Ishiguro says.


Obama Appoints Indian American Professor to National Science Board
India West (06/16/14)

U.S. President Barack Obama has appointed Arizona State University (ASU) professor Sethuraman Panchanathan to the National Science Board, the governing board of the U.S. National Science Foundation. "This is a fantastic opportunity to help our nation be in the vanguard of global competitiveness through the rapid advancement of science, technology, entrepreneurship, and innovation," Panchanathan says. Panchanathan, an Indian American, is senior vice president in ASU's office of Knowledge Enterprise Development, and has been a founding director of the Center for Cognitive Ubiquitous Computing for the last 13 years. He also founded ASU's Department of Biomedical Informatics in 2005 and its School of Computing and Informatics in 2006. Previously, he was the founding director of the University of Ottawa's Visual Computing and Communications Laboratory. The author of more than 400 papers, Panchanathan has research interests that include human-centered multimedia computing, the design of ubiquitous computing environments that improve the quality of life for disabled individuals, health informatics, and haptic user interfaces. "[Panchanathan] has worked tirelessly in advancing Arizona State and its rapidly growing research enterprise, promoting our unique capabilities and what we offer businesses and government agencies, and leading the way to a greater public understanding of the benefits that scientific research and technology development have to offer," says ASU president Michael Crow.


Wearable Computing Gloves Can Teach Braille, Even If You're Not Paying Attention
Georgia Tech News Center (06/23/14) Matt Nagel

Georgia Institute of Technology (Georgia Tech) researchers have developed a passive haptic learning interface that can help people learn to read and write Braille without paying attention, building on an earlier interface that teaches beginners to play piano melodies. "We've learned that people can acquire motor skills through vibrations without devoting active attention to their hands," says Georgia Tech professor Thad Starner. He and Ph.D. student Caitlyn Seim performed a study in which each participant wore gloves with motors stitched into the knuckles. The motors vibrated in a sequence corresponding to the typing pattern of a predetermined phrase in Braille, and audio cues let users know which Braille letters that sequence produces. Starner says that in tests in which the sequences were repeated while participants were playing a game, those who felt the vibrations during the game were more accurate when they attempted to type the phrase afterward without wearing the gloves. "Remarkably, we found that people could transfer knowledge learned from typing Braille to reading Braille," Seim says. "After the typing test, passive learners were able to read and recognize more than 70 percent of the phrase's letters." The Braille studies will be presented in September at the 18th International Symposium on Wearable Computers, which is collocated with ACM's International Joint Conference on Pervasive and Ubiquitous Computing.
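
A minimal sketch of the passive-learning idea follows: each letter triggers an audio cue and a vibration pattern on the fingers that would press its Braille dots on a Perkins-style keyboard. The letter table is deliberately partial, and print() stands in for the glove's motor driver and audio output; none of this is the Georgia Tech implementation.

```python
import time

# Perkins-style Braille typing uses six keys, one per finger: dots 1-3 under
# the left index/middle/ring fingers and dots 4-6 under the right. The table
# below covers only a few letters for illustration.
BRAILLE_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "l": {1, 2, 3}}
DOT_TO_FINGER = {1: "L-index", 2: "L-middle", 3: "L-ring",
                 4: "R-index", 5: "R-middle", 6: "R-ring"}

def play_phrase(phrase: str, pause_s: float = 0.2) -> None:
    """Announce each letter (the audio cue) and 'vibrate' the motors for the
    fingers that would press its Braille dots; print() stands in for hardware."""
    for letter in phrase:
        dots = BRAILLE_DOTS.get(letter)
        if dots is None:
            continue
        fingers = ", ".join(DOT_TO_FINGER[d] for d in sorted(dots))
        print(f"audio cue: '{letter}'  |  vibrate: {fingers}")
        time.sleep(pause_s)

play_phrase("bal")   # a short demo: the letters 'b', 'a', 'l'
```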


Antonio Camurri--Better Understanding Non-Verbal Communication
Youris.com (06/16/14) Luca Tancredi Barone

University of Genoa human-computer interaction professor Antonio Camurri thinks it is possible to quantitatively assess the brain mechanisms underpinning creative social communication and co-creativity. In an interview, he discusses how combining science and the performing arts can improve understanding of brain function. "We aimed at measuring the non-verbal expression of emotions and of social signals, such as leadership, co-creation, cohesion, or the entrainment in a small group of people," Camurri says. "To do so, we looked at multimodal signals including audio signals, movement of performers' bodies, and physiological signals such as respiration, heartbeat, or muscle tension." By studying ensemble music performance and audience experience, Camurri's research team was able, for example, to identify features that reveal who the leader of a string quartet is, and to measure whether the group is playing in tune and how its entrainment affects the audience. Camurri envisions many applications for this line of research, such as enhancing negotiations by evaluating how much a negotiating group is converging and who is impeding an agreement. Another area being researched, through a European Union-funded project called ASC-INCLUSION, is rehabilitation therapies based on games designed to offer more than pure entertainment. Camurri says the project aims to help autistic children recognize emotions displayed in videos.


Picture Books for Visually Impaired Kids Go 3D Thanks to CU-Boulder Research Team
CU-Boulder News Center (06/23/14) Jim Scott

University of Colorado (CU) Boulder scientists have printed a three-dimensional (3D) version of the children's classic "Goodnight Moon" so visually impaired kids can use their sense of touch to feel objects in the pictures as the story is read aloud. CU-Boulder professor Tom Yeh says although the idea of tactile picture books is not unique, "what is new is making 3D printing more accessible and interactive so parents and teachers of visually-impaired children can customize and print these kinds of picture books in 3D." Yeh says the concept is to represent two-dimensional graphics in a tactile way on a scale appropriate for young children's cognitive capacities and interests. The researchers integrate this information with computational algorithms to support an interface that enables parents, teachers, and advocates to print their own tailored picture books on 3D printers. "We are investigating the scientific, technical, and human issues that must be addressed before this vision can be fully realized," Yeh notes. The Tactile Picture Books Project got a boost last year with an outreach grant from the university, which has helped the researchers work with Denver's Anchor Center for Blind Children to better understand the needs of visually impaired toddlers and how parents can engage them in reading. Yeh made a presentation on the subject at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto.
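
As a toy illustration of one way a 2D picture could be turned into a tactile relief, the sketch below raises dark pixels above a flat base to printable heights; the threshold, heights, and output format are assumptions, and the CU-Boulder pipeline is more sophisticated than this.

```python
import numpy as np

# Treat darker pixels as raised regions and scale them to a printable height.
picture = np.array([            # 0 = white background, 1 = dark ink
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
], dtype=float)

BASE_MM = 2.0      # flat base so the printed page holds together
RELIEF_MM = 1.5    # how far raised features stand above the base

heightmap_mm = BASE_MM + RELIEF_MM * picture   # one height per pixel, in mm
print(heightmap_mm)
# A printable mesh could then be generated by extruding each cell of the
# heightmap into a rectangular column (e.g., with a library such as numpy-stl).
```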


Wearable Self-Tracking Tool Listens for Yawns, Coughs, and Munches
Technology Review (06/19/14) David Talbot

Researchers at Cornell University have built a wearable sensor system that detects body noises other than speech, such as coughing, laughter, grunting, tooth grinding, and hard breathing. Such sounds can provide clues about mood and health. The system consists of a microphone that attaches behind the ear, picks up sound waves transmitted through the skull, and can detect subtle clues about the activity or emotional state of the user. The wearable sensor could provide a reliable way to track food consumption in an automated way, says research leader Tanzeem Choudhury. "This can reliably detect the onset of eating and how frequently are you eating," Choudhury says. She notes the sensor technology could even measure fitness and health in a city if it is built into smartphones and enough people use it. "This could be a bridge between tracking pollution and coughing and other respiratory sounds to get a better measure of how pollution is affecting the population," Choudhury says. The researchers say the system also could be combined with other methods of ambient sensing in smartphones and could be built into the frame of a device such as Google Glass.
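
The sketch below illustrates the general sensing idea: short audio windows from a skull-contact microphone are reduced to a few features and matched to known sound classes. The features, classes, and nearest-centroid rule are assumptions for illustration, not the Cornell system's actual model.

```python
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    """Reduce one audio window to two simple features: energy and the rate of
    zero crossings (a rough proxy for how noisy or broadband the sound is)."""
    energy = float(np.mean(window ** 2))
    zero_crossings = float(np.mean(np.abs(np.diff(np.sign(window))) > 0))
    return np.array([energy, zero_crossings])

# Hypothetical class centroids, e.g. averaged features of labeled windows.
CENTROIDS = {
    "chewing":  np.array([0.02, 0.10]),    # quiet, low-frequency content
    "coughing": np.array([0.30, 0.45]),    # loud burst, broadband
    "silence":  np.array([0.001, 0.02]),
}

def classify(window: np.ndarray) -> str:
    f = features(window)
    return min(CENTROIDS, key=lambda label: np.linalg.norm(f - CENTROIDS[label]))

rng = np.random.default_rng(1)
cough_like = 0.6 * rng.standard_normal(1600)   # loud, noisy burst
print(classify(cough_like))                    # likely 'coughing'
```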


Intel Ventures Into 3D Mobile Chat App That Tracks Faces, Moods
IDG News Service (06/19/14) Agam Shah

Intel's Pocket Avatars mobile chat app is designed to enable animated three-dimensional avatars to reflect the user's expressions and emotions via face-tracking technology employed in conjunction with a handheld's camera. The camera captures moving faces, lighting conditions, and various emotional cues such as smiles, eye-blinks, or kisses. An algorithm processes the recordings and plots them out on the avatar in real time, and then the app deletes the information. "We are not storing databases with people's faces," says Intel's Mike Bell. "We're not selling ads or information. This is about social messaging, using an animated avatar. It's not mining [data]." The avatar functions as a surrogate for those who do not want to put their real face on screen, according to Bell. Pocket Avatars is freely downloadable and is equipped with 40 avatars. Bell says the Pocket Avatars app signals Intel's willingness to experiment and tinker in new areas and markets and is an outgrowth of the company's research into how technology can impact health and human behavior. "It's just a fun thing on top of standard messaging," he says. Bell notes future plans include adding more avatars and capabilities to the app.
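
A toy sketch of the per-frame flow described above: expression cues are extracted from a camera frame, applied to the avatar, and then discarded rather than stored. The cue names and placeholder detector are invented; this is not Intel's face-tracking implementation.

```python
from dataclasses import dataclass

@dataclass
class ExpressionCues:
    smile: float        # 0..1
    eye_blink: float    # 0..1
    head_yaw_deg: float

def detect_cues(frame: bytes) -> ExpressionCues:
    """Placeholder detector; a real one would run face tracking on the frame."""
    return ExpressionCues(smile=0.8, eye_blink=0.0, head_yaw_deg=-5.0)

def apply_to_avatar(cues: ExpressionCues) -> None:
    """Stand-in for driving the animated avatar with the detected cues."""
    print(f"avatar: smile={cues.smile:.1f} blink={cues.eye_blink:.1f} "
          f"yaw={cues.head_yaw_deg:.0f} deg")

def process_frame(frame: bytes) -> None:
    cues = detect_cues(frame)
    apply_to_avatar(cues)
    del frame, cues          # nothing is retained after the frame is rendered

process_frame(b"\x00" * 1024)   # stand-in for one camera frame
```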


Studying Human-Computer Interaction at Microsoft Research
Science (06/16/14) John Bohannon

In an interview, Microsoft Research cognitive psychologist Mary Czerwinski says she manages a group researching future applications of human-computer interaction. "We are logging and sensing what you do and how you react to computer/social interactions and anticipating your needs based on your context and past behaviors," she says. The goal is to make human-computer interaction more positive, productive, and enjoyable, which requires "a computer system that is more emotionally sentient but that also remembers how you prefer to interact with a system," Czerwinski says. She says another focus of her team is visualizing data in a manner that is easier to use and interpret so personally relevant patterns and trends can be understood by users. "When I do basic research, we ask questions, for example, about the benefits of doing visual search in [three dimensions] using particular kinds of distortion techniques or using specially designed input devices, and compare those innovations to existing practice," Czerwinski says. "In those research studies, there are no products in mind, just the human-computer interface research questions." Czerwinski emphasizes the value of social scientists learning coding, as it helps them communicate with engineers and makes them more efficient at project collaboration. "And ultimately, if you have some coding skills, then you can fall back on that and do prototyping of projects yourself if you need to," she says.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.



