Welcome to the December 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in more than 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Brain Zaps Could Boost Our Minds When Computers See Us Flagging
New Scientist (11/18/15) Aviva Rutkin

Transcranial direct current stimulation (tDCS) is being researched at Tufts University in a project investigating how computers and wearable devices could interpret brain signals and stimulate the brain in certain ways. Using functional near infrared spectroscopy (fNIRS) to read oxygen levels in the brain, Tufts professor Robert Jacob's lab is obtaining data about neural activity, enabling a computer to monitor and adjust to a human subject's cognitive state. The fNIRS technique is being used in conjunction with machine-learning algorithms to calibrate sensors on devices for users' brains, letting the device know whether the user is concentrating hard or idling. Jacob says the next step is using tDCS to give the brain a boost for the task at hand. The method involves running current from a battery into the brain via wet electrodes to change the excitability of some neurons, making them more or less likely to fire. The project's first test for tDCS could involve simulated flying drones, in which cognitive stimulation is used to adapt participants' brains to the task. Some are skeptical of tDCS' potential, with University of Melbourne researchers noting they found inconsistency in the reported benefits of the technique, which include better memory retention, greater accuracy, and faster task completion.
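
The calibration step described above amounts to training a classifier on labeled windows of brain-signal data. The sketch below illustrates that idea in miniature, assuming scikit-learn, synthetic data, and invented feature summaries; it is not the Tufts implementation.

    # Hypothetical sketch (not the Tufts system): calibrating a simple
    # classifier that labels fNIRS windows as "concentrating" vs. "idling".
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def window_features(oxy, deoxy):
        # Summary statistics for one time window of oxy-/deoxy-hemoglobin signals.
        return [oxy.mean(), oxy.std(), deoxy.mean(), deoxy.std()]

    rng = np.random.default_rng(0)
    # Synthetic calibration data: higher mean oxygenation stands in for "hard task".
    idle = [window_features(rng.normal(0.0, 1, 50), rng.normal(0, 1, 50)) for _ in range(40)]
    busy = [window_features(rng.normal(0.8, 1, 50), rng.normal(0, 1, 50)) for _ in range(40)]
    X = np.array(idle + busy)
    y = np.array([0] * 40 + [1] * 40)

    clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)

    # At run time the device would score the latest window and could, for example,
    # decide whether to trigger stimulation depending on the estimated workload.
    print(clf.predict_proba([window_features(rng.normal(0.8, 1, 50), rng.normal(0, 1, 50))]))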


A Robotic Tabletop Makes Simple Structures All by Itself
Technology Review (11/18/15) Will Knight

Massachusetts Institute of Technology Media Lab researchers led by Stanford University professor Sean Follmer have developed a robotic tabletop that uses mechanically controlled square pins to configure objects placed on its surface. The device recently demonstrated an ability to manipulate specially designed blocks. Triggering the pins at the proper speed can cause the blocks to flip over, or even hop on top of one another. To build more complex structures, a three-dimensional printer was used to create magnetic blocks that lock together. In addition, "kinematic blocks" with buttons or knobs on top that could be used to control the pins underneath were developed, enabling a user to control the movements of another block by pressing the button or turning the knob on a control block. Follmer thinks the technology could have use in industrial production, such as "a kind of conveyor belt where you can manipulate things directly but also in conjunction with other robotic arms." He envisions less expensive robotic hardware enabling more complex physical interfaces similar to the tabletop. The research was presented at the recent ACM Symposium on User Interface Software and Technology (UIST 2015) in Charlotte, NC.


Shape-Changing LineFORM May Belong to Interface Future
Tech Xplore (11/11/15) Nancy Owano

The Massachusetts Institute of Technology Media Lab's Tangible Media Group has developed LineFORM, a linear shape-shifting user interface. "Lines have several interesting characteristics from the perspective of interaction design: abstractness of data representation; a variety of inherent interactions/affordances; and constraints as boundaries or borderlines," note LineFORM's developers. "By utilizing such aspects of lines together with the added capability of shape-shifting, we present various applications in different scenarios." The serpentine device employs a linear series of actuators that can move independently or collectively to reconfigure itself into new shapes. LineFORM's repertoire of functions includes body constraints and data manipulation, and the prototype also can reshape itself into telephone mode, with a smart wristband giving the user haptic feedback. Coupling the device with flexible displays is being investigated, and the researchers envision the concept as "next-generation mobile devices," which could be used to display complex information, provide affordances on demand for different tasks, and constrain user engagement. "A relatively small number of actuators can be used to achieve an expressive display, and these systems may be easier to prototype than other form factors of high-resolution shape display," the researchers report. LineFORM was presented at the recent ACM Symposium on User Interface Software and Technology (UIST 2015) in Charlotte, NC.


Eyes Off: The 3D Printed Cape That Warns You When You're Being Watched
CNN (11/18/15) Allyssia Alleyne

University of Southern California Ph.D. candidate Behnaz Farahi's latest project, Caress of the Gaze, melds apparel with three-dimensional printing to create an animatronic garment that detects when and where the wearer is being stared at, triggering movement and morphing. The garment is equipped with a minuscule camera lens that detects a watcher's stare, while an algorithm maps the precise spot where the person is gazing; spines affixed to that spot then become rigid and sway. "The idea for this project was really to create a garment that becomes an extension of our actual skin," Farahi says. She notes the garment's morphology was modeled after fish and snake scales, while its movement used goose bumps as an inspiration. Looking ahead, Farahi would like to make the garment capable of identifying the gazer's age or gender via the incorporation of recognition technology. "How can our clothing or fashion items become an interface with the world around us?" she asks. "What kind of scenarios are there for the future of fashion? We need to think about how this sort of technology is changing our notions of our bodies and our notions of ourselves."
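
One way to picture the garment's control loop is a routine that takes the estimated gaze point on the garment and drives only the spines near it. The sketch below is purely illustrative; the spine coordinates and actuator interface are invented, not taken from Farahi's design.

    # Illustrative sketch only: mapping an estimated gaze point on the garment
    # to the nearest actuated spines. Coordinates and hardware calls are invented.
    from dataclasses import dataclass
    import math

    @dataclass
    class Spine:
        x: float
        y: float        # position on the garment surface, in cm
        channel: int    # hypothetical actuator driver channel

    SPINES = [Spine(x, y, i) for i, (x, y) in enumerate([(0, 0), (5, 2), (10, 4), (4, 9)])]

    def actuate(channel: int, intensity: float) -> None:
        print(f"driving spine on channel {channel} at {intensity:.2f}")  # stand-in for hardware I/O

    def respond_to_gaze(gaze_x: float, gaze_y: float, radius: float = 6.0) -> None:
        # Stiffen and sway only the spines near where the onlooker is estimated to be looking.
        for s in SPINES:
            d = math.hypot(s.x - gaze_x, s.y - gaze_y)
            if d < radius:
                actuate(s.channel, 1.0 - d / radius)

    respond_to_gaze(4.0, 3.0)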


These Powerful Smart Glasses Could Help the Blind 'See'
Quartz (11/12/15) Akshat Rathi

California Institute of Technology researchers Noelle Stiles and Shinsuke Shimojo are exploring the practical applications of echolocation for visually impaired people with a pair of glasses they have built that renders images as sounds. The researchers first had sighted volunteers match images of natural textures with the most appealing sound, while blind volunteers felt the textures and chose a matching sound. That data was entered into an algorithm to produce an intuitive video-to-sound conversion. When the device was used by visually impaired people unfamiliar with the concept, their shape-to-sound matching performance was about equal to that of people who had been trained on the device--approximately 33 percent better than chance alone. A control group had more difficulty with the task when the researchers reversed the algorithm. In the smart glasses' current iteration, sound is fed to users via headphones, which might block out ambient noises that can help the blind. Bone conduction, in which converted video is fed directly to the inner ear through the skull, could circumvent this problem. Stiles and Shimojo say they hope the device will be very beneficial once it is perfected.
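
The general idea of a video-to-sound conversion can be illustrated with a simple hand-written mapping, for example scanning an image left to right while mapping vertical position to pitch and brightness to loudness. The sketch below shows that generic approach; it is not the Caltech mapping, which was derived from the volunteers' texture-to-sound pairings.

    # Generic illustration of an image-to-sound mapping, not the Caltech algorithm:
    # scan the image column by column, map row position to pitch and pixel
    # brightness to loudness, and sum the resulting sine tones.
    import numpy as np

    def image_to_audio(img, duration=1.0, sr=22050, f_lo=200.0, f_hi=4000.0):
        # img: 2D array with values in [0, 1], row 0 at the top. Returns a mono waveform.
        rows, cols = img.shape
        samples_per_col = int(duration * sr / cols)
        freqs = np.geomspace(f_hi, f_lo, rows)          # top of the image -> higher pitch
        audio = []
        for c in range(cols):
            t = np.arange(samples_per_col) / sr
            tones = img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
            audio.append(tones.sum(axis=0))
        out = np.concatenate(audio)
        return out / (np.abs(out).max() + 1e-9)

    # A bright diagonal stripe produces a falling sweep when played back.
    demo = np.eye(32)
    waveform = image_to_audio(demo)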


New Tech Helps Handlers Monitor Health, Well-Being of Guide Dogs
NCSU News (11/16/15) Matt Shipman

North Carolina State University (NCSU) researchers have developed a device designed to enable visually impaired people to monitor the health and well-being of their guide dogs. The device consists of a harness handle equipped with a pair of motors that vibrate in time with the dog's respiration and heartbeat. "We wanted to use electronic signals that intuitively make sense for the dog handlers," says NCSU professor David Roberts. The motors' vibrations speed up in time with the dog's increasing heartbeat and breathing, which can signal to the handler the dog may be in distress. The prototype device has been tested with simulated heart rate and respiratory data, and was found to be effective at accurately communicating information to users. "We're refreshing the design and plan to do additional testing with guide-dog handlers," Roberts notes. "Our ultimate goal is to provide technology that can help both guide dogs and their people. That won't be in the immediate future, but we're optimistic that we'll get there." The research was presented at the recent Second International Congress on Animal Computer Interaction in Johor, Malaysia.
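
Conceptually, the handle maps two physiological rates to two pulse trains the handler can feel. The sketch below illustrates that mapping with an invented motor interface and fixed example rates; a real system would update the rates continuously from the dog's sensor data, as the NCSU prototype does with simulated signals.

    # Minimal sketch, with an invented motor interface: pulsing two vibration
    # motors in time with heart rate and respiration rate.
    import asyncio

    def pulse_motor(name: str) -> None:
        print(f"pulse {name}")        # stand-in for driving the actual vibration motor

    async def beat(name: str, rate_bpm: float, stop: asyncio.Event) -> None:
        while not stop.is_set():
            pulse_motor(name)
            await asyncio.sleep(60.0 / rate_bpm)   # faster rate -> faster pulses in the handle

    async def main() -> None:
        stop = asyncio.Event()
        # Example rates; rising heart rate would be felt as quicker pulses.
        tasks = [asyncio.create_task(beat("heart", 90, stop)),
                 asyncio.create_task(beat("breath", 24, stop))]
        await asyncio.sleep(5)
        stop.set()
        await asyncio.gather(*tasks)

    asyncio.run(main())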


Take a Jab at a Concept Device That Serves Virtual Punches You Can Feel
PSFK (11/16/15) Leo Lutero

Researchers at the Hasso Plattner Institute's Human-Computer Interaction (HCI) lab have developed Impacto, a prototype virtual-reality (VR) system that can simulate physical impact. The system features a band, worn on the arm, leg, or foot of a VR user, that integrates haptic feedback with electrode-triggered muscle stimulation. In combination with a VR headset, the researchers say Impacto creates a more immersive VR boxing game experience by supporting the illusion of hitting and being hit. They also envision it being used in a game in which the player bounces a soccer ball on their foot and feels each strike, or in baseball games in which the device mimics the sensation of the ball hitting a bat. "The key idea that allows the small and light Impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation," say the researchers in a paper describing the Impacto technology. "The device is self-contained, wireless, and small enough for wearable use, thus leaves the user unencumbered and able to walk around freely in a virtual environment."
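
The decomposition the researchers describe can be read as a two-part actuation routine: a brief solenoid tap for the tactile cue plus an electrical-muscle-stimulation burst for the force cue. The sketch below illustrates that split with invented device calls; it is not the Impacto firmware.

    # Sketch of the decomposition idea quoted above, with placeholder device calls.
    import time

    def solenoid_tap(duration_ms: int) -> None:
        print(f"solenoid tap for {duration_ms} ms")             # placeholder for hardware control

    def ems_burst(intensity: float, duration_ms: int) -> None:
        print(f"EMS at {intensity:.1f} for {duration_ms} ms")   # placeholder for hardware control

    def render_hit(strength: float) -> None:
        # strength in [0, 1], e.g. reported by the physics engine of the boxing game.
        solenoid_tap(20)                      # tactile "being touched" component
        ems_burst(min(1.0, strength), 150)    # counter-movement that pushes the arm back
        time.sleep(0.15)                      # keep the two cues aligned with the visuals

    render_hit(0.7)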


Self-Calibration Enhances BrainGate Ease, Reliability
News from Brown (11/11/15) David Orenstein

The BrainGate intracortical brain-computer interface (BCI) can now recalibrate itself instead of depending on its users, thanks to several software advances that enhance its ability to decode users' movement intentions from their neural signals. These signals inevitably change, causing BCI performance to decline until the decoder can be recalibrated, usually via a special task in which the user attempts to move a cursor to prescribed targets so movement intentions can be mapped to the new neural activity patterns. The decoder upgrades include "retrospective target inference," a method in which the decoder analyzes the user's recent target selections to update its interpretation of the neural signals that generated them. The decoder also monitors the baseline levels of neural activity in the motor cortex during moments when users pause, so the system can start with better calibration when the user decides to reactivate it. The third upgrade involves tracking emergent biases in the velocity of cursor movement and subtracting them from the decoded movements as users employ the system; this supports greater accuracy and a more intuitive user experience. "Eliminating the need to run a calibration task whenever the recorded signals change will make a clinical BCI more user friendly and easy to use," says Brown University professor Beata Jarosiewicz. The upgraded decoder could find use in other BCI tasks, such as three-dimensional control of a robot arm or operation of an electronically reanimated human limb.
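
The third upgrade, velocity bias subtraction, can be illustrated with a slow running estimate of systematic drift that is removed from each decoded velocity before the cursor moves. The sketch below shows one such scheme, an exponentially weighted average; the BrainGate decoder's actual estimator may differ.

    # Illustrative bias-correction sketch (not the BrainGate decoder): keep a slow
    # running estimate of systematic drift in the decoded cursor velocity and
    # subtract it before moving the cursor.
    import numpy as np

    class BiasCorrector:
        def __init__(self, alpha: float = 0.001):
            self.alpha = alpha            # how slowly the bias estimate adapts
            self.bias = np.zeros(2)

        def correct(self, decoded_velocity: np.ndarray) -> np.ndarray:
            # Over many movements, intended directions average out and the bias remains.
            self.bias = (1 - self.alpha) * self.bias + self.alpha * decoded_velocity
            return decoded_velocity - self.bias

    corrector = BiasCorrector()
    rng = np.random.default_rng(1)
    for _ in range(5000):
        v = rng.normal(size=2) + np.array([0.3, -0.1])   # simulated drifting decoder output
        v_corrected = corrector.correct(v)
    print(np.round(corrector.bias, 2))                    # approaches the injected drift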


UC-San Diego's New Robotics Institute Aims to Help People Age at Home
California Healthline (11/16/15) Lisa Zamosky

The University of California, San Diego (UCSD) launched its Contextual Robotics Institute this month to develop robotic technology with artificial intelligence for the purpose of helping the U.S.'s growing elderly population "age in place." UCSD professor Rajesh Gupta describes the institute's work as drawing heavily on the cognitive sciences to engineer robots that can read emotions and respond to people in more human-like ways. The multidisciplinary institute will convene engineering, computer, and social sciences experts to develop robots that Gupta says will be capable of recognizing their surroundings, understanding the context of a situation, and synthesizing the information to take appropriate action. "The robot has to be able to sense things, not necessarily be told to do everything" in order to function in a home environment, Gupta says. "When it comes to interaction with humans, most robotic machines are too stiff or too autistic. They don't really make a distinction between what you're thinking or feeling." Boomer Health Tech Watch founder Laurie Orlov says the wide adoption of robotic innovations depends on their price, functionality, and service. Meanwhile, Aging 2.0 founder Stephen Johnston cites several aging-in-place technologies, such as sensors for monitoring eating habits, sleep, and wake times, that could benefit elderly people with dementia or obesity.


UW Researchers Develop Hyperspectral Camera in Conjunction With Microsoft
The Daily (11/18/15) Arunabh Satpathy

A joint project between the University of Washington's (UW) Ubiquitous Computing Lab and Microsoft has yielded the HyperCam, a hyperspectral camera whose possible applications include biometric scanning, medical uses, and analyzing fruits and vegetables. The HyperCam can visualize wavelengths outside the range of human vision and can make sense of the images it captures using special software. UW Ph.D. student Mayank Goel helped develop the software, some of which was written by Microsoft researcher Neel Joshi using code and an application programming interface from hardware designer Marcel Gavriliu. "Mayank built his image-processing algorithms on top of this," Joshi notes. The camera uses 17 different light-emitting diodes to provide illumination for short periods of time while taking pictures. "The software...sees a scene and it tries to figure out 'what is interesting here? What could be something here that the user would like to see?'" Goel says. He says taking pictures of fruits with the HyperCam reveals whether they are ripening, while shooting images of someone's hand can show a change in color when blood flow is constricted. UW Ph.D. student Alex Mariakakis says HyperCam enables different features of a person's hand to be picked out, which has potential for biometric security.
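
The capture cycle described, one frame per LED stacked into a multi-band image, can be sketched as below. The camera and LED-driver calls are placeholders, not the HyperCam API.

    # Sketch of the capture loop described above, with invented camera and LED
    # driver calls: light each of the 17 LEDs in turn, grab a frame under that
    # illumination, and stack the frames into one multi-band image.
    import numpy as np

    NUM_LEDS = 17
    HEIGHT, WIDTH = 480, 640

    def set_led(index: int, on: bool) -> None:
        pass                                   # placeholder for the LED driver

    def grab_frame() -> np.ndarray:
        return np.random.rand(HEIGHT, WIDTH)   # placeholder for the camera read

    def capture_cube() -> np.ndarray:
        bands = []
        for i in range(NUM_LEDS):
            set_led(i, True)
            bands.append(grab_frame())         # one grayscale frame per illumination band
            set_led(i, False)
        return np.stack(bands, axis=-1)        # shape (H, W, 17)

    cube = capture_cube()
    # Downstream software can then score each band for how much it differentiates
    # the scene, e.g. picking bands where a bruise or a vein stands out.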


4 Ways Humans Will Live and Collaborate With Robots
TechRepublic (11/12/15) Erin Carson

The recent Next: Economy Event held in San Francisco focused on the evolving relationship between humans and robots, and how they will coexist and collaborate in the future. Udacity CEO Sebastian Thrun discussed driverless cars enabled by machine learning, noting they could extensively reduce traffic injuries and fatalities because "in robot land, once a car makes a mistake, no other car will make that mistake." Viv's Adam Cheyer and M's Alexandre Lebrun discussed intelligent personal assistants, which Cheyer envisions as cloud-based and interacting with the world's websites and Web services to answer complex queries. The New York Times journalist John Markoff and Stanford University's Jerry Kaplan joined the panel, where Markoff said he noticed a lack of communication between researchers exploring fully autonomous artificial intelligence (AI) systems and scientists working on AI for human enhancement. In relation to theories that many U.S. jobs are threatened by automation, Kaplan stressed the issue is not about replacing people with automation, but automating tasks that do not require human labor. Narrative Science's Kristian Hammond added to the discussion, noting AI will not make journalists obsolete as some people fear, but will instead help address a shortage of data analysts. "Data science was the sexiest job of the 21st century, now it's the next job we're going to automate," he predicted.


Using Mobile Devices to Augment Reality Can Enhance Creative Play and Exploration
EurekAlert (11/09/15) Jennifer Liu

Disney Research scientists say their Augmented Creativity concept merges immersive but passive digital media with physical interaction and provides a way to enhance a child's play and exploration. The researchers have devised several prototype applications to demonstrate the concept. "Our research brings the seamless fusion of the real and virtual world together with an intelligent and creative gameplay," says Disney Research's Markus Gross. "We believe that these concepts offer exciting virtual enhancements over real-world interactions." Principal research scientist Robert W. Sumner describes an augmented-reality app that enables children to explore music styles and arrangements by adding, removing, and reconfiguring cards that represent different instruments and styles. Sumner says another app lets users tailor three-dimensional animated characters by coloring them as they would in a coloring book. The researchers also developed a prototype multi-player game in which players use a mobile tablet to track virtual objects as they move around and converse to foil alien invaders. A fourth app is a city-wide gaming architecture enabling the creation of outdoor games in which players search for interactive elements superimposed on landmarks. In addition, the researchers developed a framework to help users write interactive narratives. Another app can help teach robotic programming to children "by making the dynamics of program execution visible," Sumner says.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.



