Welcome to the July 2013 SIGCHI edition of ACM TechNews.


ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

HEADLINES AT A GLANCE


Matrix-Like Human Computer Interface Circa 2045?
The Hindu (India) (06/21/13)

Some futurists participating in the recent Global Future 2045 International Congress were convinced that technology enabling humans to upload their brains into computers could be available as soon as 2045. Google's Ray Kurzweil predicted that by that time, "based on conservative estimates of the amount of computation you need to functionally simulate a human brain, we'll be able to expand the scope of our intelligence a billion-fold." Meanwhile, there already have been significant accomplishments in the field of brain-computer interfaces (BCIs), and Jose Carmena and Michel Maharbiz at the University of California, Berkeley are striving to create cutting-edge motor BCIs that would enable a user to move a computer cursor, prosthetic limb, or other device by thought. The BCIs would achieve this through the use of pill-size electrode arrays that record neural signals from the brain's motor area, which are then decoded by a computer. The University of Southern California's Theodore Berger is working on an even more advanced BCI designed to replace part of the brain's hippocampus to facilitate longer memory retention. Also presenting at the congress was entrepreneur Martine Rothblatt, who discussed the concept of immortal digital versions of humans called "mind clones." She says the mind clones would be generated from an online personality repository or "mindfile" that would be run on "mindware," or software for consciousness.


Microsoft Develops 3D Touchscreen With Tactile Feedback
BBC News (07/02/13) Leo Kelion

Microsoft has developed a touchscreen that shows three-dimensional images that can be felt and manipulated. The device combines a liquid-crystal display panel with force sensors and a robotic arm that moves it back and forth. The company says controlling the level of resistance to a user's fingertip enables it to simulate the shape and weight of objects shown on screen. A computer adjusts the size and perspective of the on-screen graphics to create a 3D effect. "As your finger pushes on the touchscreen and the senses merge with stereo vision, if we do the convergence correctly and update the visuals constantly so that they correspond to your finger's depth perception, this is enough for your brain to accept the virtual world as real," says senior researcher Michel Pahud. In demonstrations, users felt the shape of objects such as a virtual cup or ball while wearing special glasses for the stereo-vision effect. Microsoft says the technology has potential uses in medicine, such as for touching and finding tumors, as well as in gaming.
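The resistance control described above can be pictured as a simple spring model. The sketch below is illustrative only, not Microsoft's implementation: the actuator pushes back against the finger with a force proportional to how far the finger has pressed past a virtual object's surface, and the stiffness parameter controls how "hard" the object feels.

```python
def feedback_force(finger_depth_mm, surface_depth_mm, stiffness_n_per_mm):
    """Force (in newtons) the actuator applies against the finger.

    A basic spring model: zero force until the finger reaches the virtual
    surface, then force grows linearly with penetration depth.
    """
    penetration = finger_depth_mm - surface_depth_mm
    if penetration <= 0:
        return 0.0  # finger has not yet reached the virtual surface
    return stiffness_n_per_mm * penetration

# At the same press depth, a stiff "cup" resists more than a soft "ball".
cup_force = feedback_force(12.0, 10.0, stiffness_n_per_mm=5.0)
ball_force = feedback_force(12.0, 10.0, stiffness_n_per_mm=1.5)
```

Varying stiffness per on-screen object is one way such a system could make different virtual materials feel distinct under the same display hardware.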


New Research Project to Make Robots More Trustworthy
University of Bristol News (07/02/13)

Increasing the trustworthiness of robots is the purpose of a project involving collaboration between multiple U.K. universities. "This is a vital step in developing robots for a whole range of functions for the future, where they will be useful to humans," says University of the West of England (UWE) professor Tony Pipe. The Trustworthy Robotic Assistants (TRA) project teams experts from the Bristol Robotics Laboratory (BRL), a collaborative partnership between the University of Bristol and UWE, with peers at the universities of Liverpool and Hertfordshire and industry partners, with funding from the Engineering and Physical Sciences Research Council. "Safety assurance of robots is an urgent research challenge that must be addressed before many products that already exist in labs can be unlocked for mass production," notes principal TRA project investigator Kerstin Eder. "This requires collaboration of verification experts with roboticists and those who specialize in human-robot interaction, so that a human-centric, holistic approach to safety assurance can be developed." Among the robotic platforms utilized for the project is BERT, developed as part of BRL's research initiative on Cooperative Human Robot Interactive Systems. BERT has been employed to study scenarios in which the platform assisted human colleagues to complete manufacturing tasks.


Facial Analysis Software Spots Struggling Students
Technology Review (07/01/13) Will Knight

North Carolina State University (NCSU) researchers have developed technology that could be used to identify students who are struggling with subject material in real time. The researchers used facial-analysis technology to assess engagement and frustration, and the software's findings closely matched what participants reported about their own state of mind. The researchers would like to develop a tutoring system that could help students "bolster their confidence and keep them motivated," says NCSU's Joseph Grafsgaard. The study comes at a time when academia is starting to develop computers and other devices that can identify and respond to emotion. It also is part of a broader effort to develop emotion-sensing technology for affective-computing applications. For example, technology that tracks the emotional state of students could be integrated into online learning platforms. "Udacity and Coursera have on the order of a million students, and I imagine some fraction of them could be persuaded to turn their webcams on," says Emotient's Jacob Whitehill. "I think you would learn a lot about what parts of a lecture are working and what parts are not, and where students are getting confused."


Medicine That Monitors You
New York Times (06/24/13) Nick Bilton

Ingestible computers and pills outfitted with minute sensors and transmitters are a focus of scientists and startups exploring the frontiers of medical and convenience technology, with people already swallowing such devices to track various health data and wirelessly relay this information to physicians. "You will...take a pill, which you think of as a pill but is in fact a microscopic robot, which will monitor your systems" and wirelessly transmit what is happening, says Google's Eric E. Schmidt. "If it makes the difference between health and death, you're going to want this thing." One pill uses the human body as a power source, harnessing stomach acids to generate electricity through magnesium and copper surfaces on the sensor's sides. The device is designed to track and transmit information about the user's medication-taking behaviors, as well as monitoring the body's response to medicine and identifying the person's movements and rest patterns. Another ingestible device comes with a built-in battery and is designed to wirelessly transmit real-time patient body temperature, while plans are underway to introduce a consumer version in the next year that would wirelessly communicate to a smartphone app. Also under development are pills that would enable the user to activate other devices, such as smartphones and automobiles, by tactile contact-based identity authentication.


Microsoft's 'MoodScope' Phone Software Senses Your Mood
IDG News Service (06/29/13) Mark Hachman

Microsoft researchers have developed a service that can sense the mood of smartphone users based on their phone calls, texts, browsing, and other common interactions. A smartphone would need to be trained over a two-month period to sense the user's mood, and then MoodScope would be able to accurately predict the user's mood 93 percent of the time, according to the researchers. The mood sensor could share information on a user's mood with their social network, friends and family, and even services such as Spotify, which could curate a playlist based on how the user is feeling. The service would be based on an app and tapped into via an application programming interface, and both the smartphone and a cloud service would collectively produce the mood responses. "We foresee mood inference as a vital next step for application context-awareness," the researchers say. "Such inference would improve the utility of the smartphone and lower the social barrier for sharing mood." The researchers note that other aspects of the technology also could be investigated, such as rises in frustration levels tied to heavy traffic.
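The training process described above can be sketched as a regression problem: learn a mapping from phone-usage features gathered over the training period to self-reported mood scores, then predict mood from new usage data. The feature set and data below are invented for illustration; MoodScope's actual model and features differ.

```python
# Hedged sketch: linear least-squares from daily usage features to mood.
import numpy as np

# rows: days; columns: [calls, texts, browsing sessions] (illustrative)
usage = np.array([[5.0, 20.0, 10.0],
                  [1.0, 3.0, 2.0],
                  [8.0, 25.0, 12.0],
                  [2.0, 5.0, 4.0]])
mood = np.array([0.8, 0.2, 0.9, 0.3])  # self-reported, 0 = low, 1 = high

# "Training period" in miniature: fit weights with a bias term.
X = np.hstack([usage, np.ones((len(usage), 1))])
weights, *_ = np.linalg.lstsq(X, mood, rcond=None)

# Predict mood for a new day of usage.
new_day = np.array([6.0, 22.0, 11.0, 1.0])
predicted = float(new_day @ weights)
```

In a deployed system, the fitted weights would live on the phone or in the cloud, and an API call would expose only the predicted mood score to consuming apps such as a playlist service.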


Wearable Computers a Smart Fashion Trend
Agence France-Presse (06/27/13) Glenn Chapman

Wearable computing is a major part of the "quantified self" trend, in which people track all aspects of their daily life. "We are heading for the wearable computing era," says Gartner analyst Van Baker. "People are going to be walking around with personal area networks on their bodies and have multiple devices that talk to each other and the Web." One of the most popular uses for the technology is with devices that keep track of whether people are leading active, healthy lifestyles. The devices use sensors to detect micro movements and then feed the information to applications that analyze the data and provide feedback to the users. A recent Forrester Research survey found that 6 percent of U.S. adults use a device to track their performance in a sport. There are other gadgets that recommend films based on the users' moods and find keys for unlocking cars or homes, according to the survey. "When you combine wearable computing with sensors and machine learning algorithms then you get context; the computer knows your state and is able to help out clearly in the situation," says Carnegie Mellon University professor Asim Smailagic.


This Glove Could Help Deaf-Blind People Communicate With Anyone, Anywhere
The Atlantic (06/26/13) Michael Scaturro

Tom Bieling, a doctoral candidate at Berlin's University of the Arts Design Research Lab, has created a way for the deaf-blind to communicate using a computerized glove that translates text into impulses. Made from black Gore-Tex covered with sensors, the glove includes a plastic box that fits on the user's forearm and holds a Bluetooth transmitter that links to a mobile phone to transmit messages. The glove uses the Lorm alphabet, which assigns letters of the alphabet to specific movements on designated parts of the hand. Motors translate incoming texts into vibrations representing Lorm alphabet letters on the dorsal side of the hand. To send a message, users tap letters onto the palm side of the glove. Although Lorm is primarily used by deaf-blind people in Germany, Austria, and the Netherlands, the system is relatively easy to learn and enables users to communicate with anyone capable of texting. Bieling intends to design future gloves using a thinner material, smaller sensors, and invisible cables, which might appeal to mainstream users. In the near future, Bieling believes tactile sensors in clothing will be commonplace, enabling fast, silent communication. Users would not need to learn the whole Lorm alphabet, but only a subset of motions pertaining to their work.
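The incoming-message path described above amounts to a letter-by-letter lookup from text to motor commands. The sketch below illustrates that translation step only; the gesture descriptions and motor-zone names are placeholders invented for this example, not the real Lorm codes.

```python
# Hypothetical mapping: letter -> (motor zone on the hand, vibration pattern)
LORM_LIKE_CODES = {
    "h": ("index_finger", "single_pulse"),
    "i": ("middle_finger", "single_pulse"),
    " ": ("palm", "long_pulse"),
}

def text_to_vibrations(message):
    """Translate an incoming text message into a sequence of motor commands."""
    commands = []
    for letter in message.lower():
        if letter in LORM_LIKE_CODES:
            commands.append(LORM_LIKE_CODES[letter])
        # a real device would handle unmapped characters explicitly
    return commands

# "hi" becomes two pulses on different motor zones
print(text_to_vibrations("hi"))
```

The outgoing path would invert this idea: pressure sensors on the palm detect taps, which are matched against the same table to recover letters for the Bluetooth link.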


A Brain Computer Interface That Could Help Quadriplegics Walk
Les Echos (06/21/13) Benoit Georges

The French Alternative Energies and Atomic Energy Commission (CEA) has created the Wimagine brain-controlled implant system, which quadriplegic people can use to carry out motor functions. "We wanted to start with a project that means something, that provides a clear benefit to patients," says project leader Alim-Louis Benabid. "We also wanted to use different kinds of cutting-edge technology: micro and nano-electronics, implantable devices, robotics." The team is using an exoskeleton piloted by the brain because "it allows the user to recover an almost natural mobility," Benabid says. The French team implants electrodes on the surface of the motor cortex, which is responsible for voluntary movements. "What makes brain control possible is that when we think of a movement, the electric activity within the motor cortex is the same as when we actually make the movement," says CEA researcher Corinne Mestais. Wimagine is a five-centimeter-diameter electronic device with 64 sensors that can record and broadcast electric activity. The researchers also are developing a computer model that will interpret the very-low-frequency electric signals and convert them into intended movements.
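A common way to frame the decoding step described above is as a linear map from recorded channel activity to an intended movement command. The sketch below is one generic approach, not CEA's published model: the 64 channels mirror Wimagine's sensor count, but the decoder weights are random stand-ins for a model that would be calibrated on real recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 64  # electrodes over the motor cortex, as in Wimagine
# Calibrated decoder weights would come from training sessions; these are
# random placeholders for illustration.
W = rng.normal(size=(2, n_channels))

def decode_movement(neural_activity):
    """Map a 64-channel activity snapshot to an (x, y) movement command."""
    return W @ neural_activity

activity = rng.normal(size=n_channels)  # one snapshot of recorded signals
vx, vy = decode_movement(activity)      # intended velocity along two axes
```

In a real exoskeleton pipeline, such a decoder would run continuously on filtered low-frequency features, and its output would drive the limb controllers rather than a bare velocity pair.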


Intel Explores Brain Scanning to Make Roads Safer
IDG News Service (06/25/13) Martyn Williams

Intel Labs researchers recently demonstrated a brain-scanning technique aimed at making roads safer by determining whether drivers are concentrating on the road. Using functional near-infrared spectroscopy and building on existing human-computer interaction research, Intel is trying to identify when drivers are focusing on the road and when they are preoccupied. The demonstration involved a driver navigating a virtual Formula One car around a racetrack at 50 miles per hour, and then at over 250 mph in a separate test. The driver wore a cap with infrared sensors to detect the differences between the two drives, revealing information about the intense concentration needed for the higher-speed test. The researchers say the data eventually could be integrated into the car's computer to prompt changes to environmental controls that would keep drivers more alert, or adjust controls on safety features such as automatic braking or lane control. "With that information, we can say maybe they need some additional stimulation, maybe we [change the radio station], maybe we dial up or down the amount of control, maybe we pull you off the car in front of you a little bit," says Intel Labs senior research scientist Paul Crawford. Intel also is working with the National Taiwan University to provide vehicle-to-vehicle communications about the vehicle's state using LED brake lights.


Virtual Reality: Dancing With a Rhino-Headed Army
New Scientist (06/21/13) Sandrine Ceurstemont

Virtual reality is making its way into the theater, with a recent contemporary dance show in Paris using pre-recorded motion capture to enable live dancers to perform with animated characters. Dassault Systèmes developed the technology, which uses vehicle- and plane-modeling software to create illusory perspectives. Although virtual reality environments are typically controlled rooms, the dance show requires effects to work in a conventional theater without special glasses. Using virtual cameras, the system simulates various perspectives and finds the "trompe l'oeil" effects that are most effective from all seats. The system had to be easily transportable, compatible with different theaters, and designed for front projection. Using six projectors controlled by a single computer, the system plays back visuals automatically synchronized with sound. The system uses three Kinect sensors above the stage with infrared cameras to record changes, and overlapping signals require additional processing to obtain usable information. However, the largest challenge was live interaction between the virtual visuals and the dancers, who were hesitant to trust the technology. For this reason, the motion-capture sequences were pre-recorded, but Dassault's Benoit Marini believes dancers will adjust to using virtual reality systems live in the future.


The Changing Face of Digital Design
Techday (06/25/13)

Digital interface design currently hinges on the user's ability to manipulate devices with touch, as mobile devices proliferate and surpass desktop PC usage. The majority of users hold smartphones in their dominant right hand, meaning that buttons and interactive features are best placed in the lower left-hand area of the screen, within easy reach of the right thumb. The impact of scrolling on content layout must also be considered, so that users do not inadvertently activate links or buttons as they scroll through the screen. Desktops are incorporating some of the design features that are emerging for mobile devices, despite the fact that touch is not yet a primary input method for desktops. Mobile design features are evident in Windows 8 and Microsoft Surface, which met with mixed user reactions, although users are likely to grow increasingly accustomed to new mobile methodologies. Desktop and mobile interfaces are expected to blend further over the next year, as users begin to expect all of their tools to offer similar touch-oriented interaction experiences.


The Future of Smart Glasses
Scientific American (07/01/13) Vol. 309, No. 1, P. 21 Larry Greenemeier

Intel Labs director Justin R. Rattner says in an interview that smart eyewear technology such as Google Glass is not yet ready for mainstream consumption, although he notes the technology has great potential. "The sensor technology, the communications technology, and the computer technology have all reached a point where, for the first time, the potential for high-volume consumer wearables is real." Rattner sees a grand challenge with the technology, noting "no one's been able to demonstrate a high-performance see-through display. The side-view display that you see in [Google's smart glasses] Google Glass and the Oakley Airwave snow goggles is, in some sense, a recognition of the fact that no one has solved the transparent display problem." He says such a display would require an optical engine for the left and right eyes that would project images onto the lenses, and would be built in such a manner as to keep the images in front of the wearer, no matter where he turns his head or gaze. For see-through augmented-reality glasses to achieve a form factor similar to regular eyewear, there has to be an extremely high level of optical engineering, according to Rattner. He points out that the research into such technologies is primarily concentrated in small, underfunded startups or people who are only focused on the optics.


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.



