Welcome to the April 2013 SIGCHI edition of ACM TechNews.


ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Face of the Future Rears Its Head
University of Cambridge (03/19/13)

In a development that could signal the next generation of human-computer interaction, researchers at Toshiba's Cambridge Research Lab and the University of Cambridge have created Zoe, a digital talking head that expresses human emotions on demand in a lifelike way. The system conveys a full range of human emotions and alters its voice to match any requested feeling; users simply type in a message and the desired emotion. Zoe's creators say the technology is the most expressive controllable avatar in existence. The face belongs to actress Zoe Lister, and the researchers spent several days recording her speech and facial expressions in order to recreate them. The system can work with mobile technology and could serve as a personal assistant in smartphones or be used to "face message" friends. In the near future, Zoe's template could enable people to upload their own faces and voices within seconds rather than days to customize their own digital assistants. The tool has multiple applications, and the researchers are working with a school for autistic and deaf children to use Zoe to teach emotions and lip-reading. The system also could be used in gaming, audio-visual books, online lecture delivery, and other user interfaces. "This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being," says Cambridge professor Roberto Cipolla.


New 3-D Display Could Let Phones and Tablets Produce Holograms
Technology Review (03/20/13) Katherine Bourzac

Hewlett-Packard Labs has created a 3D display that plays hologram-like videos on a modified conventional liquid-crystal display. The system shows videos or images that appear to float above the screen and can be viewed from 200 different viewpoints. The researchers say the system could lay the groundwork for new user interfaces for portable electronics, gaming, and data visualization, with 3D displays as thin as half a millimeter. Whereas conventional 3D images offer a single perspective, the HP Labs display supports multiview 3D, which requires reproducing the light rays reflecting off an object from every angle and presenting a different image to each of the viewer's eyes. The display features nanopatterned grooves, or "directional pixels," that send light in different directions, with the patterns built into the display's existing backlight. Each directional pixel uses three sets of grooves that steer red, green, and blue light in a particular direction, and the number of directional pixels determines the number of viewpoints the display can produce. The system currently supports static images with 200 viewpoints or video with 64 viewpoints at 30 frames per second. Content for the new display requires 200 different images, some of which can be created digitally without the need for 200 cameras.
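As a rough illustration of what generating multiview content digitally involves, the sketch below places virtual cameras along a horizontal viewing arc, one per viewpoint image. The camera count, arc radius, and field of view are assumptions chosen for illustration, not details of HP's rendering pipeline.

```python
import math

def viewpoint_cameras(num_views=200, radius=0.5, fov_degrees=40.0):
    """Place virtual cameras along a horizontal arc facing the display center.

    Illustrative only: the arc radius, field of view, and camera count are
    assumptions, not parameters from the HP Labs system.
    """
    cameras = []
    half_fov = math.radians(fov_degrees) / 2.0
    for i in range(num_views):
        # Spread the cameras evenly across the viewing arc.
        angle = -half_fov + (2.0 * half_fov) * i / (num_views - 1)
        x, z = radius * math.sin(angle), radius * math.cos(angle)
        cameras.append({"position": (x, 0.0, z), "look_at": (0.0, 0.0, 0.0)})
    return cameras

views = viewpoint_cameras()
print(len(views), views[0]["position"])  # 200 camera poses, one per viewpoint image
```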


Information Overload? There's a Solution for That
Concordia University (03/05/13) Laurence Miall

Concordia University researchers have developed an open source architecture that facilitates reading, understanding, and writing information by combining natural-language processing (NLP) with MediaWiki software. The researchers built two applications, IntelliGenWiki and ReqWiki, to test the architecture. IntelliGenWiki was tested at Concordia's Center for Structural and Functional Genomics on research into enzymes that can create environmentally friendly biofuels. Using IntelliGenWiki, scientists identified relevant research abstracts 67 percent more quickly than usual and retrieved pertinent data from full academic papers 20 percent faster. Furthermore, the wiki format means no NLP training is necessary. ReqWiki is used by Concordia's software engineering students to write clearer requirements specifications, and it produces documents of measurably better quality than those developed with standard word processing programs. Reviewing and analyzing relevant information is essential for scientists and other researchers, as estimates indicate that every two days people generate a volume of information equivalent to that produced between the dawn of civilization and 2003. "This is a new paradigm for human-computer interaction," says Concordia professor Rene Witte. "What's new is we've introduced a way for humans to work collaboratively with computers and use Semantic Assistants (software architecture) for these knowledge-intensive tasks."
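As a toy illustration of the kind of NLP-assisted triage described above (not the Semantic Assistants architecture or IntelliGenWiki itself), the following sketch ranks abstracts by how often they mention a set of query terms; the sample abstracts and the scoring rule are invented.

```python
from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def rank_abstracts(abstracts, query_terms):
    """Rank abstracts by overlap with query terms (simple term-frequency score).

    A toy illustration of NLP-assisted triage, not Concordia's actual system.
    """
    query = set(tokenize(" ".join(query_terms)))
    scored = []
    for title, text in abstracts:
        counts = Counter(tokenize(text))
        score = sum(counts[term] for term in query)
        scored.append((score, title))
    return sorted(scored, reverse=True)

abstracts = [
    ("Paper A", "Cellulase enzymes improve biofuel yield from plant biomass."),
    ("Paper B", "A survey of user interface toolkits for wikis."),
]
print(rank_abstracts(abstracts, ["biofuel", "cellulase"]))  # Paper A ranks first
```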


Innovation: Smartwear
Financial Times (03/21/13) Chris Nuttall

Wearable computers represent a new era of personal technology that attaches to and sometimes interacts with the body itself. “They unlock a domain of data that was previously inaccessible: data about the body. And that has unlimited potential,” says Forrester Research analyst Sarah Rotman Epps. “If you think about other domains and what we’ve been able to do—such as shopping or maps data—once something is mapped, you can create products and services around it, and so we’re only just scratching the surface with body-generated data that’s captured by these wearable devices.” Wearable computing could create new industries and hybrid professions, such as data scientists who understand physiology and ethnography. Several challenges exist for the industry, including proving that a market for the technology exists and creating computers that can be worn on a daily basis. Cloud data centers will enable the analysis of wearable computing data, while smartphones are making the wearable movement possible. Social and gamification components are critical to driving the consumer adoption of wearable computing, with physiological signals being integrated into more social experiences. As open standards emerge and hardware prices drop, adoption will rise, experts say. Going beyond wearable devices, some companies are creating sensors that are embedded in the body.


UW Professor Helps Scientists Analyze Their Research in 3D
University of Wyoming (03/06/13)

University of Wyoming professor Amy Banic researches 3D human-computer interaction and virtual reality, focusing on data interpretation and analysis in 3D environments, including immersive visualizations, virtual environments, and virtual humans. She intends to use computing clusters at the NCAR-Wyoming Supercomputing Center to test her interaction techniques. Banic will work with scientists to make research applications more visual and interactive through partnerships with Idaho National Laboratory and Kitware, and she also plans to partner with scientists at NCAR and the University of Wyoming, as well as visualization experts. The 3D interaction can offer deeper insight into a researcher's data, enable different analyses of the same information, and allow examination of data for a specific time period. In addition, Banic will teach a 3D visualization course for computer science students that will offer an opportunity to work with big data. Banic also has worked with Idaho National Laboratory researchers to explore touch-based interaction for menu control in immersive visualizations. That work lets users adjust an image's position along any of the three axes of 3D space and rotate its orientation about those axes, enabling researchers to view various 3D angles, zoom in on a portion of the data, and watch all or part of the data unfold in real time.
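As a simple sketch of the axis-aligned manipulations described above (not Banic's or Idaho National Laboratory's software), the code below translates a small set of 3D data points along the axes and rotates them about one axis; the sample points, angle, and offset are arbitrary.

```python
import numpy as np

def rotate_z(points, degrees):
    """Rotate an (N, 3) array of points about the z axis."""
    theta = np.radians(degrees)
    c, s = np.cos(theta), np.sin(theta)
    rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rz.T

def translate(points, offset):
    """Shift points along the x, y, and z axes."""
    return points + np.asarray(offset)

data = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.5]])   # hypothetical data points
view = translate(rotate_z(data, 90.0), (0.0, 0.0, -1.0))
print(view)
```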


Museum Exhibit Developed at SEAS Puts Evolution at Visitors' Fingertips
Harvard School of Engineering and Applied Sciences (03/25/13) Bonnie Lei

The interactive Tree of Life visualization at the Harvard Museum of Natural History is a computerized tabletop exhibit that teaches visitors about evolution and the history of life on Earth. The Life on Earth exhibit, a three-year project funded by the U.S. National Science Foundation and based at the Harvard School of Engineering and Applied Sciences (SEAS), uses tabletop computing technology with a multi-touch surface and programming that enables users to scroll through the evolutionary history of millions of interconnected species. The visualization uses DeepTree software to let users explore the evolutionary relationships of more than 70,000 named species and learn how they are related through shared derived traits. In addition, the FloTree program simulates evolution in action: branching lineages of organisms scroll up the screen until an environmental event, such as a hand placed on the tabletop, prevents them from interbreeding, and the isolated lineages continue to multiply, accumulating genetic variation and forming new species over generations. The FlowBlocks user interface is critical because it allows the tree to handle many conflicting inputs simultaneously.
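A minimal sketch of the barrier idea behind FloTree follows, under invented assumptions: each lineage is reduced to a single trait value that drifts randomly each generation, and one barrier event splits every lineage in two, after which the halves diverge independently. None of the parameters come from the exhibit's software.

```python
import random

def simulate_lineages(generations=30, barrier_at=10, drift=0.05):
    """Toy model of lineages diverging after a barrier stops interbreeding.

    Loosely inspired by the FloTree idea of a barrier splitting populations;
    the drift model and parameters are invented for illustration.
    """
    populations = [0.0]  # one lineage, represented by a single mean trait value
    for gen in range(generations):
        if gen == barrier_at:
            # An environmental event (e.g., a hand on the tabletop) splits
            # every lineage into two groups that no longer interbreed.
            populations = [t for t in populations for _ in range(2)]
        # Each isolated lineage accumulates its own random variation.
        populations = [t + random.uniform(-drift, drift) for t in populations]
    return populations

lineages = simulate_lineages()
print(len(lineages), [round(t, 3) for t in lineages])  # 2 lineages with diverging traits
```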


Steve Mann: My “Augmediated” Life
IEEE Spectrum (03/13) Steve Mann

Computer-aided vision and augmented-reality technology have made significant progress over the decades that University of Toronto professor Steve Mann has been building and wearing computerized eyewear. He has realized many benefits from using the technology, and believes his experiences are useful both for future users and for companies grappling with design issues. One benefit is that Mann's systems combine multiple images taken with different exposures, which enables him to clearly see another driver's face at night even when the car's headlights shine directly into his eyes. Some systems take in other spectral bands; for example, a camera sensitive to long-wavelength infrared can detect heat signatures and locate recently vacated seats in a lecture hall. However, Mann worries that wearable computing companies will make design decisions that complicate the devices' use or damage users' eyesight. The effects of altering the brain's normal visual information processing have long plagued virtual-reality researchers, and those effects are compounded in equipment that augments the real world. Another design problem is asymmetry: viewing the display through only one eye, with lenses that push its apparent focal distance farther away so the eye can focus, can cause severe eyestrain. In addition, Mann says the display should let the wearer see directly ahead without needing to look up.
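As a generic sketch of combining differently exposed frames of the same scene (not Mann's actual EyeTap processing), the code below fuses an underexposed and an overexposed image using weights that favor well-exposed, mid-gray pixels.

```python
import numpy as np

def fuse_exposures(images):
    """Blend differently exposed frames of the same scene into one image.

    A minimal weighted-average fusion: well-exposed pixels (near mid-gray)
    get higher weight than blown-out or underexposed ones. A generic
    HDR-style sketch only.
    """
    stack = np.stack([img.astype(np.float64) / 255.0 for img in images])
    # Weight each pixel by how close it is to mid-gray (0.5).
    weights = np.clip(1.0 - np.abs(stack - 0.5) * 2.0, 1e-3, None)
    fused = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return (fused * 255.0).astype(np.uint8)

dark = np.full((2, 2), 20, dtype=np.uint8)      # underexposed frame
bright = np.full((2, 2), 240, dtype=np.uint8)   # overexposed frame
print(fuse_exposures([dark, bright]))
```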


Mind-Controlled Exoskeleton to Help Disabled People Walk Again
CORDIS News (03/07/13)

European Union-funded researchers are working on a mind-controlled robotic exoskeleton that one day could help paralyzed people walk. The Mindwalker project is based on brain-neural-computer interface (BNCI) technology integrated with a lightweight exoskeleton attached to the user's legs. The system also could help rehabilitate stroke victims and help astronauts rebuild muscle mass after spending time in space. The researchers aim to bypass the spinal cord entirely and send brain signals to the robotic exoskeleton through a BNCI system that translates electroencephalography (EEG) signals from the brain, or electromyography (EMG) signals from shoulder muscles, into electronic commands. The Laboratory of Neurophysiology and Movement Biomechanics at the Université Libre de Bruxelles worked on processing EEG and EMG signals with an artificial neural network, while the Foundation Santa Lucia developed techniques based on EMG signals modeled by coupling neural and biomechanical oscillators. The BNCI signals are filtered and processed before the exoskeleton uses them, via a dynamic recurrent neural network capable of learning and exploiting the dynamic character of the signals. Although most BNCI systems require users to have invasive sensors implanted in their brain tissue or to wear a wet cap, Mindwalker uses a dry technology that consists of an electrode-covered cap users can fit themselves.
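The sketch below shows only the structural idea of a recurrent network turning windowed EEG features into discrete gait commands. The architecture, feature set, command names, and (untrained) weights are assumptions for illustration, not the Mindwalker project's dynamic recurrent network.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyRNN:
    """Minimal Elman-style recurrent net: EEG feature windows in, gait command out.

    A structural sketch only; the actual Mindwalker network, training, and
    signal processing are not reproduced here.
    """
    def __init__(self, n_features=8, n_hidden=16,
                 commands=("stand", "step_left", "step_right")):  # hypothetical labels
        self.commands = commands
        self.w_in = rng.normal(scale=0.1, size=(n_hidden, n_features))
        self.w_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.w_out = rng.normal(scale=0.1, size=(len(commands), n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, features):
        # The recurrent state lets the network exploit the temporal dynamics of the signal.
        self.h = np.tanh(self.w_in @ features + self.w_rec @ self.h)
        logits = self.w_out @ self.h
        return self.commands[int(np.argmax(logits))]

rnn = TinyRNN()
eeg_window = rng.normal(size=8)   # e.g., band-power features from one EEG window
print(rnn.step(eeg_window))
```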


Pump Iron the Smart Way With a Motion-Capture Coach
New Scientist (03/21/13) Hal Hodson

Lancaster University's Eduardo Velloso has developed a workout-tracking system that monitors weightlifting for proper form and completion. Although an increasing number of devices use on-body or ambient sensors to track sports activity, those tools rely primarily on accelerometers to track movement and do not provide feedback on technique. Velloso's system uses the depth-sensing camera from a Microsoft Kinect gaming sensor to record motion in three dimensions, tracking form during lifting and offering real-time feedback on a liquid-crystal display panel. The system uses lights to indicate whether a weightlifter's back, feet, and elbows are correctly positioned, and it also displays the range of motion and speed of each lift. During testing, the system reduced beginners' mistakes by 23 percent during lateral dumbbell raises and by almost 80 percent during biceps curls. In addition, Velloso developed a separate system that observes a user's movements and automatically extracts a model of them, with the ultimate goal of capturing an expert's athletic motions and comparing them with a beginner's to offer feedback.
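As a hedged illustration of the kind of skeleton-based form check such a system might perform (the joint positions and 150-degree threshold below are invented, not figures from Velloso's work), this sketch computes the elbow angle from shoulder, elbow, and wrist positions and flags an overly bent arm during a lateral raise.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by the segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def check_lateral_raise(shoulder, elbow, wrist, min_deg=150.0):
    """Flag an overly bent elbow during a lateral raise.

    The 150-degree threshold is an invented example; real feedback rules
    would come from expert movement models.
    """
    angle = joint_angle(shoulder, elbow, wrist)
    return angle, ("ok" if angle >= min_deg else "keep your arm straighter")

# Hypothetical 3D joint positions from a depth-camera skeleton (meters).
print(check_lateral_raise((0.0, 1.4, 0.0), (0.3, 1.4, 0.0), (0.55, 1.3, 0.05)))
```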


Bringing Art to Life Through Augmented Reality
Henrietta Post (NY) (03/20/13) James Battaglia

Augmented reality (AR) is transforming art through a partnership between the Rochester Institute of Technology (RIT) and the Memorial Art Gallery in Rochester, N.Y. The project began when RIT College of Imaging Arts and Sciences professors Susan Lakin and David Halbstein merged their photography and 3D digital graphics classes into one Collaborative Composite Image course to simulate the industry experience and aid students in creating a common technical language. The inspiration for the art project came from an issue of GQ, which featured an advertisement showing a painting coming to life. Lakin and Halbstein approached the museum about using AR to make paintings come alive, and the museum welcomed the idea. Pairs consisting of one photography and one 3D digital graphics student then selected paintings and used true-color digital images of the paintings to create AR effects with the Aurasma smartphone app. One project enhanced Rachel Ruysch's 1686 "Floral Still Life" so that when viewed through Aurasma on a phone or iPad, butterflies in the painting appear to fly away and follow a woman who walks across the picture. "It's stimulating, it's provocative, it makes you think," says Marjorie Searl, the museum's chief curator. "There's sort of a brave new world out there that technology has invented."


Using Facebook Makes You Feel Happier
UoP News (03/18/13)

Facebook users can improve their mood by looking at their own photos and posts, according to a report from University of Portsmouth senior lecturer Alice Good. Nearly 90 percent of users access the site to look at their own wall posts, and 75 percent view their own photos when they feel unhappy, the report says. The findings run counter to previous studies suggesting that Facebook use negatively affects mental health, and Good says she intends to study larger groups to see whether the results hold. In a survey of 144 Facebook users, Good found that people often use the social network as a form of self-soothing, reminiscing over old photos and wall posts. Pictures posted on Facebook typically record positive past events, and revisiting them can remind a person who is feeling low of the positive feelings they experienced at the time, says Good. People who are prone to mental health issues find Facebook use especially comforting, she adds. Reminiscent therapy, which involves looking at old photos and is known to help older people with memory difficulties, could effectively treat some mental health issues, the report notes. Good's study is part of a larger research project examining how applications can support self-soothing and a person's overall well-being.


MIRAGE Virtual System Gets Whole Body Involved in Lab
Iowa State Daily (02/28/13) Emily Drees

Iowa State University's Mixed Reality Adaptive Generalizable Environment (MIRAGE) room, originally designed for Army training, is a research lab that physically involves the entire human body and enhances the experience with virtual reality. "We wanted to create an environment for training that can be just as immersive as a real city or building but that has more flexibility, like a videogame," says MIRAGE co-founder Stephen Gilbert. MIRAGE uses a 41-foot-wide by 13-foot-high screen driven by six active-stereo projectors, a surround-sound audio system, and reconfigurable walls with props. In addition, a motion-analysis tracking system uses red tracking cameras on the ceiling linked to trackers on helmets, suits, guns, and walls. The tracking devices enable participants to replay their actions and forward the information to a computer for human review, and the computer also follows the movements of people in the system to obtain their exact orientation and position. Gilbert says the training is highly realistic, and digital technology combined with physical walls and rooms enables flexibility and virtually boundless scenarios. For example, MIRAGE can deploy a system to train firefighters that measures heart rate, blood pressure, and stress. The researchers say their purpose is to use the room as a lab to discover how to improve training elsewhere and make the technology more accessible.


Motion Control, Other Innovative Interfaces Inch Closer to Reality
ZDNet (03/01/13) John Morris

New interface innovations are emerging while the touch interface is just starting to move beyond smartphones and tablets to make inroads onto PCs. For example, the Leap Motion Controller, a USB device, adds motion controls to computers and is much more accurate than a game console's motion controller. Motion control also is underway at Intel, which posted a YouTube video demonstrating its motion-control technology as well as voice and facial recognition. Meanwhile, SoftKinetic developed a 3D DepthSense camera for laptops and motion-control software, and created the technology behind Intel's Perceptual Computing and YOUi Labs and Marvell smart TV demonstrations. Qualcomm announced Snapdragon Voice Activation, which uses a voice command to securely activate a mobile device in standby or airplane mode. In addition, STMicroelectronics demonstrated its Fingertip technology on a Nexus 7 tablet, which uses microelectromechanical systems sensors to allow users to control a mobile device by hovering about 2 inches above it instead of touching the device. Moreover, a Massachusetts Institute of Technology graduate student recently demonstrated the SpaceTop 3D interface, which uses a transparent display and two cameras to enable users to physically manipulate objects on the screen.


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.

