Welcome to the February 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). This new edition helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Beams of Sound Immerse You in Music Others Can't Hear
New Scientist (01/29/14) Paul Marks

Technical University of Berlin researcher Jorg Muller and his colleagues are working on three-dimensional (3D) sound technology that makes sound appear in precise locations, as a hologram does with images. Muller's BoomRoom at the university is configured with a circle of 56 loudspeakers that allow sounds to be assigned to stationary or mobile locations. Users can control the sounds using 16 gesture-recognition cameras. For example, users could assign a song to a vase in the room, allowing them to "empty" the vase to hear the song and use hand gestures to adjust volume, treble, and bass. The room is based on a wave field synthesis (WFS) technique that cancels and reinforces sound waves to create a 3D sound field. An algorithm controls the speakers with constructive and destructive sound wave interference to move sounds to the desired locations at specific times. Muller notes manipulating sound in this way only recently has become possible, due to improved audio-processing algorithms, directional loudspeakers, and gesture-recognition technology. He says the technology could be used, for example, to design a smart room for visually-impaired people in which important objects could announce their locations and users could leave messages for one another in mid-air. In addition, the technology could be used for gaming, or to reduce the number of gadgets needed in a home by turning an everyday object into an answering machine.
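The idea of steering sound to a location through constructive and destructive interference can be illustrated with a simple delay-and-sum focusing sketch. This is not the BoomRoom's actual WFS algorithm; the speaker layout, sample rate, and 1/r gain model below are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focus_delays(speakers, source, sample_rate=48000):
    """Per-speaker delays (in samples) and gains that make wavefronts
    from all speakers arrive at `source` simultaneously, so they add
    constructively there. Positions are (x, y) pairs in metres."""
    dists = [math.dist(spk, source) for spk in speakers]
    farthest = max(dists)
    # Delay the nearer speakers more so every wavefront arrives together;
    # attenuate with 1/r to approximate spherical spreading.
    delays = [round((farthest - d) / SPEED_OF_SOUND * sample_rate) for d in dists]
    gains = [1.0 / max(d, 1e-6) for d in dists]
    return delays, gains

# 8 speakers on a 2 m ring (the BoomRoom uses 56), focus near the centre
ring = [(2 * math.cos(2 * math.pi * k / 8), 2 * math.sin(2 * math.pi * k / 8))
        for k in range(8)]
delays, gains = focus_delays(ring, (0.5, 0.0))
```

The speaker closest to the virtual source receives the largest delay and the largest gain, which is what lets the summed field peak at the chosen point.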


Sony's Jun Rekimoto Dreams Up Gadgets for the Far Future
IEEE Spectrum (01/28/14) Eliza Strickland

Sony Computer Science Laboratories' Jun Rekimoto designs futuristic technologies that emphasize human-computer interaction (HCI). Rekimoto's interest in HCI began at the Tokyo Institute of Technology in the early 1980s, when he read an article by computer scientist Alan Kay about graphical computer interfaces and decided to focus on inventing new ways for people and computers to interact. One of his earliest efforts merged a handheld computer and a video camera into a system that he called a NaviCam, which was very similar to current smartphones. His work on near-field communication (NFC) helped advance Sony's commercial electronic gear that communicates with a smartphone. Rekimoto was inspired by mouse inventor and 1997 ACM A.M. Turing Award recipient Douglas Engelbart, who felt that user interfaces should enhance human capabilities. "He said that the mouse was just a tiny piece of a much larger project aimed at augmenting the human intellect," Rekimoto says. "So my work has become more about human augmentation." For example, Rekimoto is working on expressive appliances such as a refrigerator that opens only when a person smiles, which he says is aimed at human augmentation that will improve mental well-being. "If it is our destiny to merge with machines, we should think about what parts of humans can be augmented by that merger," Rekimoto says. "Technology shouldn't just improve our intellectual abilities—technology should make you happy."


3-D Air-Touch Display Operates on Mobile Devices
Phys.Org (01/30/14) Lisa Zyga

National Chiao Tung University researchers have developed the Air-Touch system for mobile devices, which enables users to touch floating three-dimensional (3D) images. Using optical sensors embedded in the display pixels, Air-Touch senses finger movement in the 3D space above the device to enable applications such as 3D games and interactive digital signage. Researcher Guo-Zhen Wang and colleagues designed the 3D system in a 4-inch display screen with an infrared backlight incorporated into the device and angular scanning illuminators at the edges of the display to provide adequate lighting. The infrared backlight and the optical sensors first calculate the two-dimensional fingertip position, then the angular illuminators emit infrared light at various tilt angles to determine depth. The 3D location of the fingertip is calculated through an analysis of the accumulated intensity at various regions that shows the scanning angle with maximum reflectance. The Air-Touch prototype has a depth range of three centimeters, which the researchers believe can be broadened with adjustments to sensor sensitivity and scanning resolution. The team also believes multi-touch functionality could be possible with Air-Touch, making more applications possible.
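The depth step, picking the scan angle with maximum accumulated reflectance and triangulating height, can be sketched as follows. The triangulation geometry (height = horizontal distance x tan of the tilt angle) and all numbers here are our assumptions for illustration, not the published Air-Touch calibration.

```python
import math

def estimate_depth(intensity_by_angle, edge_to_finger_mm):
    """Pick the tilt angle whose scan produced the strongest accumulated
    reflectance, then triangulate fingertip height above the panel.
    `intensity_by_angle` maps tilt angle (degrees) -> accumulated intensity.
    Assumed geometry: light leaving an edge illuminator at tilt angle t
    reaches height z = d * tan(t) at horizontal distance d from the edge."""
    best_angle = max(intensity_by_angle, key=intensity_by_angle.get)
    z_mm = edge_to_finger_mm * math.tan(math.radians(best_angle))
    return best_angle, z_mm

# Fake scan: reflectance peaks at 15 degrees; fingertip 60 mm from the edge
scan = {5: 0.1, 10: 0.3, 15: 0.9, 20: 0.4, 25: 0.1}
angle, depth_mm = estimate_depth(scan, 60.0)
```

With these made-up numbers the fingertip lands at roughly 16 mm above the panel, comfortably inside the prototype's three-centimeter range.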


BETT 2014: Exploring the Classroom of the Future
Telegraph (01/23/14) Sophie Curtis

The recent BETT show in London showcased several technologies that are likely to play a significant role in tomorrow's classrooms, as technology enables new methods of teacher-student interaction. For example, Intel described the use of gesture recognition to facilitate navigation through three-dimensional models and simulations. In addition, students could learn languages with help from automated speech recognition with pronunciation coaching and recitation practice. Augmented reality applications could improve student understanding of material by providing computer-generated contextual data that would, for example, display a digital image of a page of the Magna Carta as a student reads about it in a textbook. Meanwhile, Intel is exploring the use of affective computing to understand students' expressions of excitement, frustration, and boredom, which could help teachers better understand student motivation. Companies also are looking at the educational potential of technologies such as humanoid robots, the Internet of Things, wearable devices, touch tables, large wall projectors, and pocket-sized tablets. As prices fall, schools are expected to increasingly adopt emerging technologies that will raise student interest and engagement while teaching critical technological skills.


One Day an Elevator Might Ask—Are You Getting On?
Washington Post (01/22/14) Matt McFarland

Microsoft researchers used artificial intelligence to enable an elevator in a corporate building to predict whether a person will want to board it. Over a period of months, a Kinect camera in the ceiling monitored the behaviors of people boarding the elevator and those walking past, and transmitted the data to an artificial intelligence system. The system learned to identify passengers wanting to board and began opening the elevator doors automatically as they approached. Microsoft's Redmond research lab co-director Eric Horvitz led the project, and is now beginning a second phase involving human-like interactions between elevators and passengers. "Something as stodgy and old-fashioned as an elevator could have really cute gestures and curiosities and say 'Are you coming?' with a door motion," Horvitz says. "It's the 21st century and we're still requiring people to jam their legs into doors to tell the elevator they want to get on." Horvitz believes machines will transform the world as they begin to think like and understand people. In addition, Horvitz is working on a virtual personal assistant, and already has a face on a computer screen that greets visitors to his office and tells them whether he can be interrupted, based on his current tasks. "These intelligent assistants, across devices or a single device, will be the next frontier for computing and computer science," Horvitz says.


The Rise of the Brain Machines
Sydney Morning Herald (Australia) (01/23/14) Iain Gillespie

Deep learning, which uses neural networks to mimic the human brain by collecting information and responding independently, is bringing about a new age of artificial intelligence. Technology firms are investing heavily to establish themselves as deep learning leaders, and consumers are already seeing software improvements as a result. Google and Microsoft have used deep learning to improve their speech recognition technology, and Facebook is believed to be working on software that could identify emotions in text, recognize objects in photos, and predict future user behavior. Neural networks consist of layers of information-processing units called nodes, with each layer passing what it has learned to the layer above to arrive at a progressively refined result. This sparse coding approach breaks down information into simple parts that are processed through successive layers of nodes in a manner similar to that of the human brain. Researchers say the method can be applied to any type of data and yields impressive results. For example, Cornell University professor Hod Lipson has used deep learning to create self-aware robots that use feedback from their limbs to learn to walk. Lipson's robots also can understand themselves and self-replicate. Google chief engineer Ray Kurzweil believes a conscious machine that can understand complex natural language will be created by 2029.
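The layered refinement the article describes can be sketched as a tiny forward pass: each layer computes weighted sums of its inputs, applies a nonlinearity, and hands the result to the layer above. This is a minimal illustration in plain Python with random weights, not any production network.

```python
import random

def relu(v):
    """Nonlinearity applied after each layer."""
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """One layer: each output node is a weighted sum of the inputs plus a bias."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x, layers):
    """Pass the input through successive layers, each handing its
    activations to the layer above, progressively refining the result."""
    for W, b in layers:
        x = relu(dense(x, W, b))
    return x

random.seed(0)
def rand_layer(n_in, n_out):
    W = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

# A 3-layer network: 4 inputs -> 8 -> 8 -> 2 outputs
net = [rand_layer(4, 8), rand_layer(8, 8), rand_layer(8, 2)]
out = forward([0.5, -0.2, 0.1, 0.9], net)
```

In a real deep network the weights are learned from data rather than drawn at random; the layered structure, however, is exactly this.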


Gesture Recognition: Gaining the Upper Hand
A*STAR Research (01/22/14)

Researchers Li Cheng and Chi Xu at the A*STAR Bioinformatics Institute in Singapore have developed a hand-gesture-recognition model that appears faster and more accurate than existing techniques. Whereas existing methods use both video images and depth information to represent hand gestures, the new model uses only raw depth data to enable faster processing. Cheng and Xu initially used synthetic images of hand gestures, then applied the model to realistic low-resolution, noisy depth images. The researchers applied a feature classification technique called a Hough transform to the data to locate the hand and confirm the probable orientation of the palm and fingers. Because the hand can roll sideways, the researchers also applied a second Hough 'forest' with specially modified depth features to account for possible rotation and create a list of probable hand poses. Whereas previous models have assumed the palm to be flat, the new model takes the arch of the palm into account, which significantly improves accuracy. The model was fast and accurate in tests involving real hand movements, and the researchers hope their work will improve hand pose estimations in computer software.
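The core of a Hough-style approach is voting: each depth pixel casts votes for where it believes the structure of interest (here, the palm centre) lies, and the peak of the vote accumulator wins. The sketch below illustrates that voting mechanism only; in the real system the offset votes come from a trained Hough forest, whereas here they are hand-made toy values.

```python
from collections import Counter

def hough_vote_center(depth_pixels, offset_votes):
    """Each foreground depth pixel casts one or more votes for the palm
    centre via learned offsets (hand-made in this toy example). The
    accumulator cell with the most votes is the detected centre."""
    acc = Counter()
    for (x, y) in depth_pixels:
        for (dx, dy) in offset_votes.get((x, y), []):
            acc[(x + dx, y + dy)] += 1
    return acc.most_common(1)[0][0]

# Three fingertip pixels, all of whose (toy) offsets point at (5, 5)
pixels = [(5, 9), (3, 8), (7, 8)]
votes = {(5, 9): [(0, -4)], (3, 8): [(2, -3)], (7, 8): [(-2, -3)]}
center = hough_vote_center(pixels, votes)
```

The same voting idea extends to the second "forest" the researchers added: extra vote dimensions for palm rotation, accumulated the same way.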


Seeing Things: A New Transparent Display System Could Provide Heads-Up Data
MIT News (01/21/14) David L. Chandler

Massachusetts Institute of Technology (MIT) researchers say they have created a transparent display system that offers a wide viewing angle, simplicity of manufacture, and potentially low cost and scalability. Current "heads-up" display systems use a mirror or beam-splitter to project an image directly into the user's eyes, requiring a user to be in a precise position to view the image. However, MIT's system projects the image onto the glass itself, enabling it to be viewed from a variety of angles. In addition, previous displays integrate electronics into the glass, making them costly and complicated, while restricting transparency. MIT's system embeds nanoparticles in the transparent material that scatter only certain wavelengths, while allowing others to pass through. The researchers say this allows the glass to remain transparent, while simultaneously making a single-color display clearly visible. The researchers believe it is possible to create full-color display images using the same approach. The particles could be added to a thin, inexpensive plastic coating applied to glass, which would work with existing laser projectors or conventional projectors to create a specified color. The displays have many potential applications, such as projecting images onto store windows while allowing passersby to see into the store or offering drivers and pilots heads-up windshield displays regardless of their viewing angle.


Urban Computing Reveals the Hidden City
IEEE Spectrum (01/27/14) Paul McFedries

Urban computing is transforming cities into distributed computers that provide citizens with data through a street-level interface. Citizens can access information about objects in their environment via a reality browser enabled by smartphones. Wayshowing interfaces are emerging that offer specific directions from one location to another, and social navigation tools enable other people to assist users with navigation, for example, to avoid traffic and check in with friends upon arrival. In addition, augmented reality can enable users at a physical location to access virtual data. For example, the Museum of London Streetmuseum app overlays historical photos onto corresponding street scenes viewed through a smartphone's camera. Amplified reality goes one step further by building additional data into a street object via radio-frequency identification or near-field communication technologies. This enables the creation of location-based media that transmit information about a specific place to users as they approach the location in what is called a situated interaction. For example, a sound garden designates certain sounds for public locations, which are accessible via Wi-Fi–enabled devices. Urban computing aims to increase civic engagement by making technology visible and encouraging citizens to interact.


Africans Design Audible Guides for Blind People
SciDev.net (01/24/14)

Inventors in Africa have separately developed devices for blind people that provide audible navigation cues. Algerian inventor Badreddine Zebbiche has designed GuideMe, a blueprint for a device that uses three-dimensional sensors on the user's shoes to sense obstacles. The device then sends warnings to a smartphone application, which offers the user verbal navigational advice such as which direction to turn. Zebbiche last year was a co-winner of the 2013 Technology Idea competition, run by the Global Innovation through Science & Technology initiative. In addition, Nigerian inventors last year created a similar "wearable obstacle detection system." Using ultrasound, the device identifies obstacles on the floor and transmits a radio signal to a headpiece that makes a noise of variable volume to indicate how close a user is to obstacles. "The advantage of our system is its small size, low cost, and [lack of] wearable constraints," says Obafemi Awolowo University researcher Adebimpe Obembe. The team says tests showed the device helped users avoid collision with minimal training. However, the system cannot detect moving obstacles and has a range of only four meters. The Nigerian team is working to further refine the device.


Are Phablets Really Here to Stay?
CBC News (Canada) (01/21/14) Janet Davison

Phablets that bridge smartphone and tablet technology are gaining popularity, but experts offer mixed predictions on the future of the technology. Experts say consumers are gravitating toward larger handheld screens for functions ranging from Web browsing to map viewing. Phablets will account for 25 percent of smartphone sales worldwide this year, according to Deloitte. However, Deloitte points to practical issues, such as pocket and purse size, as well as the need for a two-handed telephone grip, that set a "natural ceiling" for the phablet market. University of St. Andrews in Scotland human-computer interaction chair Aaron Quigley believes the phablet will serve as a transitional device for people concerned about abandoning their laptops, but will not have a major long-term impact. Phablets will play a niche role for businesses and schools that want to move desktop users to a more mobile environment, he says. In addition, he believes that phablets represent progress toward freeing people from ties to one particular computer. "It's another step on the movement towards a more liberated computing space or a more ubiquitous computing environment, effectively where you're not coupling the individual to a single machine," Quigley says. "You're coupling them to a kind of computing space."


Technology Breaking Down Social Barriers
Cyprus Mail (01/12/14) Zoe Christodoulides

The Cyprus University of Technology in Limassol's Cyprus Interaction Lab pursues human-computer interaction and instructional technology to demonstrate the impact of novel technical research on human behavior. The lab runs user evaluations on emerging technologies such as interactive tables, specialized usability and accessibility software, and brain-computer interfaces. With a special focus on the needs of users, especially those with disabilities, the lab is testing a humanoid robot's ability to improve communication with autistic children. In addition, the lab is studying how an interactive table can unite people to resolve a conflict. For example, the researchers used the interactive table to study Greek Cypriots' perceptions of immigrants, to help the immigrants integrate into society. The lab's primary focus is on understanding and improving user experience with everyday technology for the Web, gaming, and education. The lab is examining the user experience of websites designed for deaf individuals, and studying how to make creative spaces for learning by integrating technology that children find interesting into education. "It's about using everyday technologies in more innovative ways, something which goes far beyond projections or PowerPoint presentations which aren't really interesting enough for kids anymore," says lab co-manager Antri Ioannou. The researchers hope the lab's findings will soon have greater reach as it aims to join the European Horizon Network, which funds research and innovation projects across Europe.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.



