Welcome to the August 2013 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please click here.

HEADLINES AT A GLANCE


NASA Turns to Open Source Middleware for Human-to-Robot Communications
CIO (07/31/13) Thor Olavsrud

The U.S. National Aeronautics and Space Administration (NASA) seeks to enhance human-to-robot communication through data management handled by open source middleware. NASA's Human Exploration Telerobotics (HET) project is pursuing this goal along two tracks: space-based control of terrestrial robots, and ground-based operation of robots in space. Open source software and platforms are extensively employed by the HET machines, with Android and Linux used for the bulk of their computing. All of the HET robots must support both high-speed and low-bandwidth, delayed communications. Meeting that requirement also calls for a common, flexible, interoperable data communications interface that can readily integrate with each robot's novel apps and operating systems. "This is really what the Internet of things is all about: machines that generate data and need to receive data in the form of commands telling them what to do," notes Real-Time Innovations' (RTI) Curt Schacker. RTI co-developed the Data Distribution Service for Real-Time Systems (DDS), an open standard designed to facilitate scalable, real-time, reliable, high-performance, and interoperable data exchanges, which NASA uses as the core of its Disruption Tolerant Networking software. The software compensates for intermittent network connectivity and delays when transmitting data between computers on Earth and robots in space, or vice versa. NASA and other space agencies also are using DDS software to support a space-based Internet.
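
DDS itself is a substantial standard, but its core abstraction is topic-based publish/subscribe. The minimal sketch below illustrates that pattern only; the names are purely illustrative and are not the DDS API, which adds typed topics, discovery, and quality-of-service policies for lossy, delayed links.

```python
# Toy topic-based publish/subscribe bus, illustrating the core DDS
# abstraction only; all names here are hypothetical.
from collections import defaultdict
from typing import Callable

class TopicBus:
    """Publishers write samples to named topics; subscribers
    register callbacks per topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]):
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: dict):
        for callback in self._subscribers[topic]:
            callback(sample)

bus = TopicBus()
# A ground station listens for robot telemetry...
bus.subscribe("robot/telemetry", lambda s: print("telemetry:", s))
# ...and the robot publishes samples as they are produced.
bus.publish("robot/telemetry", {"battery": 0.82, "pose": (1.0, 2.0, 0.0)})
```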


As Work Habits Change, Software Makers Rush to Innovate
The New York Times (07/30/13) Quentin Hardy

Office software is changing to accommodate employees who work from any location, with increasing emphasis on collaboration, small screens, rapid turnarounds, social media, and mobility. Box's Sam Schillace notes that when he created Google Docs, speed and ease of use were of utmost importance. "We were guilty of taking the existing nature of documents, but six years ago connectivity was a question," Schillace says. "Now everything is connected all the time." Microsoft and Google are among the traditional developers working to add mobile and social media aspects to their products, while retaining their existing user base. For example, Microsoft's Julia White says in the future a user might "like" an email to show that they've read it instead of writing a response. Meanwhile, mobile design focuses on limiting choices, with keyboard commands often used instead of icons to conserve screen space. Document-writing software startup Quip, for example, references pictures and tables by touching the "@" key on a pop-up screen keyboard, in a manner similar to Twitter. Quip offers only two fonts, compared to 600 for Google Docs and more than 200 for Microsoft Word. "The way people use things is fundamentally changing," says Quip co-founder Kevin Gibbs. However, Stanford University human-computer interaction researcher Mathias Crawford says users are comfortable with familiar icons in Word and Docs. "Until business structures change, nothing is going to happen" to Microsoft Word, Crawford says. "There is always recourse to showing physical things."


Disney Research Creates Haptic Feedback Out of Thin Air
CNet (07/23/13) Michelle Starr

Disney Research has developed Aireal, a computer peripheral that provides tactile feedback using only air, creating a more immersive augmented reality. Aireal gives users feedback on full-body gestures and enables them to feel virtual objects and textures without actually making contact. Using five subwoofers and diaphragms, Aireal produces vortices of air that travel through a flexible nozzle mounted on a gimbal structure, while a three-dimensional (3D) depth camera tracks user movement. Using the camera and the flexible nozzle, the vortices can accurately target the user's body. By combining several Aireal devices, users can create a rich tactile environment that can be small enough for a mobile device or large enough to fill several rooms. Aireal offers significant potential as a low-cost accessory because most of its components are 3D-printed. Although the most immediate application of the technology will be in gaming, Aireal's creators eventually hope to be able to create 3D shapes in the air. "Imagine holding out your hand and feeling someone's face," says Rajinder Sodhi, who led the research. "This will start truly eroding the boundary between real and virtual."
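
The article does not describe Aireal's control software, but aiming a gimbal-mounted nozzle at a point reported by the depth camera reduces to computing two angles. A hypothetical sketch of that geometry, assuming the target is given in the nozzle's coordinate frame:

```python
# Hypothetical sketch: convert a depth-camera target point (x, y, z)
# into gimbal pan/tilt angles. Aireal's actual controller is not
# described in the article.
import math

def aim_nozzle(x: float, y: float, z: float) -> tuple[float, float]:
    """Return (pan, tilt) in degrees for a target at (x, y, z),
    with +z pointing away from the device, +x right, +y up."""
    pan = math.degrees(math.atan2(x, z))                  # left/right
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # up/down
    return pan, tilt

print(aim_nozzle(0.3, 0.2, 1.0))  # target slightly right of and above center
```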


A Magnetic Pen for Smartphones Adds Another Level of Convenience
KAIST (07/25/13)

Researchers at the Korea Advanced Institute of Science and Technology (KAIST) and Sungkyunkwan University have created a magnetic pen called MagPen that works both on and around mobile devices. MagPen is compatible with any smartphone or tablet computer with an embedded magnetometer, which nearly all mobile devices currently include to support location-based services. KAIST's Sungjae Hwang and his team developed the technology to let an input tool for mobile devices, such as a stylus pen, interact more effectively with touchscreens. MagPen detects the direction of the stylus, recognizes pens with different magnetic properties, identifies pen-spinning gestures, and gauges finger pressure applied to the pen. In addition, MagPen broadens the scope of input gestures recognized by a stylus pen: spinning lets the pen's tip switch between a pointer and an eraser, and selects the thickness of the lines drawn on a screen. "It's quite remarkable to see that the MagPen can understand spinning motion," Hwang says. "It's like the pen changes its living environment from two dimensions to three dimensions. This is the most creative characteristic of our technology."
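
The article does not explain MagPen's recognizers, but one plausible way to detect a spin gesture from magnetometer data is to accumulate the change in the field vector's in-plane heading until it approaches a full turn. A hypothetical sketch:

```python
# Hypothetical sketch: detect a pen-spinning gesture from magnetometer
# samples by accumulating the change in the field's in-plane heading.
# MagPen's actual recognizer is not described in the article.
import math

def detect_spin(samples: list[tuple[float, float]],
                threshold_deg: float = 300.0) -> bool:
    """samples: (mx, my) magnetometer readings in the screen plane.
    Returns True if accumulated heading change exceeds the threshold,
    i.e., the pen's magnet has been rotated nearly a full turn."""
    total = 0.0
    prev = math.atan2(samples[0][1], samples[0][0])
    for mx, my in samples[1:]:
        heading = math.atan2(my, mx)
        delta = math.degrees(heading - prev)
        delta = (delta + 180) % 360 - 180  # wrap to [-180, 180)
        total += delta
        prev = heading
    return abs(total) >= threshold_deg

# A magnet rotated in 30-degree steps: one full spin -> gesture detected.
spin = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
        for a in range(0, 390, 30)]
print(detect_spin(spin))  # True
```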


Douglas Engelbart’s Unfinished Revolution
Technology Review (07/23/13) Howard Rheingold

Computing pioneer Doug Engelbart invented the mouse, but also had a vision of human-computer interaction that would enable people to work collaboratively. In the 1960s, Engelbart headed the Augmentation Research Center at the Stanford Research Institute (now SRI International), where he conceived of ideas such as people controlling computers by pointing and clicking, using audio-video and screen-sharing, and navigating information via hyperlinks. However, Engelbart, recipient of the ACM A.M. Turing Award in 1997, did not receive the backing to realize most of his vision. Engelbart viewed computers, interfaces, and networks as a way to broaden human intelligence, and in 1962 wrote that increasing the "collective IQ" resulted in "more-rapid comprehension … better solutions, and the possibility of finding solutions to problems that before seemed insoluble." Engelbart had the idea of people using screens and computers to work together to solve problems and devoted much of his life to pursuing this, but experts at the time considered his ideas extreme, believing that computers were only for scientific computation or business data. The mouse that Engelbart invented was a critical piece of his vision, but only the beginning of a much larger scheme that he said centered on "humans, using language, artifacts, methodology, and training."


Carnegie Mellon, Microsoft Scientists Use Mobile Games to Generate Database for Large-Scale Analysis of Human Drawing
Carnegie Mellon News (PA) (07/22/13) Byron Spice

Researchers at Carnegie Mellon University (CMU) and Microsoft Research have created DrawAFriend, a drawing assistance application that uses big data to improve users' ability to sketch on touchscreens. Although big data can help drawing and writing in many ways, the challenge is to create an adequately sized database on which to base models. To address this, the researchers launched an iPhone drawing game that motivated thousands of users to sketch celebrities on their phones. The game produced 1,500 images daily in its first week and now has more than 17,000 images with stroke-by-stroke data on their creation. "We are in the middle of a big data revolution," says CMU professor Adrien Treuille. "We've found that big data can be used to do amazing things. But success is not inevitable; you have to have the dataset first. With DrawAFriend, we've found a way to use crowdsourcing to create this critical resource for a data-impoverished phenomenon." The team created a stroke-correction method based on the consensus of strokes in the database, eliminating poorly placed strokes that occur when fingers are too large for small screens. Because the stroke correction takes place in real time, the correction process is invisible to users.
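
The published correction algorithm is not detailed in the article; the toy sketch below only illustrates the general idea of consensus correction, pulling each point of a new stroke toward nearby points pooled from earlier users' strokes.

```python
# Hypothetical sketch of consensus-based stroke correction. The real
# method is more sophisticated; this only illustrates using crowd data
# to counter "fat finger" placement error on small screens.
import math

def correct_stroke(stroke, consensus_points, radius=10.0, strength=0.5):
    """stroke: list of (x, y) points; consensus_points: (x, y) points
    pooled from prior users' strokes. Each point moves part way toward
    the centroid of consensus points within `radius` pixels."""
    corrected = []
    for x, y in stroke:
        near = [(cx, cy) for cx, cy in consensus_points
                if math.hypot(cx - x, cy - y) <= radius]
        if near:
            mx = sum(p[0] for p in near) / len(near)
            my = sum(p[1] for p in near) / len(near)
            x, y = x + strength * (mx - x), y + strength * (my - y)
        corrected.append((x, y))
    return corrected

prior = [(0.0, 0.0), (10.0, 0.2), (20.0, -0.1)]  # crowd drew a flat line
print(correct_stroke([(9.0, 4.0)], prior))       # point pulled toward it
```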


Google Glass Could Help Doctors View Vital Data in Emergency Rooms
eWeek (07/23/13) Brian T. Horowitz

Google Glass could revolutionize the health care industry by providing doctors with real-time patient data, says John D. Halamka, CIO of Boston's Beth Israel Deaconess Medical Center. With preliminary Glass testing complete, Halamka now intends to conduct a pilot program of the technology in the medical center's emergency department. The health care industry could use Glass to satisfy a Stage 2 core objective of the federal government's meaningful-use program for electronic health record incentives, Halamka notes. "Could Google Glass, [by] providing advice, showing you pictures, validating that you've got the right patient with the right medicine, be completely assistive? Absolutely," Halamka says. The touch interface on the device's temple could instantly display a patient's lab and radiology results. In addition, Google Glass could improve clinical documentation by allowing doctors to record audio and visual information. Emergency room doctors could use Glass to view a patient's vital signs, triage details, and nursing documentation. Glass offers several advantages over traditional PCs, which can be cumbersome to use with the gloves that doctors wear during procedures. In addition, doctors can use Glass while still giving patients the impression of undivided attention. Halamka says Glass also enables doctors working on a patient to view information they might have trouble remembering, such as cardiac arrest resuscitation algorithms.


Virtual Companions Making Interaction More Social
CORDIS News (07/23/13)

The European Union-funded COMPANIONS project created conversational interfaces for the Internet that recognize people and make computer interfaces more human. The research focused on machine learning to enable the software to learn and respond without specific programming to do so. The technology could reduce depression stemming from loneliness and offer an alternative access point for Internet resources. The researchers' English Companion listens to statements in English and replies with interest and empathy by analyzing a person's voice and dialogue content. Embedded rules help the software use an appropriate "emovoice," with negative-active, negative-passive, neutral, positive-passive, and positive-active expressions. In addition, the researchers created Czech Companion, which shows progress in Czech language automatic speech recognition, natural language understanding, natural language generation, and text-to-speech synthesis. The Senior Companion discusses news, photos, and humorous anecdotes with elderly users, and locates tagged online photos of users to create a timeline of life events. Another tool, the Health and Fitness Companion, helps users improve their nutrition and exercise by asking users questions such as whether they will run that day.
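
How the software scores a speaker's emotional state from voice and dialogue content is not described here, but mapping estimated valence and arousal values onto the five emovoice classes named above is a simple classification. An illustrative sketch, with the scoring itself assumed:

```python
# Illustrative sketch only: map estimated valence (negative..positive)
# and arousal (passive..active) scores to the five "emovoice" classes
# named in the article. How COMPANIONS actually derives these scores
# is not described here.
def emovoice(valence: float, arousal: float, dead_zone: float = 0.2) -> str:
    """valence, arousal in [-1, 1]; values near zero count as neutral."""
    if abs(valence) < dead_zone and abs(arousal) < dead_zone:
        return "neutral"
    v = "positive" if valence >= 0 else "negative"
    a = "active" if arousal >= 0 else "passive"
    return f"{v}-{a}"

print(emovoice(0.7, 0.5))    # positive-active
print(emovoice(-0.4, -0.6))  # negative-passive
print(emovoice(0.05, 0.1))   # neutral
```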


Paper-Thin E-Skin Responds to Touch, Holds Promise for Sensory Robotics and Interactive Environments
UC Berkeley NewsCenter (07/21/13) Sarah Yang

University of California, Berkeley engineers have created electronic skin using a sensor network on flexible plastic, which they say will improve the ability of robots to respond to touch. The e-skin responds to touch by lighting up, with brightness increasing in accordance with pressure. "With the interactive e-skin, we have demonstrated an elegant system on plastic that can be wrapped around different objects to enable a new form of human-machine interfacing," says Berkeley professor Ali Javey. The technology has other potential applications, such as wallpaper that functions as a touchscreen or a bandage that acts as a health monitor, the team notes. "Integrating sensors into a network is not new, but converting the data obtained into something interactive is the breakthrough," says Berkeley professor Chuan Wang. "And unlike the stiff touchscreens on iPhones, computer monitors, and ATMs, the e-skin is flexible and can be easily laminated on any surface." The next step for the researchers is to make the e-skin sensors respond to temperature and light.
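
The interaction described, local brightness tracking local pressure, amounts to a per-pixel mapping; the toy sketch below is software shorthand for what the real device does in analog hardware (thin-film transistors driving OLEDs).

```python
# Illustrative sketch of the e-skin interaction: each pixel's LED
# brightness tracks the local pressure reading. Units and the p_max
# scale here are hypothetical.
def led_levels(pressure_grid, p_max=100.0):
    """pressure_grid: 2D list of pressure readings. Returns per-pixel
    brightness in [0.0, 1.0], saturating at p_max."""
    return [[min(max(p / p_max, 0.0), 1.0) for p in row]
            for row in pressure_grid]

touch = [[0.0, 20.0, 0.0],
         [15.0, 90.0, 10.0],  # firm press in the center
         [0.0, 25.0, 0.0]]
print(led_levels(touch))
```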


New Navigation Gadget for People Who Are Blind
Curtin News (07/18/13) Megan Meates

Curtin University researchers are creating a device through their Indoor Navigation Project that will enhance the traditional white cane by enabling people who are blind to sense their surroundings beyond the cane's tip. The device, which resembles a smartphone, senses the features of a room, constructs a virtual map, and conveys this information to the user. "What we are developing is a multi-sensor device for people who are blind, who are also often hearing impaired, to tell them what is exactly around them from wall to wall," says project leader Iain Murray. The researchers are developing five different sensors that capture information such as changes in velocity, images, and sound, and are integrating them into a single device. The sensors use stereoscopic cameras to detect path edges and obstacles, and image-processing technology helps convert the information into a map. Other sensors gather audio cues to locate and track moving objects from a mobile receiver. Murray notes that building owners do not need to install any infrastructure for the device to work because the sensors pick up all of the information needed for the maps.
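
The project's processing pipeline is not detailed, but stereoscopic obstacle detection rests on the standard disparity-to-depth relationship Z = fB/d. A small sketch with assumed camera parameters:

```python
# Standard stereo geometry: depth Z = f * B / d, where f is the focal
# length (pixels), B the baseline between the two cameras (meters),
# and d the disparity (pixels) of the same feature in the left and
# right images. The focal length and baseline below are assumptions,
# not the project's actual hardware values.
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    if disparity_px <= 0:
        return float("inf")  # zero disparity: effectively infinite range
    return focal_px * baseline_m / disparity_px

# An obstacle whose image shifts 42 pixels between views is ~2 m away.
print(round(depth_from_disparity(42.0), 2))  # 2.0
```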


What Comes After Click: A Crash Course in Tangible User Interfaces
Gizmodo (07/18/13) Kelsey Campbell-Dollaghan

New forms of tangible or graspable user interface (UI) design are emerging to revolutionize the user-computer interaction experience by embedding digital properties into physical objects. One example is Hibou, a radio that users control through varying tactile engagement with its palladium surface. Meanwhile, a Massachusetts Institute of Technology project involves a UI that can control room lights or other appliances via levitating magnets. In addition, engineers at the Frog design consultancy are developing room-sized interfaces controlled by gesture or voice. "It has the potential to be more heads-up, allowing users to be present in their environment," says project leader Jared Ficklin. "Eventually, room-size computing will touch everything." Tangible UIs such as Good Night Lamp can help support long-distance relationships: the UI enables a person at one location to turn on their light and cause an identical light at the other person's location to go on. Another notable graspable UI is DIRTI, an iPad app for children in which users can play with a bowl of tapioca to control an audiovisual symphony.


Sensing Cities, Smart Citizens?
Scientific American (07/16/13) Roger Dennis

The concept of the smart city is often embodied by the idea of a city having a central control system analogous to a brain, but a better model might be the city as an ecosystem, in which intertwined relationships exist among streams of resources, information, and people. This model allows technology's urban role to be reframed as a component of the ecosystem, supplying information that assists and informs these streams. Microsoft's recent analysis of energy consumption across more than a dozen buildings at its main campus demonstrated that it could save money by using analytics software to monitor the buildings and detect issues that previously would have gone unresolved. In one building management area, more than $12.5 million a year in savings was realized with an initial investment of less than 10 percent of Microsoft's annual energy bill. Reaping the full potential of technology in the urban environment calls for making the technology overlay accessible to citizens so that they can understand how to get the most from their environment, and shape how this happens. An effort in this vein is Christchurch, New Zealand's Sensing City project, which aims to aid efficient city operations in the wake of a series of earthquakes that severely damaged the central business district. Sensors embedded in the rebuilt infrastructure will track everything from noise levels to water use in real time, supplemented by data from existing databases and building management control systems.
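
The article does not say what Microsoft's analytics look for; one common approach to this kind of building fault detection is to flag meter readings that deviate sharply from a rolling baseline, as in this hypothetical sketch:

```python
# Hypothetical sketch of rolling-baseline fault detection on building
# telemetry. Microsoft's actual software is not described in the article.
from statistics import mean, stdev

def flag_anomalies(readings, window=24, z_threshold=3.0):
    """readings: hourly energy readings (kWh). Returns indices where a
    reading sits more than z_threshold standard deviations from the
    mean of the preceding `window` readings."""
    flags = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

hourly = [100.0 + (i % 3) for i in range(48)]  # steady load...
hourly[40] = 160.0                             # ...then a sudden spike
print(flag_anomalies(hourly))  # [40]
```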


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.



