Welcome to the November 2013 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). This new service is a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

HEADLINES AT A GLANCE


Professor Clifford I. Nass, Expert on Human-Computer Interactions, Dead at 55
Stanford Report (CA) (11/04/13) Kathleen J. Sullivan

Stanford University professor Clifford I. Nass, an expert on human-computer interaction, has passed away at age 55. Prior to joining Stanford's faculty in 1986, Nass earned a bachelor's degree cum laude in mathematics, a master's degree in sociology, and a doctorate in sociology from Princeton University, and worked as a computer scientist at Intel. Nass also was founder and director of Stanford's Communication between Humans and Interactive Media Lab, which concentrates on communication in and between cars, social and psychological aspects of mobile and ubiquitous technology, people's multitasking abilities, and human-robot interaction. The co-author of several books on humans' relationships with technology, Nass in recent years drew national interest with research that challenged the idea that people can multitask effectively on digital devices. He also warned at a social sciences summit last year that growing media consumption threatens to damage the human brain, compounded by corporate policies that force employees to multitask. At TEDx Stanford 2013, Nass presented research identifying face-to-face interaction as a necessity for successful social and emotional development. Nass' work on human-computer interaction has been applied to more than 250 media products and services. Fellow professor Jeremy Bailenson notes that Nass' early work on social responses to computer technology was hailed as revolutionary and revelatory when it was first published, while today its impact has become so pervasive that social relationships with media are taken for granted.


Voice Commands or Gesture Recognition: How Will We Control the Computers of the Future?
The Independent (United Kingdom) (10/28/13) Andrew Walker

Wearable computers are hitting the marketplace, but the success of the technology could depend on the type of input device that succeeds the mouse. "Time of flight" controllers track the movement of a user's hand by emitting light beams and measuring how they bounce back. These devices offer a wide range of motion and can match the functionality of a mouse or touchscreen with a wave of the finger, but must be attached to a wall or desk, which restricts them to indoor use. Another input option is voice recognition, which is now included on most new computing devices and eliminates the need to use hands. However, voice recognition is stumped by regional accents and slang, and is too slow for surfing the Internet, playing games, and using complex drop-down boxes. Muscle-control, or myoelectric, devices are typically armbands that measure the electrical signals of muscles and translate them into mouse movement. The technology is similar to a time-of-flight system, with the advantage of mobility. One drawback is that muscle-control devices require exact arm and hand motions to achieve accuracy, which is difficult for most users. Smart gloves use technology such as accelerometers to let users control computers, and are used by specialists such as brain surgeons, but are not feasible for everyday wear. Electroencephalography devices measure brainwaves to control a computer and have shown promise with mobility-impaired people, but require a headset and are more complicated to use than myoelectric devices. Some observers believe that smartphones are the logical input device of the future, which could limit the success of wearable computers.
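As a rough illustration of how a myoelectric armband might turn muscle activity into pointer motion, the Python sketch below maps smoothed amplitudes from hypothetical electrode channels to cursor movement. The channel names, smoothing window, dead zone, and gain are invented for illustration and do not describe any particular product's pipeline.

```python
# Illustrative sketch (not any vendor's API): translating smoothed
# forearm-EMG channel amplitudes into 2-D cursor movement, the basic
# idea behind the myoelectric "armband" pointers described above.
from collections import deque


class MyoelectricPointer:
    def __init__(self, gain=40.0, window=8, deadzone=0.05):
        self.gain = gain          # scales activation into pixels per update
        self.deadzone = deadzone  # ignore resting muscle noise
        # Hypothetical electrode channels for opposing muscle groups.
        self.history = {ch: deque(maxlen=window)
                        for ch in ("flex", "extend", "radial", "ulnar")}

    def update(self, sample):
        """sample: dict of channel -> rectified EMG amplitude in [0, 1]."""
        smoothed = {}
        for ch, value in sample.items():
            self.history[ch].append(value)
            smoothed[ch] = sum(self.history[ch]) / len(self.history[ch])

        # Opposing muscle pairs drive opposite cursor directions.
        dx = smoothed["radial"] - smoothed["ulnar"]
        dy = smoothed["extend"] - smoothed["flex"]
        dx = 0.0 if abs(dx) < self.deadzone else dx
        dy = 0.0 if abs(dy) < self.deadzone else dy
        return dx * self.gain, dy * self.gain


if __name__ == "__main__":
    pointer = MyoelectricPointer()
    # A strong wrist-extension signal moves the cursor along one axis: (0.0, 22.0)
    print(pointer.update({"flex": 0.05, "extend": 0.6, "radial": 0.1, "ulnar": 0.1}))
```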


MSU-Created Joystick Advances Independent Voting
MSUToday (10/29/13) Andy Henion

Michigan State University (MSU) researchers have developed the Smart Voting Joystick, which is designed to facilitate independent voting for people with dexterity impairments, senior citizens, and others. Accessible voting machines currently require users with dexterity challenges to repeatedly press small buttons or switches, forcing many to seek assistance from volunteers and leading others to skip voting. Although most polling places have become more accessible in recent years, 46 percent still have a system that challenges voters with disabilities, according to a U.S. Government Accountability Office report released in April. The Smart Voting Joystick is similar to the joystick used to control motorized wheelchairs and offers a significant improvement over current accessible technology, says project leader Sarah Swierenga. She notes the joystick received positive feedback in MSU campus tests, and says real-world use hinges on federal approval and a manufacturer producing the device. "Accessibility at the polling place has been a focus for years, yet it remains ineffective," Swierenga says. "The expectation among the next generation is that they're not going to put up with this the way prior generations might have. The pendulum is swinging toward inclusion on many issues, voting being one of them."


National Robotics Initiative Invests $38 Million in Next-Generation Robotics
National Science Foundation (10/23/13) Lisa-Joy Zgorski

The U.S. National Robotics Initiative (NRI), announced two years ago under President Barack Obama's Advanced Manufacturing Partnership Initiative, has made a second round of funding awards for the development of robots that work with people to improve human productivity and safety. Led by the National Science Foundation (NSF), NRI also includes the National Institutes of Health (NIH), the U.S. Department of Agriculture (USDA), and the National Aeronautics and Space Administration (NASA). The projects receiving funding focus on collaborative robots, or co-robots, that assist in areas including advanced manufacturing, health care and rehabilitation, space and undersea exploration, independence and quality of life improvement, and driver safety. NSF is investing about $31 million in 30 new projects over the next three years to advance robotics, with projects including research to improve robotic motion and sensing. NIH is providing $2.4 million in funding over five years for three projects, including the development of a co-robotic cane that could help visually impaired users navigate their environments. USDA will invest $4.5 million in five grants, with projects that will develop robotics for fruit harvesting, early disease and stress detection in fruits and vegetables, and water sampling in remote areas. NASA is continuing to fund its 2012 investments in eight U.S. universities to advance robotics capabilities, with projects including avatar robots for co-exploration of hazardous environments and skin to improve tactile feedback in robotics.


Beyond a Gadget: Google Glass Is a Boon to Disabled
USA Today (10/22/13) Marco Della Cava

Google Glass has tremendous potential to advance human-computer interaction and improve life for disabled people due to its hands-free capabilities. Researchers across a range of disciplines are examining how to leverage the technology for those with mobility, vision, and hearing impairments. The device also could help people with autism to recognize emotions using facial recognition software. Advances in speech recognition could soon enable deaf people to use Glass to read a real-time transcript of a person's speech, while vision-impaired users could follow walking directions from Glass speakers. Disabled users can help shape Glass by providing input through the Explorers program. "From taking a picture with ease to helping those with low vision redefine their world, this has the possibility to level the playing field," says American Association of People with Disabilities CEO Mark Perriello. Glass increases accessibility by "reducing the time between intention and action," notes Glass lead designer Thad Starner. "Glass keeps you in the flow of what you're doing, and for people with disabilities, that's even more vital." Rosalind Picard, founder of the Affective Computing Research Group at Massachusetts Institute of Technology's Media Lab, says Glass will provide revolutionary capabilities for people with disabilities. "One day soon, we'll look at regular glasses the way we now look at old phones," Picard says. "It will change things so much."


Clemson, Dartmouth Use $1.5M Grant to Develop Mobile Health Technology
Clemson University (SC) (10/28/13) Brian M. Mullen

Researchers at Clemson University and Dartmouth College are using a $1.5 million U.S. National Science Foundation grant to develop wearable computers that support mobile-health applications. "The advent of mobile health (mHealth) technology brings great opportunity to improve quality of life, individual and public health, and reduce health-care costs," says Clemson professor Kelly Caine. "Although mHealth devices and applications are proliferating, many challenges remain to provide the necessary usability, manageability, interoperability, availability, security, and privacy." The researchers are creating a general framework for body-area pervasive computing for health-monitoring and health-management applications. Wearable devices such as bracelets or pendants can coordinate the activity of the body-area network and provide a discreet means of communication with the user, says Clemson professor Jacob Sorber. "Our vision is that computational jewelry, in a form like a bracelet or pendant, will provide the properties essential for successful body-area mHealth networks," Sorber says. The team is creating an electronic bracelet called Amulet and a software framework in which developers will build mHealth applications that are compatible with everyday activities. Amulet will track medication use and issue reminders to take medicine, and offer health data to first responders in the event of a medical emergency. The researchers will assess the advantages that wearable devices bring in availability, reliability, security, privacy, and usability.


'Minority Report'-Style Goggles Enable Interaction With Floating Display
Computerworld (10/23/13) Lucas Mearian

Taiwan's Industrial Technology Research Institute (ITRI) recently unveiled its i-Air Touch virtual display, which lets users interact with virtual keyboards and touchscreens. Using special glasses and a defined distance with defined range (DDDR) camera, i-Air Touch enables users to control virtual data, images, and devices that float in front of them. The display is being developed to work with multiple devices, such as PCs, laptops, wearable computers, and mobile devices, eliminating the need for a physical device such as a touchpad or keyboard for input. The DDDR camera uses a phase- and color-coded lens to discern an object that is 11 inches to 12.5 inches away from the eyeglasses, and activates only when a fingertip is within that input range. The camera records the image of a user's fingertip and splits the image into green and red color codes to offer image-processing segmentation. Because the camera only activates when a user's fingertip is in a specific input range, the camera conserves battery power and ensures that the user intends to provide input. "I-Air Touch creates new possibilities for wearable and mobile computing by freeing users from the distraction of locating and touching keys on a physical input device for hands-free computing and improving security over voice commands," says ITRI's Golden Tiao.
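The article's central mechanism, activating input only when a fingertip falls inside the defined 11-to-12.5-inch band, can be pictured as a simple range gate over a depth frame. The Python sketch below is an illustration under assumed inputs (a 2-D array of distances in meters), not ITRI's actual algorithm.

```python
# Minimal sketch of the range-gating idea behind i-Air Touch: only treat a
# fingertip as input when it sits inside the defined 11-12.5 inch band in
# front of the glasses. Illustrative only; the depth frame is assumed to be
# a 2-D array of distances in meters.
import numpy as np

NEAR_M = 11.0 * 0.0254   # 11 inches in meters
FAR_M = 12.5 * 0.0254    # 12.5 inches in meters


def find_fingertip(depth_frame):
    """Return (row, col, distance_m) of the closest in-range point, or None.

    When nothing is inside the input band the function returns None, which a
    real system would use to keep the gesture pipeline (and battery) idle.
    """
    in_range = (depth_frame >= NEAR_M) & (depth_frame <= FAR_M)
    if not in_range.any():
        return None
    masked = np.where(in_range, depth_frame, np.inf)
    row, col = np.unravel_index(np.argmin(masked), masked.shape)
    return int(row), int(col), float(masked[row, col])


if __name__ == "__main__":
    frame = np.full((240, 320), 1.5)   # background about 1.5 m away
    frame[120, 160] = 0.30             # a fingertip roughly 30 cm from the camera
    print(find_fingertip(frame))       # -> (120, 160, 0.3)
```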


Where Humans Will Always Beat the Robots
The Atlantic (10/22/13) Michael Copeland

Human intelligence will always be superior to robot intelligence at certain tasks, says Massachusetts Institute of Technology professor Rob Miller, who specializes in crowd computing. "There are a bunch of things that human beings can do that we don't know how to model with computers," Miller says. For example, a group of people can decipher difficult-to-read handwriting using context for clues, whereas algorithms are unable to read unclear writing. He says crowd-computing software leverages the efforts of groups of people, each performing small tasks, to solve a problem more effectively than a single person or algorithm. Rather than a scenario in which computers take over all human tasks, Miller says humans and machines will both benefit from crowd computing because "the machine wants to acquire data so it can train and get better." Meanwhile, the crowd benefits from pay or education, and end users "get the benefit of a more accurate and fast answer," he says. Miller's User Interface Design Group has created programs demonstrating this symbiotic relationship, such as the Cobi software, which uses the academic community to plan large-scale conferences. Miller says software increasingly will rely on crowd development in the future, with the trend perpetuated by the interconnected needs of crowds, end users, and software.
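As an illustration of the microtask-and-aggregate pattern Miller describes, the sketch below splits a handwriting transcription into per-word tasks, collects several worker guesses per word, and combines them by majority vote. The worker responses and the simple voting rule are assumptions made for illustration; they do not come from the article.

```python
# Hedged illustration of the crowd-computing pattern: break a hard job
# (transcribing unclear handwriting) into small tasks, give each task to
# several people, then aggregate their answers.
from collections import Counter


def aggregate_transcription(word_tasks):
    """word_tasks: list of lists, each inner list holding workers' guesses
    for one word. Returns the majority-vote transcript plus a per-word
    agreement score (fraction of workers who chose the winning answer)."""
    transcript, agreement = [], []
    for guesses in word_tasks:
        best, count = Counter(guesses).most_common(1)[0]
        transcript.append(best)
        agreement.append(count / len(guesses))
    return " ".join(transcript), agreement


if __name__ == "__main__":
    tasks = [
        ["meet", "meat", "meet"],   # three workers guess word 1
        ["at", "at", "of"],         # word 2
        ["noon", "moon", "noon"],   # word 3
    ]
    # Majority vote yields "meet at noon" with ~0.67 agreement per word.
    print(aggregate_transcription(tasks))
```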


Students in Oswego, Australia Team on Transhumanism Course, Robot Videos
SUNY Oswego (10/24/13)

Researchers at the State University of New York (SUNY) at Oswego and the Royal Melbourne Institute of Technology (RMIT) in Australia collaborated on an international online course called "Transhumanism," which examined artificial intelligence, robots, and avatars in the real and virtual worlds. The effort joined graduate students in human-computer interaction (HCI), English literature, and creative writing. RMIT creative writing students wrote questions for HCI students at Oswego to answer, as well as scripts for short videos starring the Oswego Players and programmable, humanoid NAO robots. "Robotics lets us step outside ourselves and allows us to do some problem-solving," says RMIT's Lisa Dethridge. "That's a kind of fourth-dimensional space" where people can project themselves onto an inanimate object, observe, and learn. Professors asked students to question technological advances, including what qualities define a human and whether robots can learn to have beliefs. "As HCI students, we spend a lot of time thinking about humans and robots and computers," says Phillip Moore, who entered the program after receiving his undergraduate degree in graphic design. "We brought them a little different perspective just thinking about how to write for robots."


Camera Lets Blind People Navigate With Gestures
Technology Review (10/17/13) Richard Moss

At the recent ACM Symposium on User Interface Software and Technology, University of Nevada researchers presented a wearable depth-sensing camera system that could help blind people navigate using spoken guidance. The Gestural Interface for Remote Spatial Perception (GIST) system uses a Microsoft Kinect sensor to analyze and identify objects in its field of view. The team drew on the Massachusetts Institute of Technology's Sixth Sense augmented reality project, which enables users to interact with information projected onto the physical world by a wearable device. However, GIST gathers data from hand gestures to augment a blind person's spatial perception. For example, users can hold out a closed fist and GIST will tell them whether a person is in that direction and how far away that person is. In addition to gestures, the system also recognizes objects, faces, and speech. The researchers intend to add functionality that would enable GIST to tell users who is in front of them, using a database the user could establish with voice commands. Kinect-like sensors will eventually be small enough to fit into smartphones, which should be highly useful for people with vision impairments who have been using interfaces such as speech for many years, says GIST researcher Eelke Folmer.
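A simplified sketch of the fist-pointing query GIST supports: given the direction a closed fist points and the camera-space positions of detected people, report whether someone lies roughly along that direction and how far away they are. The geometry, angular tolerance, and spoken phrasing below are illustrative assumptions, not the GIST implementation.

```python
# Illustrative only: answer "is someone in the direction my fist points,
# and how far away?" from assumed 3-D positions (meters, camera space).
import math


def describe_person(fist_pos, fist_dir, people, max_angle_deg=15.0):
    """fist_pos: (x, y, z); fist_dir: unit direction vector; people: list
    of (x, y, z) positions of detected people."""
    best = None
    for person in people:
        offset = [p - f for p, f in zip(person, fist_pos)]
        dist = math.sqrt(sum(c * c for c in offset))
        if dist == 0:
            continue
        cos_angle = sum(o * d for o, d in zip(offset, fist_dir)) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if angle <= max_angle_deg and (best is None or dist < best[0]):
            best = (dist, person)
    if best is None:
        return "No one detected in that direction."
    return f"Person ahead, about {best[0]:.1f} meters away."


if __name__ == "__main__":
    # Fist at the origin pointing straight ahead along +z; one person is
    # nearly straight ahead, the other is far off to the side.
    print(describe_person((0, 0, 0), (0, 0, 1),
                          [(0.2, 0.0, 2.5), (3.0, 0.0, 1.0)]))
```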


Beyond the Touchscreen: The Human Body as User Interface
The Network (10/20/13) Laurence Cruz

Researchers at Germany's Hasso Plattner Institute and the Technical University of Darmstadt are separately experimenting with technologies that advance human-computer interaction by using the human body as part of the user interface. The Hasso Plattner Institute group created the Imaginary Phone interface, which assigns points on the palm of the user's hand to represent functions such as number pad elements, news, and email. A camera on the user's chest observes the hand's interaction and a Bluetooth earpiece offers audio feedback. "Interaction with mobile phones today involves taking a physical device out of your pocket, looking directly at it, and using your fingers to interact with it," says Hasso Plattner Institute Ph.D. student Sean Gustafson. "This is all fine, except that it takes time and removes us from the environment. We wanted to leave the user's vision available to experience the world." The Technical University of Darmstadt team created the EarPut prototype, which uses the ear's surface to input commands from user to computer. The researchers set out to improve accessories worn behind the ear, such as headsets or glasses. The ear facilitates one-handed command input without visual attention because of proprioception, the human ability to sense the positions of body parts relative to one another. Experts say these new interfaces could serve as remote controls for everyday objects that come online as the Internet of Things matures.
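A minimal sketch of the Imaginary Phone idea: palm locations are assigned to phone functions, the chest camera reports where the fingertip lands (represented here as normalized palm coordinates), and the selection is spoken back through the earpiece. The 3x3 layout and function names below are invented for illustration, not the researchers' actual mapping.

```python
# Illustrative mapping from palm touch positions to assigned functions.
# A real system would derive (x, y) from the chest-worn camera; here the
# coordinates are assumed to already be normalized to the palm.

PALM_LAYOUT = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["news", "email", "call"],
]


def palm_touch(x, y):
    """x, y: fingertip position normalized to the palm, each in [0, 1)."""
    col = min(int(x * 3), 2)
    row = min(int(y * 3), 2)
    selection = PALM_LAYOUT[row][col]
    return f"Selected: {selection}"   # would be spoken via the Bluetooth earpiece


if __name__ == "__main__":
    print(palm_touch(0.9, 0.8))   # lower-right region -> "Selected: call"
    print(palm_touch(0.1, 0.05))  # upper-left region  -> "Selected: 1"
```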


Yoga Accessible for the Blind With New Microsoft Kinect-Based Program
UW News (WA) (10/17/13) Michelle Ma

University of Washington researchers have developed a Microsoft Kinect-based program to aid visually impaired users with yoga instruction by giving auditory feedback on the accuracy of their movements. The Eyes-Free Yoga program tracks body movements using a camera and software, and provides real-time spoken feedback on yoga poses. Code written by Washington doctoral student and project lead Kyle Rector instructs the camera and software to read body angles and offer specific instructions for adjustment. The program takes the form of a game, which enables people without sight to interact verbally with a simulated yoga instructor. The program includes six poses, each of which offers about 30 suggestions for correct pose alignment, based on criteria that Rector obtained by working with yoga instructors. In addition, Rector tested the program with 16 blind and low-vision people; 13 participants said they would recommend the program to others, and nearly all said they would use it again. "I see this as a good way of helping people who may not know much about yoga to try something on their own and feel comfortable and confident doing it," says Washington Human Centered Design and Engineering professor Julie Kientz. "We hope this acts as a gateway to encouraging people with visual impairments to try exercise on a broader scale."
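The feedback loop Rector describes, reading body angles and speaking corrections, can be sketched in a few lines: compute the angle at a joint from three tracked points, compare it to a target range for the pose, and return a spoken suggestion. The target range, joint choice, and wording below are illustrative assumptions, not the study's actual rule set.

```python
# Minimal sketch of an Eyes-Free-Yoga-style feedback step: joint positions
# (e.g. from a Kinect skeleton) -> body angle -> spoken correction.
import math


def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each (x, y, z)."""
    ba = [a[i] - b[i] for i in range(3)]
    bc = [c[i] - b[i] for i in range(3)]
    dot = sum(ba[i] * bc[i] for i in range(3))
    norm = math.sqrt(sum(v * v for v in ba)) * math.sqrt(sum(v * v for v in bc))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def arm_feedback(shoulder, elbow, wrist, target=(160.0, 180.0)):
    """Compare the elbow angle to an assumed target range and suggest a fix."""
    angle = joint_angle(shoulder, elbow, wrist)
    low, high = target
    if angle < low:
        return f"Straighten your arm a little more ({angle:.0f} degrees)."
    if angle > high:
        return "Relax your elbow slightly."
    return "Arm position looks good."


if __name__ == "__main__":
    # Elbow bent to roughly 120 degrees -> ask the user to straighten the arm.
    print(arm_feedback((0.0, 1.4, 0.0), (0.3, 1.4, 0.0), (0.45, 1.14, 0.0)))
```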


Smart Cities and Smart Buildings Make Smart People
London Telegraph (United Kingdom) (10/13/13) Monty Munford

Future cities and the buildings within them are likely to exhibit a psychogeographical aspect, in which the way people feel when they are close to or inside a building greatly influences their behavior. People currently commute at the busiest and most costly times of the day. However, this situation will eventually change for the better, while the pressure of crowded transit systems will be mitigated by integrated transport systems that will reshape cities. Meanwhile, machine-to-machine communication and the Internet of Things will revolutionize the office space, and for those who are not telecommuting, the rethought office may be even more efficient and leisurely than working from home. Avaya's Michael Bayer says smart buildings will emerge as the future workplace. "Imagine a workspace that's aware of you as an individual--whether you're an employee, partner, customer or supplier," Bayer says. "Imagine an office that recognizes who is entering the building, what physical access they require, what devices they have with them, and what information they might need." He notes a smart building also will know its occupants' preferences for light, temperature, and room type. "It will alert you when someone who might be useful to a project you're working on enters the building; and even automatically set up a meeting with that person," Bayer says.


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.



