Welcome to the June 2013 SIGCHI edition of ACM TechNews.


ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

HEADLINES AT A GLANCE


New Lab Aims to De-Stress Technology Use
Stanford Daily (05/28/13) Brittany Torrez

Stanford's Calming Technology Lab (CTL) aims to develop technologies that can help calm a user in terms of cognition, emotion, and physiology. “We’ve found that just using a computer activates the fight-or-flight response,” says CTL director Neema Moraveji. “So just by using the computer all day, it’s kind of tiring.” While Moraveji was working on his dissertation about human-computer interactions, he created a sensor called Breathwear that monitors breathing patterns and alerts users when stress occurs. Another CTL project, Mail0, monitors email-checking habits to ensure an empty inbox is reached by the end of each day, while Morphine Drip helps athletes manage pain and stress through SMS mechanisms. CTL researchers and designers have varied backgrounds, including computer science, mechanical engineering, psychology, and symbolic systems. “Technology’s not disappearing, so we need to do something else. Sometimes you have to create new technology,” says Moraveji, who believes product designers will increasingly focus on technology's emotional and cognitive impact. “Now there’s a value put in place on how that [technology] makes me feel, how that technology makes me perform better, love better, and build better friendships and a deeper life and more meaningful life,” Moraveji says.


How Password Strength Meters Can Improve Security
InformationWeek (05/20/13) Mathew J. Schwartz

Password strength meters that use colors to rate a proposed password's security level lead to stronger passwords when users are forced to revise existing passwords on important accounts, according to a study by researchers at the University of California, Berkeley, Microsoft Research, and the University of British Columbia. The researchers also determined that graphical design variations between different meter types are likely to have only a marginal impact on user adoption of such meters. The study found that password meters offer simple and instant visual feedback about whether a proposed password is sufficiently strong. "The original purpose of the experiment was to see whether meters based on social pressure would yield an improvement, since we didn't expect existing meters to be effective," says Berkeley researcher Serge Egelman. "We were surprised that one, meter design doesn't appear to matter much, and two, meters do work under certain circumstances." The researchers also found that the meters did not appear to cause memorability difficulties for users. However, in a second study in which users were asked to produce passwords for unimportant accounts, the researchers observed that the meters made no noticeable difference. Egelman suggests the meters should not be used for unimportant passwords, as "people have a finite amount of memory, which shouldn't be wasted protecting resources that are unimportant."
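The color-coded feedback such meters give can be illustrated with a toy entropy-based rating function. This is only a sketch for intuition: the thresholds and the entropy formula here are illustrative assumptions, not the meters or scoring rules the study actually tested.

```python
import math
import re

def estimate_strength(password: str) -> str:
    """Toy color-coded strength meter: estimate the character pool a
    password draws from, compute rough entropy in bits, and map the
    result to a traffic-light color. Illustrative only."""
    pool = 0
    if re.search(r"[a-z]", password):
        pool += 26   # lowercase letters
    if re.search(r"[A-Z]", password):
        pool += 26   # uppercase letters
    if re.search(r"[0-9]", password):
        pool += 10   # digits
    if re.search(r"[^a-zA-Z0-9]", password):
        pool += 33   # printable symbols
    bits = len(password) * math.log2(pool) if pool else 0.0
    if bits < 28:
        return "red"      # weak
    if bits < 60:
        return "yellow"   # fair
    return "green"        # strong
```

For example, a short lowercase password rates "red", while a long password mixing character classes rates "green"; real meters add dictionary checks and pattern penalties on top of length-based scoring.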


Augmenting Social Reality in the Workplace
Technology Review (05/15/13) Ben Waber

The field of augmented social reality delves into questions of whether we can alter physical reality by harnessing data about people, writes Sociometric Solutions CEO Ben Waber. "Augmented social reality is about systems that change reality to meet the social needs of a group," he says. One example Waber cites involved an augmented cubicle experiment to influence workplace social dynamics. The cubicle's blinds would go up or down at certain intervals depending on whether the experiment's controllers wanted different people to be communicating with each other. "The next challenge is to use what we learn from this behavioral data to influence or enhance how people work with each other," Waber says. His Massachusetts Institute of Technology Media Lab spinoff company uses sensor-equipped ID badges to measure workers' movements, their tone of voice, their office location, and whom they are talking to. Data collected by the sensors is used to advise firms on how to modify their organizations, and Waber proposes that some of these alterations could be made in real time in the future. Among the visions of augmented social reality via behavioral data is a system that suggests who should convene with whom in an organization. "We might be able to draw on sensor and digital communication data to compare actual communication patterns in the workplace with an organizational ideal, then prompt people to make introductions to bridge the gaps," Waber reasons.


Flirting With the Satnav
Inderscience Publishers (05/29/13)

Motorists respond emotionally to their automobile's sounds, especially the voice of their satellite navigation (satnav) system, according to British researchers in the International Journal of Vehicle Noise and Vibration. The researchers note the satnav system supplies realistic vocal utterances during driving, with most devices offering options for changing the voice's gender, accent, and so on. Engineers at the University of Nottingham's Human Factors Research Group enlisted 50 volunteers to test their preferences and responses to different satnav voices. The volunteers assessed a dozen satnav voices from the Garmin and TomTom systems, each vocalizing 36 messages, and completed a psychometric test categorizing the vocalizations by perceived clarity of speech, assertiveness, trustworthiness, annoyance level, how distracting they were, and whether or not the participants would select a specific voice for everyday use in their own satnav while driving. The researchers observed a strong positive correlation between the ratings for trustworthiness, assertiveness, and clarity of the voices and whether the volunteers would use that voice. The study also demonstrated that participants ascribed personality traits to the satnav voices even though the device is not an actual person and many of the voices were computer generated. The researchers say such effects could inform hardware, software, and interface design improvements, perhaps leading to better understanding and accessibility.


One Day Your Phone Will Know If You're Happy or Sad
Smithsonian.com (05/22/13) Randy Rieland

Technology that could make cellphones and other devices capable of reading the user's emotions to provoke an empathetic response is being driven by the discipline of affective computing. The field is centered on software that can quantify, interpret, and react to human feelings. For example, Affectiva has developed Affdex, software that records expressions and employs proprietary algorithms to analyze facial cues, tapping into a database of 300 million frames of human face elements. The software is often used to monitor people watching commercials to assess how they feel about what they have seen. One of the software's creators, Massachusetts Institute of Technology scientist Rana el Kaliouby, says the technology has potential applications outside of advertising. For example, smart TVs could be even more knowledgeable about viewers' favorite programs if they can develop a memory bank of facial expressions. Meanwhile, politicians could adjust their messages on the fly by receiving real-time reactions to each line they deliver. El Kaliouby also sees potential health applications, noting the possibility of reading a person's heart rate with a webcam by analyzing the subject's facial blood flow. “Imagine having a camera on all the time monitoring your heart rate, so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax,” she says.
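The webcam heart-rate idea el Kaliouby describes rests on the fact that facial blood flow causes a tiny periodic variation in skin color. A minimal sketch of that principle, assuming we already have per-frame mean green-channel values from a face region, is to find the dominant frequency in the plausible heart-rate band (Affdex's actual pipeline is proprietary; this naive discrete Fourier transform is an illustrative assumption):

```python
import math

def heart_rate_bpm(green_means, fps):
    """Estimate pulse from a sequence of per-frame mean green-channel
    values by locating the dominant frequency in the 0.7-4.0 Hz band
    (42-240 bpm) with a naive DFT. Illustrative sketch only."""
    n = len(green_means)
    mean = sum(green_means) / n
    x = [v - mean for v in green_means]  # remove the DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n                  # frequency of DFT bin k in Hz
        if not 0.7 <= f <= 4.0:
            continue                     # outside plausible heart rates
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im            # spectral power at bin k
        if p > best_p:
            best_f, best_p = f, p
    return best_f * 60.0                 # convert Hz to beats per minute
```

Fed a 10-second, 30-fps signal oscillating at 1.2 Hz, this returns roughly 72 bpm; production systems add motion compensation and signal-quality checks that this sketch omits.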


Brainpainting via Computer Frees Expression for the Paralyzed
PhysOrg.com (05/27/13) Nancy Owano

Paralyzed artist Heide Pfutzner creates abstract art via a brain-computer interface (BCI) that translates her thoughts into images. Her brainwaves are rendered as instructions for which colors, shapes, and brushes will be used through a controlled computer system. Researchers have demonstrated that users with impaired motor skills, such as patients suffering from Amyotrophic Lateral Sclerosis (ALS), can employ the event-related potentials-based P300-BCI to communicate. Users are shown a matrix of letters and numbers that flash consecutively, and the act of focusing on the intended letter or number elicits a notable positive deflection, the P300, in the user's electroencephalogram (EEG). Detection of the P300 in the event-related EEG enables the system to recognize the letter or number the user intends to spell. Researchers also have adapted the P300 to a brain-painting application that uses brain activity to paint pictures. A P300 spelling app was used as the basis of a brain-painting app designed by artist Adi Hosle in collaboration with the University of Tübingen's Institute of Medical Psychology and Behavioral Neurobiology. The system uses the cells of a 6x8 matrix containing symbols that stand for color, objects, object size, transparency, and cursor movement. Researchers note that ALS patients may find that BCIs give them an opportunity to express themselves.
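The selection step of a P300 matrix speller can be sketched in miniature: once each row and column has flashed and the post-flash EEG epochs have been averaged, the attended cell sits at the intersection of the row and column with the strongest P300 response. The scores below stand in for those averaged amplitudes; real systems obtain them from trained classifiers on the raw signal, which this sketch does not attempt.

```python
# A 6x6 letter/number matrix like those used in P300 spellers.
# Rows and columns flash consecutively; the flash containing the
# attended symbol evokes the largest averaged P300 amplitude.
MATRIX = [
    "ABCDEF", "GHIJKL", "MNOPQR",
    "STUVWX", "YZ1234", "56789_",
]

def decode_p300(row_scores, col_scores):
    """Return the symbol at the intersection of the row and column
    with the highest averaged post-flash response score."""
    r = max(range(len(row_scores)), key=row_scores.__getitem__)
    c = max(range(len(col_scores)), key=col_scores.__getitem__)
    return MATRIX[r][c]
```

The brain-painting variant works the same way, except the matrix cells hold painting commands (color, shape, cursor movement) instead of letters.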


Hands Up! Do You Speak Digital Body Language?
New Scientist (05/25/13) MacGregor Campbell

This year, everyday computers will be capable of understanding gestures with unprecedented accuracy, and the subsequent change in human behavior this facilitates will be fed into the technology's maturation. The catalyst for this expected trend is the July launch of a Leap Motion product that can plug into most computers and give them the ability to track extremely fine hand and finger movements. Meanwhile, Leap Motion's Airspace app store will offer a slate of gesture-controlled software when it launches concurrently. "There is no question that a gesture vocabulary of sorts will enter further into the psyche of today's and future computer users," says University of Washington in Seattle researcher Jacob Wobbrock. Designer Jamie Zigelbaum notes it is an open challenge to design an easy-to-remember gestural vocabulary that can enable complex interaction. Areas of vocabulary development being researched include a gesture catalog of 20 commands devised by Zigelbaum and colleagues at the Massachusetts Institute of Technology Media Lab. Another project seeks to train people in gesture commands as they go, for example by using a projector to beam visual instructions onto a person's body. Another factor that will influence gestural vocabularies is the need for the gestures to be repeatable and performable over long stretches of time. It is anticipated that gesture commands will not render the keyboard or mouse obsolete, but rather become another tool for human-computer communication.


The Age of Smart Machines
Economist (05/25/13)

Smart machines are poised to revolutionize society by taking over many tasks currently performed by humans, but analysts say the technology must be managed wisely if it is to bring more benefit than harm. A new study from the McKinsey Global Institute (MGI), “Disruptive Technologies: Advances That Will Transform Life, Business, and the Global Economy,” says the rate of progress will rise dramatically due to Moore’s law and the confluence of machine learning, voice recognition, and nanotechnology. Jobs that once required human intelligence will be performed by small computers, and consumers with at least moderate income will have access to wearable physicians and electronic personal assistants capable of tasks such as booking flights. As knowledge workers spend less time on routine tasks, they will have more time to innovate and increase productivity, according to MGI. Cloud computing will bring small businesses many of the advantages of their larger counterparts, such as access to greater processing power and storage, and the Internet will eliminate distance as an obstacle. Consumers have reaped two-thirds of the benefits brought about by the Internet, MGI estimates. However, the study echoes the concerns of other research warning that modern technologies will increase inequality and social exclusion, thus provoking a backlash and requiring policymakers to govern technological advances with great prudence.


Q&A: Leila Takayama, Research Scientist, on Human-Robot Interaction
SmartPlanet (05/20/13) Christina Hernandez

Willow Garage research scientist Leila Takayama studies human-robot interaction to make robots more acceptable and likable in human environments. "I care about designing technologies that leverage what we know about people, so [the technologies are] designed to respect people and our capabilities," Takayama says. "Robots aren’t designed at all to be human-friendly right now." Through social collaboration, Web technologies have become more user-friendly, but robotics has not benefited from such collaboration, she says. Robots that annoy or frustrate users are ultimately not used, so their value hinges on being useful and friendly to people, Takayama notes. Interaction designers and character animators can make robots more predictable for humans, which in turn makes them safer and more user-friendly. To avoid a backlash in the near future, robots should not be designed to look like people, but rather should appear as a friendly entity such as a dog, which eliminates the expectation for the robot to have human-like intelligence, she says. Takayama and her colleagues have created prototypes of remote presence devices that she describes as "Skype on wheels," and observed people interacting with the technology. This information enables them to make suggestions to engineers on how to make the robots more effective, for example, by making the robots taller so that they are perceived as more persuasive.


'Makers' 3-D Print Shapes Created Using New Design Tool, Bare Hands
Purdue University News (05/14/13) Emil Venere

Purdue University researchers have created Shape-It-Up, a design tool that enables users to sculpt three-dimensional shapes using hand gestures in a virtual workspace. The shapes can then be produced with a 3D printer. "You create and modify shapes using hand gestures alone, no mouse or keyboard," says Purdue professor Karthik Ramani. "By bringing hands into the virtual space with a single depth camera, we are able to manipulate the 3D artifacts as if they actually exist." The system employs the Microsoft Kinect camera, which can sense 3D space, while algorithms identify hand gestures, understand that the hand is interacting with the shape, and then change the shape in response. Ramani says Shape-It-Up, developed with funding from the U.S. National Science Foundation, is intended to transcend the limitations of current computer-aided design tools so the designer plays an essential role in the shape-modeling process. "We conclusively demonstrate the modeling of a wide variety of asymmetric 3D shapes within a few seconds," Ramani notes. "One can bend and deform them in various ways to explore new shapes by natural interactions. The effect is immediate." Shape-It-Up also goes beyond the restrictions of an earlier version that could only create rotationally symmetric objects that have the same measurements on all sides.


After the Breakup: What to Do With Digital Possessions
Santa Cruz Sentinel (CA) (05/11/13)

Following romantic breakups, it has become harder for former couples to eliminate virtual reminders of their relationships, such as photos, music, and messages preserved on social media. University of California, Santa Cruz professor Steve Whittaker and Lancaster University researcher Corina Sas have written a paper focusing on the challenge of disposing of such digital keepsakes. They propose that the pervasiveness of digital possessions "creates problems during a breakup, as people 'inhabit' their digital space where photos and music constantly remind them about their prior relationship." Whittaker and Sas interviewed 24 people, aged 19 to 34, and found that the subjects followed distinctive disposal strategies. Twelve of the interviewees deleted the digital possessions, eight kept them, and four disposed of them selectively. The researchers determined that some individuals, usually those who were dumped, may wish to forget but are "extremely resistant to actual deletion." Others later harbor regrets about deleting all keepsakes. Whittaker and Sas note disposal is all the more difficult now because "digital possessions are in vast collections spread across multiple devices, applications, Web services, and platforms." The researchers suggest that software solutions might help cleanse cyberspace of painful memories; for example, automatic harvesting that utilizes facial recognition, machine learning, or entity extraction. They also propose a Pandora's Box that could automatically collect all digital artifacts of a relationship and place them in an area for strategic deletion or retention later on.


UI Grad Student's Video System Merges Virtual, Reality
Champaign News-Gazette (IL) (05/09/13) Christine Des Garennes

University of Illinois at Urbana-Champaign graduate student Brett Jones worked with a team at Microsoft Research to create the IllumiRoom augmented reality video system, which blends virtual and real-world elements. "It takes the game out of the TV and into your living room," says Jones, who recently presented a paper with his team at the ACM SIGCHI conference in Paris, where it won a best paper award. IllumiRoom, created in a three-month time frame using a standard projector and a Kinect sensor, recognizes color, distance, and shapes based on a color picture and a three-dimensional scan of the room. The technology can make a video game or TV show appear to expand beyond the limits of a TV screen and into the room. In addition, IllumiRoom can make objects in the room appear cartoon-like and produce a "radial wobble" that makes the room appear to shake after a gamer shoots a gun, for example. Jones says they have "just kind of scratched the surface" of what the technology could do, and he and his colleagues intend to work with game designers and cinematographers to develop it further.


Abstract News © Copyright 2013 INFORMATION, INC.



