ACM TechNews SIGCHI Edition
Welcome to the April 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Democratizing Data Visualization
MIT News (03/26/14) Larry Hardesty

The Exhibit Web development tool set, released by the Massachusetts Institute of Technology's (MIT) Haystack Group in 2007, lets neophytes quickly build interactive data visualizations, and in April, at the ACM Conference on Human Factors in Computing Systems in Toronto, Haystack members will present a study of how Exhibit has been used. The study demonstrates how websites could better determine the effectiveness of their published visualizations, with additional implications for the design of data-visualization tools, data-management software, and Web-authoring software. MIT researcher Ted Benson and professor David Karger say such insights could help them design more engaging data displays and perhaps extract new, previously overlooked meaning from the data. Through an analysis of nearly 2,000 Exhibit-built Web pages, the automatically generated access logs of the 100 most popular Exhibit sites, mouse clicks performed by site visitors, and other variables, Benson and Karger found evidence that novices are rapidly building their own pages by cutting and pasting other people's code. Exhibit's status as a declarative rather than an imperative programming language promotes ease of use, Karger contends. Meanwhile, a single Exhibit page can feature different visualizations of the same information. The researchers also found that although most developers used spreadsheets to generate their data, their visualizations tended to leverage more complex data relationships than spreadsheets are designed to accommodate.


UW Student Researches Ways to Make Robots More Human
Badger Herald (03/24/14) Rachael Lallensack

Sean Andrist, a graduate researcher at the University of Wisconsin, is investigating how robots and digitally constructed virtual agents can improve their interactions with human beings by studying the ways people maintain and break eye contact. Andrist is specifically interested in gaze aversion, the ways in which people will break eye contact during a conversation. Andrist researches what such behaviors signal to others and how they can be incorporated into robot and virtual-agent behaviors. He says gaze aversion communicates intention and engagement during a conversation and also promotes conversational intimacy. Andrist's recent paper on gaze aversion examined when, how often, and for how long people avert their gaze during conversation, and won recognition at the recent International Conference on Human-Robot Interaction in Germany. Andrist says a better understanding of gaze aversion in human beings will help roboticists and others bridge the uncanny valley by imbuing their creations with more genuine and human-like behaviors. Doing so also will make it easier for people to accept and freely interact with robots and virtual agents, which Andrist says will be very valuable in areas such as education, elderly care and assistance, and therapeutics.


The Real Sim City: How Over 15,000 Sensors Made Santander Smart
Telecoms Tech (03/26/14) Francisco Jariego

Santander, Spain, is serving as a laboratory for the Smart Santander experiment led by Telefonica, which installed more than 15,000 sensors throughout the city over four phases. Each phase of the smart city project used different types of sensors according to the services being envisioned, with a large percentage concealed in white boxes and affixed to street infrastructure such as street lamps, buildings, and utility poles, while others were buried in the pavement. Some sensors were mobile and installed in Santander's public transport network, including buses, taxis, and police cars. Even city residents became moving sensors, using an app downloaded to their smartphones. The sensors quantify factors such as humidity, light, pressure, air quality, and temperature, while vehicles transmit their locations in real time. The Telefonica M2M service platform enables the sensor network to broadcast data back to the project hub as frequently as every two minutes. At the hub, Telefonica's Smart Business Control platform extracts intelligence, enabling real-time data analysis and observation by City Council employees. The City Council can view a snapshot of the entire sensor network at any time, and the system enables a wide variety of services, including air quality measurements, remote dimming of street lamps, and optimized park irrigation systems.
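The reporting pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not Telefonica's actual platform code: the `SensorReading` fields and the `should_report` helper are invented names, with only the measured quantities and the two-minute reporting interval taken from the article.

```python
import json
import time
from dataclasses import dataclass, asdict

# Illustrative payload for a fixed environmental sensor; field names
# are invented, only the measured quantities come from the article.
@dataclass
class SensorReading:
    sensor_id: str
    humidity: float      # percent relative humidity
    light: float         # lux
    pressure: float      # hPa
    air_quality: float   # pollutant index (units unspecified)
    temperature: float   # degrees Celsius
    timestamp: float     # Unix epoch seconds

REPORT_INTERVAL_S = 120  # "as frequently as every two minutes"

def should_report(last_report: float, now: float) -> bool:
    """True once the two-minute reporting window has elapsed."""
    return now - last_report >= REPORT_INTERVAL_S

def encode(reading: SensorReading) -> str:
    """Serialize a reading for transmission to the project hub."""
    return json.dumps(asdict(reading))

reading = SensorReading("lamp-042", 55.0, 820.0, 1013.2, 12.0, 18.5, time.time())
payload = encode(reading)
```

A real deployment would push these payloads over a machine-to-machine network rather than hold them in memory, but the batching decision itself is this simple.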


Face It: Instagram Pictures With Faces Are More Popular
Georgia Tech News Center (03/20/14)

A Georgia Institute of Technology/Yahoo Labs study of 1.1 million photos on Instagram determined that images with human faces are 38 percent more likely to receive likes than photos without faces, and are 32 percent more likely to draw comments. The age of the subjects whose faces are featured in the images is irrelevant to the pictures' popularity, as are the gender and the number of faces in the photos, according to the researchers. Generally, photos of children or adolescents are not any more popular than those of adults, while the odds of men and women getting likes or comments are the same. However, Georgia Tech student and study leader Saeideh Bakhshi found that posting too frequently leads to a decline in feedback. “Posting too much decreases likes two times faster than comments,” she notes. In addition, Bakhshi says more photos uploaded by users translate into a lower likelihood that any single image will receive likes or comments. The study used face-scanning software to analyze how people react to photos with faces, without actually determining underlying reasons for such behavior. Still, this knowledge could have practical ramifications, with project adviser Eric Gilbert noting social-media sites could boost their search ranking and keep consumers onsite and active with the inclusion of human faces in their online content. The researchers' work will be presented at the upcoming ACM CHI Conference on Human Factors in Computing Systems in Toronto.
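The headline statistic, a 38 percent lift for photos with faces, is at heart a ratio of engagement between the two groups of photos. The toy calculation below (illustrative only; `engagement_lift` and the sample records are invented, not the researchers' data or code) shows how such a lift can be computed:

```python
# Relative lift: ratio of mean likes between photos with and
# without faces, minus one. Sample numbers are made up.
def engagement_lift(photos):
    """photos: dicts with 'has_face' (bool) and 'likes' (int)."""
    with_face = [p["likes"] for p in photos if p["has_face"]]
    without_face = [p["likes"] for p in photos if not p["has_face"]]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(with_face) / mean(without_face) - 1.0

sample = [
    {"has_face": True, "likes": 138},
    {"has_face": True, "likes": 138},
    {"has_face": False, "likes": 100},
    {"has_face": False, "likes": 100},
]
lift = engagement_lift(sample)  # about 0.38, i.e. a 38 percent lift
```

The actual study controlled for confounds such as poster activity and follower count, which a raw ratio like this does not.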


Computers That Know What You Need, Before You Ask
NPR Online (03/17/14) Elise Hu

Anticipatory computing systems are emerging that learn to predict what users need before they ask. The technology will change human-computer interaction by shifting from tapping or typing on devices to using predictive voice control. Smart virtual assistants will grow more necessary as apps and digital functions proliferate, requiring tapping or typing for each function, says GigaOm founder Om Malik. "As we become more digital, as we use more things in the digital realm, we just need time to manage all that. And it is not feasible with the current manual processes," Malik says. "So the machines will learn our behavior, how we do certain things, and start anticipating our needs." Anticipatory devices can, for example, automatically obtain flight-delay or gate information for travelers, or sync calendar data with maps to direct users to appointments. Sensors in devices are increasing, providing input that intelligent assistants can use to predict user needs without requiring typed searches, says Expect Labs founder Tim Tuttle, whose company created the MindMeld predictive computing app. Tuttle notes predictive computing improves as it gets to know a user's needs and habits. However, he cautions the transition to anticipatory computing will take time. "There's still this mismatch of expectations that people have," Tuttle says. "They expect the Star Trek computer on day one. We may not be quite there yet, but the era of magical computing is beginning."


How Creative Can Computers Be?
Fast Company (03/14/14) Leah Hunter

Researchers are exploring computational creativity with new projects that test the ability of computers to create novel works, which could redefine human-computer interaction. IBM, for example, is using its Watson supercomputer to develop a cognitive cooking system, which it demonstrated at the recent South by Southwest (SXSW) conference in Austin, Texas. IBM created a research app that enables the public to talk with Watson, program it to create recipes, and share recipes they created. IBM research scientist Florian Pinel and his colleagues created the system by parsing repositories of food knowledge including Wikipedia, wikia.com, and New York's Institute of Culinary Education's recipe database. "We added data about the food at a molecular level, information about the flavor compounds, all these ingredients, information about people's likes and dislikes," Pinel says. For Watson's creative efforts to succeed, human participation is necessary in the form of front-end programming, evaluating options, and interpreting final recipes. In a similar project, researchers at the Stanford University Center for Computer Research in Music and Acoustics (CCRMA) are using algorithms and human-machine partnerships to create new types of sounds and tools for creating them. CCRMA researchers write computer programs that enable computers to compose music and to translate non-musical data into music.


Argentinian Innovation to Ease Computing for Blind People
SciDev.net (03/24/14) Martín De Ambrosio

Students at Argentina's National Technological University are developing a tablet-like computing interface for vision-impaired people that depicts information from the computer screen as raised points and enables users to control the computer. The Incendilumen prototype features a surface covered with 2,000 tiny rods that rise and fall to represent information such as buttons, windows, and graphics. "Our interface translates data so blind people can feel it by touching what appears on the screen, whether icons or words in the Braille system," says National Technological University professor Leonardo Hoet. Incendilumen's development team intends to have a final prototype ready in a few months, and launch it in about two years as a joint venture with a Buenos Aires-based technology company. "When it is ready it will be fabulous because, with it, blind people can feel what appears on the screen and interact with it," says Association for People with Visual Disabilities in Argentina president Leandro Sereno. "For instance, we can use different windows, and icons to open them or close them, something currently impossible for us." Among the challenges the device faces outside the laboratory are reducing its costs and facilitating mass production, according to Buenos Aires University researcher Ariel Lutenberg.


How to Build a Human Voice
Smithsonian.com (03/25/14) Randy Rieland

Northeastern University Center for Speech Science and Technology director Rupal Patel and researcher Tim Bunnell have been developing a method for building custom-made computerized voices that use whatever sounds a person can produce. By concentrating on the pitch and volume of those sounds and also on how the person may pronounce certain letters, the researchers aim to understand a voice's identity as much as possible, and then reconstruct it by mining recorded sounds from a donor of a similar gender, age, size, and geographical background. Specially designed software uses the recordings to produce words in a reverse-engineered voice that is close to what a person might sound like if he or she were not afflicted by a speech disorder. Patel acknowledged in a recent TED talk that many recordings by donors are necessary for the project to reach fruition, and to that end she is moving forward with the Human Voicebank Initiative. Although just a handful of voices have been generated during the project's infancy, Patel says more than 10,000 people already have volunteered as donors, while several hundred others have enrolled to get new voices. She hopes to have collected 1 million distinctive voice samples by 2020.


The Robot Tricks to Bridge the Uncanny Valley
New Scientist (03/22/14) Vol. 221, No. 2961, P. 21; Paul Marks

Incorporating behavioral tics into robots could make humans more accepting and bridge the uncanny valley, and Plymouth University researcher Robin Read and colleague Tony Belpaeme tested this hypothesis by incorporating sounds into robots that might provoke an emotional response from people. They imbued a robot with a positive chirpy sound and a sad whine with which to respond to various actions, such as being kissed, slapped, stroked, or having its eyes covered. A survey found that people responded similarly to each sound, but they became more engaged when the robot made sounds than when it did not. "It is enough to choose or generate a random sound" to alert someone that something important is going on, Read says. "It seems to be an easy way to provide rich expression for robots." Among the tricks that a research team from the University of Wisconsin-Madison uses to make a humanoid robot seem alive are the introduction of random, twitching movements into a robot's head rotation motor, and programming the robot to avert its gaze from time to time when it appears to be considering the answer to a query. Meanwhile, University of British Columbia researcher Ajung Moon learned that people's comfort level with a robot handing them an object greatly depends on the robot first locking gazes with them and then looking to the point in space where it intends to make the exchange.
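The gaze-aversion trick described above can be driven by something as simple as a randomized timer. The following is a minimal sketch under that assumption; the function name, interval lengths, and jitter values are invented for illustration, not taken from any of the cited research:

```python
import random

# Schedule occasional, irregularly spaced gaze aversions so the robot
# does not look away at a fixed, mechanical beat.
def gaze_aversion_schedule(duration_s, gap_s=5.0, avert_len_s=1.5, seed=0):
    """Return (start, end) intervals during which the robot looks away."""
    rng = random.Random(seed)
    intervals = []
    t = rng.uniform(0.0, gap_s)  # randomize the first aversion
    while t < duration_s:
        end = min(t + avert_len_s, duration_s)
        intervals.append((t, end))
        t = end + rng.uniform(0.5 * gap_s, 1.5 * gap_s)  # jittered gap
    return intervals

plan = gaze_aversion_schedule(30.0)  # aversion plan for a 30-second exchange
```

The jitter is the point: Read's finding suggests that the mere presence of such quasi-random cues, rather than their precise content, is what reads as lifelike.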


Augmented Reality Is About to Turn Football Into a Real-Life Videogame
Wired News (03/20/14) Marcus Wohlsen

Augmented reality facilitated by wearable technology will transform football as well as the business of sports, giving the teams with the best information technology a competitive edge, according to former Minnesota Vikings punter Chris Kluwe. "Augmented reality will be a part of sports because it's too profitable not to be," he says. Kluwe notes head-up displays and virtual reality can be used as tools for establishing empathy, enabling wearers to gain perspective on other people's lives by literally seeing the world through another person's eyes. For example, GoPro-style cameras already can put football fans in the helmets of players, and Kluwe expects the fan experience to become even more enhanced with the Oculus Rift virtual reality technology. The next big advance he foresees is when coaches and managers see the kinds of information that helmet cameras provide for fans and want that information for themselves. Kluwe says the value of wearables will greatly transcend projecting plays across players' visors, and he envisions the quarterback getting a heads-up signal of an open receiver, to name just one application. Kluwe also anticipates a combined system of wearables, bird's-eye-view cameras, helmet sensors, and accelerometers making it possible for a receiver to perceive the likely area where a wide throw will end up so he can make adjustments to ensure a catch.


UC-Merced, UC-Davis Collaborate on New Virtual Physical Therapy Software
California Healthline (03/27/14) Alice Daniel

University of California, Merced professor Marcelo Kallmann is developing virtual physical therapy software in collaboration with University of California, Davis researcher Jay Han. So far the collaborators have devised a low-cost prototype that uses Microsoft's Kinect gesture-controlled gaming system. Kallmann says the process starts with a virtual human figure displayed before the patient or user, who mimics the exercises the virtual character performs. An avatar representing the user also appears onscreen so the patient can see him or herself perform the exercises. The software features a menu of exercises the therapist can give the patient, but the program also can be tailored by adding exercises specific to a particular patient's requirements. Kallmann currently is determining what parameters a therapist might want to control, while still maximizing the system's simplicity. UC-Davis Medical Center's Linda Johnson says patients are more likely to do exercises correctly with a virtual system than with a printed handout. "Physical therapy is unique because what we do is hands on, but this is an interesting way to meld technology with therapy in general, and it may allow us to extend the time between office visits rather than just seeing them automatically," Johnson says.
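One way such a system might judge whether the patient is mimicking the on-screen character is to compare joint angles computed from the skeleton points a Kinect-style sensor tracks. The sketch below is a hypothetical illustration under that assumption; it is not the UC-Merced software, and `joint_angle`, `within_tolerance`, and the tolerance value are made up:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3-D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

def within_tolerance(angle, target, tol=15.0):
    """True when the measured angle is close enough to the exercise target."""
    return abs(angle - target) <= tol

# A fully extended arm: shoulder, elbow, and wrist are collinear,
# so the elbow angle is 180 degrees.
angle = joint_angle((0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.6, 0.0, 0.0))
```

The per-exercise parameters Kallmann mentions would map naturally onto the target angle and tolerance, letting a therapist loosen or tighten what counts as a correct repetition.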


Study Suggests That You Will Obey Your Future Robot Boss
IEEE Spectrum (03/17/14) Evan Ackerman

A study from the University of Manitoba's Human-Computer Interaction Lab indicates workers in the future will obey robot supervisors similarly to how they currently obey a human supervisor. The researchers conducted an experiment that asked participants to perform a dull task and observed responses using both a human and a humanoid robot as authority figures. The small, child-like humanoid robot had sufficient authority to persuade 46 percent of participants to rename files for 80 minutes, even after they signaled their desire to quit. Many participants obeyed the robot despite attempting to avoid the task or arguing with the robot. "These findings highlight that robots can indeed pressure people to do things they would rather not do, supporting the need for ongoing research into obedience to robotic authorities," the researchers write. "We further provide insight into some of the interaction dynamics between people and robotic authorities; for example, that people may assume a robot to be malfunctioning when they are asked to do something unusual, or that there may be a deflection of the authority role from the robot to a person." The researchers note many more participants (86 percent) obeyed the human authority, but the study shows people do not automatically dismiss robots as authority figures.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.

