Welcome to the September 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Explore Your Own Data for Self-Discovery and Action
The Daily (University of Washington) (08/13/14) Yebin Zhou

Research presented by scientists from the University of Washington's (UW) DUB Group at ACM's Designing Interactive Systems conference and the Human Computer Interaction Consortium's 2014 workshop in June examined how the presentation of life-logging data affects the value people see in it. The research, funded by the Intel Science and Technology Center for Pervasive Computing and the U.S. National Science Foundation, involved participants who used the Moves activity diary app to monitor their daily activity over a month. The researchers accessed this data and employed other services to add related data, including weather information and the genre or category of each place visited. During the data visualization stage, some researchers learned to use Data-Driven Documents (D3), a JavaScript-based toolkit for interactive data visualization, and experimented with different visual presentations. The research determined that the value participants place on life-logging data is significantly influenced by the manner in which lifelogs are presented, with the visual representation shaping how empowered participants felt to become more active. "One example was when a participant noticed that he was most active on Tuesdays and learned that he could apply some of his daily Tuesday routine to other days of the week," says UW doctoral student Daniel Epstein. Participants also noted they liked to share interesting parts of their daily data with friends and family via social media platforms.
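
The UW team's interactive views were built with D3, which is JavaScript-based; as a rough, language-neutral illustration of the kind of weekday roll-up behind the Tuesday insight Epstein describes, the following Python sketch averages a month of hypothetical daily activity totals by day of week (the file name and column names are assumptions, not the study's data):

    # A minimal sketch (not the UW team's code, which used D3) of the
    # weekday aggregation that could surface a "most active on Tuesdays"
    # pattern. The CSV layout below is hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical Moves-style export: one row per day of the month.
    log = pd.read_csv("activity_log.csv", parse_dates=["date"])  # columns: date, active_minutes

    # Average the daily totals by day of week.
    by_weekday = (log.groupby(log["date"].dt.day_name())["active_minutes"]
                     .mean()
                     .reindex(["Monday", "Tuesday", "Wednesday", "Thursday",
                               "Friday", "Saturday", "Sunday"]))

    by_weekday.plot(kind="bar")
    plt.ylabel("Average active minutes")
    plt.title("Activity by day of week")
    plt.tight_layout()
    plt.show()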


Want a Happy Worker? Let Robots Take Control.
MIT News (08/21/14) Adam Conner-Simons

Allowing robots to assume control over human tasks in manufacturing boosts efficiency and is preferred by workers, according to new research from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory. "In our research we were seeking to find that sweet spot for ensuring that the human workforce is both satisfied and productive," says project leader Matthew Gombolay. "We discovered that the answer is to actually give machines more autonomy, if it helps people to work together more fluently with robot teammates." The study examined groups of two humans and one robot working together under manual, fully autonomous, or semi-autonomous conditions; the fully autonomous condition was not only the most effective for the task at hand, but also the one human workers preferred. Workers in that condition were more likely to say the robots "better understood them" and "improved the efficiency of the team." Gombolay stresses that giving robots control entails the delegation, scheduling, and coordination of tasks by a human-generated algorithm that also can perform on-the-fly replanning. Gombolay believes similar algorithms could one day find application in human-human collaboration, search-and-rescue drones, and one-on-one human-robot collaboration in which the robot assists a person with discrete building and construction tasks.
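
The summary does not spell out the MIT scheduling algorithm itself, so the sketch below is only a minimal illustration of the core idea of autonomous task delegation: greedily assigning each task to whichever teammate, human or robot, frees up first. The task names, durations, and team makeup are hypothetical, and the real system also supports on-the-fly replanning:

    # A minimal sketch of autonomous task delegation: hand each task to
    # whichever teammate becomes free first. This is NOT the MIT
    # algorithm; tasks, durations, and workers here are hypothetical.
    import heapq

    def delegate(tasks, workers):
        """tasks: list of (name, duration); workers: list of names."""
        free_at = [(0.0, w) for w in workers]   # (time worker is free, worker)
        heapq.heapify(free_at)
        schedule = []
        for name, duration in tasks:
            t, worker = heapq.heappop(free_at)  # earliest-available teammate
            schedule.append((worker, name, t, t + duration))
            heapq.heappush(free_at, (t + duration, worker))
        return schedule

    for row in delegate([("fetch", 3), ("drill", 5), ("inspect", 2)],
                        ["human_1", "human_2", "robot"]):
        print(row)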


A Headset Meant to Make Augmented Reality Less of a Gimmick
Technology Review (08/26/14) Simon Parkin

Boosting augmented reality's immersive experience is the purpose of a compact, lightweight headset designed by University of North Carolina at Chapel Hill Ph.D. student Andrew Maimone. He says the goal of the technology is to have the experience occur within the user's own field of vision, without having the hardware add bulk. The Pinlight Display replaces conventional optical components with an array of bright dots, or pinlights. "A transparent display panel is placed between the pinlights and the eye to modulate the light and form the perceived image," Maimone notes. "Since the light rays that hit each display pixel come from the same direction, they appear in focus without the use of lenses." The optics cause small image fragments to appear flipped and superimposed, and software-driven image manipulation compensates for this effect. Maimone says the headset dispenses with reflective, refractive, or diffractive elements, "so we do not run into the trade-off between form factor and field of view that has been encountered in past glasses designs." He says Pinlight prototypes have a field of view of 100 degrees or more, versus the 40 degrees or less of cutting-edge commercial counterparts. "Since part of the image formation process takes place in software, we can adjust parameters such as eye separation and focus dynamically," Maimone notes. "[Therefore] we can imagine incorporating the pinlights into the corrective lenses of ordinary glasses."
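
As a toy illustration of the software-side compensation Maimone describes, the sketch below pre-flips each small tile of the target image so that optics which flip every fragment would present it upright. The tile size is arbitrary and the real pinlight rendering pipeline is considerably more involved:

    # A toy sketch of the compensation idea only: if the optics show
    # each small image tile flipped, the renderer can pre-flip every
    # tile so the perceived image comes out upright. Tile size is
    # arbitrary; the actual pinlight pipeline is far more involved.
    import numpy as np

    def preflip_tiles(image, tile):
        out = np.empty_like(image)
        h, w = image.shape[:2]
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                block = image[y:y + tile, x:x + tile]
                out[y:y + tile, x:x + tile] = block[::-1, ::-1]  # 180-degree flip
        return out

    frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
    print(preflip_tiles(frame, 4))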


Does Your Computer Know How You're Feeling?
Scientific Computing (08/25/14)

New software developed by researchers in Bangladesh combines keystroke dynamics and text-pattern analysis to identify users' emotional states with up to 87 percent accuracy, according to a study published in Behaviour & Information Technology. Study participants were asked to note how they felt after typing passages of fixed text, as well as at regular intervals during their usual computer use, to supply the researchers with data about keystroke attributes associated with seven emotional states. The researchers used a standard database of words and sentences associated with those states to help them analyze sample texts. They found that combining the two techniques outperformed either one alone, improving performance for five of the seven categories of emotion. "Computer systems that can detect user emotion can do a lot better than the present systems in gaming, online teaching, text processing, video and image processing, user authentication, and so many other areas where user emotional state is crucial," the researchers note. They believe their work represents a key step in the creation of emotionally intelligent systems capable of recognizing users' emotional states so they can adapt their music, graphics, content, or approach to learning.
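
The summary does not detail the study's features or classifier, so the following sketch is only a hedged illustration of the fusion idea: it computes two standard keystroke-dynamics features (dwell time, how long a key is held; flight time, the gap between keys) and averages a typing-based signal with a lexicon-based text score. The lexicon, the typing-to-emotion rule, and the equal weighting are all hypothetical:

    # A minimal sketch of fusing keystroke dynamics with text analysis.
    # Dwell and flight times are standard keystroke features; the tiny
    # lexicon and equal-weight fusion below are hypothetical, not the
    # study's actual method.

    def keystroke_features(events):
        """events: list of (key, press_time, release_time), in seconds."""
        dwells = [r - p for _, p, r in events]
        flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
        return {"mean_dwell": sum(dwells) / len(dwells),
                "mean_flight": sum(flights) / max(len(flights), 1)}

    LEXICON = {"joy": {"great", "happy", "love"}, "anger": {"hate", "awful"}}

    def text_scores(text):
        words = set(text.lower().split())
        return {emo: len(words & vocab) / max(len(words), 1)
                for emo, vocab in LEXICON.items()}

    def combined_score(events, text, emotion="joy"):
        ks = keystroke_features(events)
        # Hypothetical rule: fast, fluent typing nudges the "joy" score up.
        typing_signal = 1.0 / (1.0 + ks["mean_flight"])
        return 0.5 * typing_signal + 0.5 * text_scores(text).get(emotion, 0.0)

    events = [("h", 0.00, 0.08), ("i", 0.25, 0.31)]
    print(combined_score(events, "happy to be here", "joy"))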


New Ring-Like Device Reads Text for the Blind
The Bemidji Pioneer (08/25/14)

Singapore University of Technology and Design researcher Suranga Nanayakkara says he co-developed the "finger reader" with the Massachusetts Institute of Technology Media Lab's Roy Shilkrot to address a gap in assistive technology for the visually impaired. "There are so many smartphone apps out there that claim to be accessible and claim to be more user-friendly for people who can't see, but the reality is that if you observe a blind user trying to use a smartphone they have to go through tedious steps in getting something done," Nanayakkara notes. The finger reader is a ring-like device affixed to the user's finger, with a camera positioned at the top of the device to scan text. As the user's finger moves across the text, a program identifies what the user is pointing to and converts it into audio that is spoken aloud by a synthesized voice. Shilkrot says that as it reads, the device produces vibrational or audio cues to help the user stay on the line of text. The developers are still testing the finger reader with various subjects, and they expect the device to be refined over the next two years.
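
The prototype's actual software stack is not described, but the scan-recognize-speak loop might be approximated with off-the-shelf components, as in this minimal sketch using OpenCV for camera capture, Tesseract for optical character recognition, and pyttsx3 for speech output. The choice of libraries is an assumption for illustration, and the real device additionally tracks the fingertip and cues the user to stay on the line:

    # A minimal sketch of a scan-recognize-speak loop using off-the-shelf
    # libraries; an assumption for illustration, not the prototype's
    # actual stack.
    import cv2
    import pytesseract
    import pyttsx3

    engine = pyttsx3.init()
    camera = cv2.VideoCapture(0)

    ok, frame = camera.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # simplify for OCR
        text = pytesseract.image_to_string(gray)        # recognize text in frame
        if text.strip():
            engine.say(text)                            # speak it aloud
            engine.runAndWait()
    camera.release()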


Developing Humanity Amid Advancing Technology
Tech Page One (08/22/14) Gail Dutton

Ubiquitous computers featuring direct, voice-enabled interaction with the Internet represent the future of computing, according to Unified Computer Intelligence CEO Leor Grebler. He projects a surge of vocal interfaces for computing and the Internet of Things as computers become increasingly mobile and wearable. Meanwhile, Applied Voice Input/Output Society executive director William Meisel envisions "full-fledged digital personal assistants that make all our devices feel like one" as the next wave of computing, and Wired magazine reports a host of companies are competing to develop artificial intelligence into a global brain capable of running millions of apps and devices. Meisel says an overabundance of different devices lacking synchronization is driving the need for a unified computing experience. Other advanced interfaces being explored include gesture-based systems as well as brain-computer interfaces, but it remains unclear how ubiquitous computing will affect human interactions. "The more we interact with computers, the more we expect all interactions to be similar, logical, and predictable," says Psychsoftpc CEO Tim Lynch. "We want answers immediately. We become more comfortable texting than talking face-to-face, thus distancing ourselves from people. We also speak quicker and express ourselves with fewer words." Lynch is concerned that growing dependence on technology will erode people's ability to judge the accuracy of what their technology tells them.


Stanford's Symbolic Systems Program Bridges the Gap Between Humanity and Technology
Stanford Report (CA) (08/21/14) Clifton B. Parker

The purpose of Stanford University's Symbolic Systems academic program is to expose students to interdisciplinary thinking, creativity, and knowledge methodologies by focusing on the relationships among computers, the mind, and language. The Symbolic Systems program combines computer science, psychology, linguistics, and philosophy to enable the study of the science of the mind by examining the relationship between humans and computers. Stanford professor Kenneth Taylor calls the program a major that "self-consciously attempts to obliterate the so-called techie-fuzzy divide" between technology and the humanities. Symbolic systems are characterized as the meaningful symbols through which the surrounding world is described, in both human and computer language. The program's most popular courses include cognitive science, artificial intelligence, and human-computer interaction. Symbolic Systems associate director Todd Davies says the program "matches the interests of a natural population--students who are seeking to learn about the relationships between the mind, computers, and language." Among the questions Symbolic Systems students frequently examine are the nature of information and intelligence and the relationship between them, whether intelligence requires consciousness, and the role of language and meaning. The program's blend of abstract thinking and real-world skills also can foster a satisfying career path.


The Computer Will See You Now
The Economist (08/16/14) Vol. 412, No. 8900, P. 63

Researchers at the University of Southern California's Institute for Creative Technologies have developed and tested a virtual psychologist named Ellie to answer the question of whether people feel more comfortable talking to an avatar. Their experiment involved putting 239 subjects before Ellie to talk with her about their lives, with half told they would be engaging with an artificially intelligent virtual human, and the other half told Ellie was remote-controlled by an actual person. Ellie started each session with rapport-building questions, followed by more clinical queries, and finished with questions intended to boost the participant's mood. Throughout the interview she asked follow-up questions accompanied by appropriate nods and facial expressions. Participants' faces were scanned for signs of unhappiness, and three human psychologists, who had no knowledge of the purpose of the study, analyzed session transcripts to rate how willingly participants disclosed personal details. The researchers found that participants who thought Ellie was controlled by a human operator were more worried about disclosing personal information, and more guarded in their expressions during the session, than those who believed they were simply interacting with a computer. Project leader Jonathan Gratch thinks Ellie could make a positive contribution to the therapy of soldiers with post-traumatic stress disorder by confidentially informing them that they might be a danger to themselves or others and advising them about how to get treatment.


'Video-Less' 3D Games Developed for Blind Players
BBC News (08/18/14) Claire Brennan

Game designers are starting to create games that visually impaired players can engage with by forgoing visual elements and building an immersive, three-dimensional (3D) soundscape via binaural recording methods. Binaural recordings are generated by outfitting a dummy head with condenser microphones that emulate how the human ear naturally hears sound. Every scene in the game is recorded with this method to support a more realistic, 3D experience. In the absence of graphics, players must use aural cues to navigate through game levels. One example is "Blind Legend" from Dowino Studios in France, which follows the story of a blind knight trekking through a forest to rescue his kidnapped wife. The intuitive gameplay allows for full immersion in the simulated environment, with the knight's movements and sword controlled by simple swiping motions on a mobile phone's touchscreen. Dowino's Nordine Ghachi says headphones are needed to get the full immersive effect. "You hear these sounds, and that information helps the gamer locate themselves in that environment," he notes. In Britain, the Royal National Institute of Blind People's Robin Spinks says video-less gaming helps eliminate the obstacles that blind and partially sighted people confront every day, and he cites a lack of awareness as contributing to the general absence of such accessibility innovations.
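
True binaural audio comes from dummy-head recordings (or head-related transfer functions), but the two coarsest cues players navigate by can be sketched directly: the interaural time difference (the far ear hears a sound slightly later) and the interaural level difference (the far ear hears it slightly quieter). The following minimal Python sketch applies only those two cues to place a mono sound to one side; it is an illustration, not Dowino's audio engine:

    # A simplified sketch of interaural time and level differences.
    # Real binaural audio uses dummy-head recordings or HRTFs, which
    # capture far richer spatial cues than this.
    import numpy as np

    def pan_binaural(mono, pan, sr=44100, max_itd=0.0007):
        """pan in [-1 (hard left), +1 (hard right)]; max_itd ~0.7 ms."""
        delay = int(abs(pan) * max_itd * sr)          # interaural delay in samples
        near = mono
        far = np.concatenate([np.zeros(delay), mono])[:len(mono)] * (1 - 0.5 * abs(pan))
        left, right = (near, far) if pan < 0 else (far, near)
        return np.stack([left, right], axis=1)        # (n_samples, 2) stereo

    tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
    stereo = pan_binaural(tone, pan=0.8)              # sound placed to the right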


U.S. Launches Smart Cities Effort
EE Times (08/26/14) Rick Merritt

The U.S. National Institute of Standards and Technology (NIST) is seeking engineers to implement the Internet of Things (IoT) in smart-city applications across the globe. Current participants in the second round of the Smart America program include ARM Holdings, Cisco Systems, Extreme Networks, IBM, Intel, Juniper Networks, and Qualcomm. "We want to see these things get out of the lab and deployed in real-world scenarios," says NIST's Sokwoo Rhee. "It's a great opportunity for companies to show off and apply their technology, and individual engineers can contribute [as] a member of a team." The teams' objective is to showcase their implementations at events in selected cities in June 2015, while Rhee will independently develop a framework document outlining reference architectures, use cases, and testbeds for the IoT in smart cities. "The idea is that the next time a city wants to become a smart city, it doesn't have to start from scratch," he says. Rhee also will develop an IoT strategy document designed as "a road map for the global connectivity fabric of the IoT...[providing direction on] how different technologies work together."


StopInfo for OneBusAway App Makes Buses More Usable for Blind Riders
University of Washington News and Information (08/18/14) Michelle Ma

University of Washington (UW) researchers have developed StopInfo, a program that integrates with the OneBusAway app, which uses real-time data to track when buses will arrive. StopInfo collects and shares information that blind people have identified as important when they ride the bus, relying on riders who use OneBusAway to update and provide details about each stop. "We're interested in having OneBusAway be as useful for as many people as possible," says UW professor Alan Borning. "In this case, we are looking at how we make it more user-friendly for blind and low-vision riders." The researchers launched StopInfo last spring and have completed an initial study examining its effectiveness for blind and low-vision users. The study found StopInfo is generally helpful for blind riders and can encourage spontaneous travel and trips to unfamiliar places. StopInfo chooses which information to present for each stop by majority vote among OneBusAway users' submissions. "The success of this program depends in part on how fully the community participates," says UW doctoral student Caitlin Bonnar. The researchers will present their work at the ACM Special Interest Group on Accessible Computing (SIGACCESS) annual conference in October.
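
The article notes that StopInfo settles each stop's details by majority vote among user submissions; the exact scheme is not described, so the sketch below shows only the plurality-vote idea, with hypothetical attribute names and naive tie handling:

    # A minimal sketch of majority voting over crowd-sourced stop
    # reports: each attribute is settled by the most common submitted
    # value. Attribute names and tie handling here are hypothetical.
    from collections import Counter

    def consensus(submissions):
        """submissions: list of dicts mapping attribute -> reported value."""
        merged = {}
        attributes = {k for s in submissions for k in s}
        for attr in attributes:
            votes = Counter(s[attr] for s in submissions if attr in s)
            merged[attr] = votes.most_common(1)[0][0]  # plurality wins
        return merged

    reports = [{"shelter": "yes", "sign": "near curb"},
               {"shelter": "yes", "sign": "mid-block"},
               {"shelter": "no",  "sign": "near curb"}]
    print(consensus(reports))  # {'shelter': 'yes', 'sign': 'near curb'}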


Abstract News © Copyright 2014 INFORMATION, INC.