Welcome to the November 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Beyond Siri: Researchers Are Bridging Human-Computer Interaction
SOURCE (CO) (10/26/15) Anne Ju Manning

Colorado State University (CSU) researchers seek to transform everyday human-computer interactions through the Communication Through Gestures, Expression, and Shared Perception project, partly funded by the U.S. Defense Advanced Research Projects Agency. Project leader and CSU professor Bruce Draper notes today's human-computer interfaces are limited by one-way communication, and his initiative aims to enable two-way communication. Draper's team has proposed creating a library of Elementary Composable Ideas that each contain information about a gesture or facial expression, derived from human users, along with a syntactical element limiting how the information can be read. A participant sits at a table with blocks, pictures, and other stimuli, and the researchers attempt to communicate with and record the person's natural gestures for various concepts, such as "stop," via a Microsoft Kinect interface. The aim is to enhance computers' intelligence so they can consistently identify non-verbal human cues naturally and intuitively. The researchers say the technology could enable people to communicate more easily with computers in noisy environments, or when a person is hearing-impaired or speaks another language. They note the effort concentrates on broadening human-computer communication to include gestures and expressions in addition to words.
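The article does not describe how recorded gestures are matched to concepts, but the following Python sketch illustrates one simple possibility: comparing a recorded hand trajectory against a small library of labeled templates. The gesture library, the resampling step, and the nearest-template rule are assumptions made for illustration, not the project's actual method.

```python
import numpy as np

# Hypothetical library of labeled gesture templates: each concept maps to one or
# more recorded trajectories, stored as (frames x 3) arrays of hand positions.
GESTURE_LIBRARY = {
    "stop": [np.array([[0.0, 1.0, 0.5], [0.0, 1.2, 0.5], [0.0, 1.4, 0.5]])],
    "move": [np.array([[0.0, 1.0, 0.5], [0.2, 1.0, 0.5], [0.4, 1.0, 0.5]])],
}

def resample(trajectory, n_points=20):
    """Linearly resample a (frames x 3) trajectory to a fixed number of points."""
    frames = np.linspace(0, len(trajectory) - 1, n_points)
    idx = np.arange(len(trajectory))
    return np.column_stack([np.interp(frames, idx, trajectory[:, d]) for d in range(3)])

def classify_gesture(observed):
    """Return the concept whose templates lie closest to the observed trajectory."""
    obs = resample(np.asarray(observed, dtype=float))
    best_label, best_dist = None, float("inf")
    for label, templates in GESTURE_LIBRARY.items():
        for template in templates:
            dist = np.linalg.norm(obs - resample(template))
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

# Example: an upward, raised-palm motion should match the "stop" template.
print(classify_gesture([[0.0, 1.05, 0.5], [0.0, 1.25, 0.5], [0.0, 1.45, 0.5]]))
```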


UC3M Researches Simulator of Human Behavior
Charles III University of Madrid (Spain) (10/26/15)

Researchers at the Charles III University of Madrid (UC3M) in Spain are working on the IBSEN project, which is investigating how to build a system that recreates human behavior. The researchers say the technology could be used to anticipate behavior in a socioeconomic crisis, create more human-like robots, or develop artificial-intelligence avatars. "We are going to lay the foundations to start a new way of doing social science for the problems that arise in a society that is very technologically connected," says IBSEN project leader Anxo Sanchez. The researchers say they are preparing experiments that will simultaneously present certain problems of cooperation, social problems, and economic games to thousands of people in order to discover hidden patterns in their decisions. The researchers will use this information to create a simulator of human behavior. "The greatest difficulty is to design a new experimental protocol that allows us to ensure that all the participants in the experiment are available at the same time and really interact, because you are not observing them in a laboratory," the researchers note. They say the goal is to obtain a repertoire of human conduct that makes it possible to simulate the behavior of a person and apply it to a robot.


Scientist Eats, Drinks, and Paints Simultaneously
Reuters (10/23/15) Matthew Stock

Imperial College London researchers have created software that enables the eye-controlled operation of a robotic arm for painting, as a demonstration of the technology's potential to help people multitask. The system developed at the college's Brain and Behavior Lab uses a robot arm guided by a tracker that follows the direction of the user's eyes, with the gaze path decoded by an algorithm into instructions for the appendage. A recent test enabled a researcher to paint a picture while concurrently eating food and drinking coffee. "In general, it's very intuitive because I don't have to think about commands or something like this," says graduate student Sabine Dziemian. "I simply think about where I want to draw or which color I want to take. And by thinking, a person usually looks at that color." The researchers say the system could significantly improve the lives of severely handicapped people, and project leader Aldo Faisal envisions the technology one day augmenting the body so everyone can multitask and extend the range of their abilities. "We are following a non-invasive approach where you don't have to put technology into the head, but you can...just take it on and off like a pair of glasses," he says.
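To illustrate the pipeline described above, here is a minimal Python sketch of turning streamed gaze samples into dwell-based targets that could become drawing commands for a robot arm. The dwell threshold, canvas mapping, and tracker interface are assumptions made for the example, not details of the Brain and Behavior Lab system.

```python
# A minimal sketch of converting eye-tracker fixations into drawing targets.
DWELL_FRAMES = 30               # ~0.5 s at 60 Hz: how long a gaze must rest to count
CANVAS = (0.2, 0.2, 0.8, 0.8)   # normalized screen region mapped onto the canvas

def gaze_to_canvas(x, y):
    """Map a normalized gaze point (0..1 screen coords) onto canvas coordinates."""
    left, top, right, bottom = CANVAS
    u = (x - left) / (right - left)
    v = (y - top) / (bottom - top)
    return max(0.0, min(1.0, u)), max(0.0, min(1.0, v))

def fixations(gaze_stream, tolerance=0.02):
    """Collapse a stream of (x, y) gaze samples into dwell-based fixations."""
    anchor, count = None, 0
    for x, y in gaze_stream:
        if anchor and abs(x - anchor[0]) < tolerance and abs(y - anchor[1]) < tolerance:
            count += 1
            if count == DWELL_FRAMES:
                yield gaze_to_canvas(*anchor)   # gaze rested long enough: emit a target
        else:
            anchor, count = (x, y), 1

# Each emitted fixation would become a "move brush here" command for the arm.
samples = [(0.5, 0.5)] * 40 + [(0.7, 0.6)] * 40
for target in fixations(samples):
    print("move arm to canvas point", target)
```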


What Will It Take for Humans to Take Advice From a Robot?
New Scientist (10/21/15) Aviva Rutkin

Researchers at the Institute for Intelligent Systems and Robotics are studying how much humans trust robots by inviting 56 adults to spend time with iCub, a humanoid research robot. "We're specifically interested in knowing what they expect that's not what we, as roboticists, would think of," says lead researcher Serena Ivaldi. The researchers first asked the human participants both simple and nuanced questions. The robot then answered the same questions, but its answers were controlled to always be contrary to the humans' answers. After hearing the robot's responses, the volunteers were asked if they wanted to change their answers. Although most of the participants stuck to their original response, some of them started talking to the robot, trying to convince it to change its mind. Some people did change their answers after hearing the robot's response. Thirteen of the participants tended to go with the robot's choices on measurement-based questions, but only three were influenced by the robot's responses on ambiguous social questions, indicating robots are still far from being seen as an authority. Ivaldi says robots might be seen as easier to trust if they are designed so their functionality is more obvious.


Brain Research Meets Industrial Engineering
Fraunhofer IAO (10/27/15)

Fraunhofer IAO's newly opened NeuroLab offers a testbed for neuro-ergonomic research into how the human brain responds when people use technical devices. The derived insights will be used by researchers to inform the development of human-computer technologies that can recognize and adapt to users' mental and emotional states. NeuroLab researchers Kathrin Pollmann and Mathias Vukelic emphasize the need to concentrate research on users in order to bring brain-computer interfaces into working environments. Their current investigation uses electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) to determine what activity the human brain undergoes when people work with assistance systems. The researchers also record facial muscle activity so they can further ascertain users' gaze direction or detect when users squint or smile. They expect this research eventually will support the development of an interface that identifies certain brain states and adapts the behavior of assistance systems accordingly. "By building a bridge to neuroscience, we're raising work research to a whole new level," says Fraunhofer IAO director Wilhelm Bauer. "New findings about people's workplace experience, motivation, stresses, and strains help us design working environments to be much more user-friendly and, in turn, to place the devices successfully on the market."
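To make the adaptation idea concrete, the sketch below shows one conventional way an EEG-derived workload index (frontal theta power relative to alpha power) could drive an assistance system's behavior. The band choices, threshold, and adaptation rule are illustrative assumptions, not the NeuroLab's actual method.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate power in a frequency band from a 1-D EEG channel via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

def estimate_workload(eeg_channel, fs=256):
    """A crude workload index: theta (4-8 Hz) power relative to alpha (8-13 Hz) power."""
    theta = band_power(eeg_channel, fs, 4, 8)
    alpha = band_power(eeg_channel, fs, 8, 13)
    return theta / (alpha + 1e-9)

def adapt_assistance(workload, threshold=1.5):
    """Simplify the assistance system's behavior when the operator seems overloaded."""
    return "reduce prompts, simplify display" if workload > threshold else "normal operation"

# Example on synthetic data: one second of noise standing in for an EEG channel.
rng = np.random.default_rng(0)
sample = rng.normal(size=256)
print(adapt_assistance(estimate_workload(sample)))
```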


Jazz-Playing Robots Will Explore Human-Computer Relations
LiveScience (10/21/15) Charles Q. Choi

University of Illinois at Urbana-Champaign professor Ben Grosser and University of Arizona professor Kelland Thomas are developing the Musical Improvising Collaborative Agent (MUSICA) to yield a musical device that can improvise a jazz solo in response to human partners. The U.S. Defense Advanced Research Projects Agency (DARPA) is funding MUSICA's development to investigate new forms of human-machine interaction. Grosser says non-verbal communication is the focus of his collaboration with Thomas. "That could make interactions between humans and machines a lot deeper," he notes. "When it comes to jazz, you feel the music as much as you hear and think about it--you react instinctively to things that are going on." The researchers will create a jazz solo database, which will be analyzed by software to determine the various processes that are active during improvisation. They will then develop a system that analyzes the elements of human jazz performances--beat, pitch, harmony, and rhythm--while also considering the knowledge it has gleaned from jazz solos to communicate and respond musically in real time. "Our goal is to by next summer present a 'call-and-answer' system to DARPA, where I can play a line of music, and the system will analyze that line and give an answer as close to real time as possible," Grosser says.
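As a toy illustration of the "call-and-answer" idea, the Python snippet below keeps the rhythm of a "call" phrase and perturbs its pitches to produce an "answer." The phrase representation and the variation rule are invented for the example and do not reflect MUSICA's actual analysis of beat, pitch, harmony, and rhythm.

```python
import random

def answer_phrase(call, spread=2, seed=None):
    """call: list of (midi_pitch, duration_beats) tuples; returns an answer phrase."""
    rng = random.Random(seed)
    answer = []
    for pitch, duration in call:
        # Keep the rhythm, nudge each pitch by up to `spread` semitones.
        answer.append((pitch + rng.randint(-spread, spread), duration))
    return answer

# Example: a short "call" line; the printed answer keeps its rhythm but varies the melody.
call = [(60, 0.5), (62, 0.5), (65, 1.0), (63, 1.0)]
print(answer_phrase(call, seed=42))
```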


IBM, Carnegie Mellon Team on Cognitive App for the Blind
eWeek (10/18/15) Darryl K. Taft

Researchers at the Carnegie Mellon University (CMU) Robotics Institute and IBM Research have announced a new open platform to support the development of smartphone apps that can enable the visually impaired to better navigate their surroundings. NavCog analyzes signals from Bluetooth beacons deployed throughout the CMU campus, along with smartphone sensor data, to notify blind people about their environment by "whispering" into their ears via ear buds or by causing their phones to vibrate. The researchers also are investigating additional capabilities for future NavCog versions to identify who is approaching and what their mood is. IBM has made the first set of cognitive assistance tools available to developers via the cloud through its Bluemix platform as a service. The toolkit includes an app for navigation, a map-editing tool, and localization algorithms that can help the visually impaired identify where they are, which direction they are facing, and additional environmental information in close to real time. The computer-vision navigation tool converts smartphone images of the surrounding environment into a three-dimensional space model to help enhance localization and navigation. "With our long history of developing technologies for humans and robots that will complement humans' missing abilities to sense the surrounding world, this open platform will help expand the horizon for global collaboration to open up the new real-world accessibility era for the blind in the near future," says Robotics Institute director Martial Hebert.
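The article does not detail the localization algorithms, but a common baseline for beacon-based positioning is an RSSI-weighted centroid, sketched below in Python. The beacon map, path-loss model, and parameter values are assumptions for illustration only, not NavCog's implementation.

```python
# Known beacon positions on a floor plan (meters), keyed by a hypothetical beacon ID.
BEACONS = {
    "b1": (0.0, 0.0),
    "b2": (10.0, 0.0),
    "b3": (5.0, 8.0),
}

def rssi_to_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Rough log-distance model: a stronger signal implies a closer beacon."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

def estimate_position(readings):
    """Weighted centroid of beacon positions, weighted by inverse estimated distance."""
    total_weight, x, y = 0.0, 0.0, 0.0
    for beacon_id, rssi in readings.items():
        bx, by = BEACONS[beacon_id]
        weight = 1.0 / max(rssi_to_distance(rssi), 0.1)
        x += weight * bx
        y += weight * by
        total_weight += weight
    return x / total_weight, y / total_weight

# Example: the phone hears b1 strongly and the other beacons weakly.
print(estimate_position({"b1": -60, "b2": -80, "b3": -85}))
```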


No Smart City Is an Island
Federal Computer Week (10/23/15) Zach Noble

At a recent presentation to the Information Security and Privacy Advisory Board, cyber-physical systems experts from the U.S. National Institute of Standards and Technology (NIST) emphasized the risks smart cities run should they fail to adopt a unified language across and within municipalities. They recommend adherence to NIST's draft Framework for Cyber-Physical Systems, which are described as "smart systems that include engineered interacting networks of physical and computational components," of which the Internet of Things is a subset. NIST's Chris Greer offers examples of smart-city projects that could be hamstrung by the absence of a common lexicon, such as London's smart energy, water, and transportation initiatives. Greer also says neighboring cities face the same potential problem. "You might have cities that are side by side that are solving transportation management through separate contractors and in completely different ways," he notes. "If D.C. and Virginia and Maryland all have a different solution, the Beltway isn't going to get any better anytime soon." Greer cautions the application of a narrow, customized strategy for implementing smart technologies can isolate municipalities and block contributions from small businesses. "[NIST is] supposed to use standards to promote competitive environments, and that's what we're trying to help with here," he says.


Daniel McDuff on Reading Emotions With Computers
Wired.co.uk (10/12/15) Catherine Lawson

In an interview, Massachusetts Institute of Technology engineer Daniel McDuff says his research in emotive computing concentrates on using computers to understand and interpret human emotions. "My work has been collecting a huge database of emotion measurements and essentially I'm mining that database to try to understand how cultures influence how people express themselves," he notes. McDuff says the insights and algorithms built upon this data are fed into a software development kit, whose potential areas of application include education and healthcare. He predicts incorporating emotive interpretation into interactive technology "will actually allow us to spend less time looking at screens and more time behaving more naturally." McDuff envisions the tangible realization of the Internet of Things within five years, and he says emotion linkage between humans and devices will play a key role. Among the game-changing drivers of emotive computing McDuff is watching for are interrelated advances in computer vision and deep learning, which should move forward thanks to a vast amount of data and the ability to build algorithms that boast strong performance in everyday environments. Large-scale deployment of emotive-computing technology is of particular interest to McDuff, who says it "will naturally help improve the technology and also open up new applications for it."
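As a rough illustration of the kind of mining McDuff describes, the following Python snippet groups facial-expression intensities by region and compares their averages. The data layout and the "smile intensity" field are invented for the example and do not reflect the actual database or any particular SDK.

```python
from collections import defaultdict
from statistics import mean

# Each record: (region, smile intensity 0..1) - a stand-in for the kind of
# large facial-coding database the interview describes.
records = [
    ("region_a", 0.62), ("region_a", 0.70), ("region_a", 0.55),
    ("region_b", 0.31), ("region_b", 0.44), ("region_b", 0.38),
]

def expressiveness_by_region(rows):
    """Group expression intensities by region and return each region's mean."""
    grouped = defaultdict(list)
    for region, intensity in rows:
        grouped[region].append(intensity)
    return {region: round(mean(values), 3) for region, values in grouped.items()}

print(expressiveness_by_region(records))
```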


Deep Learning Machine Predicts Human Activity
R&D Magazine (10/13/15) Greg Watry

Researchers at the Georgia Institute of Technology's (Georgia Tech) School of Interactive Computing and the Institute for Robotics and Intelligent Machines have developed a model that enables a computer to predict a person's daily activities. The basis of the system is more than 40,000 images recorded over six months by a smartphone and sorted into 19 categories, including working, eating, biking, and socializing. A deep-learning method was applied to the images to develop a model that could recognize small details, and then use that information to correctly match the image to the applicable activity. The research team also devised an umbrella model that covered both time and image input models so the system could account for a person's schedule when determining the activity. Georgia Tech Ph.D. candidate Daniel Castro says the model can predict human activity with more than 83-percent accuracy. The researchers think such a model incorporated into future wearable devices would make humans an integral element of the technology's function. The research was presented in September at the ACM 2015 International Joint Conference on Pervasive and Ubiquitous Computing in Japan.
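One simple way to combine an image model with a person's schedule, in the spirit of the "umbrella" model described above, is late fusion of the image model's class probabilities with a time-of-day prior, as in this illustrative Python sketch. The class list, prior, and numbers are assumptions for the example, not the Georgia Tech model.

```python
import numpy as np

ACTIVITIES = ["working", "eating", "biking", "socializing"]

def fuse(image_probs, hour, time_prior):
    """Late fusion: multiply the image model's probabilities by a time-of-day prior."""
    prior = time_prior[hour]                  # P(activity | hour of day)
    combined = np.asarray(image_probs) * prior
    combined /= combined.sum()                # renormalize into a distribution
    return ACTIVITIES[int(np.argmax(combined))], combined

# Illustrative numbers: the image model is unsure between working and eating,
# but at 12:00 this person's history says eating is far more likely.
time_prior = {12: np.array([0.2, 0.6, 0.1, 0.1])}
image_probs = [0.45, 0.40, 0.05, 0.10]
label, probs = fuse(image_probs, 12, time_prior)
print(label, np.round(probs, 3))
```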


How to Engineer the Unexpected
Australian Broadcasting Corp. News (10/20/15) Antony Funnell

Researchers are striving to engineer serendipity into human-computer interactive systems in an effort to enhance the user experience. One researcher pursuing this goal is City University London's Stephann Makri, who developed a "semantic sketchbook" to analyze app images and text entered into a mobile device and then suggest possible unexpected links. Makri also emphasizes incorporating creative professionals' inventive characteristics and strategies into the product's design. Former Massachusetts Institute of Technology researcher Kevin Ashton warns against assuming serendipity is somehow a magical process, when it is in fact very deductive. Meanwhile, King's College London's Toby Burrows envisions the Humanities Networked Infrastructure (HuNI) databank, which integrates datasets from several dozen universities, fostering serendipitous discoveries via new and inventive collaborations. "[HuNI] will build up into...a networked graph of connections, of links, of trails that people subsequently can follow, and that will, as a by-product, encourage those kinds of serendipitous discoveries," Burrows says. Of vital importance to this architecture is keeping digital information open and accessible, notes the Australasian Open Access Support Group's Virginia Barbour. Also of note is the Spacecubed facility founded by Australian entrepreneur Brodie McCulloch, who designed it to promote "accelerated serendipity" as a shared physical-digital workspace environment for creative collaboration between different professionals and organizations.


HCII Professor Studies Technology in Education
The Tartan (10/12/15) Sharon Wu

Carnegie Mellon Human-Computer Interaction Institute (HCII) professor Amy Ogan seeks to create learning technologies that engage people from diverse cultural backgrounds. "For example, how do we use technology to support students whose first language is Tagalog, but who learn math in English, as is the policy in the Philippines?" she asks. "Or students who live in cultures that strongly value collaboration, when the technology we build here is intended to be very personalized?" Ogan spoke to the U.S. State Department at a recent workshop about developing various technologies to promote English learning, including mobile technology, natural-language processing, open learning platforms, and massive open online courses (MOOCs). The HCII work Ogan was tapped to present at the workshop included techniques for helping international student teaching assistants adapt to what may be a new learning environment in the U.S. She also has explored making MOOCs more accessible to underrepresented demographics, including those in the developing world. "I love the HCII because it lets me combine so many aspects of what I find fascinating about the world--[t]he way people learn, the way they interact with one another, the way they use technology, and how it can support better outcomes for the things that people care about," Ogan says.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.



