Welcome to the May 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in human-computer interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to more than 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Pay Attention, Robot
Slate (04/29/15) Chris Berdik

Education researchers are applying the notion that students learn better when asked to help another learner to the development of computer programs paired with human students to teach them various subjects. "There's not really just one reason why learning-by-teaching works so well," says Stanford University professor Daniel Schwartz. "It's a happy confluence of forces that help students learn." One teachable agent being developed by Schwartz and Vanderbilt University researchers, called Betty's Brain, displays each step in its thought process onscreen as it learns. The process starts with a student building a map of the subject knowledge in Betty's brain by linking words with lines that signify specific relationships; the brain is gradually filled with a schematic of systems, and when Betty is quizzed by another agent, the words and associated connections are highlighted as she weighs her answers. Meanwhile, researchers at Lund University's Educational Technology Group are developing Time Elf, a game-based agent that occasionally disagrees with its teacher, even when it is in error. At Carnegie Mellon University, Noboru Matsuda, a scientist who studies human-computer interaction, helped create SimStudent, a teachable agent with an appearance students customize as they learn increasingly difficult algebra lessons. "If students care more about the agent, then they'll be more engaged, and so they'll learn more," Matsuda says.
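
The concept-map mechanism described above can be sketched in a few lines of code. The following is a hypothetical illustration of the general idea, not the actual Betty's Brain software: the student "teaches" the agent causal links, and the agent answers quiz questions by tracing chains through the map it has been taught.

# Hypothetical sketch of a teachable agent's concept map (not the actual Betty's Brain code).
# The student "teaches" the agent by adding causal links; the agent answers quiz
# questions by searching for a chain of taught links between two concepts.
class TeachableAgent:
    def __init__(self):
        self.links = {}  # concept -> set of concepts it directly affects

    def teach(self, cause, effect):
        self.links.setdefault(cause, set()).add(effect)

    def answers_yes(self, cause, effect, seen=None):
        # Depth-first search: does any chain of taught links connect cause to effect?
        seen = seen or set()
        if cause == effect:
            return True
        seen.add(cause)
        return any(self.answers_yes(nxt, effect, seen)
                   for nxt in self.links.get(cause, ()) if nxt not in seen)

betty = TeachableAgent()
betty.teach("algae", "oxygen")        # student draws a link: algae produce oxygen
betty.teach("oxygen", "fish health")  # oxygen affects fish health
print(betty.answers_yes("algae", "fish health"))  # True: the chain is traced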


Gazing Into the Future
University of Cambridge (04/23/15)

Researchers at the University of Cambridge's Department of Engineering have developed a computer control interface that combines eye-gaze tracking with other inputs. "When clicking a mouse isn't possible for everyone, we need something else that's just as good," says Cambridge professor Pradipta Biswas. His team created two key augmentations to a standalone gaze-tracking system. The first is software that interprets variables such as velocity, acceleration, and bearing to predict the user's intended target. The second adds another mode of input, such as a joystick. To give users a way to confirm a target other than blinking, the researchers experimented with manipulating joystick axes, enlarging predicted targets, and speaking a keyword to indicate a target. They found a multimodal strategy integrating eye-gaze tracking, predictive modeling, and a joystick is almost as fast and as cognitively easy to use as a computer mouse. Moreover, the intelligent multimodal approach can be faster for computer novices who have received sufficient training in the system. The researchers say their findings could lead to systems whose performance is equal, or even superior, to that of a mouse.
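
The article does not describe the Cambridge team's prediction model in detail, but the general idea, scoring on-screen targets by how well the gaze's current direction of travel points toward them, can be sketched as follows. The function name and the simple cosine score are illustrative assumptions, not the team's algorithm.

# Illustrative sketch only; not the Cambridge team's actual predictor.
# Score each on-screen target by how closely the gaze's direction of travel
# points at it; a joystick press (second modality) confirms the top candidate.
import math

def predict_target(gaze_pos, gaze_velocity, targets):
    """gaze_pos: (x, y); gaze_velocity: (vx, vy); targets: list of (x, y)."""
    speed = math.hypot(*gaze_velocity)
    if speed < 1e-6:
        return None  # eyes are fixating; no movement to extrapolate
    best, best_score = None, -2.0
    for tx, ty in targets:
        dx, dy = tx - gaze_pos[0], ty - gaze_pos[1]
        dist = math.hypot(dx, dy) or 1e-6
        # Cosine between the gaze's bearing and the direction to the target.
        score = (dx * gaze_velocity[0] + dy * gaze_velocity[1]) / (dist * speed)
        if score > best_score:
            best, best_score = (tx, ty), score
    return best

print(predict_target((100, 100), (5, 0), [(300, 100), (100, 300)]))  # -> (300, 100)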


Zensors App Lets You Crowdsource Live Camera Monitoring
IDG News Service (04/24/15) Tim Hornyak

Researchers at Carnegie Mellon University and the University of Rochester have developed Zensors, a smartphone app that can use a camera, crowdsourced workers, and artificial intelligence to monitor an area of interest. The underlying concept of Zensors is the ability to use any camera in a fixed location to spot changes in the subject being tracked and automatically alert users. The image sensor in any mobile device could serve as the camera, as could a webcam or any other connected camera, which would be programmed to capture images at user-set intervals. A plainly worded question is entered into the app, and the image monitoring is outsourced to crowd workers on the Internet. When the monitors decide the question has an affirmative answer, the app registers a graphical change and can issue notifications to users. After a period of human monitoring, machine-learning algorithms in the software can learn when certain criteria have been satisfied; periodic spot checks by workers would help guarantee the accuracy of the algorithms. The addition of computer-vision tools to the data processing could enable the system to conduct tasks such as counting cars or persons in a specific area. The Zensors project was presented at the ACM CHI 2015 conference in Seoul, Korea.
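
The crowd-then-classifier hand-off described above can be sketched roughly as follows. The helper functions (capture_image, ask_crowd, train_classifier, notify) are hypothetical stand-ins, not the Zensors API, and the thresholds are made up for illustration.

# Hypothetical sketch of Zensors' crowd-then-classifier hand-off; capture_image(),
# ask_crowd(), train_classifier(), and notify() are illustrative stand-ins, not real APIs.
import time

def monitor(capture_image, ask_crowd, train_classifier, notify,
            interval_s=60, min_examples=200):
    examples = []                       # (image, human answer) pairs
    classifier = None
    while True:
        image = capture_image()         # fixed phone, webcam, or other connected camera
        if classifier is None:
            answer = ask_crowd(image)   # plain-language question answered by workers
            examples.append((image, answer))
            if len(examples) >= min_examples:
                classifier = train_classifier(examples)  # machine learning takes over
        else:
            answer = classifier(image)  # periodic worker spot checks (not shown) audit this
        if answer == "yes":
            notify(image)               # graphical change / push notification in the app
        time.sleep(interval_s)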


Certain Interactive Tools Click With Web Users
Penn State News (04/21/15)

Pennsylvania State University researchers detailed their findings on how people interact with content using several Web navigation tools at the ACM CHI 2015 conference in Seoul, Korea. Professor S. Shyam Sundar and his team suggest interactive tools influence not only how people use a website, but also their feelings toward the site and its content, and what information they retain afterwards. Sundar says they conducted a series of studies to examine such engagement via clicking, sliding, zooming, hovering, dragging, and flipping, as well as combinations of those tools. The researchers quantified information retention during the sessions as a measure of how absorbed participants were in the task. Subjects indicated the slider, which enabled them to scroll along a timeline to view images and text about a historical event, was a superior memory aid to other tools, including the three-dimensional (3D) carousel, with which users can rotate images. "The 3D carousel looks attractive, but in terms of encoding information, it was not effective," Sundar observes. The researchers also found clicking continues to be a popular navigation tool, while the degree of a user's Web experience shapes the impact tools have on their attitudes toward content. For example, expert Web users ascribed greater preference and credibility to content when the site used simple clicking and mouse-over tools rather than less-intuitive tools such as the 3D carousel and drag.


Eelke Folmer Uses Human-Computer Interaction Research to Help Blind and Visually Impaired People
Nevada Today (04/16/15) Mike Wolterbeek

University of Nevada, Reno professor Eelke Folmer received the 2015 Board of Regents of the Nevada System of Higher Education's Rising Researcher Award for using human-computer interaction to provide assistive technologies for visually impaired people. Folmer says such technologies can be created by combining the functions of smartphones and wearables. "Wrap a smart watch with a camera around a smartphone and now you have two cameras facing the same direction and you can perform depth sensing using stereovision," he notes. "Depth information is something sighted people take for granted, but is incredibly useful for blind people whose sensing range is limited by the length of their cane. We continue to explore the use of wearables for developing assistive technology by combining them in novel ways." One project Folmer led, with funding from the U.S. National Science Foundation, investigated the use of non-visual exergames to create new exercise opportunities for blind, obese children. His Human+ lab developed a series of such games and released them for free, and Folmer says the project illustrates his aim of reducing the cost of assistive technology; the games were downloaded 20,000 times because the only necessary hardware was a $15 game controller. Folmer also recently earned a Google Faculty Research award to support work on using wearable computing to help the blind navigate in indoor environments.
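
The stereovision Folmer describes rests on a standard geometric relationship: depth is inversely proportional to the disparity between the two cameras' views of the same feature. A minimal, generic sketch follows; it is not code from Folmer's lab, and the focal length and camera baseline are made-up illustrative values.

# Generic stereo-depth sketch, not code from Folmer's lab; the numbers are illustrative.
def depth_from_disparity(disparity_px, focal_length_px=700.0, baseline_m=0.06):
    """Classic pinhole stereo relation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        return float("inf")   # no measurable shift: feature is effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# A feature shifted 30 pixels between the watch and phone cameras (6 cm apart)
# would sit roughly 1.4 m away, well beyond a cane's reach.
print(round(depth_from_disparity(30), 2))  # 1.4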


Thumbnail Track Pad
MIT News (04/17/15) Larry Hardesty

Researchers at the Massachusetts Institute of Technology Media Laboratory are developing wearable technology that transforms the user's thumbnail into a wireless track pad. The unobtrusive NailO device employs capacitive sensing to register touch, and it can tolerate a thin, nonactive layer between the user's finger and the underlying sensors. The prototype NailO packs capacitive sensors, a battery, a microcontroller, a Bluetooth radio chip, and a capacitive-sensing chip into a very small space. The researchers constructed the prototype's sensors by printing copper electrodes on sheets of flexible polyester, which enabled them to experiment with various electrode layouts. For ongoing experiments, however, they are using off-the-shelf sheets of electrodes, and they have found a technology that may lead to a 0.5-millimeter-thick battery. The thumbnail is a good site for the device because it would not inhibit movement or cause discomfort there, and it is easy for the other fingers to reach. The researchers say a usability study found the NailO could be guarded against unintentional activation and deactivation by requiring a few seconds of continuous finger contact to switch it on or off. The prototype NailO was detailed in a paper presented at the ACM CHI 2015 conference in Seoul, Korea.
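
The activation guard described above, toggling the device only after a few seconds of continuous contact, can be sketched generically as follows. This is an illustrative sketch, not the NailO firmware, and the timing values are assumptions.

# Illustrative dwell-to-toggle guard, not the actual NailO firmware.
# The device only switches on or off after the finger has stayed in
# continuous contact with the nail surface for a couple of seconds.
def dwell_toggle(samples, hold_s=2.0, sample_period_s=0.05):
    """samples: iterable of booleans, True while the finger touches the nail."""
    active = False
    contact_time = 0.0
    for touching in samples:
        contact_time = contact_time + sample_period_s if touching else 0.0
        if contact_time >= hold_s:
            active = not active      # a deliberate press-and-hold toggles the state
            contact_time = 0.0       # brief, accidental brushes never reach this point
    return active

print(dwell_toggle([True] * 50))   # 2.5 s of contact -> True (activated)
print(dwell_toggle([True] * 10))   # 0.5 s brush -> False (ignored)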


Increasing Ecological Understanding With Virtual Worlds and Augmented Reality
National Science Foundation (04/24/15) Aaron Dubrow

Harvard University professor Chris Dede's current area of concentration involves the use of virtual and augmented reality (AR) technologies to teach students about ecological stewardship. Dede and colleagues developed the EcoMUVE virtual world-based curriculum geared toward middle school students, with support from the U.S. National Science Foundation and Qualcomm's Wireless Reach initiative. EcoMUVE employs a three-dimensional Multi User Virtual Environment (MUVE) to replicate actual ecological settings within which students explore and collect data. In the course of the curriculum, participants investigate a fish die-off to ascertain the underlying causal relationships, and then present their findings and the accompanying research at a mini-conference. Dede and his team believe the tools and context supplied by the virtual environment help support student interest in modeling practices connected to ecosystem science. The follow-up EcoMOBILE project is designed to be applied to real-world environments enhanced by digital tools and location-based AR. The team developed a smartphone-based AR game played in the field, and initial studies indicate the enhancements significantly improve learning compared to a normal field trip. The project uses information on student behavior gleaned by the digital tools to measure its progress in teaching students core concepts.


Technology Can Transfer Human Emotions to Your Palm Through Air, Say Scientists
University of Sussex (United Kingdom) (04/20/15)

University of Sussex researchers led a study demonstrating that human emotion can be communicated by stimulating different parts of the hand contactlessly via next-generation technologies. For example, short, sudden bursts of air to the area around the thumb, index finger, and middle of the palm generate excitement, while sad feelings are created by slow and moderate stimulation of the outer palm and the area around the pinky. Sussex researcher Marianna Obrist says the Ultrahaptics system offers "huge potential" for innovations in communication. "A similar technology could be used between parent and baby, or to enrich audio-visual communication in long-distance relationships," she notes. "It also has huge potential for 'one-to-many communication'--for example, dancers at a club could raise their hands to receive haptic stimulation that enhances feelings of excitement and stability." Obrist has received a grant from the European Research Council for a five-year project to broaden the research into taste, smell, and touch. The goal of the SenseX effort is to deliver a multisensory framework for inventors and innovators to design richer technological experiences, and Obrist says a long-term objective targets the enhancement of sensory-impaired people's experience. She presented her work with the Ultrahaptics system at the ACM CHI 2015 conference in Seoul, Korea.
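
The mapping the article describes, particular regions of the palm plus a stimulation tempo for each feeling, can be written down as a simple parameter table. The regions come from the article, but the timing and intensity numbers below are made-up placeholders, and emit_burst is a hypothetical haptics callback rather than an Ultrahaptics API.

# Illustrative parameter table for mid-air haptic "emotion" patterns; the regions follow
# the article, but the timing and intensity numbers are made-up placeholders.
EMOTION_PATTERNS = {
    "excitement": {"regions": ["thumb", "index finger", "middle of palm"],
                   "burst_ms": 40,  "intensity": 0.9},   # short, sudden bursts
    "sadness":    {"regions": ["outer palm", "pinky area"],
                   "burst_ms": 400, "intensity": 0.5},   # slow, moderate stimulation
}

def play(emotion, emit_burst):
    """emit_burst(region, duration_ms, intensity) is a hypothetical haptics callback."""
    pattern = EMOTION_PATTERNS[emotion]
    for region in pattern["regions"]:
        emit_burst(region, pattern["burst_ms"], pattern["intensity"])

play("excitement", lambda r, d, i: print(f"burst on {r}: {d} ms at intensity {i}"))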


Beyond the Touchscreen: Carnegie Mellon, Disney Researchers Develop Acoustically Driven Controls for Handheld Devices
Carnegie Mellon News (PA) (04/20/15) Byron Spice

A collaborative project between Carnegie Mellon University (CMU) and Disney Research has yielded a substitute for touchscreen interfaces: a set of physical knobs, sliders, and other mechanisms that can be integrated with any device. The Acoustruments concept uses pluggable plastic tubes and other structures to link a smartphone's speaker with its microphone, enabling control of the device by acoustically altering sounds as they pass through the system. The smartphone speaker generates continuous sweeps of ultrasonic frequencies, and interactions that block the tubes, open holes in them, or change their length or diameter modify the acoustic signal reaching the microphone. Proximity and pressure sensing are just some of the capabilities Acoustruments can add to smartphones, and because they require no electrical power, they can be manufactured rapidly and inexpensively. "Using smartphones as computers to control toys, appliances, and robots already is a growing trend, particularly in the maker community," notes CMU Ph.D. student Gierad Laput. "Acoustruments can make the interactivity of these new 'pluggable' applications even richer." The researchers used Acoustruments to construct an interactive doll that responds when its tummy is poked, along with a smartphone case that senses when it has been placed on a table or is being carried in hand. The project was detailed at the ACM CHI 2015 conference in Seoul, Korea.
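
The sensing principle described above, an ultrasonic sweep whose received spectrum changes as tubes are blocked, opened, or reshaped, can be sketched generically. The frequency-band amplitudes and the nearest-neighbor matching below are illustrative assumptions, not the CMU/Disney implementation.

# Generic sketch of swept-frequency acoustic classification, not the Acoustruments code.
# The phone's speaker plays an ultrasonic sweep; covering holes or squeezing a tube
# reshapes the spectrum the microphone picks up. Nearest-neighbor matching against
# recorded reference spectra then identifies the interaction.
def classify_interaction(measured_spectrum, reference_spectra):
    """measured_spectrum: per-frequency amplitudes; reference_spectra: label -> amplitudes."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(reference_spectra,
               key=lambda label: distance(measured_spectrum, reference_spectra[label]))

references = {
    "idle":          [0.9, 0.8, 0.7, 0.6],   # illustrative amplitudes at four sweep bands
    "hole_covered":  [0.9, 0.2, 0.7, 0.6],
    "tube_squeezed": [0.4, 0.3, 0.3, 0.2],
}
print(classify_interaction([0.85, 0.25, 0.65, 0.6], references))  # -> "hole_covered"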


Stanford Team Makes Biotechnology Interactive With Games and Remote-Control Labs
Stanford Report (04/21/15) Tom Abate

Stanford University researchers led by professor Ingmar Riedel-Kruse have developed three interactive biotechnology projects whose goal is to enable people to engage with biological materials and conduct experiments the way they interact with computers. One project involved creating an arcade-like kiosk that let visitors to San Jose's Tech Museum interact with microorganisms. Stanford postdoctoral fellow Seung Ah Lee says the interactive display exploits the organisms' responses to light via a micro-aquarium linked to a touchscreen display; visitors drew patterns in blue, green, or red light on the screen to see the organisms' reactions. The second project sought to educate students on bioengineering device design using biotic games that incorporate cells. Project leader Nate Cira says the aim was to produce a biotech version of popular robotics and video game challenges, and the team intends to create inexpensive kits so hobbyists can build their own interactive micro-aquariums. The third project entailed developing a robotic biology cloud lab for remote-controlled experiments, built around biotic processing units (BPUs), tools that can store biological materials, repeatedly stimulate them, and measure their responses. Two of the projects were presented at the ACM CHI 2015 conference in Seoul, Korea.


Digital Tattoo Lets You Control Devices With Mind Power Alone
New Scientist (04/23/15) Hal Hodson

A stick-on electroencephalogram (EEG) that eliminates the tangle of wires and electrodes typical of conventional EEG equipment has been built by University of Illinois at Urbana-Champaign researchers led by John Rogers. The device adheres to the skin behind the ear via van der Waals forces, and only falls off when dead skin accumulates underneath, which loosens its grip. The flexible device uses a small array of gold electrodes on and behind the ear, and the researchers say it could find potential use beyond brain scans as a passive remote control for certain appliances. One example is a coffee pot that the device instructs to start brewing when brain readings signal the wearer is waking up. Meanwhile, University of Oldenburg researcher Stefan Debener's lab has developed its own version of an in-ear EEG system. Debener is focusing on a method for quantifying the attention of a hearing aid user via EEG, and then tuning the hearing aid to the voice the wearer is concentrating on. "The limitation of EEG so far has been that we didn't have the technology to study the brain in nature non-invasively," he notes. "If you could do this with EEG on the street, driving a car, that would make a big difference."
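
The coffee-pot example amounts to thresholding a brain-activity feature and firing an action. A heavily simplified, hypothetical sketch follows; the band-power features, thresholds, and start_brewing callback are made up for illustration and are not part of Rogers' system.

# Heavily simplified illustration of the coffee-pot example, not Rogers' system.
# Waking is crudely signaled here by alpha-band power falling while beta power rises;
# the thresholds and the start_brewing() callback are hypothetical.
def check_wakeup(alpha_power, beta_power, start_brewing,
                 alpha_max=2.0, beta_min=1.5):
    if alpha_power < alpha_max and beta_power > beta_min:
        start_brewing()   # the EEG patch decides the wearer is waking and signals the pot
        return True
    return False

check_wakeup(1.2, 2.1, lambda: print("brewing..."))  # prints "brewing..." and returns True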


Contextual Sensing: How Smartphones Will Learn About You
Intel Free Press (04/14/15)

Mobile devices are becoming more sensitive to users' needs via new software development tools that utilize sensors as progress in contextual computing technology gives rise to more intelligent systems. Intel's introduction three years ago of a low-power sensor hub that collects data from multiple sensors has helped hasten the development of contextual-sensing systems. "The demand for the sensor hub is the awakening of contextual sensing where always-on sensing is required without [the smartphone] being engaged," notes Intel's Claire Jackoski. Intel Anticipatory Computing Lab director Lama Nachman thinks user adoption of contextual-sensing technology relies on its ability to learn appropriate behavior from users via education and training with contextual awareness. "Humans are very contextual by nature," Nachman says. "It's very hard to come into somebody's world without understanding the context." Intel is giving developers the toolset for developing context-aware applications with a contextual-sensing software development kit and underlying hardware. "If a developer wants to know everything that a user is doing, [they] need to know the user's context and create a narrative of the user's day," says Intel's Ned Hayes. "Our system allows developers to have a holistic view of this user's behavior." Intel also has applied algorithms considered useful for understanding user behavior to the development of on-device rules and context engines running within the sensor hub.
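
The kind of on-device context rule described above, turning always-on sensor readings into a higher-level "narrative" event, can be sketched generically. The sketch below does not use Intel's contextual-sensing SDK, and all sensor names, thresholds, and labels are illustrative assumptions.

# Generic context-rule sketch; it does not use Intel's contextual-sensing SDK.
# A rule maps always-on sensor readings to a higher-level "narrative" event.
def infer_context(readings):
    """readings: dict of sensor values, e.g. {'speed_mps': 1.4, 'hour': 8, 'place': 'transit_stop'}."""
    rules = [
        (lambda r: r.get("speed_mps", 0) > 8, "driving or riding"),
        (lambda r: r.get("place") == "transit_stop" and r.get("speed_mps", 0) < 2,
         "waiting for a bus"),
        (lambda r: 0.5 < r.get("speed_mps", 0) <= 2.5, "walking"),
    ]
    for condition, label in rules:
        if condition(readings):
            return label
    return "unknown"

print(infer_context({"speed_mps": 1.4, "hour": 8, "place": "transit_stop"}))  # waiting for a bus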


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.



