Welcome to the March 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


The Future of Tangible Interfaces: 5 Insights Backed by Science
Co.Design (02/24/16) Kelsey Campbell-Dollaghan

Researchers explored insights about tangible interfaces backed by scientific research at the recent ACM Tangible, Embedded, and Embodied Interaction (TEI 2016) conference in the Netherlands. A Sheffield Hallam University team demonstrated the need to understand how the material, size, and shape of interfaces affect a user's perception. Their work revealed people's tendency to prefer spherical objects over cubical ones and fabric over plastic. Meanwhile, a TEI team found temperature to be an important perceptive influence that is underutilized as a design material. A multi-university project developed and showcased a design method that prompts designers to employ archetypal human personalities as templates for interactive products; the goal of their experiments was to support bonding between people and interactive objects. An insight highlighted by Würzburg University researchers was the finding that color can have a tangible effect, noting in their study "by color-to-abstract mappings in sensorimotor experience, designers are able to use color in a way that is possibly valid across languages and cultures, i.e., designers do not have to rely on symbolic meaning that is highly culturally dependent alone." For their part, Eindhoven University of Technology researchers made a case for "embodied interaction" design to enhance the richness of the user experience, instead of emphasizing ease of use above all else.


Wesley A. Clark, Legendary Computer Engineer, Dies at 88
TechRepublic (02/23/16) Evan Koblentz

Distinguished computer engineer and ACM-IEEE CS Eckert-Mauchly Award winner Wesley Allison Clark, whose work helped define personal computing, computer graphics, and the Internet, died Monday at the age of 88. As a member of the Massachusetts Institute of Technology's (MIT) Lincoln Laboratory, Clark co-invented an early transistor computer, the TX-0, which could be operated by a single person. He embedded that logic within technical partner Ken Olsen's hardware, creating the first minicomputer. Clark followed that up with hardware concepts for the TX-2 in collaboration with MIT graduate student (and 1988 ACM A.M. Turing Award recipient) Ivan Sutherland, which led to precursors for the computer mouse and other contemporary interfaces. As a researcher at Washington University in St. Louis, Clark helped lay the foundations for modular computer networking with the LINC macromodule project. His proposal of using these systems as a platform for Interface Message Processors was deployed by Bolt Beranek and Newman on Honeywell minicomputers as the spine of the Internet's progenitor, the ARPAnet. "With his work...designing the TX-0, the TX-2, and the LINC, Wes Clark created the first experience of what we today call 'personal, interactive computing,'" says DigiBarn owner Bruce Damer. "The LINC is considered to be the first workstation, built by the user from a kit, then transported to a lab, office, and even used in a home."


How Armbands Can Translate Sign Language
Technology Review (02/17/16) Rachel Metz

Sceptre, a project from Arizona State University and Texas A&M University, enables sign language translation via gesture-controlled armbands in combination with a smartphone or computer. The project employs the armbands to teach software American Sign Language (ASL) gestures, so signs made by a person wearing the armbands can be matched up with their corresponding words or phrases in Sceptre's database and displayed as text on a screen. The armbands feature both an inertial measurement unit for tracking motion and electromyography sensors for muscle sensing. The researchers trained software to recognize various ASL gestures, along with the signs for individual letters and the numbers 1 through 10, all performed by someone wearing the armbands. Once training was completed, the Sceptre software could decipher signs correctly with nearly 98 percent accuracy. Arizona State University graduate student Prajwal Paudyal says the device's translations do not have to be restricted to text, as they also could be vocally expressed by an app. Texas A&M University professor Roozbeh Jafari notes consumer use of a Sceptre-like system requires addressing several issues, such as accounting for variations that naturally occur in the ways people sign.
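The article does not describe Sceptre's internals, but the matching step it outlines — comparing a wearer's sensor readings against trained gesture templates in a database — can be sketched as a nearest-neighbor lookup over feature vectors. All labels, vectors, and dimensions below are illustrative assumptions, not data from the project:

```python
import math

# Hypothetical per-gesture templates: each vector stands in for averaged
# IMU motion features and EMG muscle-activation features captured while
# a wearer performed the sign during training.
TEMPLATES = {
    "hello":     [0.9, 0.1, 0.4, 0.7],
    "thank you": [0.2, 0.8, 0.6, 0.1],
    "yes":       [0.5, 0.5, 0.9, 0.3],
}

def classify(sample):
    """Return the label of the stored template closest to `sample`."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: distance(TEMPLATES[label], sample))

print(classify([0.85, 0.15, 0.45, 0.65]))  # closest to "hello"
```

A real system would extract many more features per armband and use a trained classifier rather than raw distance, but the lookup-against-a-database structure is the same.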


This New Mind-Reading Tech Helps You Learn to Play Instruments Faster
ScienceAlert (02/12/16)

Inexperienced piano players can learn to play more skillfully via Tufts University's Brain Automated Chorales (BACh), a new brain-scanning system that can determine how hard people's brains are working and make appropriate difficulty adjustments. BACh employs functional near-infrared spectroscopy to measure oxygen levels in the prefrontal cortex, with high readings prompting the software to give volunteers easier exercises and normal readings harder ones. BACh was tested on 16 inexperienced piano players tasked with learning two Bach chorales of similar style and difficulty. They spent 15 minutes learning one of the pieces with BACh's assistance, while the other chorale was presented without the program. "We found that learners played with significantly increased accuracy and speed in the brain-based adaptive task compared to our control condition," say the Tufts researchers. "They could play with faster speed with BACh compared to a control where they learned the way they normally would." The results of the tests will be presented at the ACM CHI 2016 conference, which takes place May 7-12 in San Jose, CA. The BACh researchers are now exploring the addition of emotion sensing, which would let lessons be personalized according to both a person's cognitive load and emotions.
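The adaptation logic described above — ease off when the brain is overloaded, push harder when there is spare capacity — amounts to a simple control loop. This sketch assumes a workload reading already normalized to 0..1 against the learner's baseline; the function name and thresholds are illustrative, not taken from the BACh paper:

```python
def next_difficulty(current_level, workload, low=0.35, high=0.65):
    """Pick the next exercise level from a normalized cognitive-load reading.

    `workload` stands in for a prefrontal-oxygenation measure scaled to
    0..1 relative to the learner's baseline; thresholds are illustrative.
    """
    if workload > high:        # brain working hard: present an easier line
        return max(1, current_level - 1)
    if workload < low:         # spare capacity: present a harder line
        return current_level + 1
    return current_level       # in the comfortable band: hold steady

print(next_difficulty(3, 0.8))  # overloaded at level 3 → step down to 2
```

In the study the adjustment was between lines of the chorale being learned; the same hysteresis band (hold steady in the middle range) keeps the difficulty from oscillating on every reading.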


Made to Be Like Us: Using Robots to Enhance Human Interactions
The Badger Herald (WI) (02/22/16) Anne Blackbourn

University of Wisconsin-Madison (UW-Madison) researchers at the Wisconsin Human-Computer Interaction Laboratory are programming robots to communicate and engage socially with people. An example of their work is Mini, a robot that can help children learn to read using different colored cards. UW-Madison professor Bilge Mutlu says robotics technology research is highly task-driven, and his lab focuses on programming robots to complete tasks by communicating and coordinating with humans. "One of the things that robotic technology allows us to do [is] give us interaction paradigms that resemble human interactions...it does [tasks] similar to how a human may do it," Mutlu notes. Graduate student Sean Andrist observes some children tend to interact with robots more easily than they do with adults, because dealing with robots is less stressful. Andrist is concentrating on "social gaze" as a driver of motivation, which involves how a robot might use human-like social cues to look at different places at different times when communicating with someone. Mutlu predicts it will be years before all the major robotic components come together and operate seamlessly. "It's going to take a while for us to work through the kinks of getting this complex technology into the human environment," he says.


Augmented Reality Looks to Future Where Screens Vanish
Agence France-Presse (02/19/16) Glenn Chapman

Microsoft inventor Alex Kipman envisions the obsolescence of computer screens thanks to the advancement of augmented-reality (AR) technology such as the HoloLens headset. In a demonstration at the TED 2016-Dream event in Vancouver, British Columbia, he showcased the HoloLens as the first fully-untethered holographic computer, with a camera revealing to attendees the three-dimensional AR environments Kipman navigated through and interacted with via gestures. "I am talking about freeing ourselves from the two-dimensional confines of traditional computing," he said. "We are like cave people in computer terms; we barely discovered charcoal and started drawing the first stick figures in our cave." Among the AR functions Kipman highlighted at the demo were making video phone calls and holographic telepresence. "I believe our children's children will grow up in a world devoid of two-dimensional technology," he predicted. Kipman said Microsoft intends to have a slate of applications, games, and other AR experiences available prior to HoloLens' market debut. Also at the TED event were demos of AR technology from the startup Meta, whose CEO Meron Gribetz expects AR headgear to be streamlined into strips of glass eyewear within about five years. "In the next few years, humanity is going to go through a shift," Gribetz said. "We are going to start putting a layer of information over the world."


Ten Fingers Not Needed for Fast Typing
Aalto University (02/09/16)

The number of fingers used while typing is not a determinant of typing speed, according to a new Aalto University study to be presented at ACM CHI 2016, May 7-12 in San Jose, CA. The researchers used an optical motion-capture system featuring 12 high-speed infrared cameras to record the finger movements of 30 people covering a wide spectrum of age and skill. The researchers say this is the first study to explore how people type if they never learned the touch-typing system. "We were surprised to observe that people who took a typing course performed at similar average speed and accuracy as those that taught typing to themselves and only used six fingers [instead of 10] on average," reports Aalto University doctoral researcher Anna Feit. Other factors predicting typing speed uncovered by the motion-capture data included fast typists keeping their hands fixed in one position and more consistently using the same finger to type a certain letter. The research also found most participants used their left and right hands differently. For example, the left hand was always kept in the same position while the right would frequently move from one side to the other, covering a large number of keys.


The Next Great Display Technology? Water
Co.Design (02/22/16) Kelsey Campbell-Dollaghan

Researchers from the Massachusetts Institute of Technology's (MIT) Tangible Media Group presented the prototype of their HydroMorph display at the recent ACM Tangible, Embedded, and Embodied Interaction (TEI 2016) conference in the Netherlands. The device converts a flowing stream of water into a mutable "membrane" that can instantly configure itself into the shape of a flower or a bird, using actuators and sensors placed under the flow. The HydroMorph sits beneath a running tap, using 10 narrow plastic structures, or "blockers," to manipulate how water bounces off the device's surface. Servo motors under each blocker control the movement of the individual plastic pieces, while software produces shapes based on the way water reacts to the height of each blocker. The MIT team also outfitted the device with a camera that makes the HydroMorph interactive, letting it change according to how users play with it. The goal of the device is to support digital displays requiring less intensive focus. "We envision a world filled with living water that conveys information, supports daily life, and captivates us," the researchers say. They imagine the HydroMorph functioning as an interactive piece of furniture in the home that communicates low-level information unobtrusively, as well as a dynamic sculpture.
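The control scheme described above — software setting each blocker's height via a servo to shape how the water deflects — can be sketched as a mapping from a target membrane profile to per-blocker servo angles. The angle range, function names, and the "flower" profile below are illustrative assumptions, not the project's actual calibration:

```python
# Hypothetical mapping from a target membrane profile (0.0 = blocker flat,
# 1.0 = blocker fully raised) to servo angles for HydroMorph's ten blockers.
NUM_BLOCKERS = 10
MIN_ANGLE, MAX_ANGLE = 0, 90  # assumed degrees of servo travel per blocker

def profile_to_angles(profile):
    """Convert normalized blocker heights into servo angles in degrees."""
    if len(profile) != NUM_BLOCKERS:
        raise ValueError("expected one height per blocker")
    return [round(MIN_ANGLE + h * (MAX_ANGLE - MIN_ANGLE)) for h in profile]

# A symmetric profile raised in the middle and flat at the edges,
# loosely evoking the "flower" shape mentioned in the article.
flower = [0.0, 0.2, 0.5, 0.8, 1.0, 1.0, 0.8, 0.5, 0.2, 0.0]
print(profile_to_angles(flower))
```

The interactive camera would sit on top of a loop like this, regenerating the target profile as users move around the stream.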


Lifelike Avatars Provide New Opportunities for Aged Care
Australian Ageing Agenda (Australia) (02/22/16) Natasha Egan

Sydney University Ph.D. candidate Mike Seymour, a veteran of the film and TV effects industry, will present his vision of using lifelike human avatars in human-computer interfaces and assistive technology at the upcoming Future of Aging National Play Up Convention hosted by the Arts Health Institute. Seymour says the latest virtual-reality innovation for generating photoreal faces "has opened up a wealth of opportunities for things like aged care, dealing with people with dementia, or people who have had a stroke." He believes people will engage with realistic-looking digital assistants or companions that can evoke lifelike reactions. One such interface would enable a senior to "check in" via a normal human interaction process such as saying hello and answering a few simple questions, which would alert a contact in the case of a potential problem. "Initial research, for example, suggests if you had an avatar of your grandchildren then you are actually quite happy to say hello to it in the morning when you are making your cup of tea," Seymour notes. He also says the interface "would just check that you are okay and perhaps it could also remind you of a course of medicine or things that were happening." Seymour says it would do this using actual, normal voice communication with facial expressions in order to increase the user's comfort level.


Motion-Controlled Video Games May Improve Real World Skills
Penn State News (02/18/16) Matt Swayne

Motion-controlled video games may help improve players' skills in real-world competitions, according to researchers led by former Pennsylvania State University doctoral student Edward Downs. He reports participants in a study who played 18 rounds of a video golf game using a motion controller to simulate putting performed better at real-world putting than a group that played a video game with a push-button controller, and also outperformed a group with no video-game training. "What we can infer from this is that the putting motion in the game maps onto a real putting behavior closely enough that people who had 18 holes of practice putting with the motion controllers actually putt better than the group that spent 45 minutes or so, using the push-button controller to make putts," Downs says. The researchers suggest video games are being transformed into simulations because of motion-controlled video games and future virtual-reality devices. "These games are getting people up and physically rehearsing, or simulating motion, so we were trying to see if gaming goes beyond symbolic rehearsal and physically simulates an action closely enough that it will change or modify someone's behavior," Downs says. He notes further areas of exploration should include learning to what extent consoles with motion controllers can be employed as simulation instruments for large-motor coordination.


Revolutionary Flexible Smartphone Allows Users to Feel the Buzz by Bending Their Apps
Queen's University (Canada) (02/16/16) Chris Armes

Researchers at Queen's University's Human Media Lab say they have created the world's first full-color, high-resolution, and wireless flexible smartphone integrating multitouch and bend input. The ReFlex phone imparts physical tactile feedback to users as they interact with their apps via bend gestures. "When this smartphone is bent down on the right, pages flip through the fingers from right to left, just like they would in a book," says Human Media Lab director Roel Vertegaal. "More extreme bends speed up the page flips. Users can feel the sensation of the page moving through their fingertips via a detailed vibration of the phone. This allows eyes-free navigation, making it easier for users to keep track of where they are in a document." Bend sensors behind the display detect the force with which a user bends the screen, and that force is fed to apps as input. ReFlex also is equipped with a voice coil so the phone can simulate forces and friction via the display's vibrations. Vertegaal expects bendable, flexible smartphones to be available to consumers in five years. The researchers recently presented the ReFlex prototype at the ACM Tangible, Embedded, and Embodied Interaction (TEI 2016) conference in the Netherlands.
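The bend-to-page-flip behavior Vertegaal describes — direction from which way the screen bends, flip speed growing with how extreme the bend is — can be sketched as a signed transfer function with a dead zone so the phone ignores incidental flexing. The value ranges and scaling here are illustrative assumptions, not ReFlex's actual parameters:

```python
def flip_rate(bend, dead_zone=0.1, max_rate=8.0):
    """Map a signed bend reading (-1.0..1.0) to pages flipped per second.

    Positive bend flips forward, negative flips backward; readings inside
    the dead zone are ignored. All numbers are illustrative, not ReFlex's.
    """
    if abs(bend) < dead_zone:
        return 0.0
    # More extreme bends flip faster, as the article describes; the rate
    # ramps linearly from the edge of the dead zone to the maximum.
    magnitude = max_rate * (abs(bend) - dead_zone) / (1.0 - dead_zone)
    return magnitude if bend > 0 else -magnitude

print(flip_rate(0.55))   # moderate forward bend
print(flip_rate(-1.0))   # full backward bend
print(flip_rate(0.05))   # within the dead zone: no flipping
```

In a full implementation each computed flip would also trigger the voice coil's vibration tick, giving the page-turn sensation under the fingertips.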


An NYU Research Project Is Trying to Make Virtual Reality Less Lonely
Motherboard (02/15/16) Roy Graham

New York University's Holojam virtual-reality (VR) project is described by principal investigator Ken Perlin as "a set of practical experiments to prototype a future that does not yet exist," with an outlook that is one to two decades ahead. In this future, "people in the morning can pop in their contact lenses and see whatever they want," Perlin says. "What happens between people? We're interested, not about when it's cutting edge and exciting, but when it's boring." Using a combination of optical markers, Samsung Gears, gloves, and motion-tracking cameras, Holojam renders a cartoony environment designed to evoke fun and playfulness, with users represented as cartoon avatars. "Seeing somebody in VR gives you a sense of them that you don't get from video," notes Holojam artist David Lobser. "The way that they express themselves becomes more cartoony. Certain things are simplified, and certain things are amplified." Lobser says so far the project has yielded primarily participatory experiences, and he notes "right now, we're making something that will be designed for two people, in VR, to give a performance in."


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.



