Welcome to the January 2017 SIGCHI edition of ACM TechNews.
ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in human-computer interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members the first Tuesday of every month.
ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.
The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
HEADLINES AT A GLANCE
Future 'Smart Cities' Should Be Super-Connected, Green, and Resilient
Rutgers University (12/22/16) Todd B. Bates
Rutgers professor Narayan B. Mandayam is spearheading research into smart cities "where every device, every entity, and every object can connect for whatever the needs," he says. "Wireless connectivity is the glue that holds everything together, and the bottom line is to improve the quality of life in cities and quality of the planet." In partnership with Rutgers professors Janne Lindqvist and Arnold Glass, Mandayam is pursuing a four-year initiative funded by the U.S. National Science Foundation called the Critical Resilient Interdependent Infrastructure Systems and Processes (CRISP) project. CRISP's goal is to integrate critical infrastructures, including smart transportation, wireless systems, water networks, and power grids, into a unified smart city. Mandayam says a key challenge involves defending smart cities from failures and damage, and this requires making critical infrastructures resilient and reallocating resources toward recovery efforts. "Understanding how people behave is integral to a smart city and that's why the project is interesting on so many levels," Mandayam notes. He says the aim of the project is to nudge people to practice better habits and make the right choices that support sustainability. "Pieces or portions of cities perhaps are smart now," Mandayam says. "But the question is whether you can put all the pieces together holistically."
Swarms of Tiny 'Zooid' Robots Can Be Controlled by Simple Hand Gestures
Inquisitr (12/18/16) Darien Cavanaugh
Researchers at Stanford University and the University of Paris-Saclay in France have developed one-inch micro robots called Zooids, which can be controlled individually or in swarms by hand gestures. Each Zooid is constructed with wheels, a battery, a touch sensor, a gyroscope, and an optical sensor, and interested parties can build the machines using information available on GitHub. When used in conjunction with a radio base station, a high-speed digital light processing structured-light projector for optical tracking, and software for application development and control, the robots can follow gestures and interact collectively. "In terms of artificial intelligence, the Zooids are straight-up dumb," writes Andrew Liszewski in an article in Gizmodo. "But using an overhead projector that allows a separate computer to track and monitor their positions at all times, the micro robots can be sent complex instructions to perform complicated tasks by working together." Among the collective tasks the Zooids can execute are moving objects like a swarm of ants, helping display data, or delivering reminder notifications. "The Zooids can work together for a variety of different purposes such as acting like animated pixels in an interactive display, or more mundane chores like bringing you your smartphone when you simply don't feel like reaching for it," Liszewski notes.
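The control loop described above, in which an overhead tracker feeds robot positions to a separate computer that steers each Zooid toward a goal, can be sketched roughly as follows. The greedy target assignment and all function names here are illustrative assumptions, not the Stanford/Paris-Saclay implementation.

```python
import math

def assign_targets(positions, targets):
    """Greedily match each tracked robot to its nearest unclaimed target point."""
    remaining = list(targets)
    assignment = {}
    for rid, pos in positions.items():
        best = min(remaining, key=lambda t: math.dist(pos, t))
        assignment[rid] = best
        remaining.remove(best)
    return assignment

def step_commands(positions, assignment, speed=0.02):
    """Compute one capped velocity command per robot toward its assigned target."""
    commands = {}
    for rid, pos in positions.items():
        tx, ty = assignment[rid]
        dx, dy = tx - pos[0], ty - pos[1]
        dist = math.hypot(dx, dy)
        if dist < 1e-9:
            commands[rid] = (0.0, 0.0)  # already at the target
        else:
            scale = min(speed, dist) / dist
            commands[rid] = (dx * scale, dy * scale)
    return commands
```

In a real deployment this loop would run each time the projector-based tracker reports fresh positions, with the resulting commands broadcast over the radio base station.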
New Virtual Reality Technology May Improve Motor Skills in Damaged Limbs
American Friends of Tel Aviv University (12/14/16)
Researchers at Tel Aviv University (TAU) in Israel say new virtual reality (VR) training could improve the rehabilitation of a damaged hand by having the user's undamaged hand lead by example. TAU professor Roy Mukamel and student Ori Ossmy conducted experiments in which 53 healthy participants performed baseline tests to assess the motor skills of their hands, then wore VR headsets that showed simulated "mirror image" versions of their hands. The first trial had participants complete a series of finger movements with their right hands, while the screen showed their virtual left hands in motion. The second experiment had participants place motorized gloves on their left hands and move their fingers to match the motions of their right hands. The researchers found when participants practiced finger movements with their right hands while observing their left hands on the headsets, they could use their left hands more efficiently following the exercise. However, the most significant improvements occurred when the VR screen displayed the left hand moving while in reality the motorized glove moved the hand. "Technologically, these experiments were a big challenge," Mukamel says. "We manipulated what people saw and combined it with the passive, mechanical movement of the hand to show that our left hand can learn even when it is not moving under voluntary control."
Musical Table Teaches Basics of Computer Programming
Georgia Tech News Center (12/14/16) Jason Maderer
At the Edge of a Cognitive Space
RPI News (12/12/16) Mary L. Martialay
In collaboration with IBM Research, Rensselaer Polytechnic Institute's Cognitive and Immersive Systems Laboratory (CISL) has built a prototype "situations room" that marks a milestone in human-machine interaction and is on the brink of advancing cognitive and immersive environments for collaborative problem-solving. "This new prototype is a launching point--a functioning space where humans can begin to interact naturally with computers," says CISL director Hui Su. "At its core is a multi-agent architecture for a cognitive environment created by IBM Watson Research Center to link human experience with technology." Su says the architecture can combine technologies that register different types of human behavior tracked by sensors as individual events and deliver them to the cognitive agents for interpretation. The environment understands and registers speech, three gestures, the positions of people in the room, their roles, and their spatial orientation. The interactions trigger the appropriate cognitive computing agents to take action and bring data and information relevant to the discussion into the space in real time. The CISL framework enables the computer to register and monitor activities from multiple sensors for interpretation by multiple cognitive technologies via a message queue. "In terms of interpreting behavior, we are at the very beginning, but from here the terrain gets very interesting," Su says.
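The plumbing Su describes, with events from many sensors funneled through a message queue and fanned out to cognitive agents for interpretation, resembles a publish/subscribe dispatcher. The following minimal sketch uses hypothetical class and event names; it is not the IBM Watson multi-agent architecture itself.

```python
import queue

class EventBus:
    """Toy sensor-event bus: sensors publish events, agents subscribe by type."""

    def __init__(self):
        self.q = queue.Queue()
        self.agents = {}            # event type -> list of handler callables

    def subscribe(self, event_type, handler):
        """Register a cognitive agent's handler for one kind of sensor event."""
        self.agents.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        """Enqueue a sensed event (speech, gesture, position, ...)."""
        self.q.put((event_type, payload))

    def dispatch_all(self):
        """Drain the queue, handing each event to every subscribed agent."""
        results = []
        while not self.q.empty():
            etype, payload = self.q.get()
            for handler in self.agents.get(etype, []):
                results.append(handler(payload))
        return results
```

A speech-recognition agent, for example, would subscribe to "speech" events and could, in turn, publish interpreted commands that trigger the room to fetch relevant data.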
Pairing Devices Is About to Get a Lot Easier
Technology Review (12/16/16) Signe Brewster
Carnegie Mellon University (CMU) researchers' new CapCam system enables users to pair mobile devices with touchscreens by pressing them together. Touchscreens can track devices as they move across their surface, while the devices receive data by reading a flashing color pattern. CapCam is designed as an alternative to manual pairing technologies such as Bluetooth, without the hardware requirements of options such as near-field communication. Provided a mobile device has a camera, it can be paired with another touchscreen device using CapCam. Coupling devices starts by pressing a mobile device against a touchscreen device; outfitted with CapCam's software, the screen senses the shape and location of the device and begins flashing a unique color pattern underneath it containing pairing instructions. The mobile device also requires CapCam software for its camera to perceive the pattern and establish a wireless connection. CMU human-computer interaction researcher Robert Xiao believes CapCam could be useful in places with public information kiosks. "If you have CapCam, you just press your phone to the screen and the itinerary you picked out is downloaded to your phone and is in Google Maps," he says.
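The article does not detail CapCam's actual encoding, but the core idea of a flashed color pattern carrying pairing data can be sketched like this. The 8-color palette, 3-bits-per-frame framing, and payload format are all illustrative assumptions, not CMU's scheme.

```python
# Hypothetical 8-color palette: each screen frame encodes 3 bits of payload.
PALETTE = [
    (0, 0, 0), (0, 0, 255), (0, 255, 0), (0, 255, 255),
    (255, 0, 0), (255, 0, 255), (255, 255, 0), (255, 255, 255),
]

def encode(payload: bytes):
    """Turn a pairing payload into the sequence of colors the screen flashes."""
    bits = ''.join(f'{b:08b}' for b in payload)
    bits += '0' * (-len(bits) % 3)          # pad to a multiple of 3 bits
    return [PALETTE[int(bits[i:i + 3], 2)] for i in range(0, len(bits), 3)]

def decode(frames, nbytes):
    """Recover the payload from the colors seen by the phone's camera."""
    bits = ''.join(f'{PALETTE.index(c):03b}' for c in frames)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, nbytes * 8, 8))
```

A real system would also need a sync preamble, error correction, and tolerance for camera color drift; this sketch only shows the modulation idea.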
Engineering Students Create Tech for Blind Teen
University of Michigan News (12/22/16) Nicole Casal Moore
India West, a teenager who has been visually impaired most of her life, collaborated with University of Michigan students to develop assistive technologies for people with similar impairments. The students in Michigan instructor David Chesney's software engineering class devised concepts for innovations such as Smart Shelf, a talking piece of furniture that can tell West what is on it and where, and where open areas are located. Smart Shelf was built from a Raspberry Pi computer connected to an Amazon Echo smart speaker and the Amazon Alexa intelligent personal assistant. Other projects include a Google Maps-based iOS navigation application that stores preferred routes, plays sound in the direction the user is supposed to travel, and converts distances into numbers of steps. Another initiative is yielding tablet-based games that rely solely on audio cues. A fourth project underway is focused on developing a standard for putting radio-frequency identification tags on objects so they can announce themselves to a blind person's smartphone as they navigate through a building. "Students are taking this interesting off-the-shelf technology and combining it in ways the producers of the technology would never envision," Chesney says. "That's what we try to do here."
UMN Research Shows People Can Control Robotic Arm With Their Minds
University of Minnesota News (12/14/16) Lacey Nygard
Researchers at the University of Minnesota (UMN) say they have achieved a milestone in enabling thought-based control of a robotic arm. "This is the first time in the world that people can operate a robotic arm to reach and grasp objects in a complex [three-dimensional] environment using only their thoughts without a brain implant," says UMN professor Bin He. The noninvasive electroencephalography (EEG)-based brain-computer interface records subtle electrical signals of the subjects' brain via an electrode-studded EEG cap and converts these signals into action via advanced signal processing and machine learning. Participants began by learning to control a virtual onscreen cursor and then proceeded to train a robotic arm to reach and grasp objects in fixed locations on a table. They eventually were able to move the arm to reach and grip objects in random locations on a table and transfer objects from the table to a three-layer shelf by only imagining these movements. The researchers say the interface works because of the geography of the motor cortex. Bin He expects the next phase of the research to involve a brain-controlled robotic prosthesis attached to a person's body, or to examine how the technology could help stroke or paralysis patients.
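In outline, a noninvasive decoder of this kind turns per-electrode signal features into a continuous velocity command for the cursor or arm. The sketch below assumes pre-filtered EEG channel segments and illustrative linear-decoder weights; it stands in for, but is not, UMN's actual signal-processing and machine-learning model.

```python
def band_power(samples):
    """Mean squared amplitude of a (pre-filtered) EEG channel segment."""
    return sum(s * s for s in samples) / len(samples)

def decode_velocity(channel_segments, weights, bias):
    """Map per-channel band-power features to a 2-D cursor/arm velocity.

    weights: 2 x n_channels matrix and bias: length-2 vector, both of which
    would be learned during the cursor-training phase (assumed given here).
    """
    feats = [band_power(seg) for seg in channel_segments]
    return tuple(
        sum(w * f for w, f in zip(row, feats)) + b
        for row, b in zip(weights, bias)
    )
```

Running this over successive short windows of the EEG stream yields the smooth, continuous control needed to reach and grasp, which is why training starts with a cursor before moving to the arm.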
The Hype and Hope of Experience Design
Forbes (12/13/16) Manuel Vellon
Level 11 chief technology officer Manuel Vellon characterizes user experience as the latest thinking in human-computer interaction, which transcends marketing hype about websites and applications. "In the early days of user interface [UI] design, the main question considered was, 'How do we arrange the UI's functionality in a logical form?'" Vellon says. "Then, as we became focused on the user experience, the question became, 'how can we better understand the intent of the user and provide the functionality they need in the UI?'" To better guide customer experiences, a designer will employ diverse tools, including "journey mapping," to more deeply understand interactions. "Moreover, applying specific organizational frameworks to customer journeys can also be useful," Vellon notes. "At my company, our specialists frequently find it useful to use what they call the 'PAER' framework. This is shorthand for 'plan, arrive, experience, remember.'" Vellon says the last step in experience design is gaining a finer understanding of how customers experience a company, and taking action to actively guide these experiences. He also suggests that treating the user experience as a narrative or a story arc can go a long way toward understanding and directing the experience.
Singapore's 'City Brain' Project Is Groundbreaking--but What About Privacy?
Computerworld (12/12/16) Matt Hamblen
Singapore plans to improve its citizens' lives with a "city brain," a centralized dashboard view of sensors implemented across a distributed network. The ambitious project has many potential benefits, but it also may have implications for personal privacy. The country envisions the city brain as harnessing artificial intelligence and analytics to process data from diverse silos and present it, in most instances, on a single console. Potential benefits include holistic, optimized city traffic and improved pollution management. Singapore minister for foreign affairs Vivian Balakrishnan, who is in charge of the smart-nation initiatives, says the goal is to create a "national operating system for 100 million smart objects" in the next five years. Among the projects the city brain could enhance are elder services; for example, it could aggregate information from medical or emergency alerts to better identify patterns impacting seniors' health. However, one possibility is that a city brain eventually could be upgraded to intrude less or more on personal privacy, depending on Singapore's political climate. "Of course, there will be loss of privacy or, worst case, the chance of data being hacked," notes Gartner analyst Jacqueline Heng. Nevertheless, ZK Research analyst Zeus Kerravala thinks a centralized city brain model is the optimal and most cost-effective way to facilitate interlinked, actionable data and smart systems.
Posture Could Explain Why Women Get More VR Sickness Than Men
New Scientist (12/09/16) Gian Volpicelli
New studies explore why women experience more motion sickness than men while using virtual reality (VR). University of Minnesota professor Thomas Stoffregen and colleagues ran experiments on 36 people--half of them men, half of them women--using the Oculus Rift headset. A game that involved taking a virtual stroll around a haunted house triggered feelings of sickness in 14 of the 18 women and only six of the 18 men. Participants who reported experiencing VR sickness showed a wobblier posture. Stoffregen says women tend to be smaller than men, have a different body shape, and have smaller feet than men of comparable height. "In a purely physical sense, there's reduced stability in the female body, so there's an increased likelihood that any sort of disturbing motion stimulus will lead to instability," Stoffregen says. However, University of Wisconsin-Madison professor Bas Rokers says it is a commonly held belief that motion sickness arises when the senses provide conflicting information; his team found people are more likely to experience motion sickness when their eyes tell them something different from what their balance system reports. "And, on average, women are better at picking subtle visual differences than men, when taken as a group," Rokers says.
Hoque Receives World Technology Award
University of Rochester NewsCenter (12/12/16) Bob Marcotte
University of Rochester professor Ehsan Hoque has been honored with a World Technology Award from the World Technology Network for his innovative work in human-machine interaction. Hoque's area of interest is comprehending and modeling the ambiguity that language, facial expressions, gestures, and intonation bring to human communication. "Ehsan's research strikes at the core of what has remained elusive so far in human-machine interaction--that of emotion detection," notes Sandhya Dwarkadas, chair of Rochester's Department of Computer Science. Among Hoque's milestones is a system that enables individuals to practice speaking and social skills and get feedback in a repeatable, objective, and respectful manner. Hoque also has developed systems to help musicians practice singing vowels and to deliver live feedback to public speakers while they are engaged with audiences. He stresses the importance to students of human-machine interaction of not only publishing papers, but also producing technical prototypes. "As others use it, we get more data and insights back, which takes our research to new heights," Hoque says. He also notes deploying his students' work in the real world affords them more visibility and impact.
Abstract News © Copyright 2017 INFORMATION, INC.