Welcome to the September 2017 SIGCHI edition of ACM TechNews.
ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.
ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and on joining ACM, please visit the ACM website.
The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
Advancements in Virtual Reality Device Development
August 29, 2017
Since the term "virtual reality" (VR) was coined in the 1930s, the field has advanced steadily, with even more realistic visuals and better user experiences on the horizon, according to a new report from IDTechEx. The study found developers are focused on two potential breakthroughs for improving usability. One milestone, focus-tunable displays, involves projecting virtual content in multiple focal planes, improving the user experience by remedying visual discomfort. A second approach is foveated image rendering via eye-tracking technology embedded in the VR headset, which reduces image quality in the wearer's peripheral vision. In terms of functionality enhancements, engineers must find ways to streamline tethering, make VR devices lighter and more ergonomic, and refine visualization and user experience innovations. Unique VR headset designs highlighted by IDTechEx include a standalone, battery-powered headset consisting of goggles attached to a large back pad by flexible plastic arms, and featuring a 120-degree field of vision.
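The foveated-rendering idea mentioned above can be illustrated with a minimal sketch: detail is preserved only within a small radius of the tracked gaze point, and the periphery is rendered at reduced resolution. The function name, block size, and tiling scheme here are illustrative assumptions, not the report's actual pipeline.

```python
import numpy as np

def foveated_render(frame, gaze_rc, fovea_radius, block=8):
    """Crudely degrade detail outside the foveal region.

    frame: HxW grayscale image; gaze_rc: (row, col) gaze point.
    Every block x block tile is flattened to its mean (low detail),
    then pixels within fovea_radius of the gaze are restored at
    full resolution, mimicking foveated rendering."""
    h, w = frame.shape
    out = frame.astype(float).copy()
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze_rc[0], xs - gaze_rc[1])
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = out[r:r + block, c:c + block]
            tile[:] = tile.mean()  # low-res everywhere first
    fovea = dist <= fovea_radius
    out[fovea] = frame[fovea]      # restore full detail near the gaze
    return out
```

A real headset would vary shading rate or resolution in the GPU pipeline rather than post-processing pixels, but the payoff is the same: rendering effort tracks where the eye can actually see detail.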
You Can 'Talk' to This New Computer Interface With Your Eyes
August 15, 2017
Researchers at Chongqing University in China have developed a device affixed to eyewear to give locked-in patients a means of communication via eye blinks. The team says the device can sense pressure, in the form of an electrical signal, as the skin presses against it during a blink. Unlike other eye-tracking tools, which rely on electroencephalogram-like instruments to read the body's electricity, Chongqing's triboelectric nanogenerator (TENG) reads electricity generated by friction. The TENG produces little energy, but the voltage is substantial enough to be quantified by a computer and used as input. The TENG also is inexpensive, and needs no energy to run. These and other qualities make the device useful as an eye sensor, and it has been programmed to react to a two-blink "double click." A scrolling keyboard enables users to blink once, twice, or three times to select one of three letters within each row, although more elaborate typing systems could be built in the future.
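The scrolling-keyboard scheme described above maps a blink count within the highlighted row to one of three letters. A minimal sketch of that mapping follows; the row layout and function name are illustrative assumptions, not the researchers' actual design.

```python
# Hypothetical layout: the alphabet grouped into rows of three
# characters, through which the keyboard scrolls automatically.
ROWS = ["abc", "def", "ghi", "jkl", "mno", "pqr", "stu", "vwx", "yz_"]

def select_letter(row_index, blink_count):
    """One, two, or three blinks pick the 1st, 2nd, or 3rd letter
    in the currently highlighted row of the scrolling keyboard."""
    if not 1 <= blink_count <= 3:
        raise ValueError("expected 1-3 blinks")
    return ROWS[row_index][blink_count - 1]
```

With nine rows of three letters each, any character is reachable with at most three blinks once its row scrolls into focus, which keeps the physical effort per letter low for locked-in users.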
'Alexa, Be More Human'
August 29, 2017
Amazon wants to make the Amazon Echo's Alexa voice assistant more human-like by imbuing it with enhanced intelligence and conversational ability. A Cornell University study illustrates this point by observing that people who personify the device tend to be more satisfied with it. "As these voice recognition and voice production technologies improve, those relationships are going to be happening a lot more frequently and become a lot stronger," predicts Cornell's Jessie Taft. Amazon developers such as Daniel Rausch expect Alexa to become "a fabric in the home," but the Echo and similar interfaces need to assuage fears over privacy infringement. Meanwhile, university students are participating in the inaugural Alexa Prize competition to develop a bot that can engage with people in conversation on popular subjects. A major prize is offered for teams that create a bot that can carry on a dialogue for 20 minutes. Amazon's Ashwin Ram believes a more conversational voice interface could improve engagement.
The Medici Effect: Highly Flexible, Wearable Displays Born in KAIST
August 24, 2017
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) say they have developed flexible and wearable displays of unmatched reliability by integrating organic light-emitting diodes (OLEDs) within fabrics. The displays were created by following two approaches--fabric-type and fiber-type. Two years ago, the team thermally laminated a thin planarization sheet onto fabric to form a surface that can host OLEDs approximately 200 nm thick, and one year later they unveiled a dip-coating method for uniform layer deposition in order to build very bright polymer light-emitting diodes. The researchers say their wearable device can support the operation of OLEDs even at a bending radius of 2 mm. "Having wavy structures and empty spaces, fiber plays a significant role in lowering the mechanical stress on the OLEDs," says KAIST's Seungyeop Choi. KAIST professor Kyung Cheol Choi predicts light-emitting apparel "will have considerable influence on not only the e-textile industry but also the automobile and healthcare industries."
The Secret to a Good Robot Teacher
The New York Times
David DeSteno; Cynthia Breazeal; Paul Harris
August 26, 2017
The poor performance of educational technologies is partly due to the fact that their designers follow false assumptions concerning how the brain learns, according to a group of computer scientists. "We can expect the mind to be socially tuned, meaning that it should rely on and incorporate social cues to facilitate learning," the researchers note. They say designers mostly ignore this fact when they create educational technology. The researchers detail an experiment involving young children, in which they heard a story read by a robot that either used or did not use facial animation for emotional expression. The experiment found social cues heightened the children's engagement and learning, with long-term retention most pronounced in subjects who heard the story from the expressive robot. Based on the results of the experiment, researchers contend, "If we want to use technology to help people learn, we have to provide information in the way the human mind evolved to receive it."
Researchers Create Virtual Assistant Prototype to Help People With Alzheimer's Disease
August 14, 2017
Researchers at the University of Waterloo in Canada are developing a prototype virtual assistant for people with Alzheimer's disease that integrates artificial intelligence with social psychological models. The ACT@Home prototype will prompt Alzheimer's sufferers to complete day-to-day tasks, such as handwashing, in a manner calibrated with their feelings and thoughts at a given moment by picking up emotional cues. "This prototype will work by building a model of what's going on emotionally in the mind of someone with cognitive difficulty and then prompting them to complete an activity of daily living in a way that makes sense to them in that moment," says Waterloo professor Jesse Hoey. He notes the project's goal "is to help people maintain some independence while lessening the burden on their caregivers. The person they live with usually has to step in to help, but we are hearing that the amount of assistance and patience required can become overwhelming."
Tricking the Eye to Defeat Shoulder Surfing Attacks
NYU Tandon School of Engineering
August 22, 2017
Researchers at New York University's (NYU) Tandon School of Engineering have developed IllusionPIN, which they say is the first-ever app to thwart shoulder-surfing, the theft of sensitive information from device screens via live or camera surveillance. NYU professor Nasir Memon says IllusionPIN uses a hybrid-image keyboard that appears one way to the close-up user and differently to an observer at a distance of three feet or more. Memon notes IllusionPIN combines one image of a keyboard configuration with high spatial frequency and a second, totally different configuration with low spatial frequency. Each image's visibility depends on the distance from which it is observed. Memon says the system rearranges the keypad for each authentication or login attempt, and in tests IllusionPIN foiled all 84 attempted shoulder-surfing attacks. In addition, the researchers found IllusionPIN made the theft of PINs or other identification information via surveillance footage almost completely impossible.
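The hybrid-image idea described above can be sketched in a few lines: the observer far away perceives only the low spatial frequencies (taken from a decoy layout), while the close-up user also resolves the high frequencies (taken from the true layout). The box-blur low-pass filter and function names here are illustrative assumptions, not NYU's implementation.

```python
import numpy as np

def box_blur(img, k=9):
    """Naive box filter: a crude low-pass, used only for illustration."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + k, c:c + k].mean()
    return out

def hybrid_keyboard(true_layout, decoy_layout, k=9):
    """Low frequencies come from the decoy (dominant when viewed from
    afar); high frequencies come from the true layout (visible only
    up close)."""
    low = box_blur(decoy_layout, k)
    high = true_layout.astype(float) - box_blur(true_layout, k)
    return low + high
```

Because the shoulder surfer's retinal image is effectively low-pass filtered by distance, they read the decoy digits while the legitimate user reads the real, per-login shuffled keypad.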
Are We Just a Click Away From the Age of 'Enriched Reporting?'
August 21, 2017
The European Union-funded Innovative Journalism: Enhanced Creativity Tools (INJECT) project is piloting a digital toolkit at three Norwegian newspapers to determine how effective the tool's algorithms are in adding value to reporters' output. INJECT's underlying algorithm is designed to help journalists not only pinpoint information, but also explore options for crafting a story's focus. It achieves this on one level by displaying "Creative Sparks," or suggested approaches for writing a story, when a reporter hovers their mouse over keywords. The INJECT toolkit also provides explanatory and interactive fact cards or footnotes, which help journalists build their articles with clear references. "INJECT aims to do something which no other software for journalism attempts: to combine creative search techniques and ways of telling the reader more about background and sources," says University of London professor George Brock. INJECT employs natural-language processing and other methods stemming from research into artificial intelligence and human-computer interaction so journalists can search across multiple databases.
Novel Software Can Recognize Eye Contact in Everyday Situations
August 11, 2017
Researchers at Saarland University in Germany have developed deep-learning, neural network-based algorithms to estimate gaze direction. The process entails clustering the estimated gaze directions, and then identifying the most likely clusters with gaze direction estimates, which are fed to a target-object-specific eye contact detector. The researchers note this protocol can be conducted without user involvement, and the longer the camera remains next to the target object and records data, the better the method becomes. "Our method turns normal cameras into eye contact detectors, without the size or position of the target object having to be known or specified in advance," says Saarland's Andreas Bulling. He says tests found the technique is robust, even under variable lighting conditions, camera position, and the number of people involved. Bulling notes the method "paves the way not only for new user interfaces that automatically recognize eye contact and react to it, but also for measurements of eye contact in everyday situations."
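The clustering step described above can be approximated with a simple density heuristic: gaze-direction estimates that pile up around one direction are assumed to point at the target object, and frames in that dense cluster are flagged as eye contact. The thresholds, function name, and nearest-neighbor criterion are illustrative assumptions, not the Saarland team's trained detector.

```python
import numpy as np

def eye_contact_frames(gaze_dirs, radius=0.1, min_frac=0.2):
    """Flag frames whose gaze estimate lies in a dense cluster.

    gaze_dirs: Nx2 array of (yaw, pitch) estimates, one per frame.
    A frame counts as eye contact if at least min_frac of all frames
    point within `radius` of it -- the dense cluster standing in for
    the most likely target-object direction."""
    gaze = np.asarray(gaze_dirs, dtype=float)
    n = len(gaze)
    flags = []
    for g in gaze:
        dists = np.hypot(*(gaze - g).T)   # distance to every estimate
        flags.append(np.sum(dists <= radius) / n >= min_frac)
    return flags
```

This also illustrates why the method improves with recording time: more frames sharpen the density contrast between the target cluster and stray glances.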
We Don't Want AI That Can Understand Us--We'd Only End Up Arguing
Constantine Sandis; Richard Harper
August 21, 2017
University of Hertfordshire professor Constantine Sandis and Lancaster University professor Richard Harper in the U.K. contend artificial intelligence (AI) that can truly understand humans would likely be less helpful than many people think. "One of the key things that makes artificial personal assistants such as Amazon's Alexa useful is precisely the fact that our interactions with them could never justify reactive attitudes on either side," Sandis and Harper note. "This is because they are not the sort of beings that could care or be cared about." Sandis and Harper say although the accuracy and contextual sensitivity of the assistant's voice-recognition software is highly valued, "we hardly want it to be capable of understanding--and so also misunderstanding--us in the everyday ways that could produce mutual resentment, blame, gratitude, guilt, indignation, or pride." The researchers suggest AI-based companions for senior citizens may be the only workable exception to this general rule.
New Study Challenges Long-Accepted Views on Human-Autonomy Interaction
U.S. Army Research Laboratory
August 9, 2017
U.S. Army Research Laboratory (ARL) scientists have guided a multidisciplinary research team in developing a novel, general-purpose principled framework for human-autonomy interaction. The team's Privileged Sensing Framework (PSF) seeks to dynamically match human and autonomous agents based on their individual characteristics. ARL's Amar Marathe says the PSF is designed to retain humans as the primary, critical, and central authority while also enabling robots and other technical systems to identify and mitigate when people's judgments or actions would lead to dysfunctional or even disastrous consequences. Marathe says the framework is founded on the concept of appropriately "privileging" information during the process of integration so special rights are conferred upon specific agents based on their capabilities within the current task context, as well as performance goals. He notes a series of simulations demonstrated that "the PSF significantly improved joint human-autonomy performance without sacrificing the gains to be made from incorporating human strengths."
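The "privileging" concept above can be caricatured as reliability-weighted fusion: each agent's report counts in proportion to its assumed capability for the current task context, so a more capable agent's judgment dominates the joint decision. The function, weights, and scalar-estimate framing are illustrative assumptions, not ARL's actual framework.

```python
def fuse_estimates(reports, reliability):
    """Reliability-weighted fusion of agents' estimates.

    reports: {agent: scalar estimate}
    reliability: {agent: weight reflecting assumed capability
                  in the current task context}
    Returns the weighted average, privileging the more reliable agent."""
    total = sum(reliability[a] for a in reports)
    return sum(reports[a] * reliability[a] for a in reports) / total
```

In a context where the human is assumed more capable, the human's estimate is weighted more heavily; when conditions favor the autonomous system (fatigue, poor visibility), the weights shift, letting the system mitigate degraded human judgment without removing the human from the loop.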
Social Cybersecurity: Influence People, Make Friends and Keep Them Safe
August 11, 2017
In an interview, Carnegie Mellon University (CMU) professor Jason Hong discusses applying psychology to cybersecurity to enable what he terms social cybersecurity. Hong says this involves using peer pressure and what he defines as social proof so people are encouraged to practice sound security. Hong thinks this framework could be used to persuade people to use two-factor authentication, or practice correct software updating. "One of the things we're looking at is how do we make some of these cybersecurity practices more visible--in a safe way--so that we have better adoption of best practices," Hong says. He believes CMU's Human-Computer Interaction Institute is an excellent incubator for this concept, as software designers, computer scientists, and psychologists work in close quarters. "We're looking at people and computers together and also looking at how we improve the system in terms of usability, desirability, utility, and so on," Hong says.
Calendar of Events
MobileHCI '17: 19th International Conference on Human-Computer Interaction with Mobile Devices and Services
Ubicomp '17: The 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Maui, Hawaii, USA
AutomotiveUI '17: 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
CHIPLAY '17: The annual symposium on Computer-Human Interaction in Play
SUI '17: Symposium on Spatial User Interaction
Brighton, United Kingdom
ISS '17: Interactive Surfaces and Spaces
UIST '17: The 30th Annual ACM Symposium on User Interface Software and Technology
VRST '17: 23rd ACM Symposium on Virtual Reality Software and Technology
ICMI '17: International Conference on Multimodal Interaction
SIGCHI is the premier international society for professionals, academics and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, web sites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through Local SIGCHI chapters. SIGCHI is also involved in public policy.
ACM Media Sales
If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.
Association for Computing Machinery
2 Penn Plaza, Suite 701
New York, NY 10121-0701