Welcome to the August 2017 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
For Computers, Too, It's Hard to Learn to Speak Chinese
Technology Review
Yiting Sun
July 25, 2017


Computer users in China are rapidly adopting voice-based computing, and some researchers are calling 2017 the year of the conversational computer. However, building conversational artificial intelligence (AI) systems for Chinese requires overcoming the language's unique quirks and complexities. Chinese characters can carry different meanings depending on their order or on the characters that surround them, and written Chinese lacks spaces to naturally divide words. These and other qualities mean natural language processing scientists must teach algorithms where to place word boundaries in order to establish the proper meaning of a particular combination of characters. The absence of verb tenses in Chinese further complicates a computer's task of deciphering the timeline of a sequence of events. Tsinghua University researchers note computers must detect intonation, stress, and emotion to understand the intent of and communicate with human speakers. Gang Wang at Alibaba's AI Lab says scientists must design neural networks that do not need a large volume of data in order to learn language more efficiently.
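The word-segmentation step described above can be illustrated with a short sketch. The snippet below uses the open-source jieba segmenter, which is not mentioned in the article, to recover word boundaries from an unspaced Chinese sentence; it stands in for whatever segmentation tools the researchers actually use.

```python
# Minimal sketch of Chinese word segmentation, the preprocessing step the
# article describes. The open-source "jieba" segmenter is used purely as an
# illustration; the researchers' actual tools are not specified.
import jieba

sentence = "我爱自然语言处理"  # written Chinese carries no spaces between words

# jieba.cut yields candidate words; joining them makes the inferred
# word boundaries visible.
words = list(jieba.cut(sentence))
print(" / ".join(words))  # e.g. 我 / 爱 / 自然语言 / 处理
```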

Full Article
New Research Aims to Improve Touchscreen Use in Vehicles
University of Canterbury (New Zealand)
Margaret Agnew
July 26, 2017


Researchers at the University of Canterbury in New Zealand have received funding to pursue advances in in-vehicle touchscreens designed to improve user performance and lessen attentional demands. "This project will develop new fundamental understanding of touchscreen interaction during vibration, and it will develop methods that improve interaction with touchscreens in vibrating environments," says Canterbury professor Andrew Cockburn. He notes the team will be exploring two specific improvement strategies. One method to be investigated is the use of new finger force-sensing capabilities, while the other area of concentration is the use of transparent overlays, including 3D-printed physical components such as buttons and sliders that rest on top of the touchscreen. "These physical artifacts will assist the user in controlling the underlying touchscreen by feel, and they will assist mechanical limb stabilization," Cockburn says.

Full Article

Artificial Empathy: The Next Frontier
Asian Scientist
Christopher Lum
July 17, 2017


Researchers at Waseda University in Japan are moving toward the development of friendly and empathetic human-computer interactions via projects such as the SCHEMA robot. SCHEMA is designed to simulate motor mimicry and emotional contagion, using a refined algorithm that enables it to choose the most natural response based on datasets. A key challenge for projects such as SCHEMA is imbuing the machines with the ability to differentiate between their own emotional state and that of others. This becomes a necessity when motor mimicry can no longer handle this task, and a distinction between cognitive and emotional empathy must be made. The emergence of machine learning, big data, and artificial intelligence promises to facilitate the establishment of a better and more effective reference point for an artificial empathy circuit. "Emotions will be critical in making machine intelligence more compatible with our own," says Intelligent Future Consulting executive director Richard Yonck.

Full Article

Will Voice User Interfaces Usurp the Traditional UI?
TheServerSide.com
George Lawton
July 25, 2017


Author John Allsopp says voice user interfaces (UIs) could replace traditional UIs and facilitate the next major paradigm shift in application development. Allsopp cites the work of Lawrence Berkeley National Laboratory's Jonathan Koomey, who determined the amount of computation achievable per unit of energy grows about 100-fold every decade. By this reckoning, wireless earbuds could power voice interaction by themselves in a few years. Speech-recognition challenges are being met with deep-learning algorithms, which Allsopp thinks are edging ever closer to outperforming humans. Allsopp also believes developers need to rethink how to use voice interfaces. "If all we are recognizing are a few keywords, all we will be doing is replacing clicking and tapping with our voice," Allsopp says. "What is happening now are ways of extracting deep and interesting meaning from speech." Allsopp notes entity and sentiment analysis, emotion recognition, translation, and other capabilities currently are available via application programming interfaces.
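As a rough illustration of the efficiency trend Allsopp leans on (roughly 100-fold more computation per unit of energy each decade), the sketch below compounds that rate forward; the target gain is a hypothetical number chosen only to show the arithmetic, not a figure from the article.

```python
# Rough arithmetic behind the "100x per decade" efficiency trend the article
# cites. The 50x target below is hypothetical, chosen only to illustrate the
# compounding; it is not a figure from the article.
import math

annual_factor = 100 ** (1 / 10)  # ~1.58x more computation per joule each year
print(f"annual efficiency gain: ~{annual_factor:.2f}x")

needed_gain = 50  # hypothetical gap between today's earbuds and on-device voice interaction
years = math.log(needed_gain) / math.log(annual_factor)
print(f"~{years:.1f} years to accumulate a {needed_gain}x gain")  # ~8.5 years
```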

Full Article
Robotic Teachers Can Adjust Style Based on Student Success
R&D Magazine
Laura Panjwani
July 25, 2017


Researchers at Yale University are developing socially assistive robotics as part of the U.S. National Science Foundation's Expeditions in Computing initiative. One of the initiative's core research concentrations focuses on robots that engage with children, including those who are not fluent in English and those with mental or physical difficulties. Yale professor Brian Scassellati says the project aims to complement and not replace educators. The goal is to facilitate the design, implementation, and assessment of robots that inspire social, emotional, and cognitive growth in children by adapting their teaching style and approach to each individual child. Scassellati's team is working to design robots capable of identifying and responding to the smallest variations in learning styles, motivation, and personality based on their cumulative experiences with a child. Scassellati notes in one scenario, a socially assistive robot could determine if a child is motivated by collaborative or competitive goals.

Full Article

Glove Turns Sign Language Into Text for Real-Time Translation
New Scientist
Timothy Revell
July 12, 2017


Researchers at the University of California, San Diego have developed a glove that can convert all 26 letters of American Sign Language (ASL) into text on a smartphone or computer screen. The device integrates a standard sports glove with nine flexible strain sensors placed over different knuckles. When the user bends a finger or thumb, the sensors stretch and their electrical resistance increases. The software uses these signals to determine the hand's configuration, while motion sensors on the back of the glove detect whether the hand is stationary or moving, a necessity for differentiating between similar ASL letters. The team says these signals are relayed via Bluetooth to an application on the phone, which displays the corresponding letters as text. The researchers note the glove currently only enables people to spell out words letter by letter, but whole-word and phrase translation will be necessary for the device to become truly convenient.
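The article does not describe the glove's decoding software in detail; the sketch below is a hypothetical, simplified version of the general idea: matching a vector of nine strain-sensor readings against stored per-letter templates with a nearest-neighbor rule.

```python
# Hypothetical, simplified sketch of the glove's decoding idea: each letter is
# represented by a template of nine strain-sensor readings (one per
# instrumented knuckle), and an incoming reading is matched to the nearest
# template. The real device's algorithm is not described in the article, and
# the template values below are invented.
import numpy as np

templates = {
    "A": np.array([0.9, 0.9, 0.9, 0.9, 0.1, 0.8, 0.8, 0.8, 0.8]),
    "B": np.array([0.1, 0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1]),
    "L": np.array([0.1, 0.9, 0.9, 0.9, 0.1, 0.8, 0.8, 0.8, 0.8]),
}

def classify(reading: np.ndarray) -> str:
    """Return the letter whose stored template is closest to the reading."""
    return min(templates, key=lambda letter: np.linalg.norm(reading - templates[letter]))

sample = np.array([0.85, 0.9, 0.88, 0.92, 0.15, 0.8, 0.79, 0.81, 0.83])
print(classify(sample))  # -> "A"
```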

Full Article

Smartwatch Could Inspire More Frequent Physical Activity
Augusta Free Press
July 8, 2017


Researchers at Virginia Polytechnic Institute and State University (Virginia Tech) have conducted a study suggesting a smartwatch application could encourage users to be more physically active via friendly competition. Virginia Tech's Andrey Esakia worked with multiple departments to incorporate the watch's hardware and software elements within the FitEx initiative. "Because of the social media and just plain social aspect of the FitEx initiative, we thought it would be a good option to try something a la the Fitbit, but better suited for groups," says Virginia Tech professor Scott McCrickard. The watch was used in conjunction with a companion Android app, as well as a website to track the number of steps users took daily. "Engagement with the watches encouraged friendly competition within the groups we studied," notes Virginia Tech professor Samantha Harden. The study was presented in May at the ACM Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.

Full Article
Robots Learn to Speak Body Language
IEEE Spectrum
Alyssa Pagano
July 22, 2017


Researchers at Carnegie Mellon University have developed a system called OpenPose that tracks body movement in real time, with the potential to make human-robot interaction easier. OpenPose employs computer vision and machine learning to process video frames, and it can simultaneously monitor multiple people. OpenPose also can track individual fingers, a capability achieved by capturing two-dimensional camera images of body poses at various angles for inclusion in a training dataset. The images are passed through a keypoint detector to identify and label specific body parts, and OpenPose also learns to associate the body parts with individuals, enabling it to track multiple people at once. The researchers have triangulated the detected keypoints in three dimensions to help their algorithms understand how each pose appears from different perspectives. With the dataset, OpenPose can operate with a single camera and a laptop, and the researchers believe the system could enable more natural and intuitive human-machine interactions.
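The triangulation step mentioned above can be sketched with OpenCV's stereo utilities; the projection matrices and image coordinates below are placeholder values, not data from the CMU capture setup.

```python
# Sketch of the triangulation step described above: recovering a 3D keypoint
# from its 2D detections in two calibrated cameras. The projection matrices
# and image coordinates are placeholders, not data from the CMU setup.
import numpy as np
import cv2

# 3x4 projection matrices (intrinsics assumed to be identity for simplicity).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])   # second camera, shifted 10 cm

# Matching 2D detections of the same keypoint (e.g., a wrist), shaped 2xN.
pts1 = np.array([[0.20], [0.10]])
pts2 = np.array([[0.15], [0.10]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4xN result
X = (X_h[:3] / X_h[3]).ravel()
print("triangulated 3D keypoint:", X)            # ~ [0.4, 0.2, 2.0]
```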

Full Article
Aberdeen Researchers in Initiative to Beat Hackers and Improve Online Security
The National (Scotland)
Greg Russell
July 18, 2017


Researchers at the University of Aberdeen in Scotland aim to improve cybersecurity by evaluating how artificial intelligence (AI) and persuasion methods can make computer users more likely to follow online security advice. The team has received a grant from the U.K. Engineering and Physical Sciences Research Council to fund their Supporting Security Policy with Effective Digital Intervention initiative. Aberdeen's Matthew Collinson notes phishing emails are the most common way cybercriminals trick users into infecting their computers with malware. "In terms of AI, we will investigate how intelligent programs can be constructed which can use dialogue to explain security policies to users, and utilize persuasion techniques to nudge users to comply," Collinson says. "In addition, we will be using sentiment analysis to detect people's attitudes to security policies through natural language, for example through their email correspondence." Collinson says the goal is to use these techniques to recommend remedies to non-compliance issues.
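The sentiment-analysis idea Collinson describes can be sketched with an off-the-shelf analyzer; the example below uses NLTK's VADER model on invented sample sentences, standing in for whatever tooling and data the Aberdeen project actually uses.

```python
# Illustrative sketch of scoring users' attitudes toward security policies
# from free text, as Collinson describes. NLTK's off-the-shelf VADER analyzer
# stands in for the project's unstated tooling; the sample sentences are
# invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

emails = [
    "These new password rules are a pointless waste of my time.",
    "Happy to switch on two-factor authentication, it seems sensible.",
]

for text in emails:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus compound in [-1, 1]
    print(f"{scores['compound']:+.2f}  {text}")
```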

Full Article
Award-Winning Research Could Make Wristwatches Smarter Than Smartphones
Stony Brook Newsroom
July 13, 2017


Stony Brook University professor Xiaojun Bi has contributed to award-winning research outlining the design, decoding algorithm, and deployment of COMPASS, a rotational keyboard for entering text on smartwatches without requiring a touchscreen. COMPASS uses the watch's bezel, which users rotate to move three cursors onto the letter they want to type. After each selection, the locations of the cursors are dynamically optimized to reduce the distance of the next rotation. Studies demonstrated that within 90 minutes of practice, COMPASS users boosted their typing speed. Unlike traditional QWERTY and T9 keyboards, the circular layout leaves the remaining screen area round, so screen contents can be scaled to fit within the inner area without altering the interface's look and feel. The research won an Honorable Mention Award in May at the ACM Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.
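The COMPASS decoding algorithm itself is not detailed in the article; the sketch below is a hypothetical illustration of the dynamic-optimization idea: after each keystroke, place the three cursors so the expected rotation to the likely next letters is small.

```python
# Hypothetical sketch of the optimization idea behind COMPASS: after each
# keystroke, position the three cursors so the expected rotation to likely
# next letters is small. The actual CHI 2017 algorithm is not described in
# the article, and the letter probabilities below are invented.
import string

RING = list(string.ascii_lowercase)   # 26 letters laid out around the bezel
STEP = 360 / len(RING)                # angular distance between adjacent slots

def rotation(frm: int, to: int) -> float:
    """Smallest rotation (degrees) needed to move a cursor between two slots."""
    d = abs(frm - to) % len(RING)
    return min(d, len(RING) - d) * STEP

def expected_rotation(cursors, next_letter_probs):
    """Expected rotation if each likely next letter is reached by its nearest cursor."""
    return sum(p * min(rotation(c, RING.index(ch)) for c in cursors)
               for ch, p in next_letter_probs.items())

probs = {"e": 0.6, "a": 0.2, "i": 0.2}        # invented distribution after typing "th"
for cursors in [(0, 9, 17), (4, 8, 0)]:       # two candidate cursor placements (slot indices)
    print(cursors, f"expected rotation: {expected_rotation(cursors, probs):.1f} deg")
```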

Full Article
Towards a High-Resolution, Implantable Neural Interface
DARPA News
July 10, 2017


The U.S. Defense Advanced Research Projects Agency has awarded contracts to five research groups and one company to advance the Neural Engineering System Design (NESD) program. NESD's goal is to develop an implantable system that provides precision communication between the brain and the digital world. The neural interface would convert the brain's electrochemical signaling into the ones and zeros of information technology processing, and at a far greater scale than currently is possible. "The NESD program looks ahead to a future in which advanced neural devices offer improved fidelity, resolution, and a precision sensory interface for therapeutic applications," says NESD program manager Phillip Alvelda. The research effort will partly focus on understanding how the brain processes hearing, speech, and vision concurrently with individual neuron-level precision, at a scale sufficient to represent detailed imagery and sound. The teams will use insights into these processes to develop approaches for interpreting neuronal activity quickly and efficiently.

Full Article

The iPhone, Siri, and Conversation
The UC Santa Barbara Current
Sonia Fernandez
July 7, 2017


Accelerating innovation in the field of natural language processing (NLP) holds the promise of realizing seamless human-computer communication. "Machines need to not just understand human language, but learn how to generate human language," says University of California, Santa Barbara professor William Wang. The courses Wang teaches focus on data analytics and machine translation, which he says demand a great deal of data as well as several layers of concurrent processing. Wang predicts the potential applications for NLP will expand as the training methods for artificial neural networks become more sophisticated, with the capability possibly implemented in smartphones everywhere. In addition, Wang is teaching a class on reinforcement learning, a method in which machines essentially teach themselves. "How we actually design more intelligent machines that can understand humans and also facilitate natural language generation--I think that will be really useful for the future generations of technologists," Wang says.

Full Article
Calendar of Events
RecSys '17: 11th ACM Conference on Recommender Systems
Aug. 27-31
Como, Italy

MobileHCI '17: 19th International Conference on Human-Computer Interaction with Mobile Devices and Services
Sept. 4-7
Vienna, Austria

Ubicomp '17: The 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Sept. 11-15
Maui, Hawaii, USA

AutomotiveUI '17: 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
Sept. 24-27
Oldenburg, Germany

CHIPLAY '17: The Annual Symposium on Computer-Human Interaction in Play
Oct. 15-18
Amsterdam, Netherlands

SUI '17: Symposium on Spatial User Interaction
Oct. 16-17
Brighton, United Kingdom

ISS '17: Interactive Surfaces and Spaces
Oct. 18-20
Brighton, UK

UIST '17: The 30th Annual ACM Symposium on User Interface Software and Technology
Oct. 22-25
Quebec City, Canada

VRST '17: 23rd ACM Symposium on Virtual Reality Software and Technology
Nov. 8-10
Gothenburg, Sweden

ICMI '17: International Conference on Multimodal Interaction
Nov. 13-17
Glasgow, UK


About SIGCHI

SIGCHI is the premier international society for professionals, academics, and students interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, websites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops, and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through local SIGCHI chapters. SIGCHI is also involved in public policy.



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

Association for Computing Machinery
2 Penn Plaza, Suite 701
New York, NY 10121-0701
Phone: 1-800-342-6626
(U.S./Canada)

To submit feedback about ACM TechNews, contact: [email protected]

Unsubscribe

About ACM | Contact us | Boards & Committees | Press Room | Membership | Privacy Policy | Code of Ethics | System Availability | Copyright © 2017, ACM, Inc.