Welcome to the June 2017 SIGCHI edition of ACM TechNews.
ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.
ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.
The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
South Korean Research Team Develops Smart Table Clock
The Korea Bizwire
May 29, 2017
Researchers at the Ulsan National Institute of Science and Technology (UNIST) in South Korea have developed a table clock that receives information from other devices to alert users to their schedules. The concrete-and-wood "Cuito" clock links to smartphones or computers via Wi-Fi to acquire information at the push of a button. When the concrete element is pressed, the clock hands start moving and a light illuminates to indicate the user's next scheduled activity. Cuito also has a feature to keep users up to date with their plans, sometimes announcing how much time remains between the current activity and the next one. The UNIST team predicts Cuito will help users "spend the day more effectively while allowing people to understand the concept of time and plan their schedule better." The project won a thesis award at last month's ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.
MIT Student Team Creates Real-Time Text to Braille Converter
May 25, 2017
An all-female student team at the Massachusetts Institute of Technology (MIT) is developing a real-time text-to-braille converter that enables people with vision impairments to transcribe an image of printed text into braille on a refreshable display. "If someone opens up a page in a textbook and they would like to understand what that page is saying, they would place our device on the page and hit a button, and the device would capture that printed text through text recognition and convert that into the corresponding braille letters," says MIT's Jessica Shi. An initial proof-of-concept prototype, built in February 2016, encouraged further development, which thus far has yielded six follow-up prototypes. The team is exploring alternatives to the piezoelectric technology used by most refreshable displays, including microfluidics and refined electromagnetic actuation. The team envisions a portable, smartphone-sized device whose braille cells are much less expensive than cells currently on the market.
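The article does not describe the device's internals, but the final step it mentions--converting recognized characters into braille cells--follows a well-defined mapping that can be sketched. The snippet below is a hypothetical illustration only (the OCR and refreshable-display stages of the MIT device are not modeled); it maps ASCII letters to 6-dot braille cells using the Unicode Braille Patterns block (U+2800), where dots 1-6 correspond to bits 0-5.

```python
# Dot masks for letters a-j; k-t add dot 3, and u,v,x,y,z add dots 3 and 6.
# Bit layout: dot1=0b000001, dot2=0b000010, dot3=0b000100,
#             dot4=0b001000, dot5=0b010000, dot6=0b100000.
_A_TO_J = [0b000001, 0b000011, 0b001001, 0b011001, 0b010001,
           0b001011, 0b011011, 0b010011, 0b001010, 0b011010]

def to_braille(text: str) -> str:
    """Map ASCII letters (and spaces) to Unicode braille cells."""
    cells = []
    for ch in text.lower():
        if ch == ' ':
            cells.append(chr(0x2800))               # blank cell
            continue
        i = ord(ch) - ord('a')
        if ch == 'w':                               # w falls outside the a-v pattern
            mask = 0b111010                         # dots 2,4,5,6
        elif 0 <= i < 10:                           # a-j
            mask = _A_TO_J[i]
        elif i < 20:                                # k-t: a-j plus dot 3
            mask = _A_TO_J[i - 10] | 0b000100
        elif i < 26:                                # u,v,x,y,z: a-e plus dots 3 and 6
            mask = _A_TO_J[i - 20 if i < 22 else i - 21] | 0b100100
        else:
            continue                                # skip characters outside a-z
        cells.append(chr(0x2800 + mask))
    return ''.join(cells)
```

A real transcriber would also handle contractions, numbers, and capitalization signs (Grade 2 braille), which this letter-by-letter sketch omits.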
Microsoft Researchers Are Trying to Humanize Virtual Assistants by Studying Multilingual Speech
Saheli Roy Choudhury
May 29, 2017
Efforts are underway to make virtual assistants multilingual and their conversational mode more human-like by applying machine learning and real-time big data analytics. For example, Microsoft researchers in India have begun Project Melange, studying code-mixing by bilingual Indians online to determine how virtual assistants might be trained to respond to a user switching between different languages in a conversation. "To have a digital assistant, something like Cortana, you have to be able to understand [the user base]," says Microsoft's Kalika Bali. Picking up multiple languages in a single dialogue is beyond current systems' abilities, and Microsoft's Sundar Srinivasan says Cortana "needs to really understand human beings, and the bar for that is very high because that means we have to train the assistant with lots of human speech data." Bali says the project team studies every facet of code-mixing, and the biggest challenge it currently faces is obtaining access to sufficient datasets.
Electronic Tattoos: Using Distinctive Body Locations to Control Mobile Devices Intuitively
May 17, 2017
Researchers at Google and Saarland University in Germany have developed temporary electronic skin tattoos for intuitive control of mobile devices. The researchers experimented to find the ideal mix of conductive inks and printing processes so they could print the conductive traces and the electrodes as compactly and thinly as possible on tattoo paper. The conductive plastic PEDOT:PSS enabled the team to print the tattoo thinner than a hair's breadth, ensuring it would fit onto knuckles and wrinkles while remaining flexible enough to withstand compression and stretching. The water-applied "SkinMarks" can be printed in 30 to 60 minutes under laboratory conditions. In addition, each SkinMark is linked to a nearby minicomputer via conductive copper tape. "We are convinced that in the future, everyone will be able to make their own e-tattoo in less than a minute on a standard, commercially available printer," says Saarland professor Jürgen Steimle.
Your Camera Wants to Kill the Keyboard
May 22, 2017
Google officials used the company's annual developer conference to highlight technologies designed to outmode the keyboard and enable more natural and emotive interaction. One example is Google Lens, a computer-vision system that transforms smartphone cameras into input devices. With Lens, users can point the camera at objects and locations and summon relevant information, and in some cases perform commands, such as booking reservations at restaurants the camera faces or logging into online networks by scanning a router's SSID and password. Image-based search also was a capability of Amazon's 2014 Fire Phone, and Pinterest launched a beta version of its own Lens tool earlier this year. "We're getting to the point where using your camera to discover new ideas is as fast and easy as typing," says Pinterest's Albert Pereta. Experts note the method relies on the accuracy of the image-recognition system, and Google is working to refine the technology.
Researchers Engineer Shape-Shifting Noodles
May 24, 2017
Researchers at the Massachusetts Institute of Technology's (MIT) Tangible Media Group have engineered flat sheets of gelatin and starch that configure themselves into three-dimensional (3D) structures when immersed in water. "This project is one of the latest to materialize our vision of 'radical atoms'--combining human interactions with dynamic physical materials, which are transformable, conformable, and informable," says MIT professor Hiroshi Ishii. The researchers generated a flat, dual-layer gelatin film with a denser top layer that can absorb more water and curl over the bottom layer, forming a slowly rising arch. The team 3D-printed strips of edible cellulose over the top gelatin layer, which function as a water barrier. Printing cellulose in various patterns onto gelatin enabled predictable control over the structure's reaction to water and the shapes it takes. The research was presented last month at the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.
Technology Aims to Improve Physiotherapy Outcomes Using Sensors and Machine Learning
May 18, 2017
Researchers at the University of Waterloo in Canada have integrated motion sensors and software so people recovering from hip and knee replacements can see the real-time performance of their rehabilitative physiotherapy exercises. Sensors affixed to limbs transmit data to a computer as patients go through their exercises, and the software applies human body modeling and machine learning to analyze the data and instantly synthesize a stick-figure visual representation of the motion on a screen. The stick figure is displayed side by side with the ideal exercise movement to give patients a basis for comparison. The researchers note the Automated Rehabilitation System's initial outcomes demonstrate that immediate visual feedback improves patient performance. "The system we have developed is quite flexible," says Waterloo professor Dana Kulic. "It can be used for any type of human movement."
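The article describes the Waterloo system only at a high level; as a hypothetical sketch of its feedback idea, the snippet below compares a patient's recorded joint-angle trace against a reference movement and collapses the deviation into an instant cue. The function names, the RMSE metric, and the 10-degree threshold are all illustrative assumptions, not details from the Automated Rehabilitation System.

```python
import math

def motion_deviation(patient, reference):
    """Per-frame differences and RMSE between two equal-length
    joint-angle traces (in degrees), one frame per sample."""
    diffs = [p - r for p, r in zip(patient, reference)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return diffs, rmse

def feedback(rmse, threshold=10.0):
    """Collapse the deviation score into the kind of immediate
    visual cue the side-by-side display could show."""
    return "good form" if rmse <= threshold else "adjust movement"
```

In a real system the comparison would run per joint and would need to align traces of different speeds (e.g., via dynamic time warping) before scoring.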
Family TV Viewing and SMS Texting Could Help Cut Internet Energy Use
May 18, 2017
Researchers at Lancaster University and the University of Cambridge in the U.K. recently performed a detailed study on Android device users, comparing their observations with a dataset of nearly 400 mobile devices in the U.K. and Ireland. The team identified four categories of data-hungry services--watching video, social networking, communications, and listening. The researchers note the four categories cumulatively comprise approximately 50 percent of mobile data demand. The team recommends developers create features for devices and apps that encourage users to communally enjoy streamed media, cutting the overall number of data streams and downloads. They also note other ways to conserve energy, including reducing previews and encouraging click-throughs to content. In addition, the study found the energy expenditure of instant messaging apps' data demand exceeds that of short messaging services by about a factor of 10.
How Movie Magic Could Help Translate for Deaf Students
May 17, 2017
Scientists are using computer-animation methods popular in the movie industry to generate lifelike digital avatars to help hearing and hearing-impaired persons seamlessly communicate via sign language. The Rochester Institute of Technology's Matthew Huenerfauth leads a team using motion-capture technology in which humans wear sensor-dotted apparel as they move, with their movement data rendered into open source mathematical models so other researchers can use them to develop avatars. Meanwhile, researchers working on DePaul University's Paula avatar are programming it to mimic the action of "role shifting" by developing mathematical models of how bodies naturally make specific motions, and using them to automate critical parts of Paula's signing through keyframe animation. Georgia Institute of Technology researcher Harley Hamilton's team is developing "CopyCat," a game for improving children's signing skills by having participants interact with a sign-language cat avatar. Via a motion-sensing camera, the cat's responses indicate whether players are making the correct signing movements.
Study Researches 'Gorilla Arm' Fatigue in Mid-Air Computer Usage
Purdue University News
Brian L. Huchel
May 8, 2017
Researchers at Purdue University's C Design Lab are investigating arm and muscle fatigue associated with advancements in the use of hand gestures for mid-air computer interaction. "Arm fatigue--the so-called 'gorilla arm syndrome'--is known to negatively impact user experience and hamper prolonged use of mid-air interfaces," says Purdue professor Karthik Ramani. Ramani led an investigation focusing on determining an individual's arm strength and calculating cumulative subjective fatigue levels. "With a simple depth camera and a dumbbell, we are able to do as good measurements as the other methods," Ramani says. The study estimated cumulative subjective fatigue with a 15-percent error rate, an improvement over conventional methods. The researchers believe their work will help expedite and encourage the development of improved user interface solutions for virtual and augmented reality systems. The study's results were presented last month at the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.
How a Tap Could Tame the Smart Home
May 9, 2017
Researchers at Carnegie Mellon University (CMU) have developed a system enabling smartphone users to tap their phone against Internet of Things appliances and automatically load a contextual menu onscreen. Their Deus EM Machina system harnesses electromagnetic noise discharged from everyday electrical objects to power a device classifier, delivering contextual functionality directly to the smartphone screen so it can serve as a dynamic control device. The project is part of the CMU Future Interfaces Group's work with wearable devices, extended to a more accurate form factor, says CMU professor Chris Harrison. "Unlike a smartwatch, you perform distinct activities on a smartphone," Harrison notes. "It's more of a tool than an accessory." The 12 example applications the team built for the technology include thermostat control, router configuration, document printing, and mobile-to-desktop text transmission.
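The classification step described above--matching an appliance's electromagnetic emission against known device signatures--can be sketched in miniature. Everything below is a hypothetical illustration: the feature vectors, device names, and nearest-match rule are assumptions, not the CMU team's actual classifier, which operates on far richer EM spectra.

```python
import math

# Hypothetical signature table: each appliance's characteristic EM emission
# summarized as a small feature vector (e.g., normalized band energies).
SIGNATURES = {
    "thermostat": [0.9, 0.1, 0.3],
    "router":     [0.2, 0.8, 0.5],
    "printer":    [0.4, 0.4, 0.9],
}

def classify(reading):
    """Return the appliance whose stored signature is closest
    (by Euclidean distance) to the sensed EM reading."""
    return min(SIGNATURES,
               key=lambda name: math.dist(reading, SIGNATURES[name]))
```

Once the tapped device is identified this way, the phone could load the matching contextual menu (thermostat controls, router settings, and so on).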
Q&A With Ph.D. Student and Smartwatch Designer Jun Gong
May 25, 2017
In an interview, Dartmouth College's Jun Gong discusses Cito, an actuated movable smartwatch he designed with scientists from Dartmouth and other universities. Gong says the idea came from issues with current smartwatches, and he sought to provide actuation and movement to alert wearers of important messages. Gong proposed five specific movements--linear translation, rotating, orbiting around the wristband, lifting, and tilting--and over five months he generated ideas and interaction scenarios that were then engineered in a prototype. Gong says the Cito prototype serves as a proof of concept for users and other companies in the smartwatch industry. "I'm interested in how to design these interactions and how these interactions can help people, can allow people, maybe with no knowledge of device technology, to easily use this kind of device," Gong says. He presented the technology last month at the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.
Calendar of Future Events
DIS '17: Designing Interactive Systems Conference
Edinburgh, United Kingdom
TVX '17: ACM International Conference on Interactive Experiences for TV and Online Video
EICS '17: The 9th ACM SIGCHI Symposium on Engineering Interactive Computing Systems
C&C '17: Creativity and Cognition
IDC '17: Interaction Design and Children
Stanford, CA, USA
UMAP '17: User Modeling, Adaptation and Personalization Conference
RecSys '17: 11th ACM Conference on Recommender Systems
MobileHCI '17: 19th International Conference on Human-Computer Interaction with Mobile Devices and Services
Ubicomp '17: The 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Maui, Hawaii, USA
AutomotiveUI '17: 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
CHIPLAY '17: The Annual Symposium on Computer-Human Interaction in Play
SUI '17: Symposium on Spatial User Interaction
Brighton, United Kingdom
ISS '17: Interactive Surfaces and Spaces
UIST '17: The 30th Annual ACM Symposium on User Interface Software and Technology
VRST '17: 23rd ACM Symposium on Virtual Reality Software and Technology
ICMI '17: International Conference on Multimodal Interaction
SIGCHI is the premier international society for professionals, academics and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, web sites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through Local SIGCHI chapters. SIGCHI is also involved in public policy.
ACM Media Sales
If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.
Association for Computing Machinery
2 Penn Plaza, Suite 701
New York, NY 10121-0701