Welcome to the December 2017 SIGCHI edition of ACM TechNews.
ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.
ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in more than 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please click here.
The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
Robotic Arms With a Human-Like Sense of Touch
Keio University (Japan)
November 30, 2017
Researchers at Keio University in Japan say they have created a "real haptics" avatar-robot with a General Purpose Arm (GPA) that transmits sound, vision, movement, and tactile perception to remotely located users in real time. The team notes the avatar-robot GPA employs both high-precision motors in the avatar arm and algorithms to drive them, with the precise control of force and position being essential for transmitting the sense of touch without using conventional touch sensors. The robot GPA can identify shapes and compositions of hard or soft materials, as well as the position of objects in three-dimensional space; the device also can manipulate them based on real-time instructions from a remotely located user. "Our real-haptics technology is an integral part of the Internet of Actions [IoA] technology with potential applications in manufacturing, agriculture, medicine, and nursing care," says Keio professor Takahiro Nozaki. One IoA application he notes the GPA is being tested on is for fruit picking.
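The article's key technical point, that touch can be sensed without conventional touch sensors through precise control of motor force and position, can be illustrated with a toy calculation. The sketch below is a hypothetical stand-in for this class of sensorless force estimation, not Keio's actual algorithm: it attributes to contact whatever motor thrust is not explained by accelerating the arm's own mass (all names and values are assumptions).

```python
# Sketch: estimating external (contact) force from motor signals alone,
# in the spirit of sensorless "real haptics" (all values hypothetical).

def estimate_external_force(torque_const, current, inertia, accel):
    """External force ~ motor thrust minus force spent accelerating the arm."""
    motor_force = torque_const * current   # force the motor is producing
    inertial_force = inertia * accel       # force consumed by the arm's own mass
    return motor_force - inertial_force

# If the motor pushes harder than the arm's measured acceleration accounts
# for, the remainder is attributed to contact with an object.
f_ext = estimate_external_force(torque_const=0.5, current=2.0,
                                inertia=0.8, accel=0.5)
# 0.5*2.0 - 0.8*0.5 = 0.6 (arbitrary units)
```

In a real system this estimate would feed back into the position/force controller so the remote operator feels the contact in real time.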
Wearable Computing Ring Allows Users to Write Words and Numbers With Thumb
Georgia Tech News Center
November 29, 2017
Researchers at the Georgia Institute of Technology (Georgia Tech) have created FingerSound, a thumb ring that enables users to interact with smart devices by tracing letters and numbers, with the traced figures displayed on a computer screen. Users strum their thumb across their fingers, and the movement is detected by the ring's hardware. FingerSound identifies the beginning and end of an intended gesture by using a microphone and a gyroscope to detect the signal. "Our system uses sound and movement to identify intended gestures, which improves the accuracy compared to a system just looking for movements," says Georgia Tech's Cheng Zhang. Georgia Tech professor Thad Starner notes one future application would be sending calls to voicemail or answering text messages without creating a distraction by reaching for or looking at the phone. FingerSound was presented in September at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2017) and the International Symposium on Wearable Computers (ISWC 2017) in Maui, HI.
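The two-signal segmentation idea, requiring both sound and motion before declaring a gesture, can be sketched as follows. This is an illustrative toy, not the FingerSound pipeline; the thresholds, sample data, and function names are all hypothetical.

```python
# Sketch: segmenting a thumb gesture by requiring BOTH microphone energy
# and gyroscope magnitude to be active (all thresholds hypothetical).

AUDIO_THRESH = 0.3   # normalized audio energy
GYRO_THRESH = 0.5    # angular-rate magnitude

def segment_gesture(audio_energy, gyro_mag):
    """Return (start, end) sample indices of the first span where both
    signals exceed their thresholds, or None if no gesture is found."""
    active = [a > AUDIO_THRESH and g > GYRO_THRESH
              for a, g in zip(audio_energy, gyro_mag)]
    start = next((i for i, on in enumerate(active) if on), None)
    if start is None:
        return None
    end = start
    while end + 1 < len(active) and active[end + 1]:
        end += 1
    return (start, end)

audio = [0.1, 0.4, 0.6, 0.5, 0.1]
gyro  = [0.2, 0.7, 0.9, 0.6, 0.1]
segment_gesture(audio, gyro)  # → (1, 3)
```

Requiring agreement between the two sensors is what suppresses false positives from motion alone, which is the accuracy advantage Zhang describes.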
Launch of the World's First Online Platform for Digital Accessibility
November 20, 2017
The European Union-funded Prosperity4All project has launched DeveloperSpace, the first-ever online digital accessibility platform, where interested parties, developers, companies, and scientists can develop simple, cost-effective, and efficient solutions for assistive technologies. "Our primary goal...is to provide a platform that brings together elements that already exist: a place where developers, for example, can find modules, codes, diagrams, and ideas from other developers or scientists and can use these themselves," says project leader Matthias Peissner. "We want to bring together existing research findings to create new solutions that actually work in a market context." DeveloperSpace is a key component of the Global Public Inclusive Infrastructure (GPII), which, along with building better accessibility solutions, aims to enable customization of digital technologies for all users. "[DeveloperSpace is] the only place to offer new developers the whole range of resources and information on accessibility that is available online," says GPII co-director Gregg Vanderheiden.
The Surgeon Who Wants to Connect You to the Internet With a Brain Implant
November 30, 2017
Washington University in St. Louis neurosurgeon Eric C. Leuthardt envisions neural prosthetics advancing in the near future to the point where humans will have electronic brain implants that interface directly with the Internet and each other. Leuthardt says it is reasonable to assume that a cellphone's functionality could be embedded within a single grain of rice within the next 20 years. "That could be put into your head in a minimally invasive way, and would be able to perform the computations necessary to be a really effective brain-computer interface," he notes. Previous experiments by Leuthardt and colleagues found electrode arrays in the brains of human patients enabled them to play a video game by thought alone. Using pattern-recognition software, Leuthardt and Gerwin Schalk at the New York State Department of Health demonstrated the ability to decode intention. Leuthardt notes the buzz emanating from Silicon Valley has generated "real excitement and real thinking about brain-computer interfaces being a practical reality."
Birmingham University Is Pioneering VR, AR, Simulation, and Telerobotics Systems
November 29, 2017
The Human Interface Technologies Team (HITT) at the University of Birmingham in the U.K. is pioneering virtual reality, augmented reality, mixed reality, simulation, and telerobotics technology. The team stresses the value of understanding human factors when developing such technologies. "Our research looks to avoid the technology push failures that were so evident in the 1980s, '90s, and early 2000s by developing and evaluating demonstrators that emphasize the importance of the human context," says HITT founder and Birmingham professor Bob Stone. When it was established, HITT worked closely with the U.K.'s Human Factors Integration Defense Technology Center, while more recently it has participated in collaborative projects in maritime defense and unmanned systems. "When it comes to developing demonstrators, we are focused on using affordable technology that makes training more streamlined, portable, and transferable and which de-risks solutions," Stone notes. His philosophy is that the end user's needs should always come first, and the technology second, when developing simulators.
Researchers From Music and Engineering Team Up to Turn Big Data Into Sound
Virginia Tech News
November 28, 2017
Music and engineering researchers at Virginia Polytechnic Institute and State University (Virginia Tech) are jointly developing Spatial Audio Data Immersive Experience (SADIE), a data analysis platform designed to convert data into sound to gain new insights. Instead of putting information in a visual context to reveal patterns or correlations, SADIE will use a sound environment to exploit the natural affordances of the space and the user's whereabouts within the sound field. "Identifying new time and space correlations between variables often leads to breakthroughs in the physical sciences," says Virginia Tech professor Ivica Ico Bukvic. "It makes sense that we would want to go beyond two-dimensional graphical models of information and make new discoveries using senses other than our eyes." SADIE will represent datasets associated with the Earth's upper atmospheric system as distinct aural properties in the Cube facility, where a motion-capture system will let users navigate the sonified data via a gesture-driven interface.
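As a rough illustration of sonification, the sketch below linearly maps a data series onto pitch, so that trends in the data become audible contours. It is a hypothetical stand-in, not SADIE's actual mapping; the frequency range and function names are assumptions.

```python
# Sketch: mapping a data series onto pitch as a simple stand-in for
# SADIE-style sonification (mapping choices are hypothetical).

def sonify(values, low_hz=220.0, high_hz=880.0):
    """Linearly map each value to a frequency in [low_hz, high_hz]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant series
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

# A rising-then-falling atmospheric reading becomes a rising-then-falling
# pitch contour the listener can track without looking at a plot.
freqs = sonify([10.0, 20.0, 30.0, 20.0])
# → [220.0, 550.0, 880.0, 550.0]
```

SADIE goes further by placing such sounds at distinct positions in a physical sound field, so spatial location carries an additional data dimension alongside pitch.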
The Next User You Design for Won't Be a Human
November 20, 2017
Human-centered design is expected to eventually transition to a hybrid human-machine design paradigm, which is being called "Centaur Design." One company focusing on technologies for this paradigm is Google, which has designed a navigation application called Waze equipped with a machine-learning system that monitors all drivers linked to a network in real time. Any instruction a driver enters results in Waze delivering a route optimized not only for that driver, but for the entire network of drivers using the app, making decisions that prioritize the group over the individual. The app automatically tracks drivers when they install it, but it also lets them report accidents and other elements to inform real-time route recommendations. The computer assumes control of directions while the drivers provide context. A key challenge of making Centaur Design viable for everyday users is engendering a sense of trust in systems when users know they have little actual control over them.
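The group-over-individual routing idea can be sketched with a toy congestion model. This is illustrative only, not Waze's algorithm: each extra car on a route slows that route, and the chosen assignment minimizes total travel time across all drivers rather than each driver's own time (all names and numbers are hypothetical).

```python
# Sketch: network-optimal routing vs. selfish routing (illustrative toy,
# not Waze's algorithm). Congestion model and values are hypothetical.
import itertools

def route_time(base, load, delay_per_car=2.0):
    """Travel time on a route grows with the number of drivers on it."""
    return base + delay_per_car * load

def assign(drivers, routes):
    """Pick the assignment minimizing TOTAL travel time across all drivers."""
    best, best_total = None, float("inf")
    for combo in itertools.product(range(len(routes)), repeat=drivers):
        loads = [combo.count(r) for r in range(len(routes))]
        total = sum(route_time(routes[r], loads[r]) for r in combo)
        if total < best_total:
            best, best_total = combo, total
    return best, best_total

# Route A is faster when empty (10 min) than route B (12 min). Individually,
# all three drivers would pick A and congest it; the network optimum splits
# them two-and-one, lowering the total time for the group.
best, total = assign(3, [10.0, 12.0])
# total == 42.0, with two drivers on A and one on B
```

The brute-force search over assignments is only for clarity; the point is that the objective is the group's total time, which is exactly the trust problem the article raises, since an individual driver may be handed a route that is not personally optimal.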
How Maryland Computer Scientists Are Bringing the Past to Life
Big Ten Network
November 14, 2017
To enhance replicating the experience of living in Soviet-controlled East Berlin as part of a historical exhibit about the Berlin Wall, the Newseum in Washington, D.C., enlisted computer scientists and developers from the University of Maryland Institute for Advanced Computer Studies (UMIACS). "What they did was brought to the museum the technology we needed in order to bring people to the streets of East Berlin where they can experience the power of the Berlin Wall," says the Newseum's Mitch Gelman. UMIACS developers note they employed the Unity video game design platform to produce a three-dimensional proof-of-concept environment in which users can walk the streets of East Berlin, view both sides of the city from the guard tower, and even participate in the wall's dismantling. Maryland's Mukul Agarwa, a graduate student in human-computer interaction, contributed significantly to the seamless integration of visual assets within the environment, adding authenticity to the simulation.
Dartmouth Computer Scientist at Forefront of Sensing Revolution
Valley News (NH)
November 18, 2017
Dartmouth College professor Andrew Campbell leads a team that has launched the CampusLife consortium, a coalition of U.S. and European universities and students using various mobile sensing applications to examine student health. Campbell says he was inspired to create such apps out of a desire to improve the medical treatment of depression and other healthcare services. These and other achievements have placed Campbell at the leading edge of sensing innovation, and he predicts scaling up sensing technologies so they can reliably serve the general public will require the resources of a major company. Campbell recently concluded a research fellowship with Google, in which he explored smartphone sensing at Verily Life Sciences, an Alphabet-owned research entity. Other researchers and organizations investigating mobile sensing include the U.S. Defense Advanced Research Projects Agency, which is underwriting the development of a mobile app that will passively evaluate soldiers' readiness for battle, known as the Warfighter Analytics using Smartphones for Health (WASH) project.
Intelligent Voice Leads Interactive Conversational AI Project
November 15, 2017
The European Commission (EC) has awarded a 4-million-euro ($4.75-million) grant to Intelligent Voice to jointly develop an interactive "health bot" with several other companies and academic institutions across seven European countries. The EMPATHIC project seeks to create a conversational "Personalized Virtual Coach" to help seniors live independently, an important consideration as the population of seniors in the European Union grows. Each firm nominated by the EC will concentrate on generating evidence-based research and integrating intelligent user- and context-sensing methods via voice, eye, and facial analysis, in combination with intelligent heuristics, visual and spoken dialogue systems, and system reaction capabilities. The Intelligent Voice system is targeting the application of cutting-edge speech-recognition solutions, and the project's key challenges include real-time speech recognition of multiple languages, as well as developing suitable machine-learning models to train the platform on medical subjects. In addition, the system will require effective speech synthesis to develop a conversational interface.
Island University Professor Revolutionizes Museum Visit With Augmented Reality App
TAMU-Corpus Christi News
November 2, 2017
Texas A&M University-Corpus Christi professor David Squires has developed the Instructional Design and Educational Technology Augmented Reality Transmedia Storytelling (IDET ARTS) application to enhance the visitor experience at the Art Museum of South Texas (AMST). The app employs AR to add interactivity and immersion, and Squires says AR's benefits "are that it can increase interest and participation for museum visitors of all ages." AMST has incorporated the IDET ARTS app into its Digital Darkroom exhibition, enabling visitors to activate interactive audio content when they hold up their IDET ARTS-loaded iOS device to a specific piece of artwork. "It's like having a tour guide in your hand, and moving through the exhibition at your pace," notes AMST official Karol Stewart. Squires says he plans to create a citywide IDET ARTS platform for science, technology, engineering, art, and math (STEAM) development for greater public engagement and deeper learning in informal STEAM education.
We Built a Robot Care Assistant for Elderly People—Here's How It Works
November 21, 2017
Researchers at Trinity College Dublin in Ireland have built Stevie, a wheeled, slightly humanoid robot designed to function as an assistant caregiver for seniors. Some tasks performed by Stevie, such as giving seniors medication reminders, are autonomous, while others involve human-robot interaction. Stevie also can help users maintain their social connections, with its head-screen enabling Skype calls, for example. Stevie offers versatile communication modes, including speaking, facial expression, onscreen text display, and gestures, thus following the principles of universal design so it can adapt to the requirements of the greatest possible number of users. The Trinity team's goal with such technology is to integrate empathy, compassion, and decision-making with robot-level efficiency, reliability, and continuous operation as a complement to human caregivers. The researchers envision assistant robots such as Stevie helping to relieve human caregivers from the burden of various routine and mundane tasks, so they can devote more time to engaging with their charges.
Calendar of Events
GROUP ‘18: 2018 ACM Conference on Supporting Group Work
Sanibel Island, FL
HRI ‘18: ACM/IEEE International Conference on Human-Robot Interaction
IUI ‘18: 23rd International Conference on Intelligent User Interfaces
TEI ‘18: Twelfth International Conference on Tangible, Embedded, and Embodied Interaction
CHI '18: CHI Conference on Human Factors in Computing Systems
DIS ‘18: Designing Interactive Systems Conference
Hung Hom, Hong Kong
ETRA ‘18: 2018 ACM Symposium on Eye Tracking Research and Applications
EICS ‘18: ACM SIGCHI Symposium on Engineering Interactive Computing Systems
IDC ‘18: Interaction Design and Children Conference
TVX ‘18: ACM International Conference on Interactive Experiences for TV and Online Video
UMAP ‘18: User Modeling, Adaptation and Personalization Conference
MobileHCI ‘18: 20th International Conference on Human-Computer Interaction with Mobile Devices and Services
AutomotiveUI ‘18: 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications
RecSys ‘18: 12th ACM Conference on Recommender Systems
UbiComp ‘18: The 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing
UIST ‘18: The 31st Annual ACM Symposium on User Interface Software and Technology
ICMI ‘18: International Conference on Multimodal Interaction
CHI PLAY ‘18: The Annual Symposium on Computer-Human Interaction in Play
CSCW ‘18: ACM Conference on Computer-Supported Cooperative Work and Social Computing
Jersey City, NJ
ISS ’18: Interactive Surfaces and Spaces
VRST ‘18: 24th ACM Symposium on Virtual Reality Software and Technology
Nov. 28-Dec. 1
SIGCHI is the premier international society for professionals, academics, and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, websites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops, and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through local SIGCHI chapters. SIGCHI is also involved in public policy.
ACM Media Sales
If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.
Association for Computing Machinery
2 Penn Plaza, Suite 701
New York, NY 10121-0701