Welcome to the July 2017 SIGCHI edition of ACM TechNews.
ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.
ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to more than 100,000 ACM members in more than 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.
The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.
Soon Your Desk Will Be a Computer Too
July 5, 2017
Carnegie Mellon University's Robert Xiao is developing Desktopography, a project "to break interaction out of the small screens we use today and bring it out onto the world around us," Xiao says. Desktopography projects digital applications onto a desk where users can pinch, swipe, and tap, and Xiao has combined a depth camera and pocket projector into a prototype that people can screw into a standard lightbulb socket. The depth camera generates a constantly updated three-dimensional desktop map, observing when objects move and when hands enter the frame. The data is then passed along to the device's brain, which Xiao's group programmed to differentiate between fingers and other objects. Desktopography uses algorithms to identify various items, and then plots the best possible site to project the information. "We want to put the digital and physical in the same environment so we can eventually look at merging these things together in a very intelligent way," Xiao says.
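The placement step Xiao describes, picking the least-cluttered patch of desk on which to project an application, can be caricatured as a sliding-window search over an occupancy grid derived from the depth map. This is a toy illustration of the idea, not Desktopography's actual algorithm:

```python
def best_projection_spot(occupied, win_h, win_w):
    """Slide a win_h x win_w window over an occupancy grid
    (1 = object present, 0 = clear desk) and return the top-left
    cell of the least-cluttered region, ties broken by scan order."""
    rows, cols = len(occupied), len(occupied[0])
    best, best_score = None, None
    for r in range(rows - win_h + 1):
        for c in range(cols - win_w + 1):
            score = sum(occupied[rr][cc]
                        for rr in range(r, r + win_h)
                        for cc in range(c, c + win_w))
            if best_score is None or score < best_score:
                best, best_score = (r, c), score
    return best

# A desk with a mug in the upper-left corner; the clear area wins.
desk = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]
```

A real system would also weigh surface flatness, user reach, and projector distortion, but the core idea is the same search over the depth camera's view.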
Creating a Virtual Climbing Experience
July 6, 2017
Dartmouth College professor Emily Whiting and colleagues have employed three-dimensional (3D) modeling and digital fabrication to develop a mountain-climbing simulation that can be practiced on indoor walls. The researchers built the 3D reconstruction by combining hundreds of location photos with video of a climber's movements, enabling them to create fabricated holds affixed to a climbing wall. Whiting says the system enables users to enjoy in a gym environment "the physical experience of climbing [a mountain] without causing the erosion and damage to the location." The researchers ultimately imagine a system that combines photos and video into a database of outdoor climbs from which holds could be manufactured and provided to gyms. Whiting also wants to improve the simulator's visuals with virtual reality or projected images. The team demonstrated the system's replicative abilities in May at the ACM Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.
Robots Offer the Elderly a Helping Hand
June 20, 2017
Robots designed as assistants and caregivers for senior citizens in the European Union are making progress, with researchers at the University of Coimbra in Portugal focused on making robot conversation as natural and intuitive as possible and improving in-home navigation. Project leader Luis Santos envisions robots as part of a wider transformation in elderly care, noting, "In the future, elderly care will also be very focused on information and communications technologies--for example virtual access to doctors or care institutions and 24/7 monitoring in a non-invasive way are likely to become standard." Meanwhile, researchers at the National Center of Scientific Research "Demokritos" in Greece are seeking to integrate robotics within a smart home featuring connected devices, automation, and sensors. At Italy's University of Trento, Luigi Palopoli's team is developing robots that motivate seniors to follow exercise regimens, and a key aspect of their work involves removing "emotional barriers" that make elderly people solitary and less healthy.
Why Don't My Document Photos Rotate Correctly?
June 27, 2017
Korea Advanced Institute of Science & Technology (KAIST) professor Uichin Lee and colleagues investigated why document photos are so often mis-rotated on smartphones, and have proposed a solution. They traced the cause to failures in screen rotation-tracking algorithms; in an experiment, Lee's team found landscape document photos had error rates as high as 93 percent. Smartphone camera apps display the current orientation with a camera-shaped icon, but Lee found users are largely unaware of this feature and do not notice its state when taking document photos. The team's corrective method determines phone orientation by tracking the device's rotation sensor. Lee notes that when people shoot document photos, the phone is held parallel to a document lying on a flat surface, so the intent to capture a document can be inferred when gravity acts perpendicular to the phone's screen. Lee says experimental results showed his team's algorithm tracked phone orientation with 93-percent accuracy in document-capturing tasks.
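The gravity heuristic Lee describes can be sketched in a few lines, assuming a three-axis accelerometer reading in which z is perpendicular to the screen. This is an illustration of the idea, not the team's published algorithm:

```python
import math

def is_document_capture(ax, ay, az, tolerance=0.15):
    """Infer a document-capture pose: the phone is held flat over the
    page, so gravity falls almost entirely along the z-axis (the
    screen normal) of the accelerometer reading (m/s^2)."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude == 0:
        return False
    return abs(az) / magnitude > 1.0 - tolerance

def photo_orientation(rotation_deg):
    """Snap the angle tracked from the rotation sensor to the nearest
    90-degree step for tagging the captured photo."""
    return int(round(rotation_deg / 90.0)) % 4 * 90
```

When the phone is flat, the usual screen-rotation logic has no usable gravity component in the screen plane, which is exactly why the tracked rotation sensor has to take over.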
Student Researchers Ask How Secure We Feel About Internet Security
Bucknell University News
June 8, 2017
Internet users' attitudes toward technologies designed to protect their privacy were explored in a new study conducted by researchers at Bucknell University, presented in May at the ACM Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO. "A big issue with data privacy is that nobody really understands or has a very clear grasp of what a private context online really means," says Bucknell's Stephanie Garboski. "Our research is about trying to make people comfortable with the protocols involved." The researchers assessed users' comfort with differential privacy, which adds statistical noise to released data so that queries cannot be traced back to individual users with certainty. The experiment determined some individuals may misunderstand the function of differential privacy and its variants. "We thought that people might not understand how adding mathematical noise isn't a bad thing--that it's actually a good way to mitigate risk while still providing beneficial information to the researchers and companies collecting information," says Bucknell's Brooke Bullek.
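The "mathematical noise" Bullek refers to can be illustrated with the textbook Laplace mechanism, which releases a count plus random noise calibrated so that any one person's data barely changes the answer. This is a generic sketch of differential privacy, not the Bucknell study's code:

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5          # uniform in [-0.5, 0.5)
    if u == -0.5:                      # avoid log(0) at the boundary
        u = 0.0
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count query with noise of scale sensitivity/epsilon:
    a smaller epsilon means more noise and stronger privacy."""
    return true_count + laplace_noise(sensitivity / epsilon)
```

Individual released answers are deliberately "inaccurate," yet the noise is zero-mean, so aggregate statistics remain useful, which is precisely the trade-off the study asked participants to reason about.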
When Kids Talk to Robots: Enhancing Engagement and Learning
June 26, 2017
Three studies from Disney Research illustrate how conversational robots and virtual characters can augment learning and expand entertainment options for children. One study examined how young children responded to interactive TV programming, focusing on whether the character responds as soon as the child finishes answering, repeats an unanswered question, and signals whether the answer is correct. Children were more likely to verbally interact with the program when the character waited for responses and when unanswered questions were repeated. Another study examined collaborative storytelling, in which a robot offered contextual and non-contextual suggestions while a child told a story. Younger children and boys appeared to have more difficulty grasping contextual suggestions. The third study used a conversational robot and an autonomous interaction simulation to gauge children's responses to references to previous conversations, with participants especially favoring the robot with persistent memory.
What's on the Horizon in UI, VR, AR & AI
June 26, 2017
In an interview, Future Today Institute founder Amy Webb discusses disruptive technologies such as artificial intelligence (AI) and conversational user interfaces (UI), noting in the latter case that coding empathy into machines is difficult. With commercialization of virtual reality (VR) and augmented reality (AR) expected to continue, Webb predicts, "we'll see a number of new content providers building out stories and experiences for each platform." Meanwhile, Webb says AR's potential to deepen people's understanding of the surrounding world is based on its ability to "transition the ubiquitous information layer we currently access using computers and the Internet to frictionless devices like mobile phones, eyeglasses, and smart contacts, allowing us to glean information everywhere." In terms of AI's likely impact, Webb says it will be "the next layer of technology that will be integrated into everything we do and much of the technology we use, and that includes mixed reality interfaces and devices."
Ambient Literature Allows Us to Redefine the Role of the Reader
June 30, 2017
Composer Duncan Speakman has produced a book and accompanying mobile phone app to present a travelogue commissioned by Ambient Literature, a joint research project between the University of the West of England Bristol, Bath Spa University, and the University of Birmingham in the U.K. In an interview, Speakman says the goal was to make each audience member "an active reader, to stop and engage with sound and physical experience," and in essence activate the piece. Speakman says the book/app guides users through the city they are in, seeking out specific sites and invoking sounds and stories from remote but related situations. Each participant is invited to connect those stories to their location, generating a map of where they currently are and of places that may later be gone. "This work seems to offer the ability to experience the 'other,' and make that relevant to where we are, where we're situated when we experience it," Speakman says.
Penn Interactive Map Shows Community Traits Built From More Than 37 Billion Tweets
Ali Sundermeier; Katherine Unger Baillie
July 5, 2017
Researchers at the University of Pennsylvania's World Well-Being Project (WWBP) aim to capture, explore, and share differences across U.S. communities on a large scale by building the Well-Being Map, an interactive, open source tool based on the statistical language analysis of more than 37 billion publicly shared, geotagged tweets and on regional demographic information. The map displays scores for characteristics in each county, such as well-being, personality traits like openness, and government-reported health. Top 10 lists display the highest- and lowest-scoring counties in the country and in every state for each trait, and a County Profiles page lets users enter a county name and obtain its ranking in the U.S. for all reported characteristics. "I hope people will just want to play around with it," says WWBP co-founder Johannes Eichstaedt. "If I were considering moving somewhere in the U.S., I would want to know how happy people are there."
Using Virtual Reality--and Mom's Sewing Machine--for Stroke Rehab
June 12, 2017
University of Southern California (USC) researcher Sook-Lei Liew has built a prototype stroke-rehabilitation device from a commercially available virtual reality (VR) system, an electrode-studded swimming cap, and a brain-computer interface. Liew was inspired by a study that found using an avatar with long arms in the virtual world enabled users to interact with the real world as if they had long arms. This and other studies led Liew to ask whether stroke patients can train themselves to recover movement by using a healthy avatar body in VR. Liew's Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training (REINVENT) system uses VR and brain and muscle sensors to display limb movement in the virtual world when the patient has produced the correct brain and muscle signals, even if their limbs are paralyzed in the real world. Liew theorizes patients can train the damaged circuits to work again over time.
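The feedback loop at the heart of such a system, moving the virtual limb only when the right neural and muscular signals appear, can be caricatured as a weighted gate. This is a minimal sketch; REINVENT's actual signal processing is far richer, and the weights and threshold below are invented placeholders:

```python
def avatar_feedback(brain_signal, muscle_signal,
                    w_brain=0.7, w_muscle=0.3, threshold=0.4):
    """Fuse normalized brain (EEG) and muscle (EMG) activity, each in
    [0, 1], and drive the virtual arm only when the combined evidence
    of an attempted movement crosses a threshold; otherwise the arm
    stays still. Weights and threshold are illustrative only."""
    score = w_brain * brain_signal + w_muscle * muscle_signal
    return score if score >= threshold else 0.0
```

A real system would classify band-power features from the EEG cap rather than thresholding a single scalar, but the gating logic, reward the correct attempted movement with visible avatar motion, is the same.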
Improving Virtual Reality and Exploring Ear Shape Effects on 3D Sound
June 26, 2017
Last month's third joint meeting of the Acoustical Society of America and the European Acoustics Association in Boston, MA, highlighted a multidisciplinary acoustics research project in which computer models employed recordings from a live concert held at Paris' Notre Dame Cathedral and room acoustic simulations to generate a virtual reconstruction of the performance using spatial audio and virtual reality (VR). The digital acoustical data was augmented with computer-generated virtual navigation: three-dimensional visualizations, produced with immersive architectural renderings, that carry the viewer through the cathedral as its sound qualities shift. "The importance of multimodal interactions, how visual and auditory cues balance in spatial perception, is key to VR and the sense of immersion, of being 'in' the VR world," says Brian F.G. Katz at Pierre and Marie Curie University in France. Katz says the team's next project will apply this immersion method to theater simulations and other complex multimodal environments.
Advanced aGent Engagement Team Explores World of Virtual Reality
June 16, 2017
Making human-machine interactions in immersive and virtual environments productive is the purpose of the University of Texas at El Paso's (UTEP) Advanced aGent Engagement Team (AGENT), formed by UTEP professor David Novick in 2012. "The long-term goal is to create artificial agents who are helpful and natural for humans to talk with or interact with," Novick says. AGENT's projects include "Survival on Jungle Island," an immersive interactive application, which won the award for Outstanding Demonstration at the ACM International Conference on Multimodal Interaction (ICMI 2015) in Seattle, WA. In the app, an embodied conversational agent and a human interact via speech and gesture; rapport was found to improve when the agent asks the human to perform task-related gestures and perceives the human performing them. "With educational agents, we can determine if students connect with the agent and learn much more than just using a textbook," says AGENT researcher Adriana Camacho.
Calendar of Events
RecSys '17: 11th ACM Conference on Recommender Systems
MobileHCI '17: 19th International Conference on Human-Computer Interaction with Mobile Devices and Services
Ubicomp '17: The 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing (Maui, Hawaii, USA)
AutomotiveUI '17: 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
CHIPLAY '17: The annual symposium on Computer-Human Interaction in Play
SUI '17: Symposium on Spatial User Interaction (TBD, United Kingdom)
ISS '17: Interactive Surfaces and Spaces
UIST '17: The 30th Annual ACM Symposium on User Interface Software and Technology
VRST '17: 23rd ACM Symposium on Virtual Reality Software and Technology
ICMI '17: International Conference on Multimodal Interaction
SIGCHI is the premier international society for professionals, academics, and students who are interested in human-technology and human-computer interaction (HCI). We provide a forum for the discussion of all aspects of HCI through our conferences, publications, websites, email discussion groups, and other services. We advance education in HCI through tutorials, workshops, and outreach, and we promote informal access to a wide range of individuals and organizations involved in HCI. Members can be involved in HCI-related activities with others in their region through local SIGCHI chapters. SIGCHI is also involved in public policy.
ACM Media Sales
If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.
Association for Computing Machinery
2 Penn Plaza, Suite 701
New York, NY 10121-0701