Welcome to the February 2017 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Accessibility Still Not Out-of-the-Box, but Cloud Can Help
Techtonics (01/27/17) Aida Akl

Although cloud services can make life easier for disabled people and improve their productivity, inaccessibility in end-user software and devices remains an impediment. However, a huge effort called the Global Public Inclusive Infrastructure (GPII) aims to make accessibility solutions universally available. Jeffrey Bigham with Carnegie Mellon University's Human-Computer Interaction Institute says a lack of out-of-the-box accessibility support makes it difficult to leverage "some of these really interesting benefits of having a cloud-connected device for accessibility. We're not yet taking advantage of the cloud for accessibility in the ways that we could." For example, a cloud service delivering customized software-as-a-service that is downloaded on demand could help make things easier for paraplegics requiring a hands-free or speech-to-text interface. Bigham notes the inaccessibility challenge is ultimately rooted in the interface, which is typically supplied by the local device. The GPII initiative, with participants based in the U.S., Europe, Canada, and elsewhere, plans to promote accessibility solutions that interoperate with all technologies. One of GPII's solutions is "auto-personalization," which finds what a person needs in order to use information and communication technologies. Experts agree the success of such projects demands close and ongoing collaboration between device makers and operating system developers.
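
The article does not detail GPII's software, but the auto-personalization idea it describes -- store a user's accessibility preferences in the cloud and apply them to whatever device the person sits down at -- can be sketched in a few lines. The endpoint, preference keys, and setters below are hypothetical placeholders, not GPII's actual API:

    import json
    from urllib.request import urlopen

    # Hypothetical preference-server URL and preference keys -- GPII's real
    # architecture and vocabulary differ; this only sketches auto-personalization.
    PREFS_URL = "https://example.org/prefs/{user_id}.json"
    DEFAULTS = {"font_scale": 1.0, "high_contrast": False, "screen_reader": False}

    def fetch_preferences(user_id):
        """Download the user's stored accessibility preferences from the cloud."""
        with urlopen(PREFS_URL.format(user_id=user_id)) as resp:
            return {**DEFAULTS, **json.load(resp)}

    def apply_preferences(prefs):
        """Apply each preference to the local device (stubbed out with prints)."""
        print("Setting font scale to %.1fx" % prefs["font_scale"])
        if prefs["high_contrast"]:
            print("Enabling high-contrast theme")
        if prefs["screen_reader"]:
            print("Starting screen reader")

    if __name__ == "__main__":
        apply_preferences(DEFAULTS)  # offline fallback; swap in fetch_preferences("alice")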


New Research Investigates the Potential of Culturally Aware Robots
University of Bedfordshire (United Kingdom) (01/30/17)

A three-year international project to build and assess the first-ever culturally aware robots will be conducted with the participation of researchers from the University of Bedfordshire and Middlesex University London in the U.K. Both universities will lend expertise in cultural competence and evaluation to the project, which seeks to build robotic assistants for seniors. The robots will be capable of communicating via speech and gestures, moving independently, assisting individuals with everyday tasks, delivering health-related aid, and providing entertainment and access to technology. "Building culturally aware...robots that can autonomously re-configure their interactions to match the culture, customs, and etiquette of the person they're caring for means that they are more likely to be accepted by elderly clients," notes Bedfordshire team leader Chris Papadopoulos. Creating the cultural concepts and guidelines so the robots can respond to their clients' culture-specific needs and preferences is the goal of Middlesex professor Irena Papadopoulos. She says her expertise in transcultural health and nursing will be applied toward programming the robots to adapt to diverse backgrounds so they are more acceptable to older people. In the project's final year, the robots will undergo testing at care facilities in Britain and Japan. The effort is jointly underwritten by the European Union and the Japanese government.


New Techniques Allow Greater Control of Smartwatches
Georgia Tech News Center (01/24/17) Jason Maderer

Researchers at the Georgia Institute of Technology (Georgia Tech) are pioneering new ways to interact with smartwatches using swiping motions, various breathing techniques, and skin tapping, among others. They presented their research in November at the ACM International Conference on Interactive Surfaces and Spaces (ISS 2016) in New York. One notable interaction technique is WatchOut, which uses the smartwatch's gyroscope and accelerometer sensors to enable taps and scrolling gestures on the case and watchband, "outside" the watch screen. "Other techniques that improve control of smartwatches have included [three-dimensional] gestures above the screen, bigger screens, or adding an extra armband," says Georgia Tech's Cheng Zhang. "We wanted to show it could be done with existing technology already common on today's devices." Another team worked on hands-free operation, with Georgia Tech's Gabriel Reyes leading development of Whoosh, a method enabling a person to control the watch by blowing, exhaling, shushing, sipping, or puffing on the screen. The watch employs its microphone and machine-learning technology to identify the breath patterns of each acoustic event, then assigns an action to each. Another Georgia Tech application called TapSkin lets users tap on the back of their hand to enter numbers 0 through 9 or commands, via the watch's microphone and inertial sensors.
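
The summaries above stay at the system level, but the core of a WatchOut-style "off-screen" tap detector is simply spotting short spikes in the watch's inertial stream. The sketch below uses made-up thresholds and synthetic data and is not the authors' classifier; it only illustrates the kind of processing involved:

    import numpy as np

    def detect_taps(accel, rate_hz=100.0, threshold=2.5, refractory_s=0.15):
        """Return timestamps (seconds) of tap-like spikes in a 3-axis accelerometer trace.

        accel: array of shape (n_samples, 3), in units of g. A tap is any sample whose
        magnitude deviates from 1 g (gravity) by more than `threshold`, with a short
        refractory period so one tap is not counted twice.
        """
        deviation = np.abs(np.linalg.norm(accel, axis=1) - 1.0)
        taps, last = [], -np.inf
        for i, d in enumerate(deviation):
            t = i / rate_hz
            if d > threshold and (t - last) > refractory_s:
                taps.append(t)
                last = t
        return taps

    # Synthetic example: a quiet signal with two injected spikes at 1.2 s and 3.0 s.
    rng = np.random.default_rng(0)
    trace = rng.normal([0.0, 0.0, 1.0], 0.05, size=(400, 3))
    trace[120] += [0.0, 0.0, 4.0]
    trace[300] += [0.0, 3.5, 0.0]
    print(detect_taps(trace))  # -> [1.2, 3.0]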


Coming Next in Domotics: Houses That Decipher Voice Commands
CORDIS News (01/25/17)

The goal of the European Union-funded LISTEN project is to design and deploy hardware and software to facilitate robust, hands-free, and voice-based access to Web applications in smart homes. The LISTEN project integrates a speech-capture system operating as a wireless acoustic sensor network with an automatic speech-recognition system, and it currently can recognize up to four languages. With the LISTEN system, users can turn various smart appliances on and off, and perform regular functions such as Web search, email dictation, and access to social networks free of headsets or close-proximity microphones. LISTEN's Alina Suhetzki recently demonstrated the system's ability to switch lights on and off using voice commands with no delay in execution. The project's partners, which include the Foundation for Research and Technology-Hellas (FORTH) in Greece, RWTH Aachen University in Germany, the European Media Laboratory (EML), and Cedat85 of Italy, plan to close the gap between the acoustic sensor and the automatic speech-recognition research communities, as well as push the envelope of current state-of-the-art technologies. FORTH and RWTH Aachen researchers demonstrated LISTEN technology in a recent speech separation and recognition competition, finishing in second place.
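
The write-up does not describe LISTEN's software stack, but the final stage of any such pipeline -- turning a recognized transcript into a device action -- is easy to illustrate. The command phrases and device registry below are hypothetical, purely to show how speech-recognition output might be dispatched to smart appliances:

    import re

    # Hypothetical device registry; a real smart-home hub would drive actual hardware.
    DEVICES = {"kitchen light": False, "living room light": False, "heater": False}

    def dispatch(transcript):
        """Map a recognized utterance such as 'turn on the kitchen light' to an action."""
        text = transcript.lower()
        if re.search(r"\b(turn|switch) on\b", text):
            action = True
        elif re.search(r"\b(turn|switch) off\b", text):
            action = False
        else:
            return "Not understood: %r" % transcript
        target = next((name for name in DEVICES if name in text), None)
        if target is None:
            return "No known device in: %r" % transcript
        DEVICES[target] = action
        return "%s -> %s" % (target, "on" if action else "off")

    print(dispatch("Please turn on the kitchen light"))  # kitchen light -> on
    print(dispatch("Switch off the heater"))             # heater -> off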


Monotype, Google, and MIT AgeLab Team Up to Research How We Read at a Glance
Creative Review (01/26/17) Rachael Steven

A new research consortium founded by the Massachusetts Institute of Technology's (MIT) AgeLab, Google, and Monotype plans to explore how people read in "glance-based" environments, with an emphasis on digital screens, heads-up displays, and virtual and augmented reality settings. The Clear Information Presentation (Clear-IP) consortium will compile evidence on how typography and design decisions impact content reading and retention at a glance, and apply it toward developing a best practice toolkit for designers. "Designing for digital applications allows us to change colors and shape things in ways that were unimaginable a few years ago," says MIT AgeLab's Bryan Reimer. "Design intuition tells us a tremendous amount, but how do we inform designers with information that can help them make more strategic decisions? We don't want to constrain the artistic process, but just provide more scientific evidence that designers can use." AgeLab has been collaborating with Monotype on legibility research for several years. Clear-IP's goal is to conduct data-driven research to help designers and interface designers meet "the technological challenges and psychological implications of a fast-paced lifestyle that has fundamentally altered how information is perceived and processed." Reimer notes, "our focus is on uncovering the fundamentals of clear information presentation in a modern...quick-glance environment."


Augmented Reality Is More Than Just Holograms
Network World (01/20/17) Steven Max Patterson

The recent AR in Action Conference at the Massachusetts Institute of Technology Media Lab hosted discussions and panels for augmented reality (AR) experts and practitioners. The main theme of the conference was expanding AR's definition beyond holographic projection headsets, with Chris Croteau from Intel's Wearable Device Group citing "a liberal definition of AR...on the way data is presented to users and how they interact with it." Holographic headsets were highlighted in talks focusing on such innovations as surgical AR applications and 360-degree fields of view to enhance combat fighter operations. Still, fighter pilot Patrick Guinee stressed "the conference widened my myopic aperture to the universal possibilities." He noted AR is essentially an overlay of data sources affecting human perception, which could be visual, aural, or haptic. In addition, AR can transform human perception of the surrounding environment when a digitally augmented human engages with a data source. "AR...augments people's cognitive capabilities and perceptions through their interconnecting data acquired from bio-sensors, worn sensors, and the space around them," said genomic and life sciences visionary Juan Enriquez. Meanwhile, Contextere's Carl Byers described AR as "a suite of technologies that extend human capability and understanding using a variety of approaches across industries and applications."


Here's a Look at the Smart Cities of the Future
Futurism (01/18/17) Todd Jaquith

The U.S. National Science Foundation (NSF) is financing a Critical Resilient Interdependent Infrastructure Systems and Processes project to develop "resilient complex adaptive systems" for critical infrastructure, or smart cities. "A smart city is where every device, every entity, and every object can connect for whatever the needs," says Narayan B. Mandayam, chair of Rutgers University's Department of Electrical and Computer Engineering. "Wireless connectivity is the glue that holds everything together, and the bottom line is to improve the quality of life in cities and quality of the planet." Mandayam is working with Rutgers professors Janne Lindqvist and Arnold Glass to meet the smart city challenge. The ingredients of a smart city include a fully integrated infrastructure with significant computing power as well as multiple redundant systems. Cognitive psychology expertise is being tapped to refine smart city design and implementation to accommodate and even influence urban dwellers' behavior into more sustainable habits. Smart cities of the future are expected to incorporate purpose-built artificial intelligence programs and machine-learning algorithms to process data fed from numerous sensors. Clean, renewable energy will power the cities via compartmentalized power systems equipped with robust backup systems. "To make a smart city happen, a tremendous amount of investment in infrastructure will be needed, but the benefits will likely far outweigh the costs," Mandayam says.


Can Children Learn From a 'Mixed-Reality' Game?
The Hechinger Report (01/08/17) Jill Barshay

Forest Grove Elementary School in Pittsburgh, PA, has implemented a "mixed-reality" game developed by Carnegie Mellon University's Nesra Yannier to enhance students' education. Yannier's part-digital, part-analog teaching machine is an experiment into whether high-tech human-computer interaction concepts can improve teaching and learning. Composed of building blocks on a shaking table mated to an instructive computer display, the device offers children instant feedback while also delivering tactile and social interactions. The system's computer perceives students' actions using motion sensors: a camera projects near-infrared light onto the objects on the table, and the reflected light is used to construct a three-dimensional image of what is on the table. In 2015, Yannier started testing the system to see how well young children could use it to learn basic physics principles. She found students gained five times as much knowledge using the machine as they did from only using software on a computer screen. Another early experiment sought to determine whether children learn best individually or collaboratively, and found neither approach was better than the other. Thus far, Yannier's earthquake table is a playful tool for learning about centers of gravity, symmetry, and balance.


One in Five Adults Secretly Access Their Friends' Facebook Accounts
UBC News (01/19/17) Sachi Wickramasinghe

Researchers at the University of British Columbia (UBC) in Canada recently conducted a survey of 1,308 U.S. adult Facebook users and found 24 percent of respondents had spied on the Facebook accounts of their friends, romantic partners, or family members, using the victims' own computers or cellphones. "It's clearly a widespread practice," says study author Wali Ahmed Usmani. The study participants admitted to spying on their Facebook friends out of simple curiosity or fun. However, the researchers also identified other motives, such as jealousy or animosity. "Jealous snoops generally plan their action and focus on personal messages, accessing the account for 15 minutes or longer," says UBC professor Ivan Beschastnikh. The researchers also found, in many cases, snooping effectively ended the relationship. The findings highlight the ineffectiveness of passwords and device PINs in stopping unauthorized access by other users. "There's no single best defense--though a combination of changing passwords regularly, logging out of your account, and other security practices can definitely help," says UBC professor Kosta Beznosov. The research will be presented in May at the ACM Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO.


Mobile Virtual Reality Gets a Hand From University Researchers
Nevada Today (01/17/17) Kirstin Swagman

Researchers at the University of Nevada, Reno (UNR) have unveiled PAWdio, an inexpensive way to add hand tracking to mobile virtual reality (VR) using a pair of ear buds. The researchers presented PAWdio in October at the ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play (CHI PLAY 2016) in Austin, TX. "One of the implicit missions of our lab is to bridge the gap between high-end but expensive and cheap but low-end VR platforms," says UNR's Majed Al-Zayer. "Realizing this mission would make high-end VR technology more accessible to the masses. I believe that solutions like PAWdio demonstrate that getting the best out of the two platforms is possible." PAWdio tracks the hand's spatial movement acoustically: a high-frequency sound wave is emitted, and Doppler shifts are used to calculate the distance between the ear bud and the smartphone, which gives the position of the hand. UNR professor Eelke Folmer says PAWdio also has the advantage of being less computationally intensive than other hand-tracking approaches, which typically use computer vision. The team says the next step is to develop a math game in which children solve simple equations by selecting three-dimensional objects using PAWdio.
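
The summary gives the principle rather than the implementation, but the Doppler arithmetic is standard: if a tone is emitted at frequency f0 and observed at frequency f, the radial velocity is roughly v = c * (f - f0) / f0, and integrating v over time yields the change in distance. The sketch below is a simplified illustration of that principle, not PAWdio's actual signal pipeline:

    SPEED_OF_SOUND = 343.0  # meters per second at room temperature

    def displacement_from_doppler(observed_hz, emitted_hz=20000.0, frame_s=0.01):
        """Estimate displacement (meters) from per-frame observed tone frequencies.

        observed_hz: one peak frequency (Hz) per analysis frame, e.g. taken from an
        FFT of the microphone signal. Positive displacement means the sound source
        moved toward the receiver over the whole sequence.
        """
        distance = 0.0
        for f in observed_hz:
            velocity = SPEED_OF_SOUND * (f - emitted_hz) / emitted_hz  # radial velocity
            distance += velocity * frame_s                              # integrate per frame
        return distance

    # A 20 kHz tone observed 30 Hz high for 50 frames (0.5 s): the source is
    # approaching at roughly 0.51 m/s, so it moved about 0.26 m closer.
    print(round(displacement_from_doppler([20030.0] * 50), 3))  # ~0.257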


Talk to Me: Voice Control Is Taking Off, but It's Not Taking Over Yet
The Conversation (01/24/17) Fraser Allison

Voice-controlled technologies such as Amazon's Echo speaker and its Alexa digital assistant are very competent at listening to humans, but less intelligent in terms of understanding what they are hearing, writes Fraser Allison, a Ph.D. student at the Microsoft Research Center for Social Natural User Interfaces at the University of Melbourne in Australia. Allison says the subtle nuances of human speech represent complex technical challenges for voice interfaces, and several companies are making progress in different areas. For example, Google Now can provide relevant responses to a wide range of requests by tapping Google's troves of data about the Web and users' personal activities. Meanwhile, Amazon Echo is particularly proficient at hearing requests from across a noisy room via a noise-cancelling far-field microphone array. Voice interfaces have recently improved significantly in their ability to understand everyday or "natural" speech instead of stilted, carefully worded commands, although their comprehension is still limited to simple queries. However, not all of the limitations of voice interfaces can be remedied via technological innovation: noise pollution will remain a major obstacle, social acceptance of speaking to devices is not guaranteed, and the danger of distraction is especially hazardous for in-vehicle interfaces.


The Quest for a Perfect Interface
Engineering.com (01/04/17) Chris McAndrew

In an interview, William Gribbons, the founder of the User Experience Center and User Experience Studio and the director of Bentley University's Human Factors in Information Design Program, discusses the current state of the human-computer interface (HCI) industry. Gribbons notes the computer interface required much more expertise and effort on the part of the user in the 1980s and 1990s. "A core theme that we address...is how much are we asking the user to do?" he says. "How much past learning can we use? From one product to another, or within a product line, what do they have to work differently?" Gribbons says computer-assisted design products should have consistency and transfer of learning. "One of the great untapped opportunities in this field is how do we build in support and intelligent support," he says. "When we leave the user on their own to deal with changes and inconsistencies, it exceeds the capabilities of the user. Can we build intelligence in the systems (wizards and agents)?" Gribbons notes Bentley's program is chiefly focused on mobile games, software applications, and hardware, with an emerging emphasis on the "design of everything." "Removing the C from HCI does not require anything new," Gribbons notes. "You are focused on the human interaction of anything, including software."


Abstract News © Copyright 2017 INFORMATION, INC.
Powered by Information, Inc.



