Welcome to the August 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


The Quest for the Next Human-Computer Interface
The Atlantic (07/11/16) Adrienne LaFrance

Interfaces that facilitate machine-human coordination on informational tasks are limited by their own design, and National Center for Adaptive Neurotechnologies deputy director Gerwin Schalk says this restricts the potential capabilities of both humans and computers. Interfaces are designed so computers can perform as much as possible within the constraints of a person's sensory motor systems, but Schalk says one task traditional interfaces make impossible is explaining to computers "some complex plan, intent, or perception that I have in my brain." This mismatch becomes more profound with the computer's growing sophistication, and roboticists warn a lack of interface improvement will halt machine learning and artificial intelligence innovation. Although many technologists expect the advent of augmented reality and virtual reality to yield the next major interface, others say the advancements needed to realize this milestone do not currently exist. Such systems cannot map the real world in real time with sufficient precision, and a core issue to address is determining to what degree and when the non-virtual world matters to users. Researchers stress the continued value of haptic feedback that keyboards and touchscreens provide, while Schalk's laboratory is developing brain-machine interfaces that can translate thoughts into action. Schalk theorizes a direct neural interface to a computer "could make all the feelings and perceptions and desires of a person directly accessible to technology."


How to Operate Your Smart Watch With the Same Hand That Wears It
Technology Review (07/26/16) Signe Brewster

Researchers at Carnegie Mellon University (CMU) have combined machine learning with existing sensors such as gyroscopes and accelerometers to teach a commercially available smart watch to identify five different gestures performed by the hand wearing the watch. Tests of the system by volunteers found it was about 87-percent accurate, and researcher Julian Andres Ramos Rojas in CMU's Human-Computer Interaction Institute says its accuracy must exceed 95 percent to be commercially viable. Whereas Google has advanced the concept of adding several gesture controls--including wrist-flicking, shaking, and lifting--to Android smart watches, the CMU team's strategy seeks to make finer finger gestures recognizable by the watch. The researchers believe their work could be particularly helpful in healthcare, where it could be used by patients with motor neuron diseases. In addition, they think it is possible to use the technology to gesture-control other devices besides a smart watch, such as phones, laptops, and virtual- or augmented-reality headsets. "I don't think we're going to settle into a single mode of interaction," Rojas says. "Instead, it's going to be this really nice combination of techniques."
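
The abstract does not describe the CMU pipeline in detail, but on-wrist gesture recognition of this kind is commonly built by windowing the accelerometer and gyroscope streams, extracting simple statistical features, and training an off-the-shelf classifier. The Python sketch below illustrates that general pattern on synthetic data; the gesture labels, feature set, and classifier are assumptions for illustration, not the researchers' actual method.

```python
# Hypothetical sketch: classifying wrist gestures from IMU windows.
# Synthetic data stands in for real accelerometer/gyroscope recordings;
# the feature set and classifier are illustrative, not CMU's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

GESTURES = ["pinch", "tap", "rub", "squeeze", "flick"]  # assumed label set
N_WINDOWS, WINDOW_LEN, N_AXES = 500, 100, 6  # 6 axes: 3 accel + 3 gyro

rng = np.random.default_rng(0)
labels = rng.integers(len(GESTURES), size=N_WINDOWS)
# Give each gesture class a small offset so the toy data is learnable.
signals = rng.normal(size=(N_WINDOWS, WINDOW_LEN, N_AXES)) + labels[:, None, None] * 0.3

def features(window):
    """Simple per-axis statistics over one sensor window."""
    return np.concatenate([window.mean(0), window.std(0), window.min(0), window.max(0)])

X = np.array([features(w) for w in signals])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```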


Websites With History Can Be Just as Conversational as Chatting With a Person
Penn State News (07/26/16) Matt Swayne

Researchers at Pennsylvania State University (PSU) say people can find websites with search and interaction history just as engaging as talking with an online human agent or robot assistant. According to a U.S. National Science Foundation-funded study, users of an online movie database website offering a list of past interactions thought the site was as responsive as one that offered chatbot or human helpers, says PSU professor S. Shyam Sundar. "Highly interactive browsing history can give the user this back-and-forth sense of dialogue that is almost the same as talking with an attentive customer agent," Sundar notes. "With clever design, you can give the sense of a conversation and the flow of information and that could translate to higher user engagement." The researchers suggest user engagement is encouraged more by a feeling of contingency--the sense a conversation is occurring--than by the perception of interactivity on the site. They say the study's findings imply interaction history could augment digital assistants. "If Siri, for example, could tell you a little bit about your interaction history with her, some of the clunkiness in the chat could be overcome because it makes for more of a conversation between the user and the device," Sundar says. He notes businesses that design their websites with this type of history could support the same level of user absorption as sites with robot and human chat, but without investing a lot of money.
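
As a rough illustration of the design pattern the study examined--surfacing a user's interaction history to create a conversational back-and-forth--the following Python sketch keeps a short history of queries and echoes it alongside each new result. The data model and wording are invented for illustration and are not the Penn State study's actual site.

```python
# Hypothetical sketch: weaving interaction history into responses so browsing
# feels like a dialogue. Not the study's actual movie-database implementation.
from collections import deque

class MovieSite:
    def __init__(self, history_size=5):
        self.history = deque(maxlen=history_size)  # most recent interactions

    def search(self, query):
        reply = []
        if self.history:
            reply.append("Earlier you looked at: " + ", ".join(self.history) + ".")
        reply.append(f"Here are results for '{query}'.")
        self.history.append(query)
        return " ".join(reply)

site = MovieSite()
print(site.search("sci-fi classics"))
print(site.search("films by the same director"))
```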


Humans Can Now Use Mind Control to Direct Swarms of Robots
ZDNet (07/22/16) Kelly McSweeney

Researchers at Arizona State University's (ASU) Human-Oriented Robotics and Control Lab have developed a brain-machine interface that humans can use to simultaneously control several robots with thought impulses. ASU professor Panagiotis Artemiadis says this control method for robot swarms can be utilized for "tasks that are dirty, dull, or dangerous." The prototype interface involves a computer-linked electrode array integrated with a skullcap the user wears, recording electrical brain activity translated by advanced-learning algorithms into commands transmitted wirelessly to the robots. The user observes the robots and creates a mental image of them performing different tasks, and a key challenge for the team was decoding the brain activity so it was accurate and repeatable. Artemiadis says this challenge was met by the algorithms, which are capable of real-time adaptation to the recordings. "The complexity of a system that requires the brain to activate areas to control robotic artifacts that do not resemble natural limbs, in our case a swarm of drones, is significant and so far unexplored," Artemiadis notes. "The fact that the brain can adapt to output control actions for a swarm of multiple robots is fascinating and quite useful for human-robots interaction."
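
The abstract does not detail the ASU decoding algorithms, but brain-machine interfaces of this type commonly band-pass the EEG channels, compute band-power features, and map the decoded class to a high-level command. The sketch below shows only that last mapping step on synthetic signals; the sampling rate, frequency bands, decoder weights, and command vocabulary are assumptions for illustration.

```python
# Hypothetical sketch: turning decoded EEG activity into swarm-level commands.
# The band-power features and command set are illustrative assumptions, not the
# ASU lab's actual decoder.
import numpy as np

FS = 250                       # assumed EEG sampling rate (Hz)
COMMANDS = ["hold", "spread_out", "gather", "move_forward"]

def band_power(window, lo, hi, fs=FS):
    """Average spectral power of each channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean(axis=0)

def decode(window, weights):
    """Score each command from alpha/beta band power and pick the best one."""
    feats = np.concatenate([band_power(window, 8, 12), band_power(window, 13, 30)])
    return COMMANDS[int(np.argmax(weights @ feats))]

# Toy example: one second of 8-channel EEG and random decoder weights.
rng = np.random.default_rng(1)
eeg_window = rng.normal(size=(FS, 8))
weights = rng.normal(size=(len(COMMANDS), 16))  # 8 channels x 2 bands
print("decoded swarm command:", decode(eeg_window, weights))
```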


Avoiding Stumbles, From Spacewalks to Sidewalks
MIT News (07/22/16) Larry Hardesty

Researchers at the Massachusetts Institute of Technology's Department of Aeronautics and Astronautics (AeroAstro) and the Charles Stark Draper Laboratory are developing a boot with built-in sensors and tiny haptic motors that vibrate to guide the wearer around or over obstacles. Their purpose is to help astronauts avoid falls, and a preliminary study revealed which kinds of stimuli, applied to which areas of the foot, could provide the best navigation cues. For the pilot study, AeroAstro researcher Alison Gibson built a device that spaced six haptic motors around each of the subject's feet, with the intensity of the motors' vibrations varying continuously between minimum and maximum settings. The team hoped these variations could signal distance to obstacles, as measured by sensors in the boot, but subjects had problems identifying steady increases in intensity when distracted by cognitive tests, and even when they were attending to the stimuli. From these findings, Gibson is working on a boot with only three motors--at the toe, at the heel, and toward the front of the outside of the foot. She notes stimuli will jump from low to high intensity when the wearer is in danger of hitting an obstacle. The researchers say their work also could be applied to the design of navigation systems for the visually impaired.
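
Since continuously graded vibration proved hard to perceive under cognitive load, the redesigned cue jumps from low to high intensity near an obstacle. The following is a minimal Python sketch of that discrete mapping; the distance thresholds, intensity levels, and motor layout are invented for illustration based on the article's description, not the researchers' actual parameters.

```python
# Hypothetical sketch: mapping obstacle distance to haptic motor intensity.
# Thresholds and intensity levels are assumptions, not the MIT/Draper design.

MOTORS = ("toe", "heel", "outer_front")   # three motors in the revised boot
LOW, HIGH = 0.2, 1.0                      # normalized vibration intensity
WARNING_DISTANCE = 1.0                    # meters: obstacle sensed, low cue
DANGER_DISTANCE = 0.3                     # meters: about to strike, jump to high

def motor_intensity(distance_m: float) -> float:
    """Discrete low/high cue instead of a continuous ramp the wearer may miss."""
    if distance_m <= DANGER_DISTANCE:
        return HIGH
    if distance_m <= WARNING_DISTANCE:
        return LOW
    return 0.0

# Example: an obstacle sensed 0.25 m ahead of the toe triggers the strong cue.
print({motor: motor_intensity(0.25) if motor == "toe" else 0.0 for motor in MOTORS})
```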


USC Crafts Tech System Using Mobile Apps, AI to Expand Care
Health Data Management (07/25/16) Amanda Eisenberg

Researchers at the University of Southern California (USC) Center for Body Computing's (CBC) Virtual Care Clinic (VCC) program are using a smartphone application from the Institute for Creative Technologies to enable doctors to construct digital avatars to engage remotely with low-risk patients. The app is used to triage patients so physicians can put their time to better use and devote more of it to the treatment of high-risk patients. The VCC provides wireless, on-demand access to Keck Medical Center of USC specialists, while doctors handle remote management and care of patients irrespective of location, using a combination of mobile apps, "virtual doctors," analysis systems, diagnostic and wearable sensors, experiential design, and expert patient health information. The VCC's artificial intelligence option duplicates doctors to supply higher-quality answers to perennial, frequently asked questions. USC CBC executive director Leslie Saxon says the mobile app would administer standardized medical advice to patients, while doctors could communicate with patients via text or photo messaging through the VCC. The app asks patients to enter their medical information and then curates medical content on their condition based on factors such as age, race, gender, and comorbidity. Saxon also notes the VCC team is working to deliver medical data to patients in more easily understood terminology.


CCC Computing Research Symposium--Computing in the Physical World
CCC Blog (07/21/16) Klara Nahrstedt

The rapid emergence of computing in the physical world was a key theme of the Computing Community Consortium Symposium on Addressing National Priorities and Societal Needs. During her keynote address on computational sustainability, Cornell University's Carla Gomes noted it was uncharted territory for computer science and a highly interdisciplinary area. She said computational sustainability defines a new interdisciplinary field with the goal of advancing computational methods for sustainable development, which entails balancing environmental, economic, and societal needs. Gomes noted the end goal of sustainable development is the human well-being of current and future generations, and she cited the CompSustNet research network as an example of computational sustainability, with its emphasis on biodiversity, balance of environmental and socioeconomic requirements, energy and renewable resources, and computational drivers such as big data, machine learning, optimization, dynamic modeling, and crowdsourcing. The symposium also featured panel discussions on smart cities issues from business, city government, and academic perspectives. Future opportunities and questions focused on the difficulty of measuring the success of smart cities solutions, the need for open access and tools to enable broad smart cities research, and standardization challenges for the Internet of Things.


A Picture Is Worth a Thousand Calories: Ordering Food on a Touch Screen Can Influence Choices
University of Michigan News Service (07/17/16) Greta Guest

Researchers at the University of Michigan and the Chinese University of Hong Kong found the interface consumers use impacts their selections when ordering food electronically. Via five studies, Michigan professor Aradhna Krishna and colleagues found people tend to make more indulgent rather than healthy food choices when using an iPad-like touch interface. The studies suggest when a user views a self-indulgent food on a touchscreen, they automatically imagine reaching out and picking it up. "We find that when you touch the screen to order food, this mental interaction leads you to a more emotional choice rather than a more cognitive one," Krishna says. When the researchers had study participants select either a cheesecake or a fruit salad using an iPad or a desktop computer, they calculated 95 percent of those using the iPad ordered cheesecake while 73 percent did so with the desktop computer. When comparing results between iPads with touchscreens and those with a stylus, iPads with a mouse, and desktops, a consistent finding was the hedonistic food choice was more strongly associated with a touch interface than with any other interface. "All of this points to the driver being the mental simulation of reaching out to grab something with your hand," Krishna notes.


Using Kinect Sensors and Facial Recognition in the Classroom
Campus Technology (07/06/16) Dian Schaffhauser

A project at Carnegie Mellon University is using Kinect sensors in the classroom linked to software to help teaching assistants (TAs) polish their teaching skills in science, technology, engineering, and math courses. Professor Amy Ogan and postdoctoral researcher David Gerritsen's Computer-Aided Noticing and Reflection (CANAR) project is receiving U.S. National Science Foundation funding to support teaching and learning in university classrooms by spanning cultural divides between students and teachers. The CANAR setup involves installing two camera- and microphone-equipped Kinect sensors in the classroom--one on the left side and the other on the right side--between the teacher and students. Each sensor is connected to a laptop computer, which records activity fed by the sensors during the class. Audio data is leveraged to signal when the TA is talking too much, and also to measure student-teacher interaction via after-class analysis. CANAR will incorporate video analysis starting this summer, using facial-recognition technology that detects facial expressions, posture, and other features from students. Having this information "tells us something about [whether students are] engaged in the classroom and listening to what's happening," Ogan says. She notes TAs can access the sensor data after class to assess their performance, and adjust their teaching technique if attention is flagging.
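
The abstract notes the audio stream is used to flag when the TA is talking too much. One simple way to do that, sketched below in Python, is to label each audio segment by speaker and compute the teacher's share of talk time; the segment labels and flagging threshold are invented for illustration and are not the CANAR project's actual analysis.

```python
# Hypothetical sketch: estimating how much class time the TA spends talking.
# Segment labels and the 70% threshold are assumptions, not CANAR's analysis.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str      # "teacher", "student", or "silence"
    seconds: float

def teacher_talk_ratio(segments):
    """Fraction of non-silent class time in which the teacher is speaking."""
    talk = sum(s.seconds for s in segments if s.speaker != "silence")
    teacher = sum(s.seconds for s in segments if s.speaker == "teacher")
    return teacher / talk if talk else 0.0

session = [Segment("teacher", 300), Segment("student", 40),
           Segment("silence", 20), Segment("teacher", 200)]
ratio = teacher_talk_ratio(session)
print(f"teacher talk ratio: {ratio:.0%}")
if ratio > 0.70:   # assumed flagging threshold
    print("flag: consider leaving more room for student interaction")
```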


Conversation as Interface: The 5 Types of Chatbots
Computerworld (07/13/16) Kris Hammond

Conversation via chatbot has become a viable interface thanks to advances in speech recognition, and most contemporary speech-based systems are either triggered task models that follow commands, or call and response systems that are mainly entertainment. There are five core functionalities and interaction dynamics that conversational systems will likely incorporate, including the capability to produce responses based on a statistical model of how pertinent those responses will be to a user's entered text. Meanwhile, triggered task models respond to keywords and handle the tasks associated with them using speech recognition, but they currently cannot organize more complex tasks or change the user's preferences. Search is a typical fallback function for when the triggered task model fails, but the system cannot remember the information it retrieves, and executing more complex tasks requires the system to recall more of what users have said and the tasks they want performed. Emerging systems promising more complex task interactions and conversation management will have knowledge of tasks, the information needed to carry them out, and the ability to track the information the user already has passed on to them. The fifth system model combines data analytics to determine data-defined facts with natural-language generation to support more human interactions. Such systems truly know what the user is querying about because they know what they are talking about.
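
As a concrete illustration of the triggered-task pattern with a search fallback described above, here is a minimal Python sketch; the keyword table, handlers, and responses are invented for illustration, and real systems layer speech recognition, task state, and preference memory on top of this skeleton.

```python
# Hypothetical sketch of a triggered-task chatbot with a search fallback.
# Keywords and handlers are invented; this is not any specific product's design.

def set_timer(text):
    return "Okay, starting a timer."

def play_music(text):
    return "Playing something you might like."

TRIGGERS = {          # keyword -> task handler
    "timer": set_timer,
    "play": play_music,
}

def fallback_search(text):
    # Stands in for handing the utterance to a search engine; note the system
    # keeps no memory of what it retrieves, which is the limitation the
    # article points out.
    return f"Here is what I found on the web for: {text!r}"

def respond(text):
    for keyword, handler in TRIGGERS.items():
        if keyword in text.lower():
            return handler(text)
    return fallback_search(text)

print(respond("Set a timer for ten minutes"))
print(respond("What's the tallest building in Pittsburgh?"))
```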


Play the Piano Without a Piano--via Ubiquitous Display Technology
R&D Magazine (07/19/16) Te-Mei Wang

The Industrial Technology Research Institute's (ITRI) new iNTERPLAY gesture-control interface is paired with ubiquitous display technology. Among iNTERPLAY's applications is a virtual piano for playing and learning to play the instrument. The system projects a piano keyboard onto a surface, and the player's key choices are instantly tracked, mapped to the corresponding note and rhythm, and rendered as sound. The application also can display the musical score and which keys to select to play a piece, and users can bring their iNTERPLAY piano to events. Other potential uses of the interface include a smart identification and advertisement system, and a bookstore search application. The gesture control/ubiquitous display combination features a three-dimensional (3D) depth camera with a high-performance gesture-recognition algorithm integrated with image projection. Some systems support multi-touch, which differentiates input based on how many fingers are used, such as on a computer touchpad, and make appropriate adjustments to their behavior and tasks. Others include object-recognition and 3D-scanning functions, so annotations and other interactions can be performed on a physical object or on a scanned virtual object.
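
To make the key-tracking step concrete, the Python sketch below maps a tracked fingertip position on a projected keyboard to a note name. The keyboard geometry and note mapping are assumptions for illustration; ITRI's system performs this tracking with a 3D depth camera and its own gesture-recognition algorithm.

```python
# Hypothetical sketch: mapping a touch position on a projected keyboard to a note.
# Keyboard width, key count, and note naming are illustrative assumptions.

KEYBOARD_WIDTH_MM = 700.0          # assumed projected keyboard width
WHITE_KEYS = ["C", "D", "E", "F", "G", "A", "B"]
N_KEYS = 14                        # two assumed octaves of white keys

def key_for_touch(x_mm: float) -> str:
    """Return the white-key name under a tracked touch x_mm from the left edge."""
    index = int(x_mm / (KEYBOARD_WIDTH_MM / N_KEYS))
    index = max(0, min(N_KEYS - 1, index))
    octave = 4 + index // len(WHITE_KEYS)
    return f"{WHITE_KEYS[index % len(WHITE_KEYS)]}{octave}"

print(key_for_touch(30.0))    # near the left edge -> C4
print(key_for_touch(520.0))   # further right -> F5
```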


An Architect With ALS Designs a Home Controlled by Blinks
PBS NewsHour (07/07/16) Leah Samuel

Architect Steve Saling, who suffers from amyotrophic lateral sclerosis (ALS), has designed systems incorporated into the assisted living center where he resides to enable ALS patients to control their environment using eyeblinks and facial twitches. The Leonard Florence Center for Living near Boston has 30 bedrooms outfitted with such systems, and Saling also started the ALS Residence Initiative, a fundraising and advocacy group whose goal is to build similar homes to serve the U.S. ALS population. The group is working with local ALS patients and caregivers to build automated assisted living homes in New Orleans and Dahlonega, GA, and plans also are underway for similar homes in Dallas, Baltimore, and Windham, ME. When Saling blinks or twitches his facial muscles, his computer converts the movements into radio-frequency signals that are relayed through boxes installed throughout the facility to a basement receiver. From there they are transmitted as commands to the appropriate operating systems, such as those for elevators, TVs, air conditioning, windows, and lights. Saling consulted closely with architects during the apartments' design process, with particular emphasis on accessibility issues. "Because much of my profession had been computerized and I excelled in computer-assisted drafting, I was still able to convey my ideas with a lot of precision," he says.
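
As a simplified illustration of the event-to-command routing described above, the Python sketch below maps recognized blink or twitch inputs to device commands. The gesture names, device registry, and command strings are invented for illustration; the Leonard Florence Center's actual system relays radio-frequency signals to a central receiver that drives the building systems.

```python
# Hypothetical sketch: routing facial-gesture events to home-automation commands.
# Event names, devices, and actions are assumptions, not the center's system.

DEVICES = {
    "lights": lambda action: f"lights -> {action}",
    "tv": lambda action: f"tv -> {action}",
    "window": lambda action: f"window -> {action}",
}

# Each recognized input (a blink or twitch on an on-screen menu item)
# maps to a (device, action) pair.
EVENT_MAP = {
    "blink_on_lights": ("lights", "toggle"),
    "twitch_on_tv": ("tv", "power"),
    "blink_on_window": ("window", "open"),
}

def dispatch(event: str) -> str:
    device, action = EVENT_MAP.get(event, (None, None))
    if device is None:
        return "unrecognized input; no command sent"
    return DEVICES[device](action)

print(dispatch("blink_on_lights"))
print(dispatch("twitch_on_tv"))
```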


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.



