Welcome to the October 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


From Pixels to Pixies: The Future of Touch Is Sound
Reuters (09/29/15) Jeremy Wagstaff

Mobile device interaction may be transformed by ultrasound, while touchable, three-dimensional (3D) holographic displays might one day become feasible with the addition of laser light. Notable ultrasound projects include one to produce invisible, in-car, air-based controls, which motorists can feel and adjust by interacting with ultrasound waves. "You don't have to actually make it all the way to a surface, the controls find you in the middle of the air and let you operate them," says Ultrahaptics co-founder Tom Carter. Meanwhile, researchers in Japan are exploring manipulable 3D holograms of mid-air ultrasound haptic controls made visible by tiny lasers. The University of Tokyo's Hiroyuki Shinoda leads a team applying ultrasound technology to enable people to remotely visualize, touch, and interact with objects or each other. Team member Keisuke Hasegawa notes the distance between the two users is currently constrained by the use of mirrors, but this could eventually be translated into a signal enabling interaction at any distance. Possibly the biggest challenge to commercializing mid-air interfaces is making the technology affordable.
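The mid-air sensation relies on driving an array of small ultrasound emitters so that their waves arrive in phase at one point in the air. The following minimal sketch computes the per-transducer delays needed to focus a flat array on a chosen point; the 40 kHz frequency, array layout, and focal point are assumptions for illustration, not Ultrahaptics' implementation.

    # Illustrative sketch (not Ultrahaptics code): compute per-transducer delays
    # so a flat array of 40 kHz emitters adds up constructively at one mid-air
    # point, the basic principle behind mid-air haptic "buttons".
    import math

    SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
    FREQUENCY = 40_000.0     # Hz, a common ultrasonic transducer frequency

    def focus_delays(transducer_positions, focal_point):
        """Return per-transducer delays (seconds) so all wavefronts arrive
        at the focal point at the same time."""
        distances = [math.dist(p, focal_point) for p in transducer_positions]
        longest = max(distances)
        # Emitters closer to the focus wait longer, so every wave arrives together.
        return [(longest - d) / SPEED_OF_SOUND for d in distances]

    # Example: a 4x4 array on the z=0 plane, 1 cm pitch, focusing 20 cm above it.
    array = [(x * 0.01, y * 0.01, 0.0) for x in range(4) for y in range(4)]
    delays = focus_delays(array, (0.015, 0.015, 0.20))
    phases = [(2 * math.pi * FREQUENCY * t) % (2 * math.pi) for t in delays]
    print([round(p, 2) for p in phases])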


Mobile Robots Could Help the Elderly Live Fuller Lives
University of Lincoln (09/28/15) Elizabeth Allen

University of Lincoln researchers have developed mobile service robots that will be tested with elderly participants in Britain, Greece, and Poland. The ENabling Robot and assisted living environment for Independent Care and Health Monitoring of the Elderly (ENRICHME) trials will help determine how well the machines, integrated into smart homes, can help the elderly live fuller lives. "The system will build on recent advances in mobile service robotics and ambient-assisted living to help people improve health and well-being," says ENRICHME principal investigator Nicola Bellotto. "From a technological point of view, there will be an intelligent interactive robot that is integrated with a smart home, communicating with a network of caregivers and relatives. This will be of particular benefit to those people who have mild cognitive impairments." ENRICHME will give caregivers and professional staff the ability to identify evolving trends in cognitive impairment and possible emergencies, including monitoring sudden mood swings that might signal deterioration or the need for family or health services to intervene. The robots also will be programmed to identify individuals so they can deliver personalized services to people living with others. New research in adaptive human-robot interaction will give the machines tools that learn from and adjust to the state of the user in order to support cognitive stimulation and social inclusion.


Painting Bot Follows Your Eyes to Create Art
New Scientist (09/25/15) Sandrine Ceurstemont

A robotic system developed by Imperial College London senior lecturer Aldo Faisal and his team can paint, using a commercially available eye-tracking device to follow a person's gaze and transmit commands to an industrial robot arm, which translates how and where the painter looks into brushstrokes. "It's the first step to human augmentation with additional limbs," Faisal says. He plans to replace the brush grip with a robotic hand to enable object manipulation, permitting the system to be adapted for people with amputations or paralysis. Faisal also speculates gaze instructions could be sent over the Internet to control the arm remotely, so immobile patients could perform tasks from bed. In addition, operators might employ the system to virtually control machinery at hazardous sites. "Some people say it feels like telekinesis," Faisal notes. Technical University of Denmark researcher John Paulin Hansen envisions a similar system being used to operate robots. "Gaze control will be integrated into cell-phones and tablets within the next couple years," he predicts. "It will soon become a part of everyday life."
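The article does not describe the team's software, but the core loop of turning streamed gaze coordinates into brush movements can be sketched as follows. The get_gaze() reader, the arm object's methods, and the dwell-to-paint rule are hypothetical illustrations, not the Imperial College implementation.

    # A minimal sketch, assuming a hypothetical eye-tracker reader and robot-arm
    # interface: map gaze screen coordinates onto a painting canvas and treat a
    # short dwell as "brush down", so where the user looks becomes a brushstroke.
    import time

    CANVAS_W, CANVAS_H = 0.60, 0.40      # painting area in metres (assumed)
    SCREEN_W, SCREEN_H = 1920, 1080      # eye-tracker screen resolution (assumed)
    DWELL_SECONDS = 0.5                  # hold gaze this long to start a stroke

    def screen_to_canvas(gx, gy):
        """Scale a gaze point in pixels to a canvas position in metres."""
        return gx / SCREEN_W * CANVAS_W, gy / SCREEN_H * CANVAS_H

    def paint_loop(get_gaze, arm):
        dwell_start, last = None, None
        while True:
            gx, gy = get_gaze()                      # hypothetical tracker read
            x, y = screen_to_canvas(gx, gy)
            if last and abs(x - last[0]) < 0.005 and abs(y - last[1]) < 0.005:
                # Gaze is holding still: after the dwell time, lower the brush.
                if dwell_start and time.time() - dwell_start > DWELL_SECONDS:
                    arm.brush_down()                 # hypothetical arm command
            else:
                dwell_start = time.time()
                arm.brush_up()
            arm.move_to(x, y)                        # follow the gaze with the brush
            last = (x, y)
            time.sleep(0.02)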


The Past and Future of AI: A Chat With Barbara Grosz
Harvard University (09/23/15) Leah Burrows

In an interview, Harvard University professor Barbara Grosz discusses her areas of research: the advancement of artificial intelligence and human-computer interaction. "If you're going to build agents that interact with people, you have to think about people's cognition and the ways they behave," she says. Grosz notes much progress has been made in computers' dialogue capabilities, but cautions they still have far to go before they match human dialogue. She notes computer agents such as Siri and Cortana can easily be fooled because they lack contextual and discursive abilities. "No current system is thinking to the extent [Alan] Turing imagined computers might be by now," Grosz says. She is exploring the question of whether systems can be designed to behave so well they pass for human, and a key challenge her team is tackling is developing a way for computers to delegate responsibilities to team members. "An enormous challenge for systems is to be able to determine what information to share with whom when," Grosz notes. She emphasizes the importance of her students learning the value of designing artifacts for those who will use them, with a focus on the limitations or flaws within a system and the unintended consequences of those limitations.


Make Your Own Buttons With a Gel Touch Screen
Technology Review (09/23/15) Rachel Metz

Researchers primarily from the Technical University of Berlin (TU Berlin) have built a prototype gel-coated touchscreen that hardens in reaction to heat, enabling the creation of temporary buttons in numerous shapes that do not have to be pre-configured. The technology could make it easier to use various electronics applications, including in-car displays, smartphones, and wearable devices, for functions such as entering information or receiving alerts. The researchers can stiffen the GelTouch prototype into three basic shapes to form a grid of buttons, a slider, and a one-finger joystick button. The buttons were used to dial a phone number without looking at the display, the slider to scroll through a series of photos, and the joystick to play a simple game. GelTouch employs a heat-responsive hydrogel layered on top of the display, which is transparent and spongy until it is heated above 90 degrees Fahrenheit. The heat drives water out of the gel, and it becomes up to 25 times harder. "You basically can have unlimited shapes or structures or whatever you want," says GelTouch creator Viktor Miruchna. The researchers note there are several technical challenges to making GelTouch useful and commercial, including the display's softness compared to the more solid touchscreens consumers are used to, and maintaining the buttons' shape over time by adjusting the electrical heating current.
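At the application level, a GelTouch-style display would decide which regions to heat (and thus stiffen) and then interpret touches against the raised regions. The sketch below is purely hypothetical: set_heater() and the region table stand in for hardware interfaces the article does not describe, and it only illustrates the raise-then-hit-test pattern.

    # A minimal sketch under assumed interfaces (set_heater() and the region list
    # are hypothetical, not GelTouch's published API): raise a 2x2 button grid by
    # heating the gel under each region past its stiffening point, then report
    # which raised button a touch lands on.

    BUTTON_REGIONS = {                 # name -> (x0, y0, x1, y1) in screen fractions
        "A": (0.0, 0.0, 0.5, 0.5), "B": (0.5, 0.0, 1.0, 0.5),
        "C": (0.0, 0.5, 0.5, 1.0), "D": (0.5, 0.5, 1.0, 1.0),
    }

    def raise_buttons(set_heater, names):
        """Turn heating on under the named regions so the gel stiffens there."""
        for name in BUTTON_REGIONS:
            set_heater(name, on=(name in names))

    def hit_test(x, y, active):
        """Return the raised button containing a touch at (x, y), if any."""
        for name in active:
            x0, y0, x1, y1 = BUTTON_REGIONS[name]
            if x0 <= x < x1 and y0 <= y < y1:
                return name
        return None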


Planning the Smart City in Paris
Data-Smart City Solutions (09/21/15) Adam Tanaka

Harvard University undergraduates participating in an interdisciplinary smart cities program in Paris this summer developed a slate of projects, including plastic hydroponic islands, a pneumatic trash system incorporated into the sewage network, and adaptive road networks that adjust their size, usage, and direction to traffic patterns. The program is the result of an alliance between Harvard Summer School and France's Center for Interdisciplinary Research, and it emphasized the connection between smart technologies and biological precepts. Paris Smart City initiative project manager Fabienne Giboudeaux notes the participants' training in life sciences was an advantage in bringing an innovative mindset to urban issues. "On the level of the smart city, we are trying to develop an integrated approach that links different kinds of networks, uses, and users of the city," she says. "I found the parallelism with biology incredibly stimulating and intriguing." New planning practice techniques were developed under the program, an example being a quantitative methodology to measure the effectiveness of an area undergoing redevelopment by analyzing factors such as trash access, pedestrian circulation, and noise distribution. City officials also were interested in the students' atypical proposal methods, which often employed multimedia components and which they thought carried lessons for planning professionals.


Future of Immersive Gaming Gear for the Blind
Phys.org (09/25/15) Jason Johnson

Intel researchers envision their real-world assistive technology, based on Intel RealSense cameras and wearable sensors, opening up immersive gaming to the sight-impaired. The multidisciplinary team applies expertise in design, human-computer interfaces, human factors, and prototyping toward the development of natural, intuitive, and immersive ways to use the RealSense camera technology, which enables users to control and interact with PCs via hand gestures. The team demonstrated that connecting a RealSense camera to a wearable computer strung with vibrating haptic actuators enables computer vision to assist sight-impaired people with real-world navigation. Their next goal is to offer an improved way to engage with augmented reality experiences that include games. Team member Chandrika Jayant says there is no need for a camera to identify anything within a purely digital gaming experience, but one is required if the game is designed to pull in real-world elements. The prototype of the RealSense Spatial Awareness Wearable is a vest outfitted with a computing module, which links wirelessly to eight thumb-sized vibrating sensors. The vest transmits vibrations to its wearer's body based on cues from real-world surroundings. The RealSense Interaction Design Group's Rajiv Mongia says the tool has potential to improve blind people's experience with augmented or virtual reality games.
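As a rough illustration of the vest's depth-to-vibration mapping, the sketch below splits a depth frame into eight sectors, one per actuator, and vibrates each harder as the nearest obstacle in its sector gets closer. The grid layout, range cutoff, and intensity formula are assumptions for illustration, not Intel's implementation.

    # Illustrative sketch (not Intel's code): map a depth frame to eight actuator
    # intensities, assuming depth in metres as a 2-D array, with 0 meaning "no reading".
    import numpy as np

    MAX_RANGE = 3.0   # metres; obstacles farther than this produce no vibration

    def actuator_levels(depth_frame, rows=2, cols=4):
        """Return eight vibration intensities in [0, 1], one per vest actuator."""
        frame = np.asarray(depth_frame, dtype=float)
        frame[frame <= 0] = np.inf            # ignore missing depth readings
        h, w = frame.shape
        levels = []
        for r in range(rows):
            for c in range(cols):
                sector = frame[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
                nearest = sector.min()
                # Closer obstacle -> stronger vibration, clipped to [0, 1].
                levels.append(float(np.clip(1.0 - nearest / MAX_RANGE, 0.0, 1.0)))
        return levels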


UW Team Links Two Human Brains for Question-and-Answer Experiment
University of Washington News and Information (09/23/15) Deborah Bach

University of Washington (UW) scientists have used a direct brain-to-brain link to enable pairs of participants to engage in a question-and-answer session by transmitting signals from one brain to the other over the Internet. "It uses conscious experiences through signals that are experienced visually, and it requires two people to collaborate," says UW professor Andrea Stocco. The first participant wears a cap connected to an electroencephalography machine, which records electrical brain activity, and is shown an object on a computer screen. The second participant sees a list of possible objects and associated questions, and uses a mouse to send a question. The first subject answers yes or no by focusing on one of two flashing light-emitting diodes attached to the monitor, which flash at different frequencies. Either answer transmits a signal to the inquirer via the Internet and triggers a magnetic coil positioned behind the inquirer's head, but a "yes" answer generates a response intense enough to stimulate the visual cortex and cause the inquirer to see a flash of light called a "phosphene" that tells the inquirer the answer is "yes." Participants guessed the correct object in 72 percent of the real games, versus 18 percent of the control rounds. An earlier experiment demonstrating a direct brain-to-brain link between people was based on research by UW professor Rajesh Rao into brain-computer interfaces that enable people to activate devices by thought.
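The yes/no channel works because staring at a light flashing at a known rate produces EEG activity at that same frequency, so the attended LED can be identified by comparing spectral power at the two flicker rates. The sketch below shows that comparison under an assumed sampling rate and assumed flicker frequencies; it is a simplified illustration, not the UW team's analysis pipeline.

    # A minimal sketch of the "yes/no via flashing LEDs" idea (not the UW pipeline):
    # the respondent stares at the "yes" LED or the "no" LED, which flash at
    # different rates (13 Hz and 12 Hz assumed here), and the EEG shows more
    # power at whichever frequency was attended.
    import numpy as np

    SAMPLE_RATE = 256          # Hz, assumed EEG sampling rate
    YES_HZ, NO_HZ = 13.0, 12.0 # assumed flicker frequencies

    def band_power(eeg, target_hz):
        """Spectral power of a 1-D EEG epoch at the bin nearest target_hz."""
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / SAMPLE_RATE)
        return spectrum[np.argmin(np.abs(freqs - target_hz))]

    def classify_answer(eeg):
        """Return 'yes' or 'no' depending on which flicker frequency dominates."""
        return "yes" if band_power(eeg, YES_HZ) > band_power(eeg, NO_HZ) else "no"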


Choosing Interactive Tools for Virtual Museums Mixes Art and Science
Penn State News (09/21/15) Matt Swayne

Researchers at Pennsylvania State University's (PSU) Media Effects Research Laboratory say virtual museum exhibits should use communication and navigation technologies that match the experience they want to offer visitors. The researchers conducted a study analyzing how 126 visitors to an online virtual art museum perceived technologies for communicating about and navigating through exhibits. Visitors found the technologies helpful when they were available separately, but less so when they were offered together. The PSU team tested customization tools that helped participants create their own art gallery, live-chat technology to enable communication with other visitors, and three-dimensional navigation tools that some participants used to explore the virtual space. "When live chat and customization are offered together, for example, the combination of tools may be perceived to have increased usability, but it turns out using either customization or live chat separately was greater than either both functions together, or neither of the functions," says PSU professor S. Shyam Sundar. "Our data also suggest that expert users prefer tools that offer more agency or control to users whereas novices appreciate a variety of tools on the interface." Sundar notes users also might react to the tools on other online platforms, such as news or educational sites.


How Teams of Computers and Humans Can Fight Disasters
University of Oxford (United Kingdom) (09/23/15)

In an interview, University of Oxford research fellow Steven Reece discusses the multi-university ORCHID project for enabling human-computer collaboration to aid disaster response. He says the project seeks to form human-agent collectives (HACs) that create "flexible teams of computers and humans to interpret large, unstructured data sets." Reece notes the HAC model inverts the traditional human-computer relationship by allowing computers to assume control from time to time and request information from humans. "It was the goal of ORCHID to figure out...how to incentivize humans to contribute to the HAC, track performance, maintain the best teams, and record the sources of information and decisions that are made," he says. In terms of HACs' benefits to disaster response teams, Reece cites examples such as coordinating a fleet of unmanned aerial vehicles to visually map aid requirements across a disaster area. Another application is CrowdScanner, a system used in the aftermath of the recent Nepalese earthquake to locate settlements in satellite images and spot settlements not mapped in open sources, so search and rescue teams could reconnoiter those sites. Reece also envisions a service in which people post resources that could be matched to crisis response teams pursuing their own goals. He says machine learning can connect responders' requirements to the people with the resources, using crowd interpretations of the resource providers' offers.


Can Robots Make Good Teammates? In Yale Lab, They Are Learning the Skills
Yale News (09/21/15) William Weir

Yale University researchers are training collaborative robots in teamwork etiquette at the school's Social Robotics Lab, under the leadership of Ph.D. candidate Brad Hayes. Among the skills being taught to the machines are stabilizing parts, handing over items, organizing a workspace, and helping people use a tool better. Developing robot teammates is one way to realize versatile machines with current technology. "It's only now that this is becoming feasible, to develop robots that could safely operate near and around people," Hayes notes. "We're trying to move robots away from being machines in isolation, developing them to be co-workers that amplify the strengths and abilities of each member of the team they're on." Hayes' research concentrates on robots learning "supportive behaviors" that make others' tasks less difficult. Such behaviors require the machine to learn about the jobs and preferences of its teammates, without requiring it to be able to execute the entire job itself. Hayes says having the robot learn tasks autonomously is very time-consuming and may be impossible for certain tasks, while another strategy is to directly demonstrate the task processes to the robot. "It can then save that example and figure out if it's a good idea to generalize that skill to use in new situations," he says.


PrintPut Integrates Simple Touch and Pressure Sensors Directly Into 3D Printed Objects
3ders.org (09/20/15)

Researchers at Queen's University's Human Media Lab have developed resistive and capacitive input widgets for creating interactive three-dimensional (3D) prints, which were presented at the INTERACT 2015 conference in Germany. The PrintPut technique integrates touch and pressure sensors directly into 3D-printed objects by embedding the interactive widgets within the printed geometry. PrintPut employs a conductive ABS filament to provide diverse sensors, which an industrial designer can incorporate into 3D designs. The PrintPut process starts by creating a base model of a designer's 3D shape in a computer-aided design program. The model is then imported into Rhinoceros 3D, and the points and curves for interactive areas are defined. A script uses this input to construct the flat geometry for the desired sensors, which is projected onto the original model so it conforms smoothly to the surface. The script then extrudes the geometry into the surface and subtracts it from the base model, resulting in two interlocking 3D models--the conductive circuits and the base model with hollows for these conductive paths. PrintPut requires an ABS-capable 3D printer with two extruders; after a designer generates an object with sensor geometry, they load it into their printer's build manager and assign the base and conductive geometry to standard and conductive filaments, respectively.
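Once printed, the resistive widgets behave like variable resistors that can be read with ordinary microcontroller circuitry. The MicroPython sketch below assumes the printed trace is wired as the lower leg of a voltage divider feeding an ADC pin; the pin number, fixed resistor value, and press threshold are placeholders for illustration, not values from the paper.

    # A minimal sketch, assuming a divider with a fixed resistor from the supply
    # to the ADC node and the printed sensor from that node to ground; pressing
    # the printed pressure sensor is assumed to lower its resistance.
    from machine import ADC, Pin   # MicroPython

    sensor = ADC(Pin(26))          # ADC pin wired to the divider midpoint (placeholder)
    FIXED_R = 10_000               # ohms, the known resistor in the divider (placeholder)

    def sensor_resistance():
        """Estimate the printed sensor's resistance from the divider voltage."""
        fraction = sensor.read_u16() / 65535        # 0..1 of the supply voltage
        fraction = min(max(fraction, 1e-4), 0.9999) # avoid divide-by-zero at the rails
        return FIXED_R * fraction / (1 - fraction)

    def is_pressed(threshold_ohms=5_000):
        """Treat a drop below the threshold as a press (threshold is a placeholder)."""
        return sensor_resistance() < threshold_ohms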


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.



