ACM SIGCHI Banner
Welcome to the December 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). The service helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


In a Small Space, a Big Issue
The New York Times (11/20/14) Christopher F. Schuetze

The emerging smart watch market is triggering interest in design issues for small devices. Researchers hope to maximize the viewing surface and enable interaction without having fingers block the screen. Carnegie Mellon University professor Chris Harrison predicts next-generation watches might be capable of constantly monitoring what the user says or where they look. For example, Harrison says if Africa's Mount Kilimanjaro is mentioned in a conversation, the watch could automatically look up the mountain's key information, which the user could then read and integrate into the conversation. Stanford University professor James Landay says voice recognition provides limited control, and he suggests movements of the hand wearing the watch could be used instead. He notes this would require incorporating skin sensors, gyroscopes, and accelerometers into the device; for example, a pinching gesture in the air could shrink the map displayed on the screen. Harrison suggests the screen could be enlarged by making the wearer's skin function as both a display and an input device. He notes the use of infrared reflective technology could turn the entire arm into a working touchscreen, in addition to a movable watch case.
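
As a rough illustration of the kind of gesture mapping Landay describes, the minimal Python sketch below treats a short acceleration spike with little wrist rotation as an in-air pinch and maps it to a map zoom-out. The sensor windows, thresholds, and the map_view.zoom_out call are illustrative assumptions, not an actual smart watch API.

# A minimal sketch, assuming hypothetical sensor windows and a map_view
# object exposing a zoom_out() method; not a real device API.
def detect_pinch(accel_window, gyro_window, accel_threshold=1.5, gyro_threshold=0.5):
    """Treat a short acceleration spike with little wrist rotation as a pinch."""
    peak_accel = max(abs(a) for a in accel_window)
    peak_gyro = max(abs(g) for g in gyro_window)
    return peak_accel > accel_threshold and peak_gyro < gyro_threshold

def on_sensor_window(accel_window, gyro_window, map_view):
    # A recognized in-air pinch shrinks (zooms out) the on-screen map.
    if detect_pinch(accel_window, gyro_window):
        map_view.zoom_out(step=1)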


From Cognition to Control: Fundamental Research Continues to Advance Cooperative Robots
National Science Foundation (11/19/14)

The U.S. National Science Foundation (NSF), in partnership with the National Institutes of Health, the Department of Agriculture, and the National Aeronautics and Space Administration, announced $31.5 million in new awards to further the development of cooperative robots. The awards mark the third round of funding made through the National Robotics Initiative. Fifty-two new research awards were granted, ranging from $300,000 to $1.8 million over one to four years, to advance the fundamental understanding of robotic sensing, motion, computer vision, machine learning, and human-computer interaction. The awards include research to develop soft robots that are safer for human interaction, determine how humans can lead teams of robots in recovery situations, and design robots that can check aging infrastructure and map remote geographic areas. NSF's investments in robotics explore both the technical and engineering challenges of developing co-robots (robots that cooperate with people) and the long-term social, behavioral, and economic implications of co-robots across all areas of human activities. As part of the initiative, NSF also supports the development of new methods for the establishment and infusion of robotics in educational curricula. "Our engineers and scientists are creating a world where robotic systems serve as trusted co-workers, co-inhabitants, co-explorers, and co-defenders," says the NSF Engineering Directorate's Pramod Khargonekar.


Smart Cities Will Take Many Forms
Technology Review (11/18/14) Nate Berg

In an interview, New York University Rudin Center for Transportation Policy and Management researcher Anthony Townsend discusses how cities have applied technology over the last two decades. He sees completely computerized cities as less private and less resilient than smart cities with decentralized and redundant infrastructure "where the services that we create using sensors and displays and all these digital technologies are trying to achieve objectives that are more in line with increasing social interaction, increasing sustainable behaviors, [and] reinforcing the development of culture, creativity, and wellness." With the worldwide urban population expected to exceed 6 billion people by 2050, Townsend stresses the role technology can play in improving the livelihoods of people in developing countries. "Smartphones are the technology that I think is the most important," he argues. With virtually all city dwellers expected to have a smartphone by 2050, Townsend predicts "we're going to have billions of still relatively poor people walking around with networked supercomputers in their pockets." He says he is interested in cities' long-term technology plans, which they are using to determine how technology can help realize long-term visions they have already developed. Townsend also speculates smart cities will vary according to the localized, technology-facilitated services they implement. He particularly emphasizes that urban infrastructure in less-wealthy cities will probably rely more heavily on consumer-provided devices.


UT Arlington Theatre Arts Research Provides Insight Into Human Behavior for Scientists, Engineers Who Build Social Robots
UT Arlington News Center (11/21/14) Bridget Lewis

University of Texas at Arlington lecturer Julienne Greer is applying her theater arts expertise and research to help scientists and robotics engineers build more responsive social robots through a better understanding of human experience. "Don't we want the people who design this technology to also consider how human beings express feelings and interact with one another in addition to considering how a robot should be wired?" she asks. Greer recently wrote a paper on enhancing authenticity between humans and robots, which prompted an invitation to present it at the Sixth International Conference on Social Robotics in Sydney. In her workshop, Greer said that categorizing behaviors and gestures, and understanding how they induce specific human emotions, will enable engineers and roboticists to apply those behaviors and gestures to robot programming. "It is not in the algorithms and models, per se, that lies the creation of an actual relationship between humans and social robots, but in the measure of how algorithms and models serve the purpose of building connection and authenticity," says University of Milan professor Giuseppe Boccignone. Greer's next project will use a data-capturing test to study how humans respond to robots in various circumstances.


Samsung Takes Eye-Scrolling Technology to Disabled Community
The Wall Street Journal (11/25/14) Jonathan Cheng

Samsung Electronics recently announced the development of a next-generation mouse that can help disabled users type, scroll, and navigate the Internet without manual intervention. The EYECAN+ project employs a sensor mounted under a computer monitor to translate a user's eye movements into mouse commands that trigger 18 basic computer functions, such as copying, pasting, zooming, and selecting all. EYECAN+ was developed in collaboration with quadriplegic Yonsei University student Shin Hyung-jin, and it builds on the Smart Scroll technology piloted with the Galaxy S4 smartphone, which enabled users to scroll down a page of text using their eye movements in conjunction with the phone's camera. The researchers say EYECAN+ is an upgraded, far less cumbersome version of Samsung's original eyeCan device, and it does not require the user to wear a headset. Instead of releasing EYECAN+ for commercial sale, Samsung will fabricate a limited number of units for charitable donation. Still, Samsung engineer Park Jung-hoon says the company's mainstream commercial projects could borrow technology from its socially minded initiatives, and vice versa. He also notes Samsung's current interest in virtual-reality (VR) headsets, including the September rollout of the Gear VR device, could be applied to the company's work with disabled users.
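
To make the eye-to-command idea concrete, here is a minimal Python sketch in which a gaze sensor's coordinates are mapped to a small set of commands once the gaze rests on a menu cell long enough. The menu layout, dwell time, and function names are illustrative assumptions, not Samsung's EYECAN+ software.

# A minimal sketch of dwell-based eye control, assuming a read_gaze()
# callable returning (x, y) screen coordinates and an execute() callable
# that performs a named command; not the EYECAN+ API.
import time

COMMANDS = {"copy": (0, 0), "paste": (1, 0), "zoom": (0, 1), "select_all": (1, 1)}

def command_at(gaze_x, gaze_y, cell_size=200):
    """Map a gaze coordinate to the on-screen menu cell it falls in, if any."""
    cell = (int(gaze_x // cell_size), int(gaze_y // cell_size))
    for name, pos in COMMANDS.items():
        if pos == cell:
            return name
    return None

def run(read_gaze, execute, dwell_seconds=1.0):
    """Trigger a command once the gaze has rested on its cell long enough."""
    current, since = None, time.time()
    while True:
        cmd = command_at(*read_gaze())
        if cmd != current:
            current, since = cmd, time.time()
        elif cmd and time.time() - since >= dwell_seconds:
            execute(cmd)
            current, since = None, time.time()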


ISU Team Uses Virtual Reality to Help Train Future Astronauts
Iowa State Daily (11/18/14) Lauren Vigar

Iowa State University (ISU) professor Nir Keren and his team have developed an interactive model of the International Space Station at the university's Virtual Reality Applications Center. A white, 10-foot virtual reality cube transforms into the space station, enabling researchers to virtually step inside the interior. "The mission of [the center] is to perform research on the rapidly-expanding interactions between humans and technology," says ISU professor Stephen Gilbert. "[The center] is an interdisciplinary hub of expertise in human-computer interaction with significant facilities to support research in [human-computer interaction]," Gilbert says. Keren is working with colleague Warren Franke and a research team to use the virtual space station as a training tool for astronauts. The team works closely with retired astronaut Clayton Anderson to ensure an appropriate representation of the space station and the variety of functions associated with space station operations. The team built the virtual station based on a basic three-dimensional model provided by the U.S. National Aeronautics and Space Administration. Over the past eight years, Keren has developed VirtuTrace, a simulation engine used for the virtual station. A key feature of the simulator is a motion-based navigation system that enables trainees to "move" in the virtual world without the need for handheld devices. Eventually, the researchers hope to develop an affordable, field-usable interactive software tool that will improve decision-making and behavioral performance under stress in various scenarios.


Personalized Learning Encourages Creativity
The Tartan (11/23/14) Adithya Venkatesan

A recent seminar on human-computer interaction, hosted by Carnegie Mellon University, featured a lecture by New York University professor Winslow Burleson on using the latest technologies to develop better systems for personalized learning, intelligent creativity support, and open health innovation. Burleson discussed the componential model of creativity, which he said comprises four core elements of creativity generation: intrinsic motivation, domain-relevant knowledge, creative thinking style, and external factors. Burleson supports the development of systems that can simulate human affect, intelligent-tutoring systems, creativity support tools, and context-aware systems. At his media lab, Burleson uses a virtual learning companion to help people who believe they cannot increase their intelligence. He said the motivational learning companion provides help just before a person decides to quit learning something. Burleson noted this triggers a metacognitive experience that can potentially lead to an active experience; by linking the two, people were able to enjoy what they were doing. Studies show this is the point at which people are most creative, generate their best ideas, and are best able to withstand difficulties. Using sensors and cameras, Burleson's lab identifies the experiences of various people and reacts to them accordingly, which he said can be beneficial in education and other areas.


A 3D, Talking Map for the Blind (and Everyone Else)
University at Buffalo NewsCenter (11/19/14) Charlotte Hsu

Developers at the State University of New York at Buffalo's Center for Inclusive Design and Environmental Access (IDeA Center), in collaboration with Touch Graphics, have constructed and tested three-dimensional maps that vocalize building information and directions in response to tactile contact. The purpose of the maps is to help blind visitors find their way in public spaces. "We're providing a level of information that allows them to navigate their environment easily, without help, which gives them a sense of independence," notes IDeA Center researcher Heamchand Subryan. An installation at Massachusetts' Perkins School for the Blind employs conductive paint on miniature buildings to detect pressure from a visitor's fingers, triggering announcements of building names and directions for reaching destinations. Users also can browse a verbal index of all points of interest via a menu controlled by three buttons. The Perkins installation also can enhance navigation for sighted visitors, for example by projecting a spotlight on buildings when they are touched. "The touch-responsive models solve the 'last mile' problem for blind pedestrians, who can often navigate to a building or campus address using [global-positioning systems], but then need help to get to the classroom building or doctor's office where they need to be," says Touch Graphics president Steve Landau.
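
A minimal sketch of the touch-to-speech behavior described above, assuming each model building reports a sensor ID when touched and a print-based speak() stands in for text-to-speech; the placeholder building data and function names are illustrative, not Touch Graphics' software.

# A minimal sketch: touching a model building triggers a spoken
# announcement of its name and walking directions.
BUILDINGS = {
    1: ("Main Building", "From the entrance, follow the path 50 meters north."),
    2: ("Library", "Turn left at the fountain; the entrance faces the courtyard."),
}

def speak(text):
    print(text)  # stand-in for a text-to-speech call

def on_touch(sensor_id):
    name, directions = BUILDINGS.get(sensor_id, ("Unknown location", ""))
    speak(f"{name}. {directions}")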


Which Mobile Apps Are Worst Privacy Offenders?
IEEE Spectrum (11/18/14) Neel V. Patel

Many users are unaware that free mobile applications reduce their privacy by sharing contact lists with third parties or using their location to target ads. In response, Carnegie Mellon University's (CMU) Computer Human Interaction: Mobility Privacy Security Lab developed PrivacyGrade.org to rate free Android apps' privacy on a scale of A+ to D. PrivacyGrade's underlying model evaluates how much private information an app extracts from a user's device and how well that behavior aligns with user expectations, based on the preference ratings of 725 different users on 837 free Android apps. "Our privacy model measures the gap between people's expectations of an app's behavior and the app's actual behavior," says project leader Jason Hong, a professor in CMU's Human-Computer Interaction Institute. For each app, PrivacyGrade displays the permissions it uses, a simple description of what each permission involves, and why the app requests it. The PrivacyGrade website lists "A" grades for the most widely used apps, including almost all Google apps and those from Facebook, YouTube, Instagram, and Twitter. Meanwhile, mobile games and more entertainment-focused apps are more likely to receive a "D" grade. The site also lists information on third-party libraries, which provide app features not written directly by the app's developer.
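
As a rough illustration of the expectation-gap idea Hong describes, the Python sketch below scores an app by how often it uses permissions that users did not expect and converts the average "surprise" into a letter grade. The numbers, thresholds, and function names are illustrative assumptions, not CMU's published model.

# A minimal sketch of expectation-gap grading with made-up thresholds.
def privacy_grade(permission_use, user_expectation):
    """permission_use: {permission: True if the app uses it}
    user_expectation: {permission: fraction of users who expect that use}"""
    gaps = [1.0 - user_expectation.get(p, 0.0)
            for p, used in permission_use.items() if used]
    if not gaps:
        return "A+"
    surprise = sum(gaps) / len(gaps)
    for grade, limit in [("A", 0.2), ("B", 0.4), ("C", 0.6)]:
        if surprise <= limit:
            return grade
    return "D"

# Example: an app that uses location largely as users expect, but reads
# contacts when few expect it, lands mid-scale (here, a "C").
print(privacy_grade({"location": True, "contacts": True},
                    {"location": 0.9, "contacts": 0.2}))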


New Software Plug-In Enables Users to Add Haptic Effects to Games, Media
Phys.Org (11/17/14)

Disney Research has developed a library that simplifies the use of haptics, or "feel" effects, in video games, movies, and virtual simulations. The library contains more than 50 feel effects, including falling rain and a walking cat, each of which can be adjusted in intensity. "We believe that allowing users, even novices, to access, customize, and share haptic effects will bring haptics into the mainstream of electronic storytelling, much like sound effects and visual effects," says Disney Research Pittsburgh researcher Ali Israr. "Once people can create intimate and engaging experiences for themselves and others, haptics effects can be utilized in everyday activities." The library's accompanying FeelCraft software enables users to save feel effects and share them with other users. Disney researchers have demonstrated the technology by integrating it into a video game that lets users associate six events with corresponding feel effects. The system was implemented using a vibrotactile array, which creates sensations by stimulating the back. "Vibrotactile stimulation of the hand and back is a widely used source of sensation for games, movies, and social interactions, but the FeelCraft platform can be easily adapted to other haptic feedback modalities," Israr notes.
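
For a sense of what such a library might look like to a developer, here is a minimal Python sketch in which named feel effects are stored as vibration patterns, scaled in intensity, and sent to an actuator array. The effect data, names, and the vibrate() call are illustrative assumptions, not Disney's FeelCraft API.

# A minimal sketch: named feel effects as (amplitude, seconds) patterns,
# played back on a hypothetical actuator_array with a vibrate() method.
FEEL_EFFECTS = {
    "falling_rain": [(0.2, 0.05), (0.3, 0.05), (0.2, 0.05)],
    "walking_cat":  [(0.5, 0.10), (0.0, 0.10), (0.5, 0.10)],
}

def play_effect(name, actuator_array, intensity=1.0):
    """Scale an effect's amplitudes by `intensity` and send it to the array."""
    for amplitude, duration in FEEL_EFFECTS[name]:
        actuator_array.vibrate(min(1.0, amplitude * intensity), duration)

# A game could bind events to effects, e.g. play "falling_rain" at low
# intensity whenever the player walks into a storm.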


AOL Gift Launches Connected Experiences Lab
Cornell Chronicle (11/13/14) Anne Ju; Syl Kacapyr

AOL is funding a four-year partnership with Cornell Tech and the Technion-Israel Institute of Technology to develop the Connected Experiences Lab (ConnX), which will explore analytical techniques for personal data streams to create actionable insights. ConnX also will strive to develop connectivity tools that deepen and sustain engagement within families and communities. The collaboration will extend to the Jacobs Technion-Cornell Institute, with part of the gift supporting activities there in coordination with AOL Israel. The research will focus on the use of data in the areas of human-computer interaction, computer vision, machine learning, natural-language processing, and social computing across multiple disciplines, including computer science, management science, electrical engineering, and information science. The Cornell Tech lab will involve full-time engineers and designers, in addition to faculty, Ph.D. students, and postdocs. "To build meaningful technologies and improve the likelihood of impact, we need people in-house that can design and build systems, moving early research prototypes into the real world," says Cornell Tech faculty member and ConnX co-founder Deborah Estrin. Meanwhile, Jacobs Technion-Cornell Institute professor Mor Naaman notes, "we are designing new technologies that would be used by a population that is 50-percent female. It is only appropriate that the research team reflects that ratio."


Tiny Tattoos Sense Health
EE Times (11/13/14) Jessica Lipsky

Researchers at the University of California, San Diego's (UCSD) Center for Wearable Sensors have created prototypes of inexpensive nanosensors that can be screen-printed and worn on the skin as temporary tattoos for medical applications. Such devices "couple favorable substrate-skin elasticity along with an attractive electrochemical performance," notes UCSD professor Joe Wang. Demonstrated uses of the skin-worn sensors include non-invasive diabetes monitoring via tears and assessment of endurance and performance via perspiration, and Wang says the devices also can measure heavy metals and harvest energy through a printable biofuel or zinc battery. Research began with printable textile-based sensors sewn into garments for performance measurement, and Wang says later efforts focused on combining multi-electrode layers with apparel "to develop a forensic lab on a sleeve with detection of explosives and gunshot residue all integrated with supporting electronics on a sleeve, on a textile." In addition, the center's researchers are working on carbon-based micro-needle sensors that can monitor electrolytes under the skin as well as deliver medications more effectively. Proof-of-concept sensors presented by UCSD students include devices offering point-of-care glucose monitoring that can be plugged into a smartphone, and a nano-engineered retinal prosthesis to measure and treat neurodegenerative blindness.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.



