Welcome to the July 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource to help ACM SIGCHI members keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members at the beginning of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week on Mondays, Wednesdays, and Fridays to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


The Robots Are Coming
Foreign Affairs (07/15) Vol. 94, No. 4, P. 2 Daniela Rus

Robotics promises to dramatically improve the quality of everyday life, but for that to happen, humans and machines must complement each other's strengths in a collaborative relationship, writes Massachusetts Institute of Technology (MIT) professor Daniela Rus. Making pervasive, customized robots a reality is a focus of current research, which seeks to improve the manufacturing, movement, reasoning, and environmental processing of robots, as well as their interaction with humans and each other. Rus notes advances in functionality have been enabled by innovations in robot design and the algorithms that govern robot perception, reasoning, control, and coordination. One area of robotics that has gained increasing visibility is driverless cars, which many major automakers are pursuing with expectations they will be able to market them by 2020. Rus says broad acceptance of robots demands the natural integration of intelligent machines into the human world, rather than vice versa. However, issues with production time, limitations in robots' environmental perception and reasoning capabilities, and fragile communication remain prevalent. In terms of reasoning, robots struggle because their computations must be precisely specified, they collect too much low-level data, and they cannot cope with unexpected situations. Rus says improving robot-robot and robot-human communication is another challenge. She notes her MIT research group recently enabled robot teams to assemble furniture by giving them the ability to recognize errors and seek human assistance.


Computing at Your Fingertips
Technology Review (06/23/15) Larry Hardesty

Research groups from the Massachusetts Institute of Technology Media Lab unveiled prototypes of finger-worn input devices with unique functions at ACM's CHI 2015 conference in April. The Fluid Interfaces group presented a text-to-speech converter for the visually impaired, which is worn like a ring, has a built-in camera, and uses tactile or audio feedback to help users scan their fingers along lines of text. The device uses an algorithm that pulls visual information from text, and when the user positions a finger at the start of a new line, the algorithm calculates the text's baseline. It then tracks each word sliding past the camera and isolates the word it identifies near the center of the field of view. The other device, developed via collaboration between the Living Mobile group and the Responsive Environments group, consists of a thumbnail-mounted track pad that enables users to control portable devices when their hands are full. It also could enhance other input devices, so someone typing a text message, for example, could toggle quickly between symbol sets. The device bundles together touch sensors, a battery, an antenna, a microcontroller, a Bluetooth radio chip, and a sensor controller.
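The word-isolation step can be illustrated with a minimal sketch: given word bounding boxes from an OCR pass over the camera frame, pick the word whose box center lies closest to the center of the field of view. The box format, frame dimensions, and helper names below are assumptions for illustration, not details of the MIT prototype.

    # Hypothetical sketch: choose the word nearest the center of the camera frame.
    # Box format (x, y, width, height) and frame size are assumed for illustration.
    from math import hypot

    def word_under_finger(word_boxes, frame_width, frame_height):
        """word_boxes: list of (text, (x, y, w, h)) tuples from an OCR pass."""
        cx, cy = frame_width / 2, frame_height / 2
        best_word, best_dist = None, float("inf")
        for text, (x, y, w, h) in word_boxes:
            # distance from the box center to the frame center
            dist = hypot(x + w / 2 - cx, y + h / 2 - cy)
            if dist < best_dist:
                best_word, best_dist = text, dist
        return best_word

    # Example: three detected words in a 640x480 frame; "quick" is nearest the center.
    boxes = [("the", (40, 220, 60, 30)), ("quick", (300, 225, 80, 30)), ("fox", (560, 230, 50, 30))]
    print(word_under_finger(boxes, 640, 480))  # -> "quick"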


Rethinking Computerized Clinical Alerts
National Science Foundation (06/24/15) Nivedita Mohanty; Aaron Dubrow

With support from the U.S. National Science Foundation's Smart and Connected Health Program, Indiana University-Purdue University Indianapolis (IUPUI) researchers are redesigning drug interaction warnings to avoid alert fatigue, in which clinicians become inured to the large volume of safety alerts stemming from the computerization of healthcare. The researchers say their effort is advancing knowledge in human-computer interaction to overcome the challenges of drug-drug interaction alerts' low level of effectiveness. "We are looking at how to improve the trust between the physician and computer," says IUPUI researcher Jon Duke. The redesign process depends heavily on examining the types and sources of information healthcare providers viewed as important and effective. Information flow was researched via direct observation of hospital team meetings, and then work models were built to identify themes that fuel trusted advice in clinical environments. The models combined the roles of evidence in medical literature and advice supplied by peer consultants. Armed with these insights, the researchers are reconfiguring the computer interface to reflect various trust-based alert models. The team also is devising unique interface designs in which computer alerts communicate drug safety guidance in various forms, with concepts that include visualizations for different trust-based alert messages.


Computer Vision and Mobile Technology Could Help Blind People 'See'
University of Lincoln (06/25/15) Marie Daniels

University of Lincoln computer scientists are developing technology that will turn smartphones and tablets into navigation tools for blind and visually-impaired people. The specialists in computer vision and machine learning plan to embed a smart-vision system in mobile devices. The team wants to use color and depth-sensor technology inside new smartphones and tablets to enable three-dimensional mapping and localization, navigation, and object recognition. The researchers will then develop the best interface--vibration, sound, or spoken word--to relay this to users. The system will use the device's camera to detect visual clues in the environment as the user moves around a space. The researchers note a unique aspect of the technology will be its ability to learn from its environment and from human interaction, which will make it quicker and easier to identify the landscape and guide users. "There are many visual aids already available, from guide dogs to cameras and wearable sensors," says Nicola Bellotto, the project's leader. "We aim to create a system with 'human-in-the-loop' that provides good localization relevant to visually-impaired users and, most importantly, that understands how people observe and recognize particular features of their environment."


Forget Touchscreens and Buttons, Google's Project Soli Lets You Control Gadgets Using Hand Gestures Made in Thin Air
Daily Mail (United Kingdom) (06/26/15) Michael Fitzpatrick

Google's Project Soli technology can read gestures and translate them into commands for gadgets using even the smallest display interfaces. "Using a tiny, microchip-based radar to track hand movements we can now track the minutest movements and twitches of the human hand to interact with computers and wearable devices," says Soli lead researcher Ivan Poupyrev at Google's Advanced Technology and Projects lab. He notes until now researchers did not have the fidelity to capture hand movements in such precise detail. "But now using radar, for the first time in history you can build 'Minority Report'-type interfaces," Poupyrev says. His team says the project's most formidable challenge was miniaturizing a shoebox-sized radar so it could fit on a microchip. They say this process took only 10 months thanks to innovations in communications technology related to Wi-Gig, a next-generation Wi-Fi technology. "Soli would be actually perfect for [virtual reality (VR)] because the user's field of vision is limited and something like Soli would replace a physical device," notes technology consultant Serkan Toto, "making VR as a whole much less awkward and more intuitive to use."


How Do Toddlers Use Tablets?
Iowa Now (06/18/15) Gary Galluzzo

University of Iowa researchers analyzed more than 200 YouTube videos of infants and toddlers using iPad tablet computers to "provide a window into how these children are using tablets," and they say their insights support "opportunities for research and starting points for design." Iowa professor Juan Pablo Hourcade notes 90 percent of the children in the videos had acquired moderate tablet-use ability by age two, compared to slightly more than half of 12- to 17-month-olds. Moderate ability was characterized as requiring assistance from an adult to access apps, but being able to use them while exhibiting some problems with basic interactions. According to Hourcade, his team was able to estimate the ages of the children and observe a clear progression of successful performance tied to age, consistent with developmental milestones. "One of the biggest differences we found is that when children turn one year old, they switch from using both hands and all their fingers to interact with the tablet to using an index finger--which is what adults do," Hourcade says. He believes this and similar studies will shape development of apps that encourage interactive education for infants and toddlers. The research was published in the proceedings of the ACM CHI 2015 conference.


Self-Expression, Conversation, and Adults v. Teens in Social Media
Newswise (06/16/15) Stephanie Koons

A study by Pennsylvania State University (PSU) researchers found differences between adolescents' and adults' use of social media apps, with the former generally using them more for self-expression and conversation than the latter. "Teens have much higher levels of self-disclosure on the Internet," reports Patrick Shih at PSU's College of Information Sciences and Technology (IST). Shih and colleagues studied 27,000 teen and adult Instagram users, demonstrating that age information can be identified in user profiles via a mix of textual and facial recognition methods, and that this information can be used to probe how teens use and engage with Instagram compared with adults. The study found teens tend to post fewer photos on Instagram than adults, possibly because their resources to explore environments outside their daily activities may be limited. Moreover, more than half of photos posted by teens fell under "mood/emotion" and "follow/like" topics, while adults' photo topics skewed toward more diversity, such as "arts/photos/design," "locations," "nature," and "social/people." The IST researchers used textual pattern-recognition algorithms to parse a roster of patterns describing a user's age, as well as an online tool called Face++ to detect the ages and genders of people depicted in photos. They presented their findings at ACM's CHI 2015 conference in April.
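The text-based side of such age detection can be illustrated with a small sketch that scans a profile bio for age-revealing phrases. The patterns below are invented for this illustration; the PSU study's actual pattern list and its use of Face++ for photo-based estimates are not reproduced here.

    # Hypothetical sketch: extract a self-reported age from profile text.
    # The regular expressions and the accepted age range are illustrative assumptions.
    import re

    AGE_PATTERNS = [
        re.compile(r"\b(\d{2})\s*(?:yrs?|years?)\s*old\b", re.IGNORECASE),
        re.compile(r"\bi'?m\s+(\d{2})\b", re.IGNORECASE),
    ]

    def extract_age(bio_text):
        """Return the first plausible age (13-99) found in a bio string, else None."""
        for pattern in AGE_PATTERNS:
            match = pattern.search(bio_text)
            if match:
                age = int(match.group(1))
                if 13 <= age <= 99:
                    return age
        return None

    print(extract_age("17 years old, love photography"))  # -> 17
    print(extract_age("coffee, travel, dogs"))            # -> None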


Reading Lessons
The Economist (06/20/15)

Braille use by blind people is waning, not least because of policies in which sight-impaired people are being educated alongside their sighted counterparts. However, the use of Braille is still critical for blind people holding jobs, especially in fields where complex equations need to be understood, for example. A resurgence of Braille may stem from efforts such as a project initiated by University of Michigan researchers, which yielded a prototype pneumatic technology system that debuted at the recent World Haptics Conference. The device includes a screen with a grid of pins the diameter of Braille dots. The tops of the pins are normally flush with the screen's surface, but they can be pushed upwards to generate patterns representing Braille symbols. Each pin rests on a silicone-rubber membrane positioned above a small cavity, which is in turn linked to a tiny pneumatic line and valve. Blowing air through the valve into the cavity causes the membrane to inflate, pushing the pin above the screen's surface, while opening the valve releases the air and returns the pin to its original position. Device co-developer Sile O'Modhrain thinks the tool can be scaled up to the size of a normal tablet.


Robot Love: How to Persuade Humans to Embrace Machines
Engineering and Technology Magazine (06/15/15) Ed Cara

Adhering to a humanoid form factor is not necessarily a requirement in the development of robots that work for or with people on a regular basis. Not only can human-like robots repel people past a certain point of realism, but they also can create unreasonable expectations of communication and behavioral skills beyond the means of current technology. For example, the cubical Mechanical Ottoman--a remotely operated, non-anthropomorphic footrest--designed by Stanford University researchers offers a glimpse of acceptable robot configurations and functions. A mobile trashcan from the same research team, which wiggles playfully when trash is put inside it, has encouraged human interaction in tests, not least because of people's tendency to make social attributions. However, Carnegie Mellon University professor Jodi Forlizzi says how people react to robots depends on the context of their interactions. She notes this points to the need to design machines that not only communicate their goals to humans, but also anticipate people's own motivations. The acceptance of smart, driverless cars, medical robots, and other innovations can only proceed if people are willing to cooperate with these technologies. Researchers say this entails balancing trustworthiness with effectiveness, and practicality with likeability, in the machines' design.


Touch-Toned: Virtual Reality Games Place Authenticity in Users' Hands
The Los Angeles Times (06/18/15) Paresh Dave

Games companies showcased their latest virtual reality (VR) technologies, designed to translate players' actual hand and finger gestures into virtual actions, at the recent Electronic Entertainment Expo. For example, Oculus VR's Rift helmet, in conjunction with hand-worn ring devices, enables players to see ghostly images of their hands in a virtual space, with the virtual hands mimicking their movements thanks to wall-mounted sensors that track the rings' movements. The prototype Oculus Touch system lets testers load a slingshot, pick up a lighter, hold a firecracker, or bounce a ping-pong ball on a paddle. Meanwhile, Sony's Project Morpheus VR headset promises to deliver at least three environmental control modes for players, including a mode in which shakes of the head can knock down objects, a three-dimensional Tetris game in which tapping a game pad rotates blocks, and a dual-hand baton-like controller. Also exhibited at the expo was a sensor-filled ring from Nod Labs that can control the movements of a virtual character in response to swiveling hand movements in one game, or fire lasers with a thumb-tap in another. Nod Labs' setup does not require a player to stand in the path of a sensor for the motion control to function.


CSL Advances Mobile Augmented Reality Technology
University of Illinois at Urbana-Champaign (06/11/15) David Robertson

University of Illinois at Urbana-Champaign (UIUC) Ph.D. student Puneet Jain is conducting research and developing software related to mobile augmented reality in conjunction with UIUC professor Romit Roy Choudhury and IBM Research. The project at the University of Illinois' Coordinated Science Laboratory is being funded by the U.S. National Science Foundation, Google, and Nokia. The OverLay technology operates within a smartphone and blends the device's camera and sensor data to make sense of the surroundings in real time. Once the system understands that a user is looking at a specific object, relevant data can be immediately displayed onscreen. "Combining vision and sensing achieves the desired outcome," Choudhury says. "Specifically, as users look at different objects in their environment, the sensor data is used as geometric constraints." For example, OverLay learns that objects A and B are statistically separated by certain angles, objects B and C are usually separated by a 10-second interval, and so on. "With these observations, we came up with an optimization framework to build a geometry out of our surroundings, and once we built that layout, we were able to drastically speed up computer vision," Jain says. He notes the technology's potential applications include marketing, infrastructure management, museums, shopping, and privacy, in addition to surveillance and defense.
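The speed-up idea can be illustrated with a small sketch: if the phone's inertial sensors report that the user has turned roughly 40 degrees since the last recognized object, only objects whose learned angular offset from that object is close to 40 degrees need to be handed to the (comparatively expensive) vision matcher. The object names, learned offsets, and tolerance below are invented for illustration and are not taken from the OverLay system.

    # Hypothetical sketch: prune candidate objects with a learned angular layout
    # before running visual matching. All values here are illustrative assumptions.

    LEARNED_ANGLES = {  # degrees of rotation observed between pairs of objects
        ("poster", "door"): 40.0,
        ("poster", "window"): 95.0,
        ("poster", "extinguisher"): 170.0,
    }

    def candidate_objects(last_seen, sensed_rotation, tolerance=15.0):
        """Objects whose learned offset from last_seen is within tolerance of the sensed rotation."""
        return [
            target
            for (source, target), angle in LEARNED_ANGLES.items()
            if source == last_seen and abs(angle - sensed_rotation) <= tolerance
        ]

    # The user last looked at the poster and has since turned about 42 degrees,
    # so only "door" needs to be checked by the vision pipeline.
    print(candidate_objects("poster", 42.0))  # -> ['door']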


Fuzzy Robot Friend Romibo Helps Improve Youngsters' Social Skills
Pittsburgh Post-Gazette (PA) (06/16/15) Amaka Uchegbu

Carnegie Mellon University researchers Aubrey Shick and Garth Zeglin have developed Romibo, a fuzzy robot designed for social therapy. The robot is designed for use by therapy programs in schools, libraries, and community service centers to help autistic children improve their social skills and reduce anxiety. The researchers say Romibo enhances literacy, social skills, math, and science lessons designed by Fine Art Miracles for children with and without autism. The two-wheeled Romibo can be picked up by children and has a face on a computerized screen with simplified human features, and is capable of following faces and holding a person's gaze. The robot can talk as well, speaking words and phrases entered into an iPad controller. "It seems the universal response [to Romibo] is one of delight," notes Fine Art Miracles' Jane Cinicola. "The kids are totally engaged, enthralled really." Romibo can teach children on the autism spectrum to improve their language skills, along with social behaviors. The robot also can benefit children with nondevelopmental challenges, according to Fine Art Miracles CEO Tess Lojacono; she notes some children from difficult socioeconomic backgrounds and those who do not apply themselves out of fear of failure forget their insecurities and focus harder with Romibo.


Abstract News © Copyright 2015 INFORMATION, INC.



