ACM TechNews SIGCHI Edition
Welcome to the September 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


How to Clean Up the World of Online Reviews
New Scientist (08/26/15) Aviva Rutkin

It is a formidable challenge to vet user-generated reviews on the Internet and filter the legitimate ones from those that are less than honest.  For example, Boston University's Georgios Zervas reports at least 16 percent of Yelp reviews are flagged by the company's secret algorithm as suspicious and ultimately sifted out.  Compounding the situation, genuine reviews can also be distrusted by users, and rectifying that problem is the focus of a team of Google researchers, who asked approximately 2,000 people to evaluate the credibility of online restaurant reviews.  Their findings suggest the most trustworthy reviews tend to be those with a balanced tone.  The researchers believe review platforms might benefit from software that assesses the tone of a review in real time and gently discourages reviewers from writing something overly critical or rapturous.  The Google research will be presented at the International Conference on Human-Computer Interaction in Germany in September.  Meanwhile, Stanford University's Paolo Parigi is examining how trust develops between people who offer accommodation on sites such as CouchSurfing and Airbnb and their customers.  He thinks simplicity of presentation, rather than slickness, is a winning attribute of successful reviews.
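The article does not describe how such real-time tone assessment would work; a minimal lexicon-based sketch gives the flavor. The word lists, threshold, and prompt strings below are illustrative assumptions (a deployed system would use a trained sentiment model):

```python
from typing import Optional

# Toy lexicons; illustrative only, not from the Google study.
POSITIVE = {"amazing", "perfect", "incredible", "best", "love", "flawless"}
NEGATIVE = {"terrible", "worst", "awful", "horrible", "disgusting", "hate"}

def tone_nudge(review: str, threshold: float = 0.15) -> Optional[str]:
    """Return a gentle prompt if a draft review's tone looks one-sided."""
    words = review.lower().split()
    if not words:
        return None
    # Net fraction of strongly positive vs. strongly negative words.
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words)) / len(words)
    if score > threshold:
        return "Sounds great! Anything that could be improved?"
    if score < -threshold:
        return "Sounds rough. Was anything about the visit okay?"
    return None  # balanced enough to leave alone
```

The nudge fires only past a threshold, matching the study's goal of gently discouraging extremes rather than blocking them.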


Intel Is Teaching Its Gadgets to Mimic Humans
Wired (08/18/15) Molly McHugh

Intel's RealSense technology represents the "sensification of compute," says CEO Brian Krzanich. He says Intel is researching ways to imbue devices with more human-like behavior in order to better enable them to learn and understand their users. Krzanich notes part of the plan to achieve this is extending RealSense's platform compatibility to include ROS, Linux, Unity, XSplit, Structure SDK, OSVR, and Google's Project Tango.  For example, the Intel-Project Tango collaboration involves merging RealSense and Google's three-dimensional (3D) mapping initiative, with one early result being a RealSense smartphone that can 3D-scan a room.  RealSense developers are working on various notable projects and innovations, including Razer's creation of a USB-powered camera designed to sit atop desktops or virtual reality headsets in order to better track gamers' movements and embed that feedback within the gaming experience. Another project is the development of a virtual hotel concierge enhanced with RealSense so it can avoid collisions, among other functions.  "RealSense not only has the capability to dramatically expand device use cases, but also has the ability to drive processing requirements," says analyst Patrick Moorhead. "This is vitally important given many apps moving to the cloud.  Intel should be spending hundreds of millions if not billions to make this happen."


Home-Based Brain Computer Interfaces to Enhance Lives of People With Disabilities
CORDIS News (08/26/15)

The BACKHOME project developed by Eurecat's Felip Miralles over the past three years has focused on the migration of brain-computer interface (BCI) technology from laboratory to mainstream use in the home as an assistive technology for the disabled. Miralles says the completed project has yielded five advancements, including a framework that fulfills the needs of a multifunctional BCI with remote home support; novel BCI equipment establishing a new benchmark of lightness, autonomy, comfort, and reliability; easy-to-use services customized to people's needs with one-click command and adaptive usage; a telemonitoring and home support system; and a Web-based application for therapists offering remote services. Among the end-user products currently on the market are the g.Nautilus wireless signal acquisition system and the intendiX BCI system for everyday use. Currently in production is the sensor-based eKauri telemonitoring and home support system. Miralles says these innovations were assessed via a user-centered design strategy. This approach "provided useful lessons for technical developers indicating aspects that are most important, such as the need to be able to use the system without caregiver support, the importance of the infrastructure in the living environment, and the importance of advancing the algorithms used to prevent undesired selections," says BACKHOME technical coordinator Eloisa Vargiu.


Exploring Comfortable Skin-Worn Sensors for Touch Input
Tech Xplore (08/11/15)

Researchers at the Max Planck Institute for Informatics, Saarland University, Carnegie Mellon University, CNRS LTCI, Telecom-ParisTech, and Aalto University have developed iSkin, an elastic and customizable sensor technology that can be worn on the human body. iSkin is capable of reading touch input with two levels of pressure, even when stretched by 30 percent or when bent with a radius of 0.5 cm, according to the researchers. "iSkin is made of multiple layers of thin, flexible, and stretchable silicone," they say. "The base material is polydimethylsiloxane (PDMS), an easy-to-process silicone-based organic polymer. PDMS is fully transparent, elastic, and a highly biocompatible material." The researchers also note iSkin can be configured into different shapes and sizes for different parts of the body. Co-developer and Max Planck Institute researcher Martin Weigel says iSkin originated from the field of robotics, where it is applied to imbue robots with human-like tactile sensations. "However, we are the first to look into how we can use it on the body to control mobile devices; so as a kind of second-skin, which nicely conforms to your body," he notes. Weigel says the silicone contains carbon particles to create conductivity, enabling iSkin sensor use for electronics.


Scientists Develop Mind-Controlled Robotic Exoskeleton That Uses LEDs to Help Paraplegics Walk
International Business Times (08/24/15) Mary-Ann Russon

A thought-controlled robotic exoskeleton that can help paraplegics regain the ability to walk has been developed by researchers from Korea University and the Technical University of Berlin (TU Berlin).  Users put on an electroencephalogram (EEG) cap and then stare at a device facing them with five embedded light-emitting diodes (LEDs), each of which flickers at a different frequency.  When the user concentrates their gaze on one of the LEDs, the suit can identify a brain signal that commands it to move forward, turn left, turn right, sit down, or remain stationary.  "Exoskeletons create lots of electrical 'noise,'" notes Klaus Muller, a professor at TU Berlin's Institute of Software Engineering and Theoretical Computer Science.  "The EEG signal gets buried under all this noise--but our system is able to separate not only the EEG signal, but the frequency of the flickering LED within this signal."  The processing of the user's brain signals must occur remotely and depends on a wireless EEG signal receiver and an independent signal-processing unit in the same room as the person in the suit.  The scientists hope the system can be adapted to control other existing exoskeleton suits.
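The article describes the frequency-tagging scheme but not the signal processing behind it. The standard approach to this kind of steady-state visual stimulation is to look for the candidate flicker frequency with the most power in the EEG spectrum. A minimal sketch, where the specific frequencies and their command mapping are illustrative assumptions:

```python
import numpy as np

# Candidate flicker frequencies (Hz) and commands; these specific
# values are illustrative assumptions, not taken from the article.
LED_FREQS = {9.0: "forward", 11.0: "turn left", 13.0: "turn right",
             15.0: "sit down", 17.0: "stand still"}

def classify_ssvep(eeg, fs):
    """Pick the LED command whose flicker frequency carries the most
    spectral power in the EEG segment (eeg: 1-D array, fs: Hz)."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def power_near(f, bw=0.5):
        # Sum spectral magnitude in a narrow band around the candidate.
        mask = (freqs >= f - bw) & (freqs <= f + bw)
        return spectrum[mask].sum()

    return LED_FREQS[max(LED_FREQS, key=power_near)]

# Simulate 4 s of noisy EEG while the user gazes at the 13 Hz LED.
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 13.0 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(eeg, fs))  # prints "turn right"
```

The real system must additionally suppress the exoskeleton's own electrical noise before this step, which is the separation problem Muller describes.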


Dartmouth Team Uses Smart Light, Shadows to Track Human Posture
Dartmouth College (08/10/15) John Cramer

Dartmouth College researchers have developed a light-sensing system that facilitates the continuous, unobtrusive tracking and reconstruction of human postures as part of an effort to create smart, gesture-manipulated environments. "Here we are pushing the envelope further and ask: can light turn into a ubiquitous sensing medium that tracks what we do and senses how we behave?" says Dartmouth professor Xia Zhou, co-director of the Dartmouth Networking and Ubiquitous Systems Lab. The LiSense system the researchers developed uses visible light communication (VLC) to reconstruct the movements of a three-dimensional human skeleton in real time. LiSense's components include off-the-shelf light-emitting diodes, photodiodes, and micro-controllers. The system reconstructs the skeleton's postures from the shadows the human body casts by blocking light. Challenges the researchers say they surmounted to realize LiSense included multiple ceiling lights casting diminished and complex shadow patterns on the floor. They designed VLC-enabled light beacons to separate light rays from different light sources and recover the shadow pattern cast by each light. The researchers also developed an algorithm to reconstruct human postures using two-dimensional shadow data with limited resolution collected by photodiodes in the floor. The research will be presented at ACM MobiCom 2015, which takes place Sept. 7-11 in Paris.


Researchers Turn 3D World Into 'Projection Screen' for Better Robot-to-Human Communication
Georgia Tech News Center (08/12/15) Tara La Bouff

Professor Heni Ben Amor at the Georgia Institute of Technology's (Georgia Tech) School of Interactive Computing has helped enhance human and robot safety in manufacturing by enabling robots to project their next action into a three-dimensional (3D) space and onto any moving object. "The robot's...intended action continues to follow the object wherever that moves as long as necessary," says Ben Amor. The breakthrough lets human workers stand next to the robot in an assembly environment instead of controlling it from a distance, so they can inspect precision, make rapid adjustments to its work, or move aside as the robot and human take turns assembling an object. Intention projection was facilitated via collaboration between Ben Amor and Aalborg University's Rasmus Andersen, who blended existing research at Georgia Tech's Institute for Robotics & Intelligent Machines with new algorithms and personal experience with auto manufacturers. The first step was to refine algorithms to enable a robot to detect and track 3D objects, and then a second set of algorithms was developed to display information onto a 3D object in a geometrically correct way. Combining these elements enabled perception of an object, and then identification of where on that object to project information and act, followed by continuously projecting that information as the object moves.
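The article names the capability (geometrically correct display onto a tracked 3D object) but not the underlying math; the core step is an ordinary perspective projection of each tracked 3D point through the projector's calibrated intrinsics and pose. A minimal sketch, where the intrinsics matrix K and the pose (R, t) are illustrative assumptions standing in for a real calibration:

```python
import numpy as np

# Illustrative projector intrinsics (focal lengths and principal point
# in pixels); real values come from projector-camera calibration.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])

def project_point(p_world, R, t):
    """Map a tracked 3-D point to the 2-D pixel the projector should
    light up, given the projector pose (rotation R, translation t)."""
    p_proj = R @ p_world + t   # world frame -> projector frame
    uvw = K @ p_proj           # perspective projection
    return uvw[:2] / uvw[2]    # divide by depth to get pixels

# Object tracked 2 m in front of the projector, slightly off-axis.
R, t = np.eye(3), np.zeros(3)
pixel = project_point(np.array([0.1, 0.0, 2.0]), R, t)
print(pixel)  # pixel coordinates where the annotation should be drawn
```

Re-running this projection every frame as the tracker updates the object's pose is what makes the projected intention "follow the object wherever that moves."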


Surgeons May Get Remote Assistance With New 'Telementoring' System
Purdue University News (08/24/15) Emil Venere

Researchers at Purdue University and the Indiana University School of Medicine are developing the System for Telementoring with Augmented Reality (STAR) to enable specialists to remotely support far-flung battlefield surgeons. Purdue professor Voicu S. Popescu says STAR uses a transparent display with a tablet computer positioned between the surgeon and the operating field so it can directly integrate a mentor's graphical annotations and illustrations within the field of view.  The tablet is held in place by either a robotic arm or a mechanical holder controlled by a surgical assistant, and it obtains a video stream of the operating field as the surgery is performed.  The video stream is transmitted to the mentor who enhances it with annotations, which are relayed back to the surgery site where they appear on the transparent display.  STAR employs computer-vision algorithms to keep annotations in alignment with the rapidly shifting images of the operation. The system needs only commercially available gear such as consumer electronics.  Popescu says among the technology's limitations is the fact that "the video acquired by the tablet will be warped to the view of the surgeon, which will require acquiring the operating field with a depth camera...and will require tracking the surgeon's head."


Fujitsu Brainstorm Room Lets You Write on the Walls
IDG News Service (08/13/15) Tim Hornyak

A new brainstorming user interface (UI) designed by Fujitsu features digital writing surfaces and digital sticky notes, projected onto walls and tables, that can be tied to data on smartphones. Fujitsu researchers say the purpose of the UI is to enable seamless sharing of mobile device data over large projection surfaces, and the creation and sharing of new data. Naoyuki Sawasaki at Fujitsu's Ubiquitous Systems Laboratory says the system is unique in that on-site equipment and smart devices can be immediately connected, and the space itself becomes a UI where device information can be freely expanded. In one demonstration in Tokyo, a table and wall were equipped with overhead projectors, cameras, a Kinect motion-sensing system, infrared light pens, Wi-Fi linking participants' mobile devices, and a server running the UI software. Participants shook their smartphones to trigger an app that transmitted their phone data to the UI. Their phone screens were projected on the table and menu options were chosen with the pens. The participants wrote in Japanese on projected windows on the table, and could choose an option that converted their handwriting into text. The words were automatically transferred to digital sticky notes, which could be "thrown" onto the wall by dragging the pen across the table.


Sandia Teams With Industry to Improve Human-Data Interaction
Sandia National Laboratories (08/13/15) Heather Clark

Sandia National Laboratories researchers are developing tools to enhance intelligence analysts' collection and analysis of visual information via a Cooperative Research and Development Agreement with EyeTracking. "Both Sandia and EyeTracking are being helped by a direct link between each other," notes EyeTracking president James Weatherhead. "The hope is for both sides to come out with these tools and feed solutions back to different government agencies." EyeTracking will provide hardware and software elements under the alliance, while Sandia will supply access to researchers to push forward more innovative visual data interaction. Eye tracking is typically used by labs to examine how people reason, but Sandia needs to investigate real-world environments, according to Sandia researcher Laura Matzen. She says the Sandia/EyeTracking partnership could potentially improve dynamic-image data analysis to the degree where researchers could design augmented experiments or field studies using dynamic images, compare how people or groups of people engage with dynamic visual data, advance cognitive science research to probe how expertise impacts visual cognition, and inform new system designs to scale up surveillance by partially automating some analyst steps. Sandia's Laura McNamara says the partnership and other initiatives are designed to bolster the link between humans and technology and create software systems with end-users in mind.


A UIUC Researcher Just Got $1.5M to Create Robot Assistants for the Elderly
Chicago Inno (08/18/15) Karis Hustad

The U.S. National Science Foundation (NSF) has awarded University of Illinois Urbana-Champaign (UIUC) researcher Naira Hovakimyan a $1.5-million grant to underwrite development of assistive robots that can help seniors with daily tasks. As leader of the Automation Supporting Prolonged Independent Residence for the Elderly (ASPIRE) project, Hovakimyan will coordinate a team of engineering, computer science, and psychology researchers to create co-robots, which include drones and ground-based machines. "The idea is that if we get technologically equipped houses, people will most likely enjoy their independent life in their home as opposed to going to a nursing home, where things will be overstuffed and understaffed," Hovakimyan says. Her prior research into human-robot interaction mainly yielded studies centered on humanoid robots, whose development is more expensive and time-consuming. "What we are talking about particularly are miniaturized drones," Hovakimyan says. "It seems there is no research around their social etiquette." A second NSF grant of $300,000 was allocated to Hovakimyan's team to establish a platform for Non-Intrusive, Collaborative, Empathetic, Robust (NICER) robots, which will help make people more comfortable and safe with drone robots as aides.


Lab Coats and Leggings: When Science and Dance Connect It's Quite a Show
The Conversation (08/24/15) Gene Moyle

Dance is increasingly viewed by artists, scientists, and academics as an intriguing research frontier, writes Queensland University of Technology (QUT) professor Gene Moyle. She notes the recent QUT DANscienCE Festival highlighted the work, research, and practice of academics and dancers worldwide in a variety of disciplines, including environmental science, physics, robotics, gamification, and health. Research spotlighted at the event included projects exploring ballet to provide insights into addressing dizziness symptoms in the senior population. Meanwhile, dementia patients participating in dance classes have demonstrated improvement in their physical, social, and emotional health. Other research efforts look into the psychological mechanisms underlying the creation of music and dance, and their use to assess complex systems and human-computer interaction. These interfaces involve using physical movement to evaluate memory and learning. Dance as a means of communication is another scientific focal point, while the teaching of biology via dance entails the embodiment and incorporation of information within gestures and sequences of movement so the brain can make associations with what is being taught, along with connections to fun and humor. The role dance is playing in the field of robotics is exemplified by QUT's Robotronica, which displays the full spectrum of innovation in this area.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.

