Welcome to the April 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). The service is a resource that helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI, and it is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to more than 100,000 ACM members in over 100 countries. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and on joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Tactile Internet: 5G and the Cloud on Steroids
Engineering and Technology Magazine (03/24/15) Katia Moskvitch

Researchers are developing a tactile Internet to revolutionize remote control as well as extend the sense of touch to data. Among its required components are faster data rates and devices for encoding and decoding tactile sensation, in conjunction with next-generation robots that can be controlled from afar via the touch interface. Scientists at Italy's Sant'Anna School of Advanced Studies are exploring sensors built with micro-electromechanical systems and integrated into soft artificial tissue that emulates a fingertip. "When you touch something, your hand 'encodes' the strength of the touch and the qualities of the touch, or the texture," says lead researcher Calogero Maria Oddo. "Our sensors can do the same, to a degree, understanding many different materials and differentiating between things like cotton, wood, or marble." King's College London researcher Mischa Dohler envisions such data being uploaded to the cloud and sent to users with a haptic display engineered to reproduce the tactile sensations. "That could be a glove, or a second skin-like flexible exoskeleton, or some other kind of user interface like a joystick integrated with mechanical actuators," notes fellow King's scientist Kaspar Althoefer. Although the autonomous robots that users would operate remotely through haptic interfaces are still under development, the tactile Internet could progress incrementally in areas such as gaming with smart suits or gloves.
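
The article does not detail how Oddo's group maps sensor readings to materials; as a loose illustration only, the sketch below shows nearest-centroid classification of a touch signature in Python, with entirely made-up feature values (mean vibration amplitude and dominant frequency) standing in for real MEMS fingertip data.

```python
import numpy as np

# Illustrative feature centroids (mean vibration amplitude, dominant
# frequency in Hz) for three textures; a real system would learn these
# from MEMS fingertip-sensor recordings.
CENTROIDS = {
    "cotton": np.array([0.2, 30.0]),
    "wood":   np.array([0.5, 80.0]),
    "marble": np.array([0.9, 200.0]),
}

def classify_texture(features):
    """Nearest-centroid classification of one touch signature."""
    return min(CENTROIDS, key=lambda name: np.linalg.norm(features - CENTROIDS[name]))

print(classify_texture(np.array([0.45, 75.0])))  # -> wood
```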


Engineering Professor Earns Award for Influential Audiovisual Study
UT Dallas News (03/25/15) LaKisha Ladson

University of Texas at Dallas professor Carlos Busso earned the ACM International Conference on Multimodal Interaction's first 10-Year Technical Impact Award for a pioneering study of audiovisual emotion recognition. His research analyzed the limitations of detecting emotions from speech or facial expressions alone, and demonstrated the value of using both modalities simultaneously. "We have a lot of trouble distinguishing between [happiness and anger] if you only listen to audio, but when you see a smile on the face, that makes the whole difference," Busso notes. "In our study, we demonstrated which emotions are likely to be confused using only one modality, and we fused them together and found that you get more accurate emotion classification." The study involved an actress reading 258 sentences while making happy, angry, sad, and neutral facial expressions. Audio and facial motions were recorded via a microphone and a motion-capture system, respectively. One insight from the study is that key emotion-classification data is revealed by the cheek area, while eyebrows are less significant. Future uses of Busso's research could include clinicians employing human-computer interaction with depression or Parkinson's sufferers to better evaluate their therapy's effectiveness.
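
Busso's exact fusion method is not given in the article; the following minimal Python sketch illustrates one common approach, decision-level fusion, in which per-modality classifier posteriors are averaged. The emotion set matches the study, but the probability values and equal weighting are illustrative assumptions.

```python
import numpy as np

EMOTIONS = ["happy", "angry", "sad", "neutral"]

def fuse_posteriors(p_audio, p_face, w_audio=0.5):
    """Decision-level fusion: weighted average of per-modality posteriors."""
    p = w_audio * np.asarray(p_audio) + (1.0 - w_audio) * np.asarray(p_face)
    return p / p.sum()

# Audio alone nearly ties happiness and anger; the facial channel,
# e.g., a smile, resolves the ambiguity.
p_audio = [0.40, 0.42, 0.08, 0.10]
p_face = [0.70, 0.10, 0.05, 0.15]

fused = fuse_posteriors(p_audio, p_face)
print(EMOTIONS[int(np.argmax(fused))])  # -> happy
```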


Stanford Collaborates on Research to Help Online Groups Organize Themselves
Stanford Report (03/23/15) Tom Abate

Stanford University professor Michael Bernstein and graduate student Niloufar Salehi have developed an online forum framework to surmount impediments to group action. "It's easy to come together online, act upset, and blow smoke," Bernstein says. "We wanted to take it to the next level: what does it take to come together to transform that energy into decisions and the pursuit of common goals?" The researchers identified stalling and friction as the primary barriers to online action. The former concerns the need to overcome the inertia preventing people from agreeing on goals and cooperating to meet them, while the latter is the risk that disagreements will distract participants from their common ends. The team's study group was the Dynamo forum, whose members included several hundred freelancers who worked via Amazon Mechanical Turk. They conceived of a forum architecture that enabled members to execute commonly agreed-upon goals, such as making small changes in their online work environment. Stalling was countered by having members vote on which actions to follow to identify goals with the most support, while friction was minimized when forum leaders acknowledged objections and worked out tangible proposals to address differences. Salehi, whose area of study is human-computer interaction, describes this process as the labor of action. She will present her team's findings at ACM CHI 2015, which takes place April 18-23 in Seoul, Korea.
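
As a toy illustration of the voting mechanism used to counter stalling, the sketch below tallies a plurality vote over proposed actions; the action strings are invented for the example.

```python
from collections import Counter

def pick_action(votes):
    """Plurality vote: return the proposed action with the most support,
    one simple way to break the 'stalling' deadlock described above."""
    action, count = Counter(votes).most_common(1)[0]
    return action, count

votes = ["raise task pay", "write an open letter",
         "write an open letter", "improve task instructions"]
print(pick_action(votes))  # -> ('write an open letter', 2)
```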


Computer Student on Gesture Control: Start Experimenting
Phys.org (03/25/15) Nancy Owano

Swedish computer science student Daniel Rapp has built upon the work of Microsoft Research and University of Washington UbiComp Lab researchers in the field of gesture-based computer control. The earlier research, published in the Proceedings of ACM's Conference on Human Factors in Computing Systems (CHI 2012), introduced a system called SoundWave. The system consists of only a speaker and a microphone, so users do not need special body-worn sensors to control existing applications. Rapp built on SoundWave to create Doppler demos that manipulate Web pages via hand gestures. "First, the speakers emit sounds at particular frequencies," writes John Wenz in Popular Mechanics. "When a hand passes through the waves, it changes them in subtle ways, and the microphone senses those changes when it picks up the sound. The app is programmed to use them to scroll the screen. Rapp has also designed the sound to modulate like a theremin." Rapp describes the Doppler effect as a physical phenomenon that influences waves in motion. He says the work on which he based his motion-sensing experiments described the hardware setup but provided no code to test it, so he decided to replicate the system on the Web.
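
SoundWave's published pipeline is more sophisticated, but the core Doppler idea can be sketched in a few lines: emit an inaudible pilot tone and compare the reflected spectral energy just above and just below it. Everything here, from the 18 kHz pilot to the thresholds, is an illustrative assumption rather than Rapp's or SoundWave's actual code.

```python
import numpy as np

FS = 44100          # sample rate (Hz)
PILOT = 18000       # inaudible pilot tone emitted by the speaker (Hz)

def doppler_direction(frame, fs=FS, pilot=PILOT, guard=60, band=600):
    """Crude Doppler detector: compare spectral energy just above vs.
    just below the pilot tone. A hand moving toward the microphone
    shifts reflected energy upward; moving away shifts it downward."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    above = spectrum[(freqs > pilot + guard) & (freqs < pilot + band)].sum()
    below = spectrum[(freqs < pilot - guard) & (freqs > pilot - band)].sum()
    if above > 1.5 * below:
        return "toward"    # e.g., scroll down
    if below > 1.5 * above:
        return "away"      # e.g., scroll up
    return "still"

# Synthetic test: a pilot tone plus a reflection shifted up by 200 Hz.
t = np.arange(4096) / FS
frame = np.sin(2 * np.pi * PILOT * t) + 0.2 * np.sin(2 * np.pi * (PILOT + 200) * t)
print(doppler_direction(frame))  # -> toward
```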


Necklace and Smartphone App Developed at UCLA Can Help People Track Food Intake
UCLA Newsroom (CA) (03/12/15) Bill Kisliuk

WearSens is a necklace designed by University of California, Los Angeles (UCLA) researchers to monitor the wearer's food consumption, and it could help users track and improve their dietary habits. "This technology allows individuals and healthcare professionals to monitor intake with greater accuracy and more immediacy," says UCLA professor Majid Sarrafzadeh. WearSens, which was developed with support from the U.S. National Science Foundation, rests above the sternum and employs piezoelectric sensors to read vibrations from the action of swallowing. When the wearer consumes food, skin and muscle movements from the lower trachea set off the sensors, and the necklace relays the signals to a smartphone. A UCLA-developed algorithm then translates the data into information about the food or beverage: the sensor signals are rendered as a spectrogram that visualizes the vibrations, and statistical measures computed from the spectrogram distinguish what was ingested. The phone displays data about the volume of food or liquid consumed and can offer advice or analysis. "The breakthroughs are in the design of the necklace, which is simple and does not interfere with daily activity, and in identifying statistical measures that distinguish food intake based on spectrogram images generated from piezoelectric sensor signals," says UCLA researcher Nabil Alshurafa.
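
The article does not specify which statistical measures WearSens computes; as a rough sketch of the idea, the Python fragment below derives a few summary statistics from a spectrogram of a (synthetic) piezoelectric signal. The sampling rate, window size, and toy signals are all assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 1000  # assumed piezo sampling rate (Hz); the real rate may differ

def spectrogram_features(signal, fs=FS):
    """Summarize a swallow's vibration signature as simple statistics
    over its spectrogram, roughly mirroring the idea of classifying
    intake from spectrogram images."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=128)
    return np.array([Sxx.mean(), Sxx.std(),
                     f[Sxx.mean(axis=1).argmax()],  # dominant frequency
                     Sxx.sum(axis=0).max()])        # peak frame energy

# Toy signals: a "liquid" swallow (low frequency, decaying) vs. a
# choppier "solid" chew-and-swallow.
t = np.arange(0, 2.0, 1.0 / FS)
liquid = np.sin(2 * np.pi * 15 * t) * np.exp(-t)
solid = np.sin(2 * np.pi * 60 * t) * (np.random.rand(t.size) > 0.5)

print(spectrogram_features(liquid))
print(spectrogram_features(solid))
```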


Carnegie Mellon's Automated Braille Writing Tutor Wins Touch of Genius Prize
Carnegie Mellon News (PA) (03/24/15) Byron Spice

The work of Carnegie Mellon University's (CMU) TechBridgeWorld group to help visually impaired students learn to write Braille via a slate and stylus has been honored with the 2014 Louis Braille Touch of Genius Prize for Innovation from the National Braille Press' Center for Braille Innovation. The group's Braille Writing Tutor is an automated teaching program that connects to a laptop and provides learners with audio feedback to address skill-building challenges. A version of the system's hardware specification and software is available for download so anyone can construct their own tutor. The TechBridgeWorld team also has created a battery-powered Stand-Alone Braille Writing Tutor with onboard computing for use in areas where reliable computers and external power are scarce. "The Braille Writing Tutor has been one of our most successful projects to date," says CMU professor and TechBridgeWorld director M. Bernardine Dias. "We've seen the profound impact it has had on blind and visually impaired students and their teachers in the communities where we have been fortunate to test the tutor and its potential for impact in so many more communities around the world." TechBridgeWorld is a research group within CMU's Robotics Institute that develops technology for underserved areas of the world.


New Lab Director Puts the Human Into Human-Centered Computing
Ocala Star-Banner (FL) (03/22/15) Jeff Schweers

The University of Florida (UF) College of Engineering's Human Experience Research Lab, headed by Juan Gilbert, focuses on human-centered computing, and its purpose is to transfer technology out of the lab and into the hands of users. "Everything we work on has societal impacts by definition, so it's a candidate for the real world," Gilbert says. One project under development at the lab combines the disciplines of brain-computer interfaces (BCIs) and human-robotic interaction, in the form of a BCI wired into a laptop. The interface enables the user to transmit neural commands electronically to an off-the-shelf flying drone, letting the researchers study user engagement, both in terms of people's interaction with the technology and their feelings about it. Potential medical applications of the technology include a prosthesis that gives people some mobility without requiring surgery, says UF student Chris Crawford. Gilbert's students differ from peers in other doctoral programs in their desire to make industry connections and address real-world challenges. Gilbert stresses that industry is hiring large numbers of Ph.D.s, and he wants to ensure his students are competitive. "This is not just about fulfilling academic requirements but [to] ensure you have the credentials to get your dream job," he says.


Scientist Hopes Vest Will Broaden Range of Human Senses
Agence France-Presse (03/19/15) Glenn Chapman

Neuroscientist David Eagleman has designed a garment known as the variable extra-sensory transducer (VEST) to expand the range of human senses by transmitting information to wearers in the form of vibrations on their backs. VEST syncs to a tablet computer, which translates spoken words, stock prices, or other information into digital data relayed wirelessly to vibration motors woven into the back of the garment, which can be worn under clothing. "Your brain doesn't know or care where it gets the data from," Eagleman says. "It is essentially a general purpose computing device." The garment initially is targeted at hearing-impaired users, and Eagleman says tests demonstrated that wearers quickly begin to understand the language of the vest and can be ready to hold conversations through it within a few weeks. "I think there are a lot of applications [for VEST] beyond sensory substitution," he notes. The startup behind the product also is investigating whether data from aerial drones can improve the flying skills of remote operators. "As we move into the future, we are going to be increasingly able to choose our own peripheral devices and will no longer have to wait for Mother Nature," Eagleman says.
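
Eagleman's actual encoding is not described in the article; one plausible sensory-substitution scheme, sketched below purely as an assumption, splits an audio frame's spectrum into bands and drives one motor per band. The motor count and sampling rate are invented for the example.

```python
import numpy as np

N_MOTORS = 24  # assumed motor count; the actual VEST layout may differ

def frame_to_motor_pattern(frame, fs=16000, n_motors=N_MOTORS):
    """Map one audio frame to motor intensities: split the spectrum
    into n_motors bands and drive each motor with its band's energy."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, n_motors)
    energy = np.array([b.sum() for b in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy  # 0..1 drive levels

# Toy input: a 440 Hz tone should light up a low-frequency motor.
t = np.arange(512) / 16000
pattern = frame_to_motor_pattern(np.sin(2 * np.pi * 440 * t))
print(pattern.round(2))
```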


Prototype Device Promises 3D Computing
The Engineer (United Kingdom) (03/13/15) Julia Pierce

Warwick University researcher Jack A. Cohen has developed a prototype wireless device that combines data from cameras and sensors to read the three-dimensional (3D) movements of the user's fingertips and manipulate digital information on a computer. "At the moment, computers are controlled using [two-dimensional] devices such as mice and track pads," Cohen notes. "Systems such as Leap Motion are a big step up but have limited range and gesture capabilities. If we work in 3D then we can really interact with our information--for instance, when designing using [computer-assisted design] packages it can be a challenge to view what you're working on." Cohen says the system, when used in conjunction with head-mounted displays, will furnish users with a 360-degree view of their design, with depth perception. The technology also could enable people to execute complex or sophisticated actions, such as data sorting/processing. Other industries where such a device could be applicable include gaming and remotely operated machinery. Cohen currently is looking for engineering or design companies that would be interested in using his technology or testing early-stage prototypes, while the market debut of his system is a few years away.


KAIST Introduces a New UI for K-Glass 2 That Works With Eye Blinking
KAIST (03/13/15)

Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed the i-Mouse, a new user interface (UI) for the K-Glass 2 smart glass that tracks a user's gaze and lets the wearer click through to the Internet with eye blinks. K-Glass 2 also employs augmented reality, enabling real-time display of pertinent, complementary text, three-dimensional graphics, images, and audio over target objects chosen by users. "The smart glass industry will surely grow as we see the Internet of Things become commonplace in the future," says KAIST professor Hoi-Jun Yoo. "In order to expedite the commercial use of smart glasses, improving the user interface and the user experience are just as important as the development of compact-size, low-power wearable platforms with high energy efficiency." The i-Mouse features a vertically stacked gaze-image sensor (GIS) and object-recognition processor. As light from three infrared light-emitting diodes embedded in the device is projected onto the user's eyes, the GIS identifies the focal point and estimates the likely gaze location as the user looks over the display screen. An electro-oculography sensor integrated into the nose pads then reads eyelid movements, so a blink clicks the selection.
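
The article gives no algorithmic detail on how the electro-oculography signal is turned into clicks; a naive thresholding sketch, with synthetic data and an invented threshold, might look like the following.

```python
import numpy as np

def detect_blinks(eog, threshold=2.5):
    """Flag samples where the EOG amplitude spikes past `threshold`
    standard deviations above the mean, treating each spike onset
    as one 'click'."""
    z = (eog - eog.mean()) / eog.std()
    above = z > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets

# Toy EOG trace: noise with two blink spikes.
rng = np.random.default_rng(0)
eog = rng.normal(0, 1, 1000)
eog[300:310] += 8   # blink 1
eog[700:712] += 8   # blink 2
print(detect_blinks(eog))  # -> onset indices near 300 and 700
```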


There's a Revolution Brewing in the Technology Kitchen…
Lancaster University (03/18/15)

Lancaster University researchers are designing various sensing devices for anxiety sufferers, with a focus on deconstructing digital health tools into their most rudimentary elements for collaborative redesign, customization, and integration to meet specific requirements. The initial phase of the project involves a polymer laboratory investigating the possibilities of biodegradable, eco-sensitive, and nontoxic materials for customized manufacturing such as three-dimensional (3D) printing. Technologies will be disassembled into modular components, which can then be combined in divergent ways and tailored to individual needs by selecting different shapes, materials, and functionalities. Products will then be locally manufactured via existing 3D-printing networks. Lancaster's Angela Ferrario says the design and development of a broad spectrum of personalized, digital, tactile anxiety-management technologies for people with autism will be supported by the development of an open platform based on Clasp, an anxiety management and peer support system previously developed by the university's School of Computing and Communications. "Using Clasp as a case study, we plan to design an exemplar for future digital health tools and to investigate potential impacts on end users, manufacturers, other technologists, and policy makers," says Clasp technical lead Will Simm.


Research to Help Robots, People Work Better Together
GSA Business (SC) (03/18/15)

The U.S. National Science Foundation (NSF) has awarded $500,000 to Clemson University professor Yue Wang, who plans to enhance human-robot collaboration in manufacturing by concentrating on trust and regret while developing control algorithms. Wang wants to determine when a person is too tired or stressed to work, via models that quantitatively measure performance in real time as humans and robots work collaboratively. "We will show this information on the computer screen and provide suggestions: now is the time you should take over, or now is a time you should rest," she says. "The workload balance between the human and robot is governed by the trust that the human has for the robot. That's key as we determine how much autonomy the robot has." Mathematical trust modeling is central to Wang's project; the models are built on how much trust the human previously had in the robot, the rate of performance improvement, and the rate at which the errors of both human and robot collaborators decrease. Fellow Clemson professor Jacob Sorber also received an NSF award for his work on enabling long-term data collection by low-power, low-cost sensors. His proposed Mayfly computing platform is envisioned to ease development of applications for frequently failing devices.
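
The article names the ingredients of Wang's trust model (prior trust, rate of performance improvement, rate of error decrease) but not its form; the following hypothetical discrete-time update, with invented weights, is one way those ingredients could combine.

```python
def update_trust(trust_prev, d_performance, d_errors, b=0.2, c=0.1):
    """Hypothetical discrete-time trust update: trust rises with the
    robot's rate of performance improvement and with a falling error
    rate, anchored at the prior trust level. Weights b and c are
    invented for illustration."""
    trust = trust_prev + b * d_performance - c * d_errors
    return min(max(trust, 0.0), 1.0)  # clamp to [0, 1]

# Example: trust climbs while performance improves and errors fall,
# then dips slightly when errors start rising again.
trust = 0.5
for d_perf, d_err in [(0.3, -0.2), (0.2, -0.1), (0.0, 0.1)]:
    trust = update_trust(trust, d_perf, d_err)
    print(round(trust, 3))  # -> 0.58, 0.63, 0.62
```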


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.



