Welcome to the February 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). This service is a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Smart Scarf Carries Multimodal Language to Convey Emotions
Phys.Org (01/26/15) Nancy Owano

A joint University of Maryland/Microsoft Research project has yielded a wearable electronic scarf that could help wearers communicate and manage their emotions. The garment is designed to respond to sensor signals relayed via Bluetooth, and its designers identified six actuations to deliver a multimodal language in the form of heat, cooling, music, weights, vibration, and lighting in response to stress, happiness, sadness, composure, and excitement. The Sensing Whether Affect Requires Mediation (SWARM) technology could, for example, play upbeat music when it senses excitement, or enable stressed people to add weights to the scarf. The researchers consulted with autistic, hearing-impaired, and vision-impaired individuals for input on the prototype's design. The garment is composed of detachable laser-cut felt hexagons overlaid with copper taffeta, with some modules capable of vibration while others can heat up. The control module also communicates with a phone app over Bluetooth. A paper the researchers presented at the ACM Conference on Tangible, Embedded, and Embodied Interaction at Stanford University in January detailed the project's goals. "SWARM is meant to complement a user's current strategies for coping with their emotions, and also provide more information that may not be available, particularly for users with disabilities," the researchers wrote.
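
To make the state-to-actuation idea concrete, here is a minimal Python sketch assuming a simple lookup design; the state names, actuation choices, and controller callback are hypothetical illustrations, not the SWARM team's actual code:

```python
# Hypothetical sketch: map a sensed affective state to one of the scarf's
# actuations (the article names heat, cooling, music, weights, vibration,
# and lighting). The mapping below is illustrative, not SWARM's real design.
ACTUATIONS = {
    "excitement": "music",    # e.g., play upbeat music, per the article
    "stress":     "weights",  # e.g., suggest adding calming weights
    "sadness":    "heat",
    "happiness":  "lighting",
    "composure":  "cooling",
}

def actuate(sensed_state, controller):
    """Look up an actuation for the sensed state and send it to the scarf."""
    action = ACTUATIONS.get(sensed_state)
    if action is not None:
        controller(action)  # in practice, forwarded to modules over Bluetooth

actuate("excitement", controller=print)  # prints: music
```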


Steve Whittaker Named Fellow of Association for Computing Machinery
University of California Santa Cruz Newscenter (01/16/15) Guy Lasnier

University of California, Santa Cruz (UC Santa Cruz) psychology professor Steve Whittaker has been named an ACM Fellow for his accomplishments in the field of human-computer interaction. Whittaker's area of research is the convergence of psychology and computation, in which insights from cognitive and social science inform the design of digital tools that support multitasking, memory, collaboration, and socializing. A recent focus of his studies is the use of lifelogging to improve psychological well-being, and the ramifications of digital breakups, such as how to contend with old Facebook posts and photo archives. Whittaker has collaborated with computer scientists, computer engineers, and other social scientists. Prior to joining UC Santa Cruz four years ago, he was a research scientist at IBM's Almaden Research Center, and he held positions at the University of Sheffield, AT&T Labs, Lotus Development, and Hewlett-Packard Labs. In announcing the selections, ACM said the 46 fellows named for 2014 have enabled breakthroughs in computing research and development that fuel innovation and help sustain economic development worldwide.


Good-Bye, Keyboard: The Future of Input Devices Is (Almost) Here
InfoWorld (01/22/15) Glenn McDonald

The keyboard could eventually become an obsolete input device as new form factors emerge, built atop new technologies. One innovation that has already arrived is gesture recognition, which currently is used in game consoles and high-end artificial intelligence (AI) research environments. Advances in AI and deep-learning algorithms are helping speech-recognition systems power real-time translation and hands-free computing via natural-language interface technology. Meanwhile, virtual keyboards projected onto surfaces promise to optimize portability through an integration of lasers, sensors, and infrared beams. Also holding potential are wearable input devices such as The Ring, which translates finger movements into task commands via gesture recognition, providing feedback through light-emitting diode flashes and vibrations. In the prototype phase is the Cicret Bracelet, which can project a touchscreen interface onto the wearer's skin by merging projection keyboards, wearable computers, and mobile tech components. Smart glasses are another wearable input solution, and developing projects include mechanisms that can translate eye movement and gaze direction into commands. Also being explored are brain-computer interfaces such as NeuroSky's EEG biosensors, which enable computer commands by thought.


Mastering Math Through Movement
University of Vermont (01/21/15) John Reidel

University of Vermont professor Carmen Petrick Smith is the author of a new study that makes a case for using physical movement to teach elementary school children math concepts. Smith's team followed a group of students who performed body-based tasks while engaging with a Kinect for Windows math program. One task involved the students forming angles with their arms while projecting them on a Kinect screen, which changed colors to indicate different angle configurations. Smith's study demonstrated that these students' understanding of geometrical principles improved significantly, especially compared with students who focused on static representations of angles. The study adds credibility to embodied cognition theories positing that the brain coordinates with physical movements and other environmental and neural processes to generate behavior. Smith says the research indicates a dynamic learning environment is more effective when combined with other pedagogical techniques. "When students are acting out a math problem and using their body to help them explain the answer, that's another modality," she notes. "Maybe they don't know the words quite yet, but they have a way to express it using their body that they didn't have before when they were sitting in a row of desks looking up at the teacher and searching for an answer." Smith is recruiting programmers to further enhance the Kinect program.
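
For readers curious about the underlying computation, here is a minimal sketch assuming Kinect-style 2D joint coordinates; the function names, thresholds, and colors are hypothetical, not the study's actual software:

```python
# Hypothetical sketch: estimate the angle a student forms with their arms from
# skeletal joint positions, then pick a display color, as the article describes
# the Kinect screen doing. Thresholds and colors here are illustrative.
import math

def angle_deg(vertex, a, b):
    """Angle at `vertex` between rays toward points a and b, in degrees."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def color_for(theta, tolerance=5.0):
    """Classify the angle so the on-screen projection can change color."""
    if abs(theta - 90.0) <= tolerance:
        return "green"  # right angle, within a tolerance band
    return "red" if theta < 90.0 else "blue"  # acute vs. obtuse

# Example: torso at the origin, one arm along each ray
theta = angle_deg(vertex=(0.0, 0.0), a=(1.0, 0.0), b=(1.0, 1.0))
print(round(theta), color_for(theta))  # prints: 45 red
```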


UW Computer Scientists Enhance Robotic Manufacturing
University of Wisconsin-Madison News (01/20/15) Jennifer Smith

Research by University of Wisconsin-Madison (UW-Madison) scientists strives to improve the efficiency and naturalness of human-robot collaboration in the manufacturing sector. In conjunction with Massachusetts Institute of Technology (MIT) researchers, UW-Madison professor Bilge Mutlu seeks to determine best practices for integrating human-robot teams in manufacturing environments, using a U.S. National Science Foundation National Robotics Initiative grant. Mutlu's team is building on earlier research on subjects such as gaze aversion in humanoid robots, robot gestures, and speech repair. MIT collaborator Julie A. Shah's complementary focus on human-robot interaction seeks to determine which tasks humans and robots should each execute. "Automated planning techniques can help bridge the gap in our capabilities and allow us to work more effectively as a team," Shah notes. Another partner in the research is Michigan furniture maker Steelcase, which aims to evolve its industrial systems by extending human-robot collaboration more broadly across its operations, according to the company's Edward Vander Bilt. Mutlu expects the Baxter industrial robot platform, which includes two arms and a tablet-like panel for eyes that supply cues to help human workers anticipate the machine's future actions, to transform manufacturing "by making human-robot collaboration better and more natural as they work together."


Haptic Technology: The Next Frontier in Video Games, Wearables, Virtual Reality, and Mobile Electronics
Gizmag (01/15/15) Richard Moss

Haptic technology has only recently started to move from a niche field to a truly transformative component of human-computer interaction, with innovators such as Linköping University's Mathias Nordvall exploring new ways for haptics to bridge the gap between the tangible and digital domains. Nordvall sought to build a completely haptic video game, with no reliance on video or audio output, out of off-the-shelf consumer technology. The result, based on the game of Pong, gives players the ability to track a ball's movements through specific vibrational pulses relayed via a handheld Xbox 360 controller. Another project to develop a standard haptic interface has yielded Sightlence, a haptic editor designed to generate haptic output signals for use in apps and games on any device. Sightlence could spur the creation of new haptic experiences or languages in the areas of virtual reality, wearable computing, and touchscreen technology. Haptics also could revolutionize virtual reality environments, which are convincing from a visual and audio perspective but still lack the tactile sensations needed to become fully immersive. In these and other applications, Nordvall says tactile feedback could be used either to enhance a user experience or to completely replace elements of the user interface, so the data communicated by haptics need not be rendered through sound or video.
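
To make the idea of tracking a ball purely by touch concrete, here is a minimal sketch assuming a standard two-motor rumble controller; the mapping and function names are illustrative, not Nordvall's implementation:

```python
# Illustrative sketch (not Nordvall's code): encode a Pong ball's position as
# two rumble-motor intensities, so a player can track the ball without video
# or audio. Intensities are normalized to [0, 1].
def rumble_levels(ball_x, ball_y, paddle_y, field_w=1.0, field_h=1.0):
    """Left motor: ball's proximity to the player's side.
    Right motor: ball's vertical offset from the paddle."""
    proximity = 1.0 - min(max(ball_x / field_w, 0.0), 1.0)
    offset = min(abs(ball_y - paddle_y) / field_h, 1.0)
    return proximity, offset

# Ball near the player's side and well above the paddle: strong pulses on both
left, right = rumble_levels(ball_x=0.1, ball_y=0.9, paddle_y=0.2)
print(f"left={left:.2f} right={right:.2f}")  # prints: left=0.90 right=0.70
```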


Looking Ahead: Fitness Tech in 2015
News@Northeastern (01/22/15) Jason Kornwitz

In an interview, Northeastern University professor Stephen Intille, co-founder of Northeastern's personal health informatics doctoral program, discusses upcoming developments in fitness technology. Intille says more watch-like devices that collect fitness data and perform other personal and productivity functions will be introduced in 2015. "Industry will compete to add an increasing number of sensors to the devices, measuring information such as body motion, location, heart rate, galvanic skin response [i.e., sweating], and skin temperature," he predicts. Intille also thinks devices will become more streamlined and stylish, while developers will work out user interface conventions to promote ease of use. "The biggest surprise in 2015 may not be how consumers use these devices for health, but rather an increasing awareness that the devices improve the utility of the mobile phone," he says. Intille describes his research areas as enhancing health behavior measurement with mobile phones and wearable devices, and applying that information toward the creation of interventions that help people make beneficial behavior changes. "We are exploring how mobile phones and smartwatches can be used to incrementally build up mathematical models of a person's typical behavior so that we can identify habits," he says. His team also is developing concepts for using real-time data on a person's activities to influence behavior by delivering computer- and human-generated feedback at actionable points of decision.
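
As a rough illustration of what incrementally building up a model of typical behavior could look like, here is a minimal sketch using simple frequency counts; the class, threshold, and activity names are hypothetical and far cruder than the group's actual methods:

```python
# Hypothetical sketch: incrementally count when an activity occurs by hour of
# day, then flag hours that account for most occurrences as likely habits.
# A real model would be much richer (location, context, sensor fusion, etc.).
from collections import Counter

class HabitModel:
    def __init__(self, min_share=0.5):
        self.by_hour = Counter()    # (activity, hour) -> occurrence count
        self.totals = Counter()     # activity -> total occurrences
        self.min_share = min_share  # fraction of events to call an hour a habit

    def observe(self, activity, hour):
        self.by_hour[(activity, hour)] += 1
        self.totals[activity] += 1

    def habitual_hours(self, activity):
        """Hours accounting for at least min_share of this activity's events."""
        total = self.totals[activity]
        return [h for h in range(24)
                if total and self.by_hour[(activity, h)] / total >= self.min_share]

model = HabitModel()
for _ in range(10):
    model.observe("run", hour=7)    # morning runs, most days
model.observe("run", hour=18)       # one evening run
print(model.habitual_hours("run"))  # prints: [7]
```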


New Study Will Help Researchers Change Face of Military Training
U.S. Office of Naval Research (01/27/15) Eric Beidel

A study recently launched by the U.S. Office of Naval Research (ONR) and conducted by the University of Central Florida (UCF) could enable breakthroughs in next-generation avatars, robots, and other human surrogates for military training. Part of the research will involve installing a remote-controlled robot greeter that interacts with people passing through the lobby of UCF's Institute for Simulation and Training. It will be a component of ONR's three-year Human Surrogate Interaction program, which is investigating human engagement with physical, virtual, and other kinds of surrogates so officials can determine the best way to apply such technology to military training systems. "Marine Corps training concepts continue to merge virtual and live components to create the most realistic, effective, and affordable training for Marines," notes ONR's Peter Squire. "The way people react to and interact with the different surrogates in this study is crucial to understanding how we can improve our military training systems." ONR also is backing the development of the Avatar Mediated Interactive Training and Individualized Experience System (AMITIES), a framework through which a single person can control multiple surrogates via a specialized handheld user interface and head-tracking software.


Why Can't Robots Understand Sarcasm?
The Atlantic (01/22/15) Kevin Zawacki

Experts say artificial intelligence cannot be made capable of understanding sarcasm until humans know more about what sarcasm is. "There are all these issues [like]...social dynamics and power dynamics," notes Rutgers University professor Elisabeth Camp. She and Stanford University professor Noah Goodman say sarcasm's function depends on surrounding context, which machines have difficulty capturing. "Robots [are still] having difficulty understanding very clear, distinct commands as opposed to nuanced differences based on sarcasm," points out Massachusetts Institute of Technology professor Missy Cummings, who studies human interaction with systems. Cummings says, "You could do all the machine learning in the world on the spoken word, but sarcasm is often in tone and not in word." There also are nonverbal signals to consider, according to Cummings. She also suggests collaboration between people with different research backgrounds, such as engineers and comedians, is a necessary component of translating sarcasm into computer code. She speculates that, from an academic standpoint, robots equipped to understand any type of nuanced emotion may be at least two decades away. However, other researchers say sarcastic robots are not necessary. "The last thing I want my robot to be is sarcastic," says roboticist Sebastian Thrun. "I want them to be pragmatic and reliable--just like my dishwasher."


UHV Earns Grant to Study Human-Robot Interaction
Victoria Advocate (Texas) (01/24/15)

The U.S. Department of Defense and the U.S. Army have awarded a grant to the University of Houston-Victoria (UHV) Computation and Advanced Visualization Engineering (CAVE) lab to underwrite a project studying human-robot interaction. UHV School of Arts and Sciences dean Jeffrey Di Leo says the grant will enable UHV digital gaming and simulation program director Alireza Tavakkoli to continue his research into digital and virtual technology. Tavakkoli will collaborate with professor Li Chao, who will focus on developing a cloud-based infrastructure for the research, and Donald Loffredo, who will research process design and data analysis. "I've always been interested but haven't done a lot of research before in computer-human interaction," Loffredo says. "It's a hot topic with the U.S. military because robots can do things like deliver weapons systems." The grant was used to procure three advanced robots, software, and equipment for the CAVE lab's motion-capture systems, along with virtual reality gear. Tavakkoli says the three primary areas of concentration are immersive virtual reality in an intelligent environment, tele-robotics and teleoperations, and centralized processes on a high-performance server. The first element will translate the moves and actions of humans into a virtual setting, while the second involves enabling people to control robots through physical movement. The third effort will build a custom server to generate the necessary processing power.


New Prof Brings Major 'Serious Games' to Huddersfield
University of Huddersfield (01/21/15)

University of Huddersfield professor Minhua Ma specializes in the growing discipline of serious games, in which video-game technologies are applied toward reaching milestones in education and healthcare. Ma is a contributor to a new volume on serious games for healthcare, featuring an introduction she co-authored that notes such games have chiefly been employed as "a tool that gives players a novel way to interact with games in order to learn skills and knowledge, promote physical activities, support social-emotional development, and treat different types of psychological and physical disorders." Ma says the book is unique in its concentration on how augmented reality technology can play a role in healthcare by combining real-world and computer-generated images. She cites as an example an interactive augmented reality solution that enables Parkinson's disease patients to practice tasks in an immersive environment where they can engage with real-life items as well as virtual objects using their bare hands. Also cited in the book is the application of virtual reality to pain management via the creation of a soothing, immersive environment. Ma will chair the Sixth International Conference on Serious Games Development and Applications, where one topic of discussion will be the use of computer games for education and training.


System Encourages Creativity, Makes Robot-Design Fun
Purdue University News (01/16/15) Emil Venere

Purdue University researchers have developed HandiMate, a cardboard-robotic toolkit that enables children with no formal education in programming or electronics to create custom robots they control wirelessly with hand gestures. HandiMate relies on motorized "joint modules" equipped with wireless communicators and microcontrollers. Users create robots by attaching the modules to materials and objects such as cardboard, metal cans, and foam board with Velcro strips. Users then can control the robots like puppets with a glove-based gesture controller. "It brings out the creativity and imagination of the user," says Purdue University professor Karthik Ramani. "You don't have to learn programming or the electronics. Those things are in the background." The researchers tested HandiMate on 12 college students and seven children ages 10-15. "It provides a fun way for children to learn physics related to engineering design because physics is about motion and dynamics, statics, and mechanics and materials," Ramani says. The researchers presented their work at the ninth International Conference on Tangible, Embedded and Embodied Interaction, which took place Jan. 15-19 at Stanford University. "We call it embodied interaction because you almost feel like it is an extension of you," Ramani says. "This technology is easy for the user, but behind the scenes are software algorithms that understand the user's gestures and coordinate the motion of several robotic joints."
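
As a rough sketch of the glove-to-robot control loop the article describes, consider the following; the gesture names, command format, and send callback are hypothetical, not HandiMate's actual software:

```python
# Hypothetical sketch: translate recognized glove gestures into commands for
# motorized joint modules, in the puppet-like style the article describes.
GESTURE_TO_COMMAND = {
    "fist":       {"joint": "all",       "action": "stop"},
    "open_palm":  {"joint": "all",       "action": "reset"},
    "tilt_left":  {"joint": "left_arm",  "action": "rotate", "degrees": -30},
    "tilt_right": {"joint": "right_arm", "action": "rotate", "degrees": 30},
}

def dispatch(gesture, send):
    """Map a recognized gesture to a joint-module command and transmit it."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is not None:
        send(command)  # in practice, sent over the modules' wireless link

dispatch("tilt_left", send=print)
# prints: {'joint': 'left_arm', 'action': 'rotate', 'degrees': -30}
```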


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.

