Welcome to the June 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). This new service helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Wearable Computers Will Transform Language
IEEE Spectrum (05/28/14) Ariel Bleicher

Experts expect wearable computers and brain-computer interfaces (BCIs) to revolutionize communication and people's perception of the world, as these technologies become increasingly capable of understanding users. For example, Microsoft Research's Desney Tan says wearable computers will enable people to "sense and capture many more things about the world" and to relay those sensations in new ways. Carnegie Mellon University (CMU) professor Chris Harrison says the wearable computing era will be distinguished by people's ability to manually manipulate digital bits as if they were real objects. Some experts predict wearables will be truly transformative because they will learn things about people without any input from them, an example being a joint Yahoo-CMU project to develop an advanced personal assistant that CMU professor Justine Cassell says "will constantly be learning about you and becoming more personalized." The ultimate goal for many wearable computing researchers is to design machines that harness data from the wearer's brain and body to understand the world in human terms. For example, a BCI under development at the U.S. Army Research Laboratory categorizes images based on the brain's response to them. The scientists working on the project found that the system can read a subject's brain impulses to correctly identify targets with at least 85 percent accuracy. Some experts believe people will constantly wear BCIs in the future, along with myriad other technologies, to record their sensory input and produce digital representations of how they perceive the world.


Tech Pioneers
UDaily (DE) (05/27/14) Karen B. Roberts

University of Delaware (UD) alumnus Wayne Westerman and professor John Elias were recently named fellows of the National Academy of Inventors for their pioneering development of algorithms and touch imaging interface architecture currently used in many touchscreen devices. Their interface transformed how people interact with computers by incorporating scrolling, finger tracking, and gesture recognition. In an interview, Westerman says his work was inspired by the idea of training a neural network simulator with multi-finger patterns from a zero-force tablet. "Since I played piano, using all 10 fingers seemed fun and natural and inspired me to create interactions that flowed more like playing a musical instrument," he says. Westerman also emphasizes the important role Elias' expertise played in the prototyping and eventual fabrication of advanced input devices. "The algorithms we invented helped surface typing feel crisp, airy, and reasonably accurate despite the lack of tactile feedback," he says. "Hundreds of millions of people now utilize surface typing on mobile devices." Westerman says he also was inspired to incorporate input device ergonomic concepts by a biomechanics class in UD's physical therapy department. He and Elias co-founded FingerWorks, the first company to commercialize touch-sensitive imaging technology, with a line of 10-finger touch pads and keyboards that integrated typing, pointing, scrolling, and editing gestures within the same surface area.


Playful Behavior Surprises Researchers
University of Sydney (05/28/14) Victoria Hollick; Mandy Campbell

Public interactive displays (PIDs) encourage playful responses that University of Sydney researchers studying the displays did not anticipate, according to research leader and professor Judy Kay. She says the purpose of the study was to learn how to design PIDs for usability, and video recordings revealed "some people spent time playing, dancing, shuffling, and gesturing. So we carefully studied the videos of the people who spent the longest time." Kay says nearly three-quarters of the sample group engaged in some type of playful behavior when operating the display, with dancing being especially popular. "We developed an application that allowed passers-by to explore the information space using four simple gestures," she says. "To indicate its interactive capabilities, the display reflected passers-by as a mirror image in the form of a skeleton." Kay says the team identified three unique forms of playful behavior--dancing, locomotion, and gesturing. The University of Sydney's Martin Tomitsch says the findings have inspired the researchers to recommend strategies for how information displays can respond to such behavior, and notes the study stresses the need to comprehend the connection between the design and the assessment of new approaches to developing human-computer interaction.


Using Thoughts to Control Airplanes
Technical University of Munich (Germany) (05/26/14)

Researchers at the Technical University of Munich's (TUM) Institute for Flight System Dynamics and the Technical University of Berlin (TU Berlin) have demonstrated that brain-directed flight is feasible, as part of the European Union-funded Brainflight project. "With brain control, flying, in itself, could become easier," says project leader Tim Fricke. "This would reduce the workload of pilots and thereby increase safety. In addition, pilots would have more freedom of movement to manage other manual tasks in the cockpit." The pilots' brain waves are measured via electroencephalography (EEG) electrodes attached to a cap, and electrical potentials are deciphered and converted into control commands through an algorithm developed by scientists from TU Berlin's Physiological Parameters for Adaptation team. The flight simulator tests involved seven subjects with varying flight experience, and Fricke says one participant was able to follow eight out of 10 target headings with only a 10-degree deviation, while several participants also succeeded in landing in poor visibility. The next area of concentration for the TUM researchers is how the requirements for the control system and flight dynamics must be modified to accommodate the new control method. Pilots normally feel resistance in steering and must exert significant force when the loads induced on the aircraft become excessive, but the brain-control system currently does not support such feedback.
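The article does not detail the decoding algorithm, but the overall loop it describes can be sketched as follows: decoded mental commands adjust the aircraft's heading, and performance is judged against the 10-degree deviation tolerance mentioned in the simulator tests. The intent labels and the simple fixed-step update below are illustrative assumptions, not the Brainflight project's actual method.

```python
# Hypothetical control loop: decoded EEG intent labels become heading
# commands; deviation is checked against the article's 10-degree tolerance.
TOLERANCE_DEG = 10.0  # acceptable deviation from the target heading

def apply_intent(heading, intent):
    """Turn a decoded mental command into a new heading (degrees)."""
    step = {"bank_left": -5.0, "bank_right": 5.0, "hold": 0.0}[intent]
    return (heading + step) % 360.0

def on_target(heading, target):
    """True if the current heading is within tolerance of the target."""
    diff = abs((heading - target + 180.0) % 360.0 - 180.0)
    return diff <= TOLERANCE_DEG

heading = 90.0
for intent in ["bank_right", "bank_right", "hold"]:
    heading = apply_intent(heading, intent)
print(heading)                    # 100.0
print(on_target(heading, 105.0))  # True
```

A real decoder would also need the haptic feedback path the article notes is still missing, so the pilot can feel aerodynamic loads rather than only command them.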


Autodesk's Draco Lets You Animate an Illustration in Seconds
Fast Company (05/28/14) Mark Wilson

Autodesk's new Draco system enables users to animate still illustrations in seconds using infinitely repeating and oscillating "kinetic textures." With Draco, users draw a few of the objects they want to animate, indicate the spot where they would like the objects to start from, and the path or paths along which they want those objects to move. Speed, volume of the produced objects, and other aesthetic subtleties can be modified from there. Draco is mainly designed to animate large, repeating objects such as bubbles, steam, waves, and schools of fish. "Because our primary goal was to provide a simplified user interface, there were limitations on the extent of features that we wanted to provide," says Autodesk researcher Tovi Grossman. "The existing system is quite flexible in terms of the types of animation effects it can produce, [but] there would be a tradeoff in terms of the interface complexity if we wanted to enhance that flexibility even further." Draco was unveiled at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto.
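The "kinetic texture" mechanism described above amounts to an emitter that repeatedly spawns copies of a drawn object and moves them along a user-specified path, looping forever. The class and parameter names below are illustrative assumptions, not Draco's actual API.

```python
# Minimal sketch of a kinetic texture: an emitter spawns particles and
# advances them along a path of waypoints, wrapping around for infinite
# repetition (e.g., bubbles rising, fish swimming).
class KineticTexture:
    def __init__(self, path, speed=1.0, emit_every=2):
        self.path = path              # list of (x, y) waypoints to follow
        self.speed = speed            # waypoints advanced per tick
        self.emit_every = emit_every  # ticks between new particles
        self.particles = []           # each particle is a position along the path
        self.tick_count = 0

    def tick(self):
        """Advance the animation one frame."""
        if self.tick_count % self.emit_every == 0:
            self.particles.append(0.0)  # spawn a new copy at the path start
        # move each particle along the path, wrapping to loop forever
        self.particles = [(p + self.speed) % len(self.path)
                          for p in self.particles]
        self.tick_count += 1

    def positions(self):
        """Current (x, y) position of every particle."""
        return [self.path[int(p)] for p in self.particles]

bubbles = KineticTexture(path=[(0, 0), (0, 1), (0, 2), (0, 3)], emit_every=2)
for _ in range(4):
    bubbles.tick()
print(bubbles.positions())  # [(0, 0), (0, 2)]
```

Tuning `speed` and `emit_every` corresponds to the speed and object-volume controls the article mentions.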


Understanding Users Key to System Design, According to IST Professor
Penn State News (05/12/14) Stephanie Koons

Pennsylvania State University College of Information Sciences and Technology (IST) professor Frank Ritter has co-authored a book, "Foundations for Designing User-Centered Systems," which describes the fundamental physiological, psychological, and social underpinnings for system users' behavior while also explaining their influence on system design. Ritter stresses it is crucial for system designers to fully understand their users if they are to develop effective interactive systems. "What this book is trying to do is give you actionable theory and data of how people behave with technology," he says. Among the people the book targets are system designers, developers, and students taking system design and human-computer interaction (HCI) courses, and the book's co-authors include eBay Research Labs HCI director Elizabeth Churchill. Ritter, Churchill, and third co-author Gordon Baxter of the University of St. Andrews have collectively designed, developed, and performed research into interactive systems in aviation, consumer Internet, healthcare, e-commerce, industrial process control, and enterprise systems. Ritter says the book seeks to close the gap in system designers' knowledge of how people behave, which is often filled by folk psychology or commonsense psychology. He believes the book's insights will help readers design more usable, more useful, and more effective interactive systems.


Google Glass Adaptation Opens the Universe to Deaf Students
BYU News (UT) (05/27/14) Joe Hadfield

Brigham Young University (BYU) researchers led by professor Mike Jones have adapted Google Glass and several other types of eyewear to project sign language narration to enhance planetarium shows for hearing-impaired students. Jones' Signglasses project is partly funded by the U.S. National Science Foundation, and it involves testing the system during field trips by deaf high school students. Among the tests' findings is the need for the signer to be displayed in the center of one lens, contrary to researchers' assumption that users would prefer the video displayed at the top. The BYU team also is collaborating with Georgia Institute of Technology researchers to investigate the technology's potential to foster literacy in deaf students. "One idea is when you're reading a book and come across a word that you don't understand, you point at it, push a button to take a picture, some software figures out what word you're pointing at and then sends the word to a dictionary and the dictionary sends a video definition back," Jones says.


Come on Feel the Data (and Smell It)
The Atlantic (05/19/14) Luke Stark

Scientists are working to bring all of the human senses to digital interactions, turning information that people see or hear into more visceral data. Data visceralizations portray information using multiple senses, such as touch, smell, and taste, to provoke thoughts and feelings. These new ways of interpreting data will be increasingly important as the Internet of Things brings many more devices online and generates an ever-growing data volume. Device makers also are focusing on visceral designs, which use the look, feel, and sound of a device to create an immediate emotional impact. For example, the Muse EEG headband enables a smartphone or tablet to take a person's brainwaves as direct input. The tool could be used for a range of activities, from immersive gaming to pouring a cold beverage. Current mainstream digital media products and interaction designs focus on providing visceral reactions as a primary experience through gestural and haptic feedback. Game controllers that shake and rumble in conjunction with online play have become commonplace; these technologies can help make abstract information more meaningful by having a visceral impact that is appropriate for specific data and contexts. For example, devices could shudder when they pick up cookies that track a user's online activities.


'Smart Pills' With Chips, Cameras, and Robotic Parts Raise Legal, Ethical Questions
The Washington Post (05/25/14) Ariana Eunjung Cha

Interest is growing in ingestible or implantable computer chips that can monitor patients' bodily conditions in real time, but their advent is accompanied by legal and ethical concerns. Among the technologies approved by the U.S. Food and Drug Administration are a transponder containing a person's medical history that is injected beneath the skin, a camera pill that can look for tumors within the colon, and a smart pill system designed to ensure older people are taking their medication. The smart pill activates when it comes into contact with stomach acids, and functions with a patch patients wear on their torso. The patch picks up a unique 16-digit code the chip in the pill emits for five minutes after being ingested, and in turn transmits the data to a nearby tablet or smartphone. Meanwhile, in the planning or development stages are even more advanced systems, such as nanosensors that would inhabit the bloodstream and look for infection or other medical issues, and send appropriate alerts to smartphones. Ethical issues raised by such technologies include their potential to be used coercively as an identification method, or to force people to take medicine they do not want to take. However, advocates promote the technology as providing both life-saving and cost-saving measures.
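The ingestion-event flow described above (chip emits a 16-digit code for five minutes, worn patch captures it, patch relays it to a nearby device) can be sketched as below. The field names and relay format are assumptions for illustration, not the approved system's actual protocol.

```python
# Sketch of the smart-pill flow: capture the code during the five-minute
# emit window, then format the event for a nearby tablet or smartphone.
EMIT_WINDOW_SEC = 5 * 60  # the chip transmits for five minutes after ingestion

def patch_capture(code, seconds_since_ingestion):
    """Return an event record if the code is heard within the emit window."""
    if len(code) != 16 or not code.isdigit():
        raise ValueError("expected a 16-digit pill code")
    if seconds_since_ingestion > EMIT_WINDOW_SEC:
        return None  # signal has stopped; nothing to relay
    return {"pill_code": code, "t": seconds_since_ingestion}

def relay_to_device(event):
    """Format the captured event for the paired mobile device."""
    return f"medication-taken pill={event['pill_code']} at t={event['t']}s"

event = patch_capture("1234567890123456", 120)
print(relay_to_device(event))  # medication-taken pill=1234567890123456 at t=120s
print(patch_capture("1234567890123456", 600))  # None (window expired)
```

The per-pill code is also what makes the coercion concerns in the article concrete: the same identifier that confirms adherence could, in principle, identify the person carrying it.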


Tackling the Limits of Touch Screens
The New York Times (05/17/14) Anne Eisenberg

New interfaces that take a cue from analog tools could improve users' acquisition and retention of knowledge when working with touchscreens. For example, Tactus Technology is working on a keyboard that uses a fluid-based system for shape-shifting keys that pop up from the screen's surface as necessary, then recede when they are not needed. York University professor Scott MacKenzie thinks such tactile features could help nurture muscle memory and enhance accuracy. He notes many people who type on flat glass screens must keep their eyes focused on the surface to hit the proper key, and Karlstad University's Erik Wastlund points out this leads to less concentration on composition. Meanwhile, Scientific American editor in chief Mariette DiChristina says electronic textbooks aim to improve readers' memory and comprehension by "spacing what you are learning over time with feedback--right or wrong--to immediately help you understand what you know and what you don't know." For example, Macmillan Higher Education's Susan Winslow says pop quizzes can be inserted into e-pages. Another disadvantage of touchscreens is their inability to help students generate a cognitive map to fit new knowledge, and researchers at the Korea Advanced Institute of Science and Technology have produced a prototype interface that enables students to flip through e-book pages as they would through a printed book.


Tufts Researchers Develop Mind-Reading Headband
WBUR.org (05/16/14)

Tufts University researchers are developing a system in which a computer can adapt to a user's mental state via a headband that reads brain signals. "What we've been best at measuring so far is mental workload or cognitive workload," says Tufts professor Rob Jacob. "If we find out that your workload is going up or going down, we could tweak the computer a little bit to suit your current state." The headband can tell the computer whether the user is stressed and overworked, or at ease and ready to perform more tasks, by measuring a combination of blood volume, blood flow, and blood oxygenation levels in the brain, says Tufts professor Sergio Fantini. "If we can actually tell how hard your brain is working, we might be able to figure out the right moment to interrupt you," says the Human-Computer Interaction Lab's Evan Peck. Among the professions Jacob says might benefit from such technology are air traffic control and driving trains, buses, and trucks, where concentration is key and a system such as Tufts' can scale back the workload in response to user stress. Jacob envisions the technology developing to the point where it will not be a costly and bulky tool, and says, "we're working on a way to integrate this with Google Glass or a baseball hat or something."


Microsoft's Susan Dumais '75 Is a Big Reason Why, Computer-Wise, You Find What You Seek
Bates College News (05/01/14) Jay Burns

Microsoft Research Lab scientist Susan Dumais has earned praise and prestige for her research into human-computer interaction and information retrieval, and for her efforts she recently was named the 2014-2015 Athena Lecturer by the ACM Council on Women in Computing (ACM-W). Dumais' work on simulating the function of human memory was the jumping-off point for her career goal of investigating how human-machine interaction can be improved. Following the exploration of how people retrieve information from their memories, Dumais focused on how people retrieve information from computers and other outside resources. During her time at Bell Labs, Dumais co-wrote a paper showing that computer users had to enter the precise, correct word in order to retrieve what they wanted from their computer. She later co-authored a paper offering Latent Semantic Indexing (LSI) as a solution to the problem; LSI remains an indexing and retrieval technique essential to online search, inferring that words appearing in similar contexts have similar meanings and should therefore be matched against each other in search results. "LSI induces the similarity among words by looking at the company they keep," Dumais says. Her latest work at Microsoft examines how Web content changes over time, how people revisit Web pages, and how searching can be enhanced by modeling the search's context.
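Dumais' "company they keep" intuition can be shown with a toy example: words that co-occur with similar words get similar context vectors, so a query for "car" can match a document about an "automobile" even though the two words never appear together. Full LSI additionally applies a truncated singular value decomposition to the term-document matrix; this dependency-free sketch stops at raw co-occurrence vectors to illustrate the core idea, and the corpus and function names are invented for the example.

```python
# Toy illustration of contextual word similarity, the intuition behind LSI:
# build co-occurrence vectors and compare them with cosine similarity.
from collections import Counter
from math import sqrt

docs = [
    "car engine repair",
    "automobile engine repair",
    "fruit salad recipe",
]

vocab = sorted({w for d in docs for w in d.split()})

def term_vector(word):
    """Count how often `word` co-occurs with each vocabulary word."""
    counts = Counter()
    for d in docs:
        words = d.split()
        if word in words:
            counts.update(w for w in words if w != word)
    return [counts[v] for v in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# "car" and "automobile" never co-occur, yet their contexts match exactly.
print(cosine(term_vector("car"), term_vector("automobile")))  # 1.0
print(cosine(term_vector("car"), term_vector("fruit")))       # 0.0
```

The SVD step in real LSI compresses these vectors into a low-dimensional space, which smooths the counts and surfaces similarities that are only indirect in the raw matrix.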


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.



