Welcome to the May 2013 SIGCHI edition of ACM TechNews.


ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and on joining ACM, please visit the ACM website.

HEADLINES AT A GLANCE


New Keyboard for Touchscreens
Max Planck Gesellschaft (04/17/13)

Thumb-typing on touchscreen devices is faster with a new KALQ keyboard that researchers at the Max Planck Institute for Informatics have developed. Using computational optimization techniques and a model of thumb movement, the researchers say they found a layout that enables users to type 34 percent faster than with a traditional layout after a brief practice period. Recognizing that on a split-QWERTY layout many common words must be typed with only one thumb, slowing entry, the researchers sought a two-thumb text-entry layout that would improve typing performance and minimize thumb strain. "Experienced typists move their thumbs simultaneously: while one is typing, the other is approaching its next target," says the Max Planck Institute's Antti Oulasvirta. "We derived a predictive model of this behavior for the optimization method." KALQ places all vowels except "y" in the area for the right thumb, and the left thumb is assigned more keys. Users are trained to move both thumbs simultaneously, so that as one thumb is typing, the other moves to its next key. The researchers believe KALQ offers sufficient improvements over a QWERTY keyboard to motivate users to switch.
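The search the article describes, optimizing a key assignment against a predictive model of two-thumb movement, can be sketched as a simple hill climb. Everything below is illustrative: the grid, the cost model, and the bigram table are toy stand-ins, not the researchers' actual optimizer or data.

```python
import random

# Toy layout optimization in the spirit of the KALQ work: assign letters to a
# grid of key positions and search for an assignment that lowers predicted
# two-thumb travel time. Grid, bigrams, and cost model are simplified stand-ins.

KEYS = [(row, col) for row in range(3) for col in range(9)]  # 3x9 grid
LETTERS = list("abcdefghijklmnopqrstuvwxyz")                 # 26 letters

# Hypothetical bigram frequencies; real work would use a large text corpus.
BIGRAMS = {("t", "h"): 0.05, ("h", "e"): 0.04, ("a", "n"): 0.03,
           ("i", "n"): 0.03, ("e", "r"): 0.02}

def side(pos):
    return 0 if pos[1] < 5 else 1   # left thumb: columns 0-4, right: 5-8

def cost(layout):
    """Predicted entry time: same-thumb bigrams pay the travel distance;
    alternating-thumb bigrams are cheap because the other thumb pre-moves."""
    total = 0.0
    for (a, b), freq in BIGRAMS.items():
        pa, pb = layout[a], layout[b]
        if side(pa) == side(pb):
            dist = abs(pa[0] - pb[0]) + abs(pa[1] - pb[1])
        else:
            dist = 1
        total += freq * dist
    return total

def optimize(iterations=20000, seed=0):
    """Hill-climb over random key swaps, keeping swaps that don't raise cost."""
    rng = random.Random(seed)
    layout = dict(zip(LETTERS, KEYS))
    best = cost(layout)
    for _ in range(iterations):
        x, y = rng.sample(LETTERS, 2)
        layout[x], layout[y] = layout[y], layout[x]
        c = cost(layout)
        if c <= best:
            best = c   # accepting equal-cost swaps lets the search drift off plateaus
        else:
            layout[x], layout[y] = layout[y], layout[x]  # undo a worse swap
    return layout, best
```

The key design point the article highlights is that the cost model rewards thumb alternation, which is what pushes the vowels onto one side of the layout.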


Futuristic Projection Technology Could Be Used Anywhere
Monash University (04/11/13)

IITB-Monash Research Academy Ph.D. candidate Shamsuddin N. Ladha is developing projection technology that turns non-ideal surfaces into display surfaces. Projections will display on surfaces such as office cubicles and room corners without blur or distortion. "This is an holistic approach to improve projection quality, both visual and interactive, on common surfaces using off-the-shelf components and software," Ladha says. Moreover, he is incorporating natural interaction capability into the projection technology. "For example, if a presenter moves and obstructs part of the image, the projector will sense the disruption and adapt to it by moving the display," Ladha says. Users will be able to annotate on the display during their presentation. He notes the technology could be used in the office for business meetings, in the home to watch TV, in seminars, and in life-size gaming and simulation environments. "I also see applications in virtual or augmented reality environments, because the technology will be able to cope with multi-planar surfaces," Ladha says.


New Research: Computers That Can Identify You by Your Thoughts
UC Berkeley NewsCenter (04/03/13)

University of California, Berkeley researchers are studying the possibility of brainwave-based computer authentication that could supplant traditional passwords, enabling users to simply think their password to gain access. Although computer scientists have been suggesting biometric computer authentication since the 1980s, such systems have proven slow, intrusive, and costly, and consequently have not gone mainstream. However, advances in biosensor technologies have made brainwave measurement for computer authentication more feasible. For example, traditional clinical EEGs required dense arrays of electrodes to record up to 256 channels of EEG data, whereas new consumer-grade headsets rely on a single dry-contact sensor attached to the user's forehead to offer a single-channel EEG signal from the brain's left frontal lobe. The researchers used the NeuroSky MindSet, which connects wirelessly via Bluetooth. The technology is secure, accurate, and reproducible enough to replace passwords, and users will be willing to use it, the researchers say. A critical factor to widespread acceptance of brainwave authentication systems is finding a mental task that users are willing to repeat daily, and trials indicate that the task should be simple but not boring. In tests, favorite tasks were counting objects of a certain color and imagining singing a preferred song.
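The matching step behind such a system can be sketched as comparing a fresh single-channel recording against enrolled per-user templates and accepting only a sufficiently close match. The feature choice (band powers) and the similarity threshold below are assumptions for illustration, not the study's actual protocol.

```python
import numpy as np

# Hedged sketch of EEG "pass-thought" matching: summarize a recording as power
# in a few frequency bands, then accept the best-matching enrolled template
# only if its cosine similarity clears a threshold. All parameters are
# illustrative assumptions.

def band_powers(eeg, fs=128, bands=((4, 8), (8, 13), (13, 30))):
    """Summarize a recording as power in theta/alpha/beta bands."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

def authenticate(sample, templates, threshold=0.95):
    """Return the best-matching user id, or None if no template is close enough."""
    feat = band_powers(sample)
    feat = feat / np.linalg.norm(feat)
    best_user, best_sim = None, threshold
    for user, tmpl in templates.items():
        unit = tmpl / np.linalg.norm(tmpl)
        sim = float(feat @ unit)       # cosine similarity of unit vectors
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user
```

Rejecting below-threshold matches rather than always returning the nearest user is what makes this an authentication decision rather than mere identification.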


These Adorable Robots Are Making a Documentary About Humans. Really.
Wired News (04/17/13) Angela Watercutter

Massachusetts Institute of Technology (MIT) roboticist Alex Reben created small robots called BlabDroids that are working on perhaps the first documentary directed and filmed by robots, in an effort to test MIT computer scientist Joseph Weizenbaum's ELIZA effect theory that people tend to anthropomorphize computers and engage emotionally with artificial intelligence. At last month's Tribeca Film Festival in New York, about 20 BlabDroids queried attendees with intensely personal questions. The self-propelled droids were equipped with digital cameras, speakers that played pre-programmed questions, and a button that triggered the next question. Reben teamed with filmmaker Brent Hoff on the project, designing the BlabDroids to seem comforting and nonjudgmental. The robots tested their cinematographic skills at the International Documentary Film Festival Amsterdam, and Hoff says the most interesting finding is "how people seem so hungry to share themselves" in ways they typically do not. "People are constantly being asked to like someone's Kickstarter page or some new dumb show, but it is unheard of in human culture in 2013 for anyone to be asked their opinion on the difference between living and existing, or what's the last nice thing they have done, sadly," Hoff says. "So perhaps the real use of this project will be to get some robots out into the world and let people express themselves in an ongoing way."


What Can Your Clothing Do?
Concordia University (04/15/13) Emily Essert

Concordia University professor Joanna Berzowska's Karma Chameleon project is investigating the uses of smart textiles with computerized fabrics. In collaboration with the École Polytechnique de Montréal's Maksim Skorobogatiy, Berzowska has developed interactive electronic fabrics capable of capturing and storing the body's power, which can change the garments' color and shape in response to physical movement. Significantly, these composite fibers are embedded with electronic or computer functions, instead of just having these functions attached to the textile. Multiple layers of polymers interact with one another when stretched and drawn out. Although actual garments are not likely to appear for another 20 or 30 years, the fabric could eventually lead to garments that change appearance on their own or use energy from human movement to charge electronic devices. Whether people want to wear clothes that think for themselves remains to be seen, and Berzowska was to address this and other issues surrounding human-computer interaction at the Smart Fabrics 2013 conference in San Francisco. “Our goal is to create garments that can transform in complex and surprising ways—far beyond reversible jackets, or shirts that change color in response to heat," Berzowska says. "That's why the project is called Karma Chameleon.”


Safe Texting While Walking? Soon, There May Be an App for That
Technology Review (04/15/13) Rachel Metz

Researcher Juan-David Hincapié-Ramos at the University of Manitoba's human-computer interaction lab is developing CrashAlert, an app that aims to improve the safety of walking and texting at the same time. CrashAlert uses a depth-sensing camera to detect obstacles and alert users before a collision occurs. "People aren't going to just stop texting and walking, and in order to incorporate [cellphones] into our everyday new habits, they have to help with the things they take away from us, like peripheral vision," says Hincapié-Ramos. In trials, volunteers used an Acer tablet computer with a Microsoft Kinect attached for depth sensing, and carried a laptop and a large battery to power the Kinect in a backpack. The volunteers then attempted to navigate a crowded cafeteria while playing a game on the tablet that simulated the concentration level of texting. In addition, other volunteers were dispatched to ensure that each participant encountered at least four obstacles. When the camera sensed an object within 2 meters, red squares appeared on the participant's screen to alert them to the obstacle. Hincapié-Ramos says CrashAlert helped subjects feel safer and move away from obstacles earlier, without hindering game performance. Now the researchers are working on a self-contained prototype with improved software, and Hincapié-Ramos says phone makers could incorporate obstacle-sensing into handsets, giving smartphones more awareness of surroundings.
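The core alerting logic described, flag anything the depth camera sees within 2 meters and mark where on the screen it is, reduces to a threshold over a depth frame. The frame shape, column count, and zero-means-no-reading convention below are Kinect-style assumptions, not details from the article.

```python
import numpy as np

# Illustrative sketch of CrashAlert's alerting step: divide a depth frame into
# coarse columns and flag any column whose nearest valid reading is under 2 m.
# The 2 m threshold follows the article; the rest is a stand-in for a real feed.

THRESHOLD_MM = 2000  # alert when an obstacle is closer than 2 meters

def obstacle_columns(depth_mm, n_cols=8):
    """Return indices of screen columns containing a too-close obstacle."""
    h, w = depth_mm.shape
    alerts = []
    for i in range(n_cols):
        col = depth_mm[:, i * w // n_cols:(i + 1) * w // n_cols]
        valid = col[col > 0]           # 0 = no reading on Kinect-style sensors
        if valid.size and valid.min() < THRESHOLD_MM:
            alerts.append(i)           # draw a red square over this column
    return alerts
```

Returning column indices rather than a single boolean is what lets the UI place the red squares on the side of the screen where the obstacle actually is.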


Non-Invasive Brain-to-Brain Interface: Links Between Two Brains
KurzweilAI.net (04/08/13)

Researchers at Brigham and Women's Hospital and Harvard Medical School have created a non-invasive brain-to-brain interface (BBI) that connects a human brain with a rat brain. The team's goal was to develop a system that would enable a human to remotely flick a rat's tail, without using direct connections between humans and rats or direct connections to the brains. The system includes a brain-computer interface with EEG sensors and a computer linked to the human brain. To make the rat tail move, volunteers looked at a strobed image and EEG sensors detected the visual evoked potentials synchronized with the light. The EEG signal traveled through a digital bandpass filter centered at the flickering frequency to negate other brain signals, and a computer analyzed the resulting signals, transmitting a signal to the rat in response to a significant detection. Although this trial involved a simple on-off signal, the team believes it would be possible to detect hand movements via multiple EEG signals or real-time fMRI, which could lead to mirror-image control of the rat's limbs. Ultimately, the researchers believe a BBI could link two awake humans with a bidirectional, long-distance connection over the Internet. The information also could be sent back to the originating person to enable active control or modification of specific neural processing and associated cognitive behavior.
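The detection step described, isolating EEG power at the strobe frequency and firing a trigger on a significant detection, can be sketched with a simple spectral test. The 15 Hz flicker rate, sample rate, and dominance ratio below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Sketch of steady-state visual-evoked-potential detection: compare the EEG
# power in a narrow band around the strobe frequency with the background
# spectrum, and trigger when the band clearly dominates. Parameters are
# assumptions for illustration.

FS = 250          # sample rate in Hz (typical for EEG; an assumption here)
FLICKER_HZ = 15   # strobe frequency the visual evoked potential locks onto

def detect_ssvep(eeg, ratio=5.0):
    """Return True if power at the flicker frequency dominates the spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    band = spectrum[np.abs(freqs - FLICKER_HZ) < 0.5].sum()        # narrow band
    rest = spectrum[(freqs > 1) & (np.abs(freqs - FLICKER_HZ) >= 0.5)].mean()
    return bool(band > ratio * rest)
```

Comparing the narrow band against the average of the rest of the spectrum plays the role of the article's bandpass filter: activity at other frequencies cannot trigger the rat-side stimulus.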


IST Doctoral Student Looks at How MOOCs Are Changing the Nature of Education
Penn State News (04/11/13) Stephanie Koons

Pennsylvania State University's College of Information Sciences and Technology (IST) doctoral student Michael Marcinkowski is studying the impact of learning interactions in an online environment versus a traditional classroom, with the goal of improving the design of educational systems. "What our work tries to do is bring an IST perspective, one that is technologically and sociologically focused, to bear on questions of online education research," says Marcinkowski. "In particular, we are looking to understand how students and instructors can form a productive dialogue and the ways the systems themselves come to be part of the dialogue." Human-computer interaction research is generating design and approach questions that impact educational goals. One of Marcinkowski's goals is to study the ways in which online education instructors and students interact with technology through an interpretive process within a broader cultural context, especially with regard to Penn State's recently adopted Coursera massive open online course (MOOC) platform. "It is necessary to approach educational software not simply as a toolset that is able to fit a specific set of requirements, but rather as something capable of being evocative, meaningful, and subject to interpretation," Marcinkowski says.


Dances With Robots
Boston University (04/08/13) Rich Barlow

Researchers at Boston University's Intelligent Mechatronics Lab have designed robots that dance by mapping the coordinates of actual dancers to enable robots equipped with motion sensors to read moves and respond according to the programming. "The ultimate goal is to understand human reaction to gestures and how machines may react to gestures," says Intelligent Mechatronics Lab founder John Baillieul. He says the research could enable robots to step in for people in hazardous jobs such as treacherous rescues and repairs in dangerous environments. The difficulty is to create robots that can perform tasks with some autonomy and react to situations for which they are not exactly programmed. Baillieul says the machines need “massive experiential data sets” to enable them to respond to numerous situations. He notes that robots are not able to learn from experience and override knowledge when situations require it as humans can. To remedy this, the lab is exploring ways to imbue robots with the human ability to communicate nonverbally, such as by using sensors to read body language.


Mind-Controlled Devices Reveal Future Possibilities
National Science Foundation (04/10/13) Valerie Thompson

University of Minnesota scientists have developed a non-invasive brain-computer interface (BCI) that could help individuals with amputated limbs, paralysis, and other movement-restricting impairments. Using the BCI, volunteers can precisely control the flight of simulated and small model helicopters with their thoughts. Led by Biomedical Functional Imaging and Neuroengineering Laboratory director Bin He, the researchers previously demonstrated that volunteers with a cap containing electroencephalography sensors could fly a virtual helicopter in real time using only their minds. Volunteers were asked to steer a virtual helicopter through a series of rings that appeared on a screen while the team studied the degree to which the sensors detected intended movement. The volunteers flew the virtual helicopters through more than 85 percent of the rings. "To my knowledge, this was the first time anyone had used a non-invasive approach to simulate movement in three dimensions," He says. The researchers recently replaced the computer-simulated helicopters with actual AR.Drone quadcopters, and volunteers were required to fly them quickly and continuously through two suspended rings as many times as possible in four minutes, using video feedback from a forward-facing camera on the quadcopter. Beyond proving that the BCI is effective, the experiment demonstrates the breadth of applications the technology could have for people.


Inside Microsoft's Edison Lab: Creating a See-Through Computer
Pocket-Lint.com (04/02/13) Stuart Miles

Microsoft's Edison Lab is creating a see-through computer that would resemble a plate of glass, without wires, cameras, sensors, or projections. Stevie Bathiche, director of research at Microsoft's applied sciences lab, says the project starts with a very flat wedge-like lens that shines images upwards to be projected outwards. The lens is located behind a display and captures information that is sent to a sensor in the display base. The angle of the lens enables the computer to see and record events in front of the screen. One challenge is eliminating the echo of the screen through which filming occurs, which dominates the image if not removed. Eliminating this video echo enables the construction of a flat, see-through display that is cost-effective and works regardless of size, which has been a hurdle for Samsung's transparent OLED technologies. Microsoft believes that gesture is the key to future computer interfaces, and the computer tracks movement over the screen, assessing where the user is in relation to the screen and relaying information accordingly. The final challenge is enabling the computer to track what users see as they move their heads, as if looking through a glass window frame.


Micro Transistor Prototypes Made at Cornell Map the Mind
Cornell Chronicle (04/11/13) Blaine Friedlander

Scientists at the Microelectronics Center of Provence, France, have created microscopic organic transistors that amplify and record signals inside the brain, in an advance that improves the ability to map the human mind. Using earlier prototypes developed at the Cornell NanoScale Science and Technology Facility (CNF), the French scientists created transistors that offer signals with 10-times-higher quality than current electrode technology. Such recordings are used in brain-machine interfaces to help paralyzed people control prosthetic limbs, and to find brain regions that cause seizures in epileptic patients. In addition, the recordings help to map the brain to remove tumors. The scientists used CNF’s lithography and characterization suite of tools. “To understand how the brain works, we record the activity of a large number of neurons. Transistors provide higher-quality recordings than electrodes—and, in turn, record more neuronal activity,” says the Microelectronics Center's George Malliaras. “The CNF prototyping allowed us to skip having to reinvent the wheel and saved us precious time and money.”


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.



