Welcome to the May 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). This service is a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

HEADLINES AT A GLANCE


Cyborg Angst: 5 Ways Computers Will Perplex Us in 2039
New Scientist (04/30/14) Aviva Rutkin

Computer-human interaction (CHI) researchers at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto speculated on computer challenges that might exist in 2039, especially if technologies implanted in the human body become the norm. IBM researcher Michael Muller considered what happens when monitors implanted on internal organs pool knowledge to offer health tips. He said determining how to fit the required artificial intelligence in a package safe and small enough for implantation was the primary challenge. Meanwhile, researchers at the University of California, Irvine have conceived of Plantastic, a system in which nanochips on plants feed data to a site that uses the information to advise people on what crops would grow best in which environment. Another speculative project focuses on bionic digits added to human hands to optimize dexterity, and concludes the optimal number of fingers is 12.5, with six normal-sized fingers on each hand and the dominant hand having an additional half-sized finger that can be moved with six degrees of freedom. The Institute for Pervasive Computing's Andreas Reiner speculated that self-driving automobiles will inhibit people's desire to drive by 2039, possibly leading to artificial recreation parks where people can still enjoy manual driving. Finally, experts pondered future brain implants that could enable people to record experiences and share memories with each other, and devices to remind them of things they forgot.


Human Media Lab Unveils Foldable Smartphone Technology
Kingston Herald (Canada) (04/28/14) Dick Mathison

Queen's University's Human Media Lab debuted its PaperFold foldable smartphone technology at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto. The smartphone can open up to three flexible displays that provide additional screen space when required. The three detachable electrophoretic displays enable the phone to be linked in various configurations that can emulate either a notebook computer format or a foldout map. Queen's professor Roel Vertegaal says each of PaperFold's displays can function independently or as part of a larger screen, which "allows multiple device form factors, providing support for mobile tasks that require large screen real estate or keyboards on demand, while retaining an ultra-compact, ultra-thin and lightweight form factor." PaperFold can automatically identify its shape and produce preconfigured changes to each display's graphics. For example, when the smartphone is folded into the shape of a three-dimensional (3D) building on a map, it detects a Google SketchUp 3D model of the building and turns PaperFold into a 3D-printable architectural model. "The PaperFold smartphone adopts folding techniques that [make] paper so versatile, and employs them to change views or functionality of a smartphone, as well as alter its screen real estate in a flexible manner," Vertegaal says.


Smart Cities: Using Data to Shape Our Urban Environments
CIO Australia (04/29/14) Rebecca Merrett

Organizations in Australia are leveraging advanced analytic tools, data modeling, and other technologies to help build smarter, data-driven cities. For example, researchers at National ICT Australia (NICTA) gathered real-time data from trains and shipping containers and interviewed stakeholders to construct a computer model of trains coming in and out of Port Botany. They used smart scheduling to show the rail line did not require upgrades for another 10 to 20 years, saving hundreds of millions of dollars. "The state-of-the-art model we developed [showing] how all the parts interact showed that the rail infrastructure does not appear to be the main bottleneck," says NICTA's Dean Economou. "That is very forward thinking of the port [workers] to ask us to analyze the system in this way." Preventing crashes and other incidents with smart safety and emergency services is another area of focus. Transport for New South Wales' John Wall says collecting weather data, as well as information about the driver and car via an opt-in form, could be used to encourage drivers to avoid certain roads during severe wet conditions or to take public transport when they are planning their trips. Another technology to enhance road safety is connected vehicles, using sensors that detect rough roads or slippery wheels and can alert other connected vehicles. Also underway are smart grid utilities to manage and coordinate electricity provision by combining smart grid-generated data with data from weather forecasts, TV networks, city cultural event planners, and other sources.


Microsoft's Prototype Keyboard Understands Gestures
IDG News Service (04/29/14) Nick Barber

Microsoft unveiled a prototype keyboard capable of interpreting basic hand gestures at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto. Microsoft researcher Stuart Taylor says the keyboard's main function is to enable users to keep their hands on or very close to it while typing and using input gestures. The device uses 64 sensors to detect the movement of hands as they brush over the top of the keyboard. For example, swiping a hand over the left or right side can summon left- and right-side menus in Windows 8. Some of the gestures can substitute for keyboard shortcuts, such as the Alt and Tab combination for switching between applications. "What we've found is that for some of the more complicated keyboard shortcut combinations, performing gestures seems to be a lot less overhead for the user," Taylor notes. However, he cautions the keyboard is not designed to be a mouse replacement. "It's less about fine-grain navigation, which would still be performed with a mouse or touchpad," Taylor says. The keyboard's sensors are paired, with one sensor emitting infrared light and the other reading the light reflected back, similar to the technology in Microsoft's Kinect gaming system.
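To make the swipe-detection idea concrete, here is a minimal sketch of how a left/right swipe might be classified from a row of reflected-infrared proximity readings. The sensor layout, thresholds, and function names are illustrative assumptions, not details of Microsoft's prototype.

```python
# Illustrative sketch: classify a left/right swipe from successive frames of a
# row of 64 proximity readings (higher value = more reflected IR = hand closer).

def hand_centroid(readings):
    """Weighted mean position of reflected-IR intensity across the sensor row,
    or None if no hand is detected above a noise threshold."""
    total = sum(readings)
    if total < 1.0:  # below noise floor: no hand present
        return None
    return sum(i * r for i, r in enumerate(readings)) / total

def classify_swipe(frames, min_travel=10.0):
    """Classify a gesture from successive sensor frames as 'left', 'right', or None."""
    centroids = [c for c in (hand_centroid(f) for f in frames) if c is not None]
    if len(centroids) < 2:
        return None
    travel = centroids[-1] - centroids[0]
    if travel > min_travel:
        return "right"
    if travel < -min_travel:
        return "left"
    return None

# A hand moving rightward across the 64-sensor row over five frames:
frames = [[1.0 if abs(i - pos) < 4 else 0.0 for i in range(64)]
          for pos in (10, 20, 30, 40, 50)]
assert classify_swipe(frames) == "right"
```

A detected "right" swipe could then be mapped to a Windows 8 side-menu action or a keyboard-shortcut substitute, as the article describes.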


Social Media Users Need Help to Adjust to Interface Changes
Penn State News (04/30/14) Matt Swayne

A study into social media users' struggle with interface changes found that social media companies can reduce stress and curb the erosion of user loyalty if they give users a greater sense of control. "What we need to think about is how social media companies can be more adaptive and how they can improve the longevity of their sites," says Pennsylvania State University (PSU) researcher Pamela Wisniewski. She worked with PSU professor Heng Xu and University of California, Irvine professor Yunan Chen to study users' response to the launch of Facebook's Timeline interface between 2011 and 2012. They determined the mandatory switch to Timeline was very stressful for users, especially after Facebook closed a blog that offered users a place to express their concerns and post feedback. The researchers found that 67 percent of users' coping strategies in the switchover were negative. "Without giving people a way of offering feedback, you make them feel less empowered and they have more of a feeling of hopelessness," Wisniewski says. She also says companies that stop communication risk the circulation of misinformation among users, while changing too many features all at once can confuse users and may incite an even harsher backlash. The researchers presented their findings at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto.


Towson University Gets Patent for Technology to Help Blind Internet Users
The Baltimore Sun (04/27/14) Carrie Wells

Towson University has developed a new type of Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) designed to be easier for vision-impaired users. The SoundsRight CAPTCHA has users listen to a series of 10 random sounds and press the space bar each time they hear a specific noise, such as a dog barking, which enables them to independently access and use the Internet. Developers say SoundsRight is better than Google's audio CAPTCHA, which has a failure rate of 50 percent for blind users, according to studies. Towson researcher Tim Brooks says SoundsRight can be incorporated into any Web page and tailored by the Webmaster, while its script could be modified for use in many languages or have users identify any number of sounds. The Towson researchers tested the CAPTCHA in close collaboration with the National Federation of the Blind, which is striving to encourage various groups and businesses to adopt it. SoundsRight is still in a beta version, and its developers are hoping that a real-world launch will help spot any necessary alterations, says Towson professor Jonathan Lazar.
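The core validation idea, responding only when the target sound plays, can be sketched as follows. The sound labels and function name are illustrative assumptions; the actual SoundsRight implementation is not described at this level of detail in the article.

```python
# Minimal sketch of SoundsRight-style validation: the user hears a sequence of
# sounds and should respond (e.g., press the space bar) only on the target sound.

def score_responses(sounds, target, responses):
    """Return True if the user responded on exactly the target sounds.

    sounds:    ordered labels of the ten played sounds
    target:    the label the user was told to react to (e.g., a dog barking)
    responses: booleans, one per sound, True if the user pressed the key
    """
    expected = [s == target for s in sounds]
    return responses == expected

sounds = ["dog", "bell", "dog", "car", "piano",
          "dog", "bell", "car", "dog", "piano"]
presses = [s == "dog" for s in sounds]  # a user who responds correctly
assert score_responses(sounds, "dog", presses)
```

Because the check is just a comparison over labeled sounds, the same script could be localized to any language or extended to any sound set, consistent with Brooks' description of the system's flexibility.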


Programming the Smart Home: 'If This, Then That'
Brown University (04/28/14)

Researchers at Brown and Carnegie Mellon universities have developed a "trigger-action programming" model for intuitively controlling smart home devices, which they presented at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto. Under the model, users automate tasks across various services by generating "recipes" using simple if-then statements. "As a programming model, it's simple and there are real people using it to control their devices," notes Brown professor Michael Littman. However, he says the researchers also wanted to know whether it works for the home automation tasks people want to do. They surveyed workers on Amazon's Mechanical Turk crowdsourcing marketplace to determine what services they desired in a hypothetical smart home, and then ascertained what tasks would require programming that could be expressed as triggers and actions. Most of the programming tasks fit well within the trigger-action model, and the next step was to see how well task-automating recipes could be designed. This involved two interfaces, and the researchers called on Mechanical Turk users to employ the interfaces to produce recipes. Both interfaces were based on the if-then paradigm, but one of them allowed the addition of multiple triggers and actions. The study demonstrated that participants were fairly adept at using both interfaces, and those without programming experience did just as well at the tasks as their experienced counterparts.
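The trigger-action model described above can be sketched in a few lines. The recipe below, including its device names and conditions, is a hypothetical illustration of the if-then paradigm with multiple triggers and actions, not the researchers' actual interface.

```python
# Minimal sketch of a trigger-action ("if this, then that") recipe model.

from dataclasses import dataclass

@dataclass
class Recipe:
    triggers: list  # predicates over the home state; all must hold to fire
    actions: list   # functions that transform the home state

    def run(self, state: dict) -> dict:
        if all(trigger(state) for trigger in self.triggers):
            for action in self.actions:
                state = action(state)
        return state

# Multiple triggers and actions, as in the study's richer interface:
# "If I arrive home AND it is after sunset, then turn on the porch light."
recipe = Recipe(
    triggers=[lambda s: s["at_home"], lambda s: s["after_sunset"]],
    actions=[lambda s: {**s, "porch_light": "on"}],
)

state = {"at_home": True, "after_sunset": True, "porch_light": "off"}
state = recipe.run(state)
```

The model's appeal, as the study suggests, is that nonprogrammers can express such recipes directly: each one is just a conjunction of triggers paired with a list of actions.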


Brain Control: Taking Human-Technology Interaction to the Next Level
ASU News (04/24/14) Joseph Kullman

Thought-controlled technology is the focus of research by Arizona State University (ASU) professor Panagiotis Artemiadis, whose work serves the U.S. Air Force's increasing need for mixed human-machine decision-making based on techniques and models for "an actionable knowledge framework between humans and multi-agency systems." Artemiadis' background is in robotics and control systems, with a concentration on human-oriented robotics. The Air Force's Young Investigator Research Program gave Artemiadis a $360,000 grant for a project in which he will investigate the brain's perceptive and predictive capacities to evaluate its ability to perform effectively in "human-swarm closed-loop" communication and control systems. His goal is to better comprehend "the mechanisms involved in how the brain perceives information it receives from observing moving multi-agent systems." Swarms and multi-agent systems in this instance mean multiple robotic, autonomous vehicles in motion, mainly aircraft. "We are going to look at people's brain signals while they are watching swarms, and understand how the brain perceives high-level information from these swarms," Artemiadis says. His aim is to determine whether individuals can reliably sustain high levels of cognitive performance in coordinating a swarm's movements and actions. The brain-machine interface system would combine the observational capacities of both humans and machines, and Artemiadis says users could simply send commands by thought rather than vocally or through a keyboard.


The Mirror That Shows Your Insides
Motherboard (04/25/14) Katherine Templar Lewis

Primary Intimacy of Being is an interactive artwork developed by Des Vues de l'esprit that enables participants to view digitized reflections that reveal their internal organs, bones, muscles, and other structures, using the University of Paris South's imaging and processing technology. The installation integrates a series of positron-emission tomography/magnetic-resonance imaging scans and x-rays with Microsoft Kinect motion-capture technology to generate a mirror that appears to reflect the viewer without skin, and mimics the viewer's movements. The researchers say the system could find application as a personalized medical tool. For example, users could feed it information from their own set of medical scans, which would be blended with real-time data from biosensors such as heart rate. The result would be a genuine image of the individual's insides. "This technology might help [in] developing personalized medicine through interactive medical imaging," says University of Paris South researcher Xavier Maitre. "Hands free is an important issue in the aseptic environment." Primary Intimacy of Being was showcased at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto.


Computer Software Accurately Predicts Student Test Performance
UCSD News (CA) (04/15/14) Doug Ramsey

New affective computing technology can detect students' level of engagement in real time just as accurately as human observers. The software makes use of facial expression recognition to detect how engaged students are during class and predict their level of success. Automatic engagement detection has the potential to revolutionize education, says University of California, San Diego (UCSD) Qualcomm Institute researcher Jacob Whitehill. A team led by UCSD computer scientists, working with researchers from Virginia Commonwealth University, Virginia State University, and Emotient, a facial expression recognition technology provider, trained an automatic detector to measure how engaged a student appears in a Webcam video while undergoing cognitive skills training on an iPad. The researchers used automatic expression recognition technology to analyze facial expressions on a frame-by-frame basis and estimate engagement levels. Whitehill says the technology has tremendous possibilities in education and business. "Automatic recognition of student engagement could revolutionize education by increasing understanding of when and why students get disengaged," he says. "Automatic engagement detection could be a valuable asset for developing adaptive educational games, improving intelligent tutoring systems, and tailoring massive open online courses."


A New Kind of Tech Bubble
BBC News (04/23/14) Rory Cellan-Jones

Bristol University researchers say they have developed a "chrono-sensory mid-air display system" for generating bubbles onto which images can be projected and which release a scent when popped. "We are interested in creating new and exciting experiences for people," says Bristol professor Sriram Subramanian. "Think about your laptop or phone--you can't put your finger through the screen." The bubbles produced by Subramanian's SensaBubble system deliver short-term messages that vanish when burst but leave behind a longer-term scent. Among the applications Subramanian envisions for SensaBubble is marketing, with one example being bakeries sending scent-laden bubbles out to entice shoppers to come into their stores and buy goods. The professor also sees the technology having educational applications, such as projecting numbers onto different bubbles to teach children mathematical principles. In addition, Subramanian thinks SensaBubble could be used as an ambient notification system, delivering messages to office workers and others to inform them about important things, such as how many unread emails are in their inboxes. The technology was unveiled at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto.


Carnegie Mellon System Lets iPad Users Explore Data With Their Fingers
Carnegie Mellon News (PA) (04/22/14) Byron Spice

The Kinetica proof-of-concept system developed by Carnegie Mellon University (CMU) researchers with support from the U.S. National Science Foundation enables Apple iPad users to manipulate tabular data on their touchscreens with natural gestures. "The interactions are intuitive, so people quickly figure out how to explore the data with minimal training," notes Human-Computer Interaction Institute researcher Jeffrey Rzeszotarski. The system's developers say users can gain deeper insights into relationships by seeing where data points originate as they are sorted. Rzeszotarski cites user studies showing that people using an Excel spreadsheet to analyze data usually made about the same number of observations within a 15-minute period as did Kinetica users, but the latter had a better understanding across multiple dimensions of data. "People often try to make sense of data where you have to balance many dimensions against each other, such as deciding what model of car to buy," says Kinetica co-developer and CMU professor Aniket Kittur. "It's not enough to see single points--you want to understand the distribution of the data so you can balance price versus gas mileage versus horsepower versus head room." The researchers detailed their findings at the recent ACM CHI Conference on Human Factors in Computing Systems in Toronto.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.



