Welcome to the March 2015 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI, and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week on Mondays, Wednesdays, and Fridays to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


What Will Tomorrow's Computers Look Like? Nothing Like Today's
USC News (02/18/15) Diane Krieger

University of Southern California professor Andrew Gordon expects human-computer interaction to evolve toward human-human interaction in the decades ahead, with typing input replaced by an anthropomorphic interface. He predicts such advancements will relegate the desktop interface to obsolescence, and this metaphor will "come crashing down" as computing technology continues to mature into a smaller form factor and its costs continue to decline. Gordon speculates future computers may resemble postage stamps, for example. "We'll be ripping them off and sticking them on stuff," he says. "There might be more capability in a single stamp than all the computing power on the planet today." With the cost of computing power falling toward zero, Gordon expects computers to soon become ubiquitous and to be used by far more people than just office workers. He says the key to imbuing computer interfaces with anthropomorphism is teaching them to decode people's words and actions to interpret their underlying mental states and processes, via the process of abduction. "We need to equip computers with the same common-sense theories that we all use to understand each other in social interactions," Gordon says.


DARPA Aims to Breach the Human-Computer Natural Language Barrier
Network World (02/20/15) Michael Cooney

The U.S. Defense Advanced Research Projects Agency (DARPA) has launched a program to bring computers capable of understanding context, gestures, and expressions in their interplay with humans into the real world. DARPA's Communicating with Computers (CwC) program wants to accelerate progress toward two-way communication between people and computers in which the machine can support a full range of natural interaction modes. DARPA says the initiative will establish tasks in which humans and machines must communicate to execute a job, with one task involving collaborative storytelling. "Another CwC task will be to build computer-based models of the complicated molecular processes that cause cells to become cancerous," the agency says. "Computers are starting to do this already in DARPA's Big Mechanism program, but they don't work collaboratively with human biologists...because while machines read more quickly and widely than humans, they do not read as deeply, and while machines can generate vast numbers of molecular models, humans are better judges of the biological plausibility of those proposed models." DARPA's Paul Cohen emphasizes the current perception of computers is as tools activated by a few keywords or clicks. "The goal of CwC is to bridge [the language] barrier, and in the process encourage the development of new problem-solving technologies," he says.


The Social Science Behind Online Shareability
CityLab (02/20/15) Laura Bliss

Yahoo! Labs Human-Computer Interaction group researcher Saeideh Bakhshi is seeking a method for quantifying the objective characteristics of an image's shareability, which opens up the possibility of predicting how images may go viral. A recent study in which Bakhshi analyzed 1 million Pinterest images found a connection between a photo's main hues and its shareability. The study found images composed mainly of red, purple, and pink were more likely to be shared, while blue, green, black, and yellow limited diffusion. The study controlled for variables having the most impact on photo engagement, such as the number of followers seeing it and how active certain users were. However, the study did not control for gender or the type of content most frequently posted to Pinterest. "Colors tend to have a very implicit effect on mood and behavior, and these effects are not observed [by science] that strongly, so it's not easy to generalize about what colors people prefer and why," Bakhshi notes. "But when you look at it on a larger scale, an online setting, then you have a collective behavior that can't otherwise be observed in things like politics, or culture, or food." She previously ascertained photos on Instagram and Flickr, like the Pinterest images, are platform-dependent in terms of engagement, and the most engaging pictures on one platform would not necessarily be similarly popular on another platform.
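The kind of hue analysis the study relies on can be sketched in a few lines: convert each pixel from RGB to HSV, bin its hue into a coarse color family, and tally the families to find an image's dominant hue. This is an illustrative sketch only; the hue boundaries and family names below are assumptions for demonstration, not the bins or method used in Bakhshi's study.

```python
import colorsys

def hue_family(r, g, b):
    """Map an RGB pixel (0-255 per channel) to a coarse hue-family label.

    Thresholds are illustrative guesses: very dark pixels count as black,
    desaturated pixels as gray/white, and the hue wheel is split into
    rough red/pink, yellow, green, blue, and purple bands.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.15:
        return "black"
    if s < 0.15:
        return "gray/white"
    deg = h * 360  # colorsys returns hue in [0, 1); scale to degrees
    if deg < 20 or deg >= 330:
        return "red/pink"
    if deg < 65:
        return "yellow"
    if deg < 170:
        return "green"
    if deg < 260:
        return "blue"
    return "purple"

def dominant_hue(pixels):
    """Return the most common hue family in an iterable of RGB triples."""
    counts = {}
    for r, g, b in pixels:
        fam = hue_family(r, g, b)
        counts[fam] = counts.get(fam, 0) + 1
    return max(counts, key=counts.get)
```

At Pinterest scale one would run `dominant_hue` over millions of sampled images and regress share counts against the resulting hue labels, controlling for the engagement variables the study describes.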


Without Smart, Connected People There Are No Smart Cities
Government Technology (02/23/15) Chad Vander Veen

Smart, connected cities cannot work unless they are designed for average people who also must be smart and connected. The migration of most of the global populace to urban centers, coupled with an increasingly data-rich world, promises to make the shift toward global urbanization healthy and prosperous. The area of wellness is progressing thanks to wearable technology that helps consumers monitor their health, and this offers a glimpse of the potential for innovation to create smart cities; but it is only one component of what it takes to realize smart, connected citizens. IBM India and South Asia's Kanumury Radhesh says smart cities not only use technology to improve inhabitants' well-being, but also facilitate more effective interaction with citizens and businesses, as well as lower costs and reduce resource consumption. A smart city must be assembled from hundreds or thousands of smaller elements that collectively transform the metropolis, and Chattanooga, TN, is one example. The city has ushered in an economic transformation via a citywide, high-speed fiber network and direct public Internet access. Meanwhile, the Internet of Everything is touted as the link between the city and its citizenry, and the tool citizens will use to tap the connections within urban systems to give meaning to the data yielded by those connections. The city also is proving to be the best place for younger generations to live and work in an increasingly digitized world.


Spring Studies Receive 5.3 Million British Pounds to Develop Assistive and Rehabilitative Devices
The Engineer (United Kingdom) (02/24/15)

Britain's Engineering and Physical Sciences Research Council will split a total of 5.3 million British pounds in funding between three projects to develop assistive and rehabilitative devices. The projects, which will start in the spring, will focus on a prosthetic hand that provides sensory feedback, robotic apparel to assist people with walking, and biosensors to monitor how patients exercise or use equipment during rehabilitation. Newcastle University researchers will apply their part of the funding toward creating a prosthetic hand that will give users a sense of tactile feedback via fingertip sensors, while also translating proprioceptive data from a virtual hand into neural stimulation so the user can control the prosthesis. Meanwhile, Bristol University will use its funding to develop robotic clothing that enables easy and unassisted movement for users with mobility impairments, disabilities, and age-related weakness. The clothing will utilize artificial muscles fashioned from smart materials and reactive polymers that can exert great forces, in conjunction with the latest innovations in wearable soft robotics, nanoscience, three-dimensional fabrication, functional electrical stimulation, and full-body monitoring. For the third project, Warwick University will develop low-cost, disposable, unobtrusive biosensors for use with various patients to monitor their use of exercise equipment and their at-home adherence to exercise regimens.


Silicon Seeks to Outsmart Brain
EE Times (02/24/15) Rick Merritt

Projects to develop computer chips capable of monitoring and responding to the brain are making progress. A $100-million White House initiative is focused on implants for data collection, and Medtronic's Tim Denison reports three companies, including his, have systems in the field so far. Medtronic's Activa PC+S brain implant incorporates a type of silicon oscilloscope that listens to brain waves, and has gleaned more than a year of data in some patients since the device's initial use in animal implants six years ago. Meanwhile, a NeuroPace implant is designed to monitor for indicators of incipient epileptic seizures and apply stimulation to prevent them, while Cyberonics is providing a Vagus nerve stimulator for patients in Europe for the purpose of automatically listening for and responding to biological signals. Challenges researchers are concentrating on include ways to shrink and position electrodes to capture brain waves, interpret very low-level neural signals, and understand how such stimulators can work in conjunction with drugs, according to Denison. Daegu Gyeongbuk Institute of Science and Technology professor Minkyu Je is focused on a silicon transceiver subsystem capable of reconnecting a limb whose nerve endings have been severed to the brain, which also could enable thought-controlled prosthetics or wheelchairs.


9 Fantastical Future Display Technologies
InfoWorld (02/19/15) Glenn McDonald

The traditional cathode-ray tube is giving way to new display technologies as computing migrates out of the office and mobility becomes prominent. One display technology making progress is organic light-emitting diodes (OLEDs), which can generate high-resolution images on screens that are thinner and more energy-efficient than their predecessors. OLEDs' flexibility also is spurring development of displays that can enhance image quality as well as screen size and durability with the addition of slight curves and wraparound edges. Meanwhile, a prototype smartphone that unfurls like a foldout map is a recent development in the realm of electrophoretic displays, and companies also are working on touchscreens that provide tactile feedback as the user scrolls over items onscreen, via ultra-low electrical currents. The forthcoming three-dimensional (3D) Oculus Rift virtual reality headset is another significant display innovation, while its potential pairing with the Unreal Engine could exponentially upgrade realism and immersion. Gigantic billboards that use 3D pixels, or trixels, will be able to project images that shift and move when viewed at different angles, while a mix of mirrors and lasers will create enough fine angular resolution to generate 3D images without the need for 3D glasses. There also is the virtual retinal display, which uses specially focused lasers and LEDs to draw images directly onto the viewer's retina.


Michigan State Tests Telepresence Robots for Online Students
Campus Technology (02/24/15) Leila Meyer

Michigan State University (MSU) is testing telepresence robots that enable face-to-face class participation by online students. "Telepresence-type technology...potentially makes a shift in how students see themselves and how other students and the instructors see these online students, more like just another student in class," says MSU professor John Bell. The first test involved Revolve Robotics' KUBI and Double Robotics' Double robots, which were each assigned to two online students while the other online students interfaced with each other via wall-mounted screens. The KUBI units employ iPads mounted on a pedestal that can pan and tilt under online students' command while displaying their faces onscreen, and the screen-equipped Double units are mounted on a tall, mobile pedestal so the user can move them around the room. Bell says the response was positive for both online students and professors, noting it was especially beneficial for instructors because "it was now possible to view the class as a more integrated group instead of thinking of two different groups of people." MSU now is offering a course where 12 online students engage with the class robotically, with 10 using the KUBI units and two using the Double units, while just one student is in the classroom. Bell says the ways in which robots change the perception of online students opens up new research avenues.


Eye Tracking Is the Next Frontier of Human-Computer Interaction
The Conversation (02/20/15) Melodie Vidal

Eye-tracking technology may become a routine element in our interaction with computers, especially games, writes Lancaster University Ph.D. student Melodie Vidal. Eye trackers typically consist of cameras and infrared lights to illuminate the eyes, and the cameras can use infrared light to produce a grayscale image in which the pupil is easily identifiable. From the position of the pupil in the image, the eye tracker's software can determine the direction of the user's gaze. It is possible to collect subconscious data signaled by eye gaze to better understand users' thoughts, interests, and behavior, or to augment engagement between users and their computers. There is an abundance of practically useful applications for eye tracking, including marketing and usability studies, computer operation for paralyzed users, and video games. Interaction with game characters is a particularly interesting application, as the use of eye tracking could enable characters to react to the player's gaze in a human-like manner, potentially creating an entirely new immersive dimension. Vidal says smart glasses or headgear with eye-tracking capability also hold potential to support more natural and subtle interaction by harnessing all the data transmitted by eye gaze.
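The pipeline Vidal describes, finding the dark pupil in an infrared grayscale image and mapping its offset to a gaze direction, can be sketched as a simple threshold-and-centroid computation. The darkness threshold and the coarse offset-to-direction mapping below are illustrative assumptions, not any particular tracker's algorithm, which in practice would also handle corneal glints, calibration, and noise.

```python
def find_pupil(image, dark_threshold=40):
    """Return the (row, col) centroid of pixels darker than dark_threshold.

    `image` is a 2D list of grayscale intensities (0-255); under infrared
    illumination the pupil shows up as the darkest blob. Returns None if
    no sufficiently dark pixels are found.
    """
    row_sum = col_sum = count = 0
    for r, row in enumerate(image):
        for c, intensity in enumerate(row):
            if intensity < dark_threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return row_sum / count, col_sum / count

def gaze_direction(pupil, image_center):
    """Map the pupil's offset from the image center to a coarse gaze label."""
    dr = pupil[0] - image_center[0]  # positive = pupil below center
    dc = pupil[1] - image_center[1]  # positive = pupil right of center
    if abs(dr) < 1 and abs(dc) < 1:
        return "center"
    if abs(dc) >= abs(dr):
        return "right" if dc > 0 else "left"
    return "down" if dr > 0 else "up"
```

Feeding each camera frame through `find_pupil` and then `gaze_direction` yields the stream of gaze estimates that an application, such as a game character reacting to the player's gaze, would consume.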


MIT Researchers: Crowdsourced Outlines Improve Learning From Videos
Campus Technology (02/12/15) Joshua Bolkan

Crowdsourced conceptual outlines are being employed by Massachusetts Institute of Technology (MIT) and Harvard University researchers to help students improve their learning from educational videos. MIT says users can navigate with the outlines, so those "already familiar with some of a video's content can skip ahead, while others can backtrack to review content they missed the first time around." The researchers used Photoshop video tutorials and produced their own outlines, and subjects who were given the outlines before seeing the videos completed related tasks with more confidence, while Photoshop experts gave their work more favorable assessments. MIT reports the researchers presented a system for distributing the video-annotation task among paid workers recruited via Amazon's Mechanical Turk crowdsourcing service at last year's ACM Conference on Human Factors in Computing Systems. "Their clever allocation and proofreading scheme got the cost of high-quality video annotation down to $1 a minute," the school notes. The MIT researchers also were aware that outlines incorporating subgoal labeling are a much more effective technique for learners. A paper on the team's findings will be presented at ACM's Conference on Computer-Supported Cooperative Work and Social Computing in March.


Massively Open Online Courses Impact Education
The Tartan (02/22/15) Danielle Hu

Massive open online courses (MOOCs) are an area of concentration for University of Colorado Boulder professor Gerhard Fischer, who anticipates them having a significant effect on education in the future. "I have been interested in rethinking and envisioning what learning in education in the 21st century might be," he said during a recent lecture hosted by Carnegie Mellon University's Human-Computer Interaction Institute. "I think that schools will not go away, but we may rethink the function of schools in such an environment since online activities have become very prominent." Fischer's focus on MOOCs ties to the notion of a flipped classroom, where instead of students attending lectures to learn the material required for an established curriculum, the understanding of a subject based on learning materials is accomplished before class. During class time, the professor can then engage students in discussion-based learning. MOOCs enable material from well-known academic sources to be distributed to students as a resource to study prior to class. Many students appear to prefer MOOC materials used outside of class to conventional papers and textbooks. Fischer stressed MOOCs reach a substantially larger number of students than regular universities do, and he suggested the high attrition rate for students participating in MOOCs could be addressed if MOOC-offering schools find "ways to make their educational efforts more legitimate and verifiable" to professional entities.


CSUN Engineering Students Are Developing Learning-Capable Robots
The Sundial (CA) (02/11/15) Marissa Nall

California State University, Northridge's (CSUN) Engineering and Computer Science department last year received a $263,598 grant from the U.S. Department of Defense to develop a system for enhancing automation and human interaction, among other projects. Part of the grant has gone toward an effort to create a two-robot team, with one machine focused on ground and the other on flight operations, as a proof of concept for planetary exploration, says project manager Jeremy Friedman. "Their goal is to be autonomous companion teammates with each other in contact with a remote operator, the goal being to traverse desert or exo-planet landscape to go and do mapping missions," he notes. Friedman says the research's objective is nurturing trust between operators and learning-capable robots, with an emphasis on search-and-rescue missions. CSUN researcher Karanvir Panesar is concentrating on improving the interface design to help operators better communicate with a flying robot and the ground control system. He says the grant also will be applied toward innovations such as a vertical takeoff and landing aircraft, a ground vehicle, and two vehicles to function as a ground control station and transportation. These and other projects aim to incorporate other fields, such as psychology and geography, into their designs and to follow an interdisciplinary research path.


Abstract News © Copyright 2015 INFORMATION, INC.



