ACM TechNews SIGCHI Edition
Welcome to the February 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest HCI-related news and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


In Memoriam: Marvin Minsky 1927-2016
Association for Computing Machinery (01/27/16) Lawrence M. Fisher

Marvin Minsky, artificial intelligence (AI) pioneer, co-founder of the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory, and ACM A.M. Turing Award recipient, died on Jan. 24 at the age of 88. Starting in the early 1950s, Minsky developed computational ideas to characterize human psychological processes and generated theories on how to endow machines with intelligence. Work in MIT's new AI laboratory included projects to model human perception and intelligence, and efforts to construct practical robots. In the late 1960s, Minsky worked on perceptrons, machine-learning algorithms that capture some of the characteristics of neural behavior. His collaboration with Seymour Papert yielded theories of intelligence and radical new approaches to childhood education using the Logo educational programming language. Minsky's Theory of Frames, from the mid-1970s, remains his most recognized work; in it he argued, "the ingredients of most theories both in artificial intelligence and in psychology have been...too minute, local, and unstructured to account--either practically or phenomenologically--for the effectiveness of common-sense thought." He conceived of labeled data structures in memory called frames, which evolved into the primary data structure of AI frame languages and are a key component of knowledge representation and reasoning schemes. Minsky and Papert also developed the Society of Mind theory, which seeks to explain how intelligence could result from the interaction of non-intelligent elements. Minsky won the fourth A.M. Turing Award in 1969 "for his central role in creating, shaping, promoting, and advancing the field of artificial intelligence."
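The frame idea described above can be sketched in a few lines of code. This is a minimal illustration, not Minsky's own formalism: a frame bundles named slots with values, and a more specific frame inherits default slot values from a more general parent frame.

```python
# Minimal sketch of a frame: named slots plus inheritance of defaults
# from a parent frame (illustrative only; names are invented here).

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Look up a slot locally, then fall back to the parent frame,
        # mirroring how frames supply default expectations.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

room = Frame("room", walls=4, has_door=True)
kitchen = Frame("kitchen", parent=room, has_stove=True)

print(kitchen.get("has_stove"))  # True (local slot)
print(kitchen.get("walls"))      # 4 (default inherited from "room")
```

The inheritance chain is what gives frames their power: a system can reason with sensible defaults ("rooms have walls") while letting specific frames override or extend them.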


The Moving Finger Moves On
The Economist (01/30/16)

Researchers are developing new touchscreen technologies to make the devices more versatile and less distracting. Car parts manufacturer Robert Bosch GmbH's neoSense system incorporates haptic feedback, using textured surfaces of various types to represent buttons with distinct functions so drivers can feel for the right button without taking their eyes off the road. Developers also are striving to improve capacitance touchscreens so they support thinner and more pressure-sensitive displays, and expand the ways users can operate them with fingers and gestures, according to Perceptive Pixel founder Jeff Han. Apple's newest iPhone reacts to finger pressure via the company's 3D Touch process, in which a sensor under the screen can sense a minute deformation in the glass when a finger is pushed against it. The technology enables additional user actions, such as pressing to preview a message or email before opening it. Meanwhile, ETH Zurich researchers are working on more conductive, nearly invisible capacitance grids composed of three-dimensionally-printed nanowalls to boost touchscreen responsiveness. In addition, contactless, hand gesture-controlled touchscreens are under development at Samsung. Google is building something similar using a miniature radar chip, which could be embedded behind the screen.


Augmented Reality Study Projects Life-Sized People Into Other Rooms
Technology Review (01/19/16) Rachel Metz

Microsoft Research's Room2Room project employs Kinect depth-sensing cameras and digital projectors to capture the image of a person in three dimensions in one room and project a life-sized version of that person onto a piece of furniture in another room, where someone else is physically present, and vice versa. The researchers say each person can see a digital image of the other with the proper perspective, look at the other person from different viewpoints, and interact appropriately. Room2Room's functionality is based on another Microsoft Research augmented-reality project called RoomAlive, in which the same hardware is used to produce a room-sized gaming arena. However, Room2Room sets up two similar rooms so the researchers can scan a person sitting in each room and project them into the other room. Before the technology can be widely adopted, the hardware will need to become less bulky and easier to deploy, while the images will need to be generated at a higher resolution than is currently possible, according to former Microsoft researcher Tomislav Pejsa. Research on Room2Room will be presented at the ACM conference on Computer-Supported Cooperative Work and Social Computing (CSCW), which takes place Feb. 27-Mar. 2 in San Francisco.


How Does Social Media Affect Our Political Decisions?
Civil Beat (01/19/16) Marina Riker

Researchers at the University of Hawaii at Manoa's Hawaii Computer-Human Interaction Lab have found social media affects political decisions. They learned millennials' positions on issues, and their feelings about how well public officials serve the community, can be changed by social media posts. Voters can be influenced not only by candidates' own online content, but also by how social media sites filter news feeds, according to the researchers. They divided 70 students ages 18 to 29 into three groups that viewed coverage of the candidates in Mississippi's 2012 governor's race. One group saw only candidates' Facebook walls and campaign-related news articles, another saw candidates' Facebook walls and a speech unrelated to the campaign, and the third viewed only campaign-related news articles. The researchers found participants who engaged with social media were more sensitive to public officials' community relationships than those who did not, and some reacted negatively when a candidate's Facebook feed lacked community-driven posts. The researchers also found millennials tend to stumble across political information rather than actively seek it out, a phenomenon they find troubling. They caution that as the software underlying social media sites becomes more individualized, users are more likely to see content that keeps them clicking instead of being exposed to alternative political perspectives.


UTEP Computer Science Department Develops Award-Winning Interactive Agent System
UTEP News (01/21/16) Elizabeth Ashby

University of Texas at El Paso (UTEP) professor David Novick and his students have developed a system for virtual agents and an immersive interactive application called "Survival on Jungle Island." They created the UTEP AGENT system to study understanding between embodied conversational agents (ECAs) and humans, stressing the impact of paralinguistic behaviors on engagement and rapport. Paralinguistics includes behaviors such as gesture, intonation and rhythm in speech, gaze, and turn-taking. In the "Jungle Island" app, an ECA and a human interact via speech and gesture in a 40- to 60-minute adventure composed of 23 scenes. A study conducted with the adventure demonstrated rapport grows when the ECA asks the human to perform task-related gestures and then observes the human performing them. To enable human-ECA interaction, Novick's team combined Unity 4 animation software, a Microsoft Kinect motion-sensing device, and the Windows Speech SDK. The team also built middleware to facilitate faster development of apps, including the jungle adventure. The UTEP AGENT system received the award for Outstanding Demonstration at the 2015 ACM International Conference on Multimodal Interaction (ICMI), held in November in Seattle.


Japanese Researchers Create Electric Fork That Alters the Taste of Food
Oddity Central (01/22/16)

Meiji University postdoctoral research fellow Hiromi Nakamura reports a new electric fork she and fellow researchers designed can make any dish taste salty without using actual seasoning, which could be of benefit to people with special dietary needs. The concept was first presented as an experiment at ACM's CHI 2012 conference in Austin, TX. Nakamura and her team linked a wire to a 9-volt battery and threaded it through a straw placed in a cup of lemonade. The electricity simulated a salty taste that made the beverage blander, according to volunteers. Nakamura and colleagues have since refined the Augmented Gustation technology so the electric charge can be transferred to food via forks and chopsticks, using the utensil's metallic part and its handle as two separate electrodes, with the taster closing the circuit when they put the food-laden utensil into their mouth. "For me, food hacking is about augmenting or diminishing real food," Nakamura says. "It may seem like we're cooking but we're actually working on the human senses. We are inventing devices to add electricity to the tongue. We're trying to create virtual taste."


When Class Is Run by a Robot
The Atlantic (01/22/16) Jacek Krywko

Researchers in Europe and Turkey are developing robotic teaching machines to help preschoolers learn a second language as part of the European Union-funded L2TOR project. The machines are designed to educate children on basic vocabulary and simple stories, using microphones to listen, cameras to observe, and artificial neural networks to analyze the collected information. In addition, the robots will monitor students' nonverbal signals, such as emotional telltales. "The problem with previous generations of teaching machines was their complete lack of social intelligence," notes Bielefeld University researcher Stefan Kopp. "Yet it's possible to design empathic machines. Our robots will notice tears, smiles, frowns, yawns...and dynamically adjust to how a child feels." In order for the machines to have the knowledge to react appropriately to students' emotions, L2TOR researchers intend to spend some time in kindergarten classrooms, watching the teachers at work. "We need to learn more about their methods, learn from their experience, and then program our robots to act like them," Kopp says. "We want the machines to be as friendly to kids as possible, yet I think a robot should react to bad behavior."


Rutgers Bitcoin Study Reveals False Beliefs on Ease of Use and Privacy
Rutgers Today (01/25/16) Todd B. Bates

Both experienced and inexperienced Bitcoin users subscribe to false conceptions about how the digital currency really works, according to a study by Rutgers University researchers. They found people who have never used Bitcoin doubt they could ever use it, while even Bitcoin users are not very familiar with how it works and overestimate its transactional privacy, for example. The study will be presented at ACM's CHI 2016 conference in San Jose, CA, which takes place May 7-12. Interviews with 10 Bitcoin users and 10 nonusers revealed various insights, including the perception among those with no Bitcoin experience that it would be too hard or "too scary to use," says Rutgers professor Janne Lindqvist. He also notes Bitcoin users had misconceptions about the currency's ability to protect their anonymity, because transactions are recorded in a public ledger and are traceable with some effort. Lindqvist says the researchers are conducting follow-up studies to measure changes in perceptions following a recent declaration by a high-profile Bitcoin developer that the cryptocurrency had failed because it was controlled by a small group of people. Lindqvist thinks even if this is true, many stakeholders are interested in using digital currencies. Thanks to the emergence of Bitcoin, he is convinced "we'll get more cryptocurrencies or more use of Bitcoin or various currencies."


This Radical Rethink of How Computer Interfaces Work by a 21-Year-Old Designer Is Amazing
The Next Web (01/20/16) Owen Williams

German student Lennart Ziburski has invented the Neo computer interface, which radically departs from the traditional concept of interface operation. Instead of using windows, Neo utilizes a panel interface that fills the height of the screen and scrolls from right to left so space usage is maximized. When a new window opens, it slides into the panel carousel. Neo depends on three-finger gestures to scroll through the panels or interact with them. Minimized panels slide to a thin slit in the middle of the panels for later retrieval. In place of a desktop, Neo opts for a default Finder interface that offers quick access to content, search, and Google Now-style suggestions. Tags substitute for folders, and the context menu is rethought as a ring of swipeable icons instead of a set menu. The raw design for Neo's concept is available in Sketch files, which Ziburski has made available for anyone to download and reuse.


Teaching Robots to Be More Than Simple Servants
Discover (01/21/16) Nathaniel Scharping

Robotic innovations are forcing a change in perception as to what constitutes a robot, and the Madlab research studio is using new programs to create more versatile machines via a combination of advanced software and motion-capture technology. Madlab founder Madeline Gannon's software enables a robot arm to watch and imitate human movements. In an interview, she details her vision of more collaborative, less servile robots. Gannon says her goal is to enable robots to use data from their movement to construct a logical framework for understanding and anticipating our actions. "A lot of what I'm trying to get [the robot arm] to do is to act like how people would work together in space," she notes. Gannon's current focus with the robot is to develop task-specific behaviors, and she emphasizes there is now sufficient access to the technology that researchers can comprehend how a person completes a task in a shared space with a robot. "We should be able to codify that task in a way that the robot doesn't need to mimic them, but it can know and help out in completing that task," Gannon says. She foresees the eventual implementation of machine-learning algorithms. Within the next decade, Gannon imagines progress in both the discipline of machines teaching themselves to perform automation tasks, and the field of enhancing human-robot interaction so the machines become extensions of people.


Affective Computing: How 'Emotional Machines' Are About to Take Over Our Lives
The Telegraph (United Kingdom) (01/15/16) Madhumita Murgia

The growing field of affective computing seeks to imbue electronic devices with emotional intelligence so they can correctly interpret and respond to human feelings to improve people's quality of life. "It's not just about human-computer interaction," says Affectiva co-founder and computer scientist Rana el Kaliouby. "I realized that by making machines have emotional intelligence, our own communication could become better." Affectiva software powers the emotions of Jibo, a robot built by Cynthia Breazeal at the Massachusetts Institute of Technology's (MIT) Media Lab; the machine reads stories to children and gives voice reminders from a to-do list, and it is also equipped with face recognition and simple conversational abilities. MIT professor and affective-computing pioneer Rosalind Picard saw human-like emotional capabilities as an essential element of truly intelligent computers, and she eventually co-founded Affectiva with el Kaliouby to build this concept into products. For example, Affectiva's Affdex, a facial-expression analysis program, can scan faces and read micro-expressions using a database of several million unique expressions. Mainly focused on advertising and TV, Affectiva is trying to expand its repertoire to other applications, including an in-car sensor that reads the driver's emotions and takes action in emergency situations. Meanwhile, Picard currently is concentrating on emotion-sensitive wearables for healthcare.


UCSD Spinoffs Create Lab-Quality Portable 64-Channel BCI Headset
KurzweilAI.net (01/13/16)

Researchers affiliated with the University of California, San Diego Jacobs School of Engineering say they have developed the first dry-electrode, portable 64-channel wearable brain-computer interface (BCI). They say the device's portability enables the tracking of brain states throughout the day and enhancement of the brain's capabilities. The dry electroencephalogram (EEG) sensors, which are designed to work on a subject's hair, offer easier application than wet sensors while still delivering high-density/low-noise brain activity data. "This is going to take neuroimaging to the next level by deploying on a much larger scale," says Jacobs School alumnus Mike Yu Chi. A Bluetooth transmitter makes an array of wires unnecessary, while a refined software suite for data interpretation and analysis also is included for applications such as research, neuro-feedback, and clinical diagnostics. To cut through the noise in the EEG data, the researchers developed an algorithm that separates the data into distinct elements with no statistical relation to one another. These components are then compared to clean data, with abnormal data tagged as noise and jettisoned. The BCI tracked how signals from different brain regions interact with each other in real time, while machine learning was employed to link specific network patterns in brain activity to cognition and behavior. A future goal is to facilitate easy integration of BCI and advanced signal-processing techniques with everyday applications and wearable devices.
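The separation step described above (splitting multichannel data into statistically independent elements and discarding the abnormal ones) resembles independent component analysis. The toy NumPy sketch below illustrates the pipeline's shape only, not the UCSD implementation: for clarity it uses a known unmixing matrix, whereas a real system would have to estimate it from the data.

```python
import numpy as np

# Toy illustration of the clean-up idea: treat each channel as a
# mixture of underlying components, unmix, zero out the component
# judged to be noise, and project back to channel space.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
brain = np.sin(2 * np.pi * 10 * t)           # a 10 Hz "neural" source
artifact = rng.normal(size=t.size)           # broadband noise source

A = np.array([[1.0, 0.4],                    # mixing: how each source
              [0.7, 1.0]])                   # reaches the two channels
channels = A @ np.vstack([brain, artifact])  # recorded data, shape (2, 500)

# In practice the unmixing matrix is estimated (e.g., by ICA);
# here the known inverse stands in for that estimation step.
W = np.linalg.inv(A)
components = W @ channels

components[1, :] = 0.0                       # discard the noise component
cleaned = A @ components                     # back-project to channels

# Channel 0 now carries only the brain source's contribution.
print(np.allclose(cleaned[0], brain))        # True
```

The comparison against clean reference data mentioned in the article is what would, in a real system, decide which component rows to zero out before the back-projection.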


4 Ideas From 4 Continents: Helping the Blind Navigate Cities
Citiscope (01/14/16) Grace Chua

Cities in Japan, Africa, Europe, and the U.S. are developing technology to help visually impaired people navigate urban settings. Two years ago, Japan's Geospatial Information Authority released a program that can convert its datasets into three-dimensional-printed tactile maps for any location in the country. The maps emboss roads, railways, and walkways, and a visually impaired person could use a printout of a certain neighborhood to organize a mental picture before venturing out into it. In Ile-Ife, Nigeria, researchers at Obafemi Awolowo University have created a wearable device consisting of a shoe gadget and an earpiece, which relays ultrasonic cues that change in pitch depending on how close the wearer is to an obstacle. Meanwhile, a Polish startup is helping Warsaw set up small location-marking beacons around the city. The beacons broadcast to a smartphone app, which can read out a person's queue number, make the phone vibrate, and alert the user when a bus is approaching their stop. In Denver, CO, city officials are augmenting and expanding the public transportation system to be more accommodating of visually impaired travelers. One example is automated bus-stop announcements that tell riders when the next stop is coming up, based on real-time location tracking.
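The pitch cue used by the Obafemi Awolowo device can be sketched as a simple mapping from measured distance to tone frequency. This is a hypothetical illustration of the idea, not the researchers' implementation; the function name, range, and frequency values are all invented for the example.

```python
# Hypothetical sketch of an obstacle-proximity audio cue: the closer
# the obstacle reported by the ultrasonic ranger, the higher the tone
# played in the earpiece. All parameter values are illustrative.

def cue_frequency(distance_m, f_near=2000.0, f_far=200.0, max_range_m=4.0):
    """Map an obstacle distance in meters to a tone frequency in Hz."""
    # Clamp the reading to the sensor's usable range.
    d = min(max(distance_m, 0.0), max_range_m)
    # Linear interpolation: near obstacles -> high pitch, far -> low.
    return f_near + (f_far - f_near) * (d / max_range_m)

print(cue_frequency(0.0))  # 2000.0 (obstacle right in front)
print(cue_frequency(4.0))  # 200.0  (obstacle at the edge of range)
```

A real device would also smooth successive readings and rate-limit the tone updates so the cue does not jitter as the wearer moves.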


Abstract News © Copyright 2016 INFORMATION, INC.

