ACM SIGCHI Banner
Welcome to the November 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). The service is a resource that helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI, and it is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

HEADLINES AT A GLANCE


Wearable Interfaces to Become Increasingly Sensor-Based Over Next 5 Years
Campus Technology (10/27/14) Leila Meyer

Mobile and wearable devices are expected to rely less on touchscreen user interfaces over the next five years and more on sensors, according to a new ABI Research study. The study examined 11 unique device features ranging from wireless connectivity to embedded sensors, and found that from 2014 to 2019, "hand and facial gesture recognition will experience the greatest growth in future smartphone and tablet shipments." Moreover, these devices will use gesture recognition for a variety of purposes, from monitoring user attentiveness to device navigation, according to the study. The development of new user interfaces in mobile devices is predicted to influence the design of devices for the home and automobile in the areas of voice, gesture, eye-tracking, and other interfaces. ABI Research says the challenge for developers will be translating the added complexity from sensors into user interfaces that are intuitive and easy to use. In addition, as the Internet of Things becomes a reality, developers must deal with the question of whether each device should have its own unique user interface or whether devices should be controlled externally through a mobile device or centralized display. "The really exciting opportunity arrives when multiple user interfaces are blended together for entirely new experiences," says ABI Research's Jeff Orr.


Researchers Are Using Deep Learning to Predict How We Pose. It's More Important Than It Sounds
GigaOm.com (10/17/14) Derrick Harris

A team of New York University researchers has published a paper on a deep-learning model dubbed MoDeep, which is capable of predicting the position of human limbs in images. The researchers say human pose estimation could have potential uses in fields such as human-computer interaction and computer animation. In addition, computers that can accurately identify the positions of people's arms, legs, joints, and body alignment could improve gesture-based controls for interactive displays and create more accurate motion-capture systems without the need for attaching sensors to people's bodies. The MoDeep researchers created a new training dataset that includes information about the motion of the joints in the images, in which FLIC (Frames Labeled In Cinema) images are paired with neighboring frames from associated movies and averaged to calculate the flow of body parts between them. The researchers say pose estimation is a natural next step after verifying the accuracy of deep learning for object recognition. Object recognition is more holistic, concerned with what an object is, while pose estimation is more local, focused on, for example, the position of an elbow joint. Apart from computer vision, deep learning also has proven especially useful for speech recognition, machine listening, and natural-language processing. In December 2013, a Google research team published a paper on a system called DeepPose, which examined sets of sports images using a FLIC test dataset.
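
To make the distinction concrete, the sketch below shows the general shape of a coordinate-regression pose estimator: a small convolutional network that maps an image to normalized (x, y) positions for a fixed set of joints. It is an illustrative toy, not the published MoDeep or DeepPose architecture; the network size, joint count, and dummy data are assumptions.

    # Illustrative only: a tiny convolutional regressor that predicts (x, y)
    # coordinates for a fixed set of joints, in the spirit of pose estimators
    # such as MoDeep or DeepPose. Sizes and the joint count are assumptions.
    import torch
    import torch.nn as nn

    NUM_JOINTS = 14  # assumed joint count (shoulders, elbows, wrists, ...)

    class TinyPoseNet(nn.Module):
        def __init__(self, num_joints=NUM_JOINTS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # Regress one (x, y) pair per joint, normalized to [0, 1].
            self.head = nn.Linear(64, num_joints * 2)

        def forward(self, images):
            h = self.features(images).flatten(1)
            return torch.sigmoid(self.head(h)).view(-1, NUM_JOINTS, 2)

    model = TinyPoseNet()
    batch = torch.rand(4, 3, 224, 224)   # stand-in for FLIC-style frames
    pred = model(batch)                  # shape: (4, 14, 2)
    loss = nn.functional.mse_loss(pred, torch.rand_like(pred))  # dummy target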


They’re Tracking When You Turn Off the Lights
The Wall Street Journal (10/20/14) Elizabeth Dwoskin

Steven Koonin, the director of New York University's (NYU) Center for Urban Science and Progress, is spearheading research on quantifying urban life. He uses equipment such as a wide-angle infrared camera that can detect 800 gradations of light, although its images are scrambled to prevent researchers from seeing inside homes. Special software is used to determine such data as the time households go to bed, what type of light bulbs they use, and what pollutants their buildings emit. Koonin also has installed sound sensors on streetlight poles and buildings in Brooklyn to measure the loudness of house parties and car horns, which he says could potentially be used to enforce noise ordinances. Similarly, cities with pollution laws could monitor emissions themselves instead of relying on building owners to report them. "It's like when Galileo first turned the telescope on the heavens," Koonin says. "It's just a whole new way of looking at society." Other efforts also are underway to use big data to enable smart cities. For example, the University of Chicago will install dozens of sensor packs on street lamps across Chicago to collect data on environmental conditions such as sound volume, wind levels, carbon-dioxide levels, and pedestrian traffic flow. "It's like a Fitbit for the city," says Charlie Catlett, director of the University of Chicago's Urban Center for Computation and Data.
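
As a rough illustration of how such sensor streams could feed an ordinance check, the hypothetical sketch below aggregates decibel readings from streetlight sensors and flags nighttime readings above a limit. The sensor IDs, data format, and threshold are invented for the example and are not NYU CUSP's actual pipeline.

    # Hypothetical sketch: flag possible noise-ordinance violations from
    # streetlight-mounted sound sensors. Thresholds, sensor IDs, and the
    # data format are assumptions for illustration.
    from collections import defaultdict
    from datetime import datetime

    NIGHT_LIMIT_DB = 70.0                       # assumed nighttime limit
    NIGHT_HOURS = (range(22, 24), range(0, 7))  # assumed quiet hours

    readings = [  # (sensor_id, ISO timestamp, decibel level)
        ("bk-lamp-017", "2014-10-18T23:41:00", 83.5),
        ("bk-lamp-017", "2014-10-18T23:42:00", 81.2),
        ("bk-lamp-042", "2014-10-19T14:05:00", 74.0),
    ]

    def is_night(ts):
        hour = datetime.fromisoformat(ts).hour
        return any(hour in block for block in NIGHT_HOURS)

    violations = defaultdict(list)
    for sensor, ts, db in readings:
        if is_night(ts) and db > NIGHT_LIMIT_DB:
            violations[sensor].append((ts, db))

    for sensor, events in violations.items():
        print(f"{sensor}: {len(events)} nighttime reading(s) over {NIGHT_LIMIT_DB} dB")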


How Tech Advances Are Helping Innovators Do More for People With Disabilities
GeekWire (10/21/14) Taylor Soper

The University of Washington's Richard Ladner is enthusiastic about the prospects for technology to help people with disabilities, such as Google's self-driving cars. Ladner and his colleagues note there has been a recent increase in attention to assistive technologies and accessibility research, and they believe it is essential for more people with disabilities to pursue careers as coders, engineers, and designers. "They will understand the value of what they are doing," says Ladner, whose research includes MobileASL and the Tactile Graphics Project. "They will see the nuances." The majority of large tech firms now have teams dedicated to building technology designed to be accessible to everyone. For example, Jenny Lay-Flurrie, who is hearing-impaired, leads the Trusted Experience Team (TExT) at Microsoft, which focuses on accessibility. In July, the winning team of a company-wide hackathon at Microsoft was the "Ability Eye Gaze" group, which leveraged technologies including Microsoft Kinect and Surface to make it easier for people with ALS and other disabilities to control a tablet with their eyes. That team worked on the project with Steve Gleason, a former NFL player who has ALS. Lay-Flurrie says more awareness about disability employment is needed, and companies need a strong understanding of, and approach to, hiring people with disabilities. "That means creating a safe environment where people can self-identify and be honest about their disability and what they need to be successful," she says.


Through the Combining Glass to the Future of Public Interaction
Science 2.0 (10/14/14)

Reflective optical combiners such as beam splitters and two-way mirrors are used in augmented reality (AR) to overlap digital content on users' hands or bodies. The technology also can be used to reflect users' hands inside a museum AR cabinet to enable visitors to interact with exhibit artifacts, according to University of Bristol researchers Diego Martinez Plasencia, Florent Berthaut, and Sriram Subramanian. The researchers examined three prototypes that use optical combiners to merge the spaces in front of and behind them, enabling novel augmentations and interactions. "This work offers exciting interactive possibilities that could be used in many situations," says Martinez, a human-computer interaction researcher in the Bristol Interaction and Graphics group. "Semi-transparent surfaces are everywhere around us, in every bank and shop window." For example, he cites a situation in which a customer can't go into a store because it's closed. "However, their reflection would be visible inside the shop window and that would enable them to try clothes on using their reflection, pay for the item using a debit/credit card, and then have it delivered to their home," Martinez says. He notes that although projectors can only augment the surface of objects, combining them with reflections makes it possible to reveal what is inside an object, or even to display completely virtual objects. The work was presented at the recent ACM Symposium on User Interface Software and Technology in Hawaii.


The Real Cyborgs
Telegraph.co.uk (10/20/14) Arthur House

An increasing number of innovators are turning the concept of a cybernetic organism (cyborg) into reality. A cyborg is a living thing that encompasses both natural and artificial elements. In June, for example, a company called Neurobridge enabled Ian Burkhart to move his hand despite being paralyzed from the neck down since a diving accident four years ago. Meanwhile, other companies are focusing on replacement organs, robotic prosthetics, and implants not only to restore bodily functions but also to alter or enhance them. When Canadian filmmaker Rob Spence lost his right eye in a shotgun accident in 2005, he replaced it with a wireless video camera that transmits what he is seeing in real time to his computer. In 2013, electronic engineer Brian McEvoy outfitted himself with a subdermal compass to enable him to have a guiding system embedded in his body. In addition, the U.S. military is investing millions of dollars in projects such as Ekso Bionics' Human Universal Load Carrier, an exoskeleton that gives enormous strength to soldiers. Technologist Amal Graafstra believes it may soon be possible to use implants to monitor general health and scan for medical conditions. The implants would send health information to the user’s smartphone or directly to a doctor and would be constantly on to enable providers to recognize health conditions before they become serious.


Concert Cellist Hooks Her Brain Up to Speakers to Create Bizarre New Music
Business Insider (10/27/14) Christina Sterbenz

Katinka Kleijn, a concert cellist, uses her brain waves to play music by wearing an electroencephalography (EEG) headset. The headset tracks the activity of the brain's neurons as they communicate with each other via electrochemical impulses, and a program called Max/MSP is used to translate those signals into audio. The signals initially appear as data that composer Daniel Dehaan must scale as directly as possible into values appropriate for sound. "We receive many separate streams of numbers [from the headset], ranging from very small increments to very large," Dehaan says. Initially, the company that manufactures the headsets would not provide a key for decoding the data, creating a major obstacle for the team. Another issue with translating the data was that the majority of the brain's electrical impulses occur at frequencies below what the human ear can hear. Researchers say the frequency of the brain waves can reveal information about a person's mental and emotional state. For example, increased activity in the 8 to 13 hertz range can suggest a person is aware but relaxed, while activity at higher frequencies indicates alertness, agitation, tenseness, or fear. Colored spotlights during Kleijn's performances also indicate such states as meditation, engagement, excitement, and frustration. Kleijn notes that during rehearsals she needed to become accustomed to the sound of her brain and learn to control it.
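
For readers curious how sub-audible brain activity can be mapped into sound at all, here is a minimal sonification sketch: it scales a brain-wave frequency (such as 10 Hz alpha) up into the audible range and uses band power to set loudness. The scaling factor and the plain sine tone are assumptions; the actual performance used Max/MSP, not this code.

    # Minimal sonification sketch: scale EEG band activity (below the audible
    # range) into audible pitch and loudness. The mapping and the sine tone
    # are assumptions; the performance itself used Max/MSP.
    import math

    SAMPLE_RATE = 44100

    def band_to_tone(band_hz, band_power, duration=0.5):
        """Map a brain-wave frequency (e.g., 8-13 Hz alpha) to an audible tone.

        The frequency is scaled by a factor of 50, so 10 Hz alpha becomes a
        500 Hz tone; band power (0..1) controls the amplitude.
        """
        freq = band_hz * 50.0
        n = int(SAMPLE_RATE * duration)
        return [band_power * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                for i in range(n)]

    # Relaxed-but-aware alpha activity around 10 Hz, moderate power.
    samples = band_to_tone(band_hz=10.0, band_power=0.4)
    print(f"generated {len(samples)} samples, peak {max(samples):.2f}")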


Exoskeleton Will Carry Closer Touch With Digital World
Phys.Org (10/26/14) Nancy Owano

Roboticists in China have developed Dexmo, a hand-capturing device that uses a mechanical exoskeleton. The exoskeleton transmits a person's finger movement to several rotational sensors, and the data is transmitted directly to a device or back to a host computer equipped with a software development kit (SDK). The SDK uses a kinematics algorithm to recreate a hand model. The Dexmo Classic is a wearable exoskeleton system that captures 11 degrees of freedom of hand motion, while the F2 has the additional capability of providing digital force feedback. The researchers say they used relatively inexpensive rotational sensors along with injection-molded plastic parts to curb costs. Dexmo also comes with an SDK for virtual reality (VR) developers that provides examples with the group's built-in hand regeneration algorithms. Dexmo is wireless, transmitting data through Bluetooth serial support, and the researchers note it operates even next to a strong magnetic field because it does not use magnetometers. The researchers say they began working on Dexmo after noticing the lack of affordable hand motion-capturing devices in the fields of robotics and virtual reality.
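
Recreating a hand model from rotational-sensor readings comes down to forward kinematics: chaining joint rotations along each finger's segments. The sketch below shows the principle for a single finger in two dimensions; the segment lengths and the planar simplification are assumptions, not Dexmo's actual SDK.

    # Illustrative forward kinematics: reconstruct a fingertip position in 2D
    # from joint angles reported by rotational sensors. Segment lengths and
    # the planar simplification are assumptions, not Dexmo's SDK.
    import math

    SEGMENTS_MM = [40.0, 25.0, 20.0]  # assumed phalanx lengths for one finger

    def fingertip_position(joint_angles_deg, segments=SEGMENTS_MM):
        """Chain planar rotations: each joint angle is relative to the
        previous segment, and each segment extends from the previous joint."""
        x = y = 0.0
        heading = 0.0
        for angle, length in zip(joint_angles_deg, segments):
            heading += math.radians(angle)
            x += length * math.cos(heading)
            y += length * math.sin(heading)
        return x, y

    # A partly curled finger: 30 degrees at each of the three joints.
    print(fingertip_position([30.0, 30.0, 30.0]))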


Researchers in Affective Computing Find New Way to Recognize Emotion in Text
FierceBigData (10/22/14) Pam Baker

Researchers at Hefei University of Technology's Key Laboratory of Affective Computing and Advanced Intelligent Machine are proposing a way to recognize emotion in unstructured data, which can take the form of texts, emails, or blog posts. The researchers, led by Changqin Quan and Fuji Ren, call their approach "multi-label textual emotion recognition." The approach is unique because it takes "into account the full emotional context of a sentence, rather than being purely 'lexical,'" says Taylor & Francis Group's Ben Hudson. "Uniquely, Quan and Ren's method allows its users to recognize indirect emotions, emotional ambiguity, or multiple emotions in the subject text." The researchers say their "model generates an emotion vector for each emotional word in a sentence by analyzing semantic, syntactic, and contextual features." They say the emotion vector records the basic emotions contained in each word. Hudson notes that "each word is given an 'emotional state' represented by eight binary digits, each corresponding to one, or more, of eight key emotions: expectation, joy, love, surprise, anxiety, sorrow, anger, and hate. The final 'result' is based on 'the state of combined expressions of emotions' in the sentence as a whole."
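
The eight-bit representation Hudson describes is easy to picture in code. The sketch below assigns each word an eight-element binary vector over the eight emotions and combines the word vectors into a sentence-level state; the toy lexicon and the use of a simple bitwise OR for the "combined expressions" step are assumptions, not Quan and Ren's actual model.

    # Hedged sketch of the representation described above: each emotional
    # word gets an 8-bit vector over eight basic emotions, and a sentence
    # state is formed by combining the word vectors. The lexicon and the
    # bitwise-OR combination are assumptions.
    EMOTIONS = ["expectation", "joy", "love", "surprise",
                "anxiety", "sorrow", "anger", "hate"]

    # Toy emotion lexicon: word -> 8-bit vector (1 = emotion present).
    LEXICON = {
        "hope":     [1, 0, 0, 0, 0, 0, 0, 0],
        "thrilled": [0, 1, 0, 1, 0, 0, 0, 0],
        "worried":  [0, 0, 0, 0, 1, 0, 0, 0],
    }

    def sentence_emotion(words):
        state = [0] * len(EMOTIONS)
        for w in words:
            vector = LEXICON.get(w.lower(), [0] * len(EMOTIONS))
            for i, bit in enumerate(vector):
                state[i] |= bit
        return {e: b for e, b in zip(EMOTIONS, state) if b}

    print(sentence_emotion("I had hope but I am worried".split()))
    # -> {'expectation': 1, 'anxiety': 1}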


Will Twitter Revolutionize How Cities Plan for the Future?
Next City (10/21/14) Rebecca Tuhus-Dubrow

Data generated by Twitter can be used to gain insight into cities and even help plan them, says Tufts University professor Justin Hollander. Hollander recently launched Tufts' Urban Attitudes Lab to study big data's potential impact on planning and policy. "A lot of what I'm trying to uncover is, where are people happy? What makes them happy?" he says. "This has the potential to revolutionize how local governments in particular plan for the future." One study completed by Hollander sought to compare feelings expressed in Twitter posts with those expressed in the minutes of civic meetings. He collected 122,187 tweets geotagged to New Bedford, MA, from February 2014 through April 2014 and examined them using an automated tool intended to classify feelings as positive or negative. Approximately 7 percent of the tweets were categorized as positive and 5.5 percent as negative; in the meeting minutes, the corresponding figures were 1.7 percent and 0.7 percent, respectively. He then focused on terms related to civic life, comparing the frequency of 24 keywords in the tweets and the minutes, including "school," "health," "safety," "parks," and "children." Hollander believes such social media-based information can add another layer to the bigger picture and serve as a starting point for asking deeper questions.
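
A heavily simplified sketch of the two analyses described above follows: tallying positive versus negative tweets with a small word list, and counting civic-life keywords. The word lists and sample tweets are invented; Hollander's study used an automated sentiment tool rather than this lexicon approach.

    # Simplified sketch: classify tweets as positive/negative with a small
    # word list and count civic-life keywords. Word lists and tweets are
    # assumptions, not the study's actual tool or data.
    POSITIVE = {"happy", "love", "great", "good"}
    NEGATIVE = {"sad", "angry", "terrible", "bad"}
    CIVIC_KEYWORDS = {"school", "health", "safety", "parks", "children"}

    tweets = [
        "Great day at the parks with the children",
        "Terrible traffic near the school again",
        "New Bedford waterfront looking good",
    ]

    pos = neg = 0
    keyword_counts = {k: 0 for k in CIVIC_KEYWORDS}
    for tweet in tweets:
        words = {w.strip(".,!?").lower() for w in tweet.split()}
        pos += bool(words & POSITIVE)
        neg += bool(words & NEGATIVE)
        for k in CIVIC_KEYWORDS & words:
            keyword_counts[k] += 1

    print(f"positive: {pos}/{len(tweets)}, negative: {neg}/{len(tweets)}")
    print(keyword_counts)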


EAR-IT: Using Sound to Picture the World in a New Way
CORDIS News (10/13/14) No. 149053

A European research project called EAR-IT is examining the use of acoustics to collect data. The project, funded by the European Union's (EU) ICT 7th Framework Program as part of the FIRE initiative (an EU future Internet concept), took place in the city of Santander, Spain, which has embedded 12,000 sensors in lamp posts and other places. EAR-IT tested such outdoor applications as traffic-flow monitoring at a junction near the city's hospital. "EAR-IT has set up sensors which 'hear' sirens and then trigger other sensors to track the vehicle," says project coordinator and Universidade Nova de Lisboa professor Pedro Malo. "This data is then used to change traffic lights in the ambulance's favor." To verify that the collected data is accurate, EAR-IT compared its acoustic measurements with data from electromagnetic-induction sensors on two streets. In the future, the acoustic sensors could potentially be used in conjunction with pollution detectors to improve air quality in cities and for monitoring the safety of older people; the sensors could transmit a distress message if someone falls, for example. The researchers also are using acoustic data in the home to save energy by determining what is going on in a room and how many people are in it. "Windows can be made to open, curtains close, and lights and heating turn on and go off automatically," Malo says.
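
One way to picture the siren-detection step is as a check on how much of a microphone signal's energy falls in a siren-like frequency band. The sketch below is a hypothetical illustration using a synthetic signal; the band edges, threshold, and detection method are assumptions, not EAR-IT's deployed system.

    # Hypothetical acoustic-event detection: if energy in a siren-like band
    # exceeds a threshold, raise an event that downstream logic (such as
    # traffic-light control) can act on. Band, threshold, and the synthetic
    # signal are assumptions.
    import numpy as np

    SAMPLE_RATE = 8000
    SIREN_BAND_HZ = (600, 1500)  # assumed band where siren energy concentrates
    THRESHOLD = 0.5              # assumed fraction of total spectral energy

    def siren_detected(samples, rate=SAMPLE_RATE):
        spectrum = np.abs(np.fft.rfft(samples)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        band = (freqs >= SIREN_BAND_HZ[0]) & (freqs <= SIREN_BAND_HZ[1])
        return spectrum[band].sum() / spectrum.sum() > THRESHOLD

    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE        # one second of audio
    siren = 0.5 * np.sin(2 * np.pi * 1000 * t)      # crude 1 kHz "siren" tone
    noise = 0.1 * np.random.randn(SAMPLE_RATE)
    print(siren_detected(siren + noise))            # expected: True
    print(siren_detected(noise))                    # expected: False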


How We'll Stereotype Our Robot Coworkers
HBR Blog Network (10/02/14) Taezoon Park

Robots are expected to arrive in the workplace in the near future to serve as receptionists, shopping assistants, waiters, bellhops, and personal assistants, writes Soongsil University professor Taezoon Park, who runs the Human and Complex Technology Systems Research Lab in Singapore and South Korea. He says a robot receptionist at Kita-Kyushu airport in Japan has been developed to resemble a well-known Japanese animation character and answers simple requests and questions. Park says because humans will perceive robots as social actors, robot designers will need to ensure that robots have a personality. He recently led an experiment on robot-human interactions that found participants rated security robots with stereotypically male names and voices as more useful than female ones. Research by others has found the public prefers female-gendered robots working in home settings. For roles where co-workers would have few preconceptions, such as new office roles that are only possible with a robot, social psychology offers models of attraction based on both similarity and complementarity. Developers would therefore need to consider whether a robot colleague should have a personality similar to those of its team members or a different one that complements the group. Park says human-robot relationships are expected to evolve over time, similar to human relationships.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.



