Welcome to the May 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in more than 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and on joining ACM, please visit the ACM website.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


Touch and Feel Over Distance: The Next Trend in ICT?
CORDIS News (04/22/16)

The European Union-funded Haptic Signal Processing and Communications (PROHAPTICS) project has been devising novel technologies and techniques for haptic communication. In an interview, project coordinator Eckehard Steinbach says he envisions a time when haptic feedback enables users to interact remotely with objects and people as if they were physically present. "The long-term goal of this research is to make the teleoperation fully transparent, which means the user will no longer be able to tell if a task is carried out locally or remotely through the [human-machine interface]," Steinbach notes. Among the tangible benefits Steinbach cites for haptic communication-enabled human-machine interaction are teleoperation systems that support telesurgery and telemaintenance, online stores with tactile displays that let customers feel merchandise before buying, and interactive videoconferencing. "PROHAPTICS has developed a series of algorithms, codecs, and protocols which enable haptic communication across distances for both haptic modalities [kinesthetic and tactile]," Steinbach says. "The solutions developed are human-centric in the sense that they consider and exploit the limitations of the human haptic perception system." Steinbach describes PROHAPTICS' mathematical model as an integration of many known constraints of human haptic perception, together with other limitations, into a common framework that can be used to decide whether or not a change in a haptic signal is perceivable by a human. "Based on this model, highly efficient data reduction schemes can be designed," he says.
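The summary does not spell out the data-reduction schemes, but a standard approach in kinesthetic coding is a perceptual deadband built on Weber's law: a new haptic sample is transmitted only when it differs from the last transmitted value by more than a just-noticeable fraction. The sketch below illustrates that idea; the 10-percent Weber fraction, the synthetic force stream, and the function names are illustrative assumptions, not details of the PROHAPTICS codecs.

```python
# Minimal sketch of a perceptual-deadband filter for kinesthetic data reduction.
# The 10% Weber fraction and the force-sample stream are illustrative assumptions,
# not values taken from the PROHAPTICS project.

def deadband_filter(samples, weber_fraction=0.1):
    """Yield only the samples whose change from the last transmitted value
    exceeds the assumed just-noticeable difference (Weber fraction)."""
    last_sent = None
    for t, value in samples:
        if last_sent is None or abs(value - last_sent) > weber_fraction * abs(last_sent):
            last_sent = value
            yield (t, value)   # transmit; the receiver holds or extrapolates between updates

# Example: a slowly drifting 1 kHz force signal is reduced to the perceptually relevant updates.
force_stream = [(t, 1.0 + 0.001 * t) for t in range(1000)]
sent = list(deadband_filter(force_stream))
print(f"transmitted {len(sent)} of {len(force_stream)} samples")
```

Between transmitted updates, a receiver would typically hold or extrapolate the last value, which is what makes such a scheme lossy in signal terms yet, ideally, transparent to human perception.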


Singapore Is Taking the 'Smart City' to a Whole New Level
The Wall Street Journal (04/24/16) Jake Maxwell Watts; Newley Purnell

Singapore's Smart Nation program includes an initiative to collect data on daily living that exceeds all previous efforts. An undetermined number of sensors and cameras will be deployed across Singapore to measure everything from flood levels to vehicle traffic to crowd density to the cleanliness of public places. The information will be housed in government data centers, and some of it will be incorporated into the three-dimensional, real-time Virtual Singapore platform. Officials say the smart city program is intended to enhance government services via technology, better connect citizens, and encourage private-sector innovation. Despite assurances from the government that the sensor-collected data will be anonymized as much as possible, "the big, big elephant in the room is protection of privacy and ensuring security," notes Singapore foreign affairs minister Vivian Balakrishnan. The core component of the Smart Nation program is a predictive system into which sensor data on the exact dimensions of buildings, the placement of windows, and the types of construction materials will be fed. Officials say the resulting data-rich map will aid in decisions such as rerouting buses based on where riders are congregating, modeling how new skyscrapers might affect wind-flow patterns or telecommunications signals, and plotting the potential spread of infectious diseases. "Singapore is doing [smart city implementation] at a level of integration and scale that no one else has done yet," says Aecom's Guy Perry.


Smartwatches Still Lack Killer App
InformationWeek (04/26/16) Thomas Claburn

Smartwatches require a compelling app to make them indispensable, with professor Chris Harrison of Carnegie Mellon University's (CMU) Human-Computer Interaction Institute emphasizing a need for more usefulness and usability. "If you ask people what the killer app is [for a smartwatch], for most people it's telling time," he says. Harrison suggests equipping smartwatches with unique capabilities, including access to the wearer's bioinformatics and activity sensing based on the wearer's arm movements. For example, Harrison and colleagues have demonstrated a smartwatch can recognize the electronic objects a user touches by tapping into their electromagnetic emissions. With collaborators from Disney Research Pittsburgh, CMU researchers have created a method for reading the electronic signatures of electromechanical items. That information can help smartwatch apps infer the context in which the wearer is acting and enhance the user experience. Smartwatch augmentation also could be facilitated via access to a richer environmental dataset, with Harrison citing as an example the transformation of light bulbs into devices that collect information about the local area, which could further improve smartwatch apps' contextual awareness. Harrison says although there is no single killer smartwatch app, "we're getting to the point where smartwatches will have that payoff."
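The summary does not describe how the electromagnetic signatures are matched, but one plausible reading is a simple spectral-fingerprint classifier: sample the EM signal picked up through the wearer's body, reduce it to a normalized frequency-domain signature, and compare it against signatures recorded from known objects. The sketch below follows that reading; the sample length, feature choice, and reference library are assumptions, not details of the CMU/Disney Research system.

```python
# Minimal sketch of classifying a touched object by its electromagnetic signature.
# The sampling window, feature choice, and reference library are illustrative
# assumptions, not details of the CMU/Disney Research system.
import numpy as np

def em_signature(samples, n_bins=64):
    """Reduce a window of EM samples to a normalized magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples, n=2 * n_bins))[:n_bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def classify(samples, library):
    """Nearest-neighbor match against previously recorded object signatures."""
    sig = em_signature(samples)
    return min(library, key=lambda name: np.linalg.norm(sig - library[name]))

# Hypothetical reference library built from earlier touches of known devices.
rng = np.random.default_rng(0)
library = {name: em_signature(rng.normal(size=256)) for name in ("laptop", "drill", "fridge")}
print(classify(rng.normal(size=256), library))
```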


Haptic Glove Joins the Race to Make Augmented Reality Tactile
The Stack (UK) (04/20/16) Martin Anderson

Auckland University of Technology researchers have developed a haptic glove that provides tactile feedback and can control video games and game environments. Grafted onto the glove is a custom-fabricated printed circuit board built around an Arduino microcontroller, along with a microelectromechanical-systems inertial measurement unit. The glove also is equipped with tubing featuring an infrared emitter at one end and an infrared receiver at the other, which signals changes in resistance as the tube is bent. Later iterations of the glove were tested against a basic flight simulator game in which hand and finger movements were used to fire weapons and maneuver the plane, with vibratory feedback imparted to the fingertips. The researchers say they are interested in applying force feedback and low-level electrical muscle stimulation in future experiments with wearable haptics. "Whilst not strictly required for a game controller, the ability to provide force-feedback would allow the haptic glove concept to be extended to a broader range of virtual environments," the researchers note. They say vibrotactile haptic feedback can be helpful for relaying non-haptic information to a user, while also demonstrating that, in many situations, true and realistic haptic simulations are unnecessary.
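As a rough illustration of how such a bend-sensing tube might be used, the sketch below maps a raw analog reading from the infrared receiver onto a normalized finger-flex value and a vibrotactile drive level; the ADC range, calibration endpoints, and trigger threshold are assumptions, not the Auckland team's firmware.

```python
# Minimal sketch of turning a bend-sensor reading into a flex estimate and a
# vibrotactile level. The 10-bit ADC range, calibration endpoints, and linear
# mapping are illustrative assumptions.

def flex_fraction(adc_value, straight=900, fully_bent=300):
    """Map the IR receiver's ADC reading (received light drops as the tube bends)
    onto a 0.0 (straight) .. 1.0 (fully bent) range."""
    span = straight - fully_bent
    return min(1.0, max(0.0, (straight - adc_value) / span))

def vibration_duty(flex, trigger_threshold=0.8):
    """Drive fingertip vibration proportionally, saturating once an assumed
    'trigger pulled' threshold is crossed (e.g., firing in the flight simulator)."""
    return 255 if flex >= trigger_threshold else int(255 * flex)

for reading in (900, 700, 500, 320):
    f = flex_fraction(reading)
    print(f"ADC={reading:4d}  flex={f:.2f}  PWM={vibration_duty(f)}")
```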


NYU-X Lab: Artificial Intelligence in Education--Imagining and Building Tomorrow's Cyber Learning Platform Today
NYU News (04/13/16) Christopher James

"Wicked challenges"--problems so puzzling in the realm of social and organizational planning that they defy conventional linear, analytical, systems-engineering mitigation strategies--call for a collaborative, computer-assisted, interconnected framework of people working together, says New York University (NYU) professor Winslow Burleson. "Advanced cyber-learning environments that involve virtual reality and artificial intelligence [AI] innovations...can facilitate the explorations and conversations needed to solve society's 'wicked challenges,'" Burleson says. "Cyber learning is an essential tool for envisioning, refining, and creating a 'utopian' world in which we are actively 'learning to be'--deeply engaged in intrinsically motivating experiences that empower each of us to reach our full potential." Burleson envisions AI in education evolving by 2041 into a central contributor to democratizing learning and active citizenship for all. He and NYU doctoral candidate Armanda Lewis anticipate a bundled, ever-developing kit of integrated cyber-tools that connects disparate groups and individuals, bringing them together in a combined real and virtual cyber-social-physical environment currently in prototype form at the NYU-X Lab. The platform will support "transdisciplinary collaborations, integrated education, research, and innovation by providing a networked software/hardware infrastructure that can synthesize visual, audio, physical, social, and societal components," Burleson says. Lewis says the concept entails "creating a global meritocratic network using advanced versions of the NYU-X Lab Holodeck to create scenarios and engage in real-world problem solving."


Robot Abuses Google's Smart Textiles to See How Much They Can Take
Technology Review (04/21/16) Rachel Metz

Google is using a robot arm to test the durability and the gesture-recognition performance of the interactive fabrics developed by its Project Jacquard researchers. The textiles are woven with grids of conductive yarns created via conventional fabrication techniques. The experiments, to be detailed in a presentation this month at the ACM CHI 2016 conference in San Jose, CA, involve a robot arm that repeatedly swipes a piece of Google's fabric overlaying a "flexible sponge foam to best simulate body flexibility." After 10 hours of this task, the Project Jacquard researchers estimated a lifespan of three years and 200 days of usage for a swatch of the interactive fabric, assuming it was swiped 200 times a day, for a total of 12,000 swipes. At that point, the gesture-recognition rate was slightly more than 95 percent and the fabric showed no visible damage, according to the researchers. The researchers say that rate did not change even after an additional 30,000 swipes, but they note that in a second experiment, in which patches of interactive fabric attached to a jacket sleeve were tested with humans performing gestures, the overall recognition rate was about 77 percent. Project Jacquard leader Ivan Poupyrev reports more recent work has improved that rate.


Metadating Helps You Find Love Based on Your Everyday Data
New Scientist (04/20/16) Aviva Rutkin

Although some online dating sites match potential mates using algorithms, an experiment in "metadating" by Newcastle University researchers instead used profiles compiled from the everyday data gathered by daters' phones and computers. "There's a bit of a mismatch between a data-led view of the world--which is very dry and mechanical--and how we view ourselves," says project leader Chris Elsden. "Can we give people more control over [their everyday data], make it more ambiguous or playful?" Before the experiment, daters were asked to fill out profiles detailing numerical values such as shoe size, the farthest distance they had traveled from home, the earliest and latest times of day they had sent an email in the past month, and their heart rate. The seven male and four female participants began the event by poring over each other's anonymized profiles, and then paired up for four-minute conversations in a speed-dating format. The researchers listened as the participants described themselves in terms of the data and statistics they had put down. "Offering a way for people to feel like they have some control, or can be creative or thoughtful about the data they're producing, is really important," says University of Pennsylvania researcher Jessa Lingel. The Newcastle team will present the results of the project this month at the ACM CHI 2016 conference in San Jose, CA.


In Gaming, Player Behavior Reflects Roles--Even When No Roles Are Given
NCSU News (04/20/16) Matt Shipman

Player behavior in narrative role-playing games (RPGs) reflects specific character roles, according to North Carolina State University (NCSU) researchers. An NCSU team created a simple interactive game and tracked the gameplay of 210 people. Of the participants, 78 were assigned the role of fighter, mage, or rogue, 91 were allowed to choose from the three roles, and 41 were given no role--they simply began gameplay. "We wanted to know how, if at all, having a role influenced player behavior," says NCSU Ph.D. student Ignacio Dominguez, the study's lead author. "We also wanted to know if it mattered whether the role was assigned versus selected by the player." The researchers found players' behavior was consistent with their role, regardless of whether it was assigned or chosen, and that even players given no role tended to play in a way consistent with a single role. "The results strongly support the idea that players make choices based on their character's role, even if they didn't pick the role," says NCSU professor David Roberts. The findings also suggest game designers may want to focus content development on actions consistent with character roles, and that researchers need to account for roles within games. The NCSU team's study will be presented this month at the ACM CHI 2016 conference in San Jose, CA.


Researchers Develop Magnifying Smartphone Screen App for Visually Impaired
Phys.org (04/22/16) Suzanne Day

Researchers at the Schepens Eye Research Institute of Massachusetts Eye and Ear/Harvard Medical School have developed a smartphone application that transfers a magnified smartphone screen to Google Glass, which users can control with head movements to view the corresponding portion of the magnified screen. The technology could benefit visually impaired users, who often find it difficult to use a smartphone's built-in zoom feature because of the loss of context. "When people with low visual acuity zoom in on their smartphones, they see only a small portion of the screen, and it's difficult for them to navigate around--they don't know whether the current position is in the center of the screen or in the corner of the screen," says Schepens Eye Research Institute scientist Gang Luo. The researchers tested the technology on two groups--one using the Google Glass app and the other using the smartphone's built-in zoom feature--and measured how long it took participants to complete certain tasks. The experiment demonstrated the head-based navigation method reduced the average trial time by about 28 percent compared with conventional manual scrolling. The next stage for the project is to incorporate a wider range of gestures on Google Glass for interacting with smartphones. Another planned focus is determining the efficacy of head-motion-based navigation compared with other popular smartphone accessibility features, such as voice-based navigation.
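The article does not give the mapping from head movement to screen position, but the interaction can be pictured as panning a small viewport over a magnified image of the phone screen in proportion to head rotation. The sketch below illustrates one such mapping; the screen dimensions, zoom factor, and degrees-to-pixels gain are assumptions rather than parameters of the Schepens/Harvard prototype.

```python
# Minimal sketch of head-motion viewport panning over a magnified phone screen.
# The screen size, zoom factor, and degrees-to-pixels gain are illustrative
# assumptions, not parameters of the Schepens/Harvard prototype.

SCREEN_W, SCREEN_H = 1080, 1920   # source phone screen, in pixels
VIEW_W, VIEW_H = 640, 360         # what the Glass display shows at once
ZOOM = 3.0                        # magnification applied to the phone screen
GAIN = 40.0                       # pixels of pan per degree of head rotation

def viewport(yaw_deg, pitch_deg):
    """Return the top-left corner of the visible window in the zoomed image,
    centered when the head is level and clamped at the image edges."""
    zoom_w, zoom_h = SCREEN_W * ZOOM, SCREEN_H * ZOOM
    cx = zoom_w / 2 + yaw_deg * GAIN
    cy = zoom_h / 2 + pitch_deg * GAIN
    x = min(max(cx - VIEW_W / 2, 0), zoom_w - VIEW_W)
    y = min(max(cy - VIEW_H / 2, 0), zoom_h - VIEW_H)
    return int(x), int(y)

print(viewport(0, 0))     # looking straight ahead: middle of the zoomed screen
print(viewport(-25, 10))  # turn left, tilt down: pan toward that corner
```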


Robots That Act Differently When You're Around
The Atlantic (04/19/16) Adrienne LaFrance

Robots are being trained to tailor their behavior to individual humans in facilities such as the University of California, Berkeley's Interactive Autonomy and Collaborative Technologies Lab. "Most of robotics is focused on how to get robots to achieve the task, and obviously this is really, really important," says lab director Anca Dragan. However, Dragan says, the researchers are focusing on "how these algorithms need to change when robots are actually out there in the real world. How they need to coexist with people, direct people, and so on." She says the robots base their behavior on models of human behavior, noting that "by learning how humans act...the robot is indirectly learning how to behave itself." In one example, Dragan and her team taught an algorithm to watch drivers on a highway and then tested how it would behave at a four-way stop; instead of sitting and waiting for the other cars to go first, the robot backed up slightly to signal its intentions to the human driver, based on its observation that people often accelerate when there is more space between their car and other vehicles. Robots that learn from experience relieve engineers of the burden of manually coding algorithms for unpredictable situations. However, machine learning also means accounting for the fact that not every human driver acts identically, and determining what kinds of micro-actions signal whether a person is likely to behave one way or another.


Victorian Age Technology Can Improve Virtual Reality, Study Finds
Dartmouth College (04/19/16) John Cramer

The Victorian Age ophthalmological method of monovision can enhance user performance in virtual reality (VR) environments, according to Dartmouth College/Stanford University research to be presented this month at the ACM CHI 2016 conference in San Jose, CA. The technique could help resolve the mismatch between convergence and accommodation--the visual cues human eye muscles send to the brain as the eyes fixate and focus on objects in three dimensions--that plagues most stereoscopic displays. The researchers used the Oculus Rift VR headset to engineer a prototype system with focus-tunable liquid lenses enabling a range of optical modifications. The system supports the creation of adaptive focus cues, resulting in higher user preferences and improved performance in VR. It also permits testing of an enhancement that exploits monovision, in which each of an observer's eyes is focused to a different distance. "In addition to showing how adaptive focus can be implemented and can improve virtual reality optics, our studies reveal that monovision can also improve user performance in terms of reaction times and accuracy, particularly when simulating objects that are relatively close to the user," says Stanford researcher Robert Konrad. Dartmouth professor Emily Cooper says practical optical VR solutions are critical to making the technology more comfortable and immersive. "Our work shows that monovision has the potential to be one such solution," she notes.
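Monovision simply means the two eyes are corrected for different distances, which a focus-tunable lens can implement directly: the required lens power in diopters is the reciprocal of the focal distance in meters. The sketch below shows such an assignment; the particular near and far distances are illustrative assumptions, not the settings used in the Dartmouth/Stanford prototype.

```python
# Minimal sketch of a monovision assignment for focus-tunable VR lenses:
# each eye's lens is driven so that it focuses at a different fixed distance.
# The chosen distances (0.5 m near / 2.0 m far) are illustrative assumptions.

def diopters(distance_m):
    """Lens power (in diopters) needed to focus at a given distance in meters."""
    return 1.0 / distance_m

def monovision_setting(near_eye_m=0.5, far_eye_m=2.0):
    """Return per-eye lens powers: one eye tuned for near, the other for far."""
    return {"left": diopters(near_eye_m), "right": diopters(far_eye_m)}

print(monovision_setting())   # {'left': 2.0, 'right': 0.5}
```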


Future Co-Working: A Space Where Humans and Robots Will Learn From Each Other?
Deutsche Welle (Germany) (04/26/16) Zulfikar Abbany

Technical University Dresden software engineers envision human-robot collaboration via wearable technology that enables humans to train the machines. In an interview, Dresden researcher Jan Falkenberg says they have developed smart clothing that controls design and construction robots while recording their movements. He says the goal is to teach the robots to perform tasks through intuitive learning, and then to automate the process. One piece of intelligent apparel Falkenberg describes is a glove connected via Bluetooth to a smart jacket that communicates with the robot over Wi-Fi. Haptic feedback in the glove ensures a flow of intuitive learning to the robot. "Once we've taught the robot, we can enable very simple co-working, because the robot is aware of the human nearby," Falkenberg says. "And because of this, the robot can adjust its movements, because it knows, 'ah, there's a tall person, wearing a smart jacket.' And the human can interact with the robot by making simple gestures and the gestures are recognized, and the robot can continue [its] work because it recognizes the gestures."


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.



