Welcome to the January 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members from over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


VR Glove Powered by Finger Motions
IEEE Spectrum (12/21/15) Jeremy Hsu

Researchers from Italy and the U.S. have developed a prototype virtual-reality glove featuring electrically conductive filaments sewn into its nylon fabric to ensure maximum flexibility for the wearer. The "GoldFinger" glove's piezoelectric transducers convert the mechanical motions of the user's fingers into electrical power, demonstrating how the technology could extend the glove's battery charge or potentially shrink the battery's size. "The use of a glove requires comfort and reliability and these requirements are not less important than the increased energetic autonomy of the device," notes Polytechnic University of Turin engineer Giorgio De Pasquale. "This also makes the difference between GoldFinger and other [human-machine interface (HMI)] gloves that use wires to send data, or large and heavy batteries for the supply." GoldFinger's rigid electronic components are bundled within an aluminum case on the back of the glove, while each finger features an optical port that emits light from a light-emitting diode so a computer can track the glove's movements. Software written by De Pasquale's brother translates the finger motions into system commands. Moreover, the action of opening and closing the glove's fingers for 10 seconds generates about 32 mW of power on average. The developers believe GoldFinger and similar HMI gloves could be useful in applications such as design and three-dimensional modeling, remote robot operation, and wearable diagnostic tools for surgeons.
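A quick back-of-the-envelope calculation shows what the reported 32-mW figure could mean in practice. The sketch below uses the article's harvesting number; the session length and battery capacity are illustrative assumptions, not GoldFinger's specifications.

    # Rough energy-harvesting arithmetic for a GoldFinger-style glove.
    # The ~32-mW average comes from the article; the session length and
    # battery size below are illustrative assumptions.
    harvest_power_w = 0.032            # ~32 mW while the fingers open and close
    active_seconds = 30 * 60           # assume 30 minutes of hand motion
    harvested_j = harvest_power_w * active_seconds

    battery_j = 0.100 * 3.7 * 3600     # a small 100-mAh, 3.7-V battery (~1,332 J)
    print(f"Harvested {harvested_j:.0f} J, "
          f"about {100 * harvested_j / battery_j:.1f}% of the battery")

At these rates the harvest amounts to a few percent of a small battery per half hour of motion, consistent with the developers' framing of the transducers as a way to stretch battery life or shrink the battery rather than replace it.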


NTU Scientists Unveil Social and Telepresence Robots
Nanyang Technological University (Singapore) (12/29/15) Lester Kok

Nadia Thalmann, director of Nanyang Technological University's (NTU) Institute for Media Innovation, has helped spearhead the development of social robots. One such machine is Nadine, a humanoid "receptionist" that retains memory, recognizes people it has met, and displays emotions dictated by the conversations it has with people. "Over the past four years, our team at NTU have been fostering cross-disciplinary research in social robotics technologies--involving engineering, computer science, linguistics, psychology, and other fields--to transform a virtual human, from within a computer, into a physical being that is able to observe and interact with other humans," Thalmann says. "This is somewhat like a real companion that is always with you and conscious of what is happening." Another NTU robot, EDGAR, is a telepresence robot programmed to project the gestures of its human operator. The operator controls EDGAR remotely anywhere in the world by standing before a Webcam, making upper body movements the robot emulates while the user's face and expressions are displayed on EDGAR's face in real time. The robot also can autonomously act out a script to deliver speeches, using an integrated Webcam to track the people it meets to converse with them. "In the future, a renowned educator giving lectures or classes to large groups of people in different locations at the same time could become commonplace" thanks to telepresence robots, says NTU professor Gerald Seet.


The New Story of Computing: Invisible and Smarter Than You
Co.Design (12/17/15) Mark Rolston

The increasing complexity of life facilitated by technology could be addressed by shapeless computers that support an invisible, more human-like user experience, writes Mark Rolston, cofounder and chief creative officer of argodesign. Rolston says cumbersome screen-based devices will eventually be replaced by more lightweight devices via a foundation established by four interface technologies. Voice control is one foundational technology, while augmented reality (AR) promises to enable hands-free human-computer engagement. The vision of AR can be approximated by computerized projections onto surrounding surfaces, and as such projections become pervasive, people could interact with computers without a machine in hand. Virtual machines also hold the potential to radically change the human-computer interaction paradigm, because they would be cooperatively tasked based on the needs of the moment, operating in concert to serve the user. Early progress toward this concept can be seen in the current seamless interaction between laptops, phones, and watches and shared cloud services. As the shapeless computing of the future moves forward, researchers can extend this "cloud thinking" to the interface and build virtual entities composed of a growing array of machines within each user's reach and permission. The Internet of Things could be practically realized by the concept of "Smart-Dumb Things," but Rolston suggests none of these milestones can be achieved without advanced artificial intelligence and a new computer interface that uses completely new interaction cues.


The Japanese Professor Who's Spent Three Decades Perfecting a Human Avatar
Motherboard (12/23/15) Emiko Jozuka

For more than 30 years, Japanese scientist Susumu Tachi has explored the field of "telexistence," in which a human being mentally inhabits a robot or machine avatar, and applied it to the creation of anthropomorphic robots called TELESAR. The latest product of Tachi's research, the TELESAR V, interfaces with a human operator wearing a three-dimensional (3D) virtual-reality headset and haptic gloves, which relay what the robot's camera eyes and fingertip sensors capture. Tachi notes telexistence-controlled machines have demonstrated remarkable fidelity and dexterity in the performance of various tasks, with potentially limitless uses in industry, healthcare, and other sectors. Researchers at Tachi's Cyber Living lab say telexistence offers superior performance to teleoperation, which requires operators to learn to use controllers without a physical connection to the machine itself. Tachi's own research also focuses on using telexistence to enhance social interactions, as he expects people to eventually crave more physical communication devices, such as those that impart temperature, vibrations, and pressure to communicators. Tachi cites TELESAR IV, which also displays its controller's face, as an example of this principle. He notes when people at a social function see a human operator's face on the robot with which they are engaging, the robot attains a more friendly, human-like presence.


Where Is Wearable Computing Heading in 2016?
CIO Australia (12/18/15) Rebecca Merrett

Wearable computing will make progress in contextual applications and industry usage in the coming year, according to analysts. Gartner analyst Adrian Leow foresees wearables driving contextual apps that deliver information and options to users. "These apps are going to be accelerated by wearables, because they are going to be interacting with these apps," Leow predicts. "They will serve up information that you didn't necessarily think to." Leow also envisions more smart clothing uses opening up in 2016, as factory owners begin to consider how sensors on apparel can evaluate an employee's health and fatigue risks, for example. Meanwhile, Frost & Sullivan analyst Arvind Arun expects head-mounted displays "to become more pervasive in the workplace and...transition from the 'testing' phase to the 'widespread implementation' phase." Arun points to logistics, repair and maintenance, and healthcare as sectors in which wearables are starting to catch on. Leow also thinks head-mounted displays will become more sophisticated as virtual objects are engineered to interact with physical surroundings; however, he notes "in terms of where head-mounted displays are going, it's more along the lines of where you layer over your current surroundings, so it's not going to be a full virtual reality."


Do Computers Need Pressure-Sensing Screens?
NextGov.com (12/15/15) Adrienne LaFrance

Pressure-sensitive screens incorporated into computers are in a nascent stage of investigation and development, and it remains to be seen if they will lead to a paradigm shift in the way people use computers. "Anyone who's a repeat early adopter of new iPhones shouldn't be surprised that support for the 6S's flagship feature [3D Touch] remains scattered close to three months in," notes The Verge's Jacob Kastrenakes. "It was the exact same way at this point when apps had to update for the iPhone 6's larger screen...and apps lagged behind on adding Touch ID support, too." Pressure-sensor technology such as 3D Touch can be more intuitive for some users than others. Originator's alphabet/reading application Endless Reader, which employs 3D Touch to change the look and sound of animated letters, offers a more intuitive experience for children, while adults need more training with it, according to the company's Joe Ghazal. Meanwhile, Smule's Yar Woo says the original version of the Magic Piano app was designed to play a note in response to finger pressure on the iPhone screen. However, because 3D Touch reports pressure as a curve over time rather than as an instantaneous reading, he says developers added 30 milliseconds of latency so the app can tell whether the user is about to press harder before it produces the note.
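Woo's description of the 30-millisecond deferral suggests a simple pattern: buffer the early part of the pressure curve before committing to a note. The sketch below is a minimal illustration of that idea; the function names, force normalization, and sampling scheme are all assumptions, not Smule's implementation.

    import time

    LATENCY_S = 0.030  # hold back ~30 ms, per the article, to see the pressure curve

    def note_velocity(samples):
        """Map the peak of a short force curve to a note velocity in 0..1.
        samples: (timestamp, force) pairs; force assumed normalized to 0..1."""
        return min(max(force for _, force in samples), 1.0)

    def on_touch_began(read_force):
        """Buffer force readings for LATENCY_S before sounding the note."""
        samples, start = [], time.monotonic()
        while time.monotonic() - start < LATENCY_S:
            samples.append((time.monotonic(), read_force()))
        return note_velocity(samples)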


Humans 2.0: How the Robot Revolution Is Going to Change How We See, Feel, and Talk
TechRepublic (12/22/15) Nick Heath

Robots could redefine what it means to be human via enhancement, which researchers at Britain's Bristol Robotics Laboratory are exploring. The lab's deputy director, Anthony Pipe, envisions a future marked by a more symbiotic human-robot relationship, in which robots augment human capabilities by working alongside people. Examples include a robot swallowed like a pill to provide diagnostic data to doctors while inside a patient's body, complete with remote tactile sensation so clinicians can "feel" for tumors and other abnormalities. Bristol researcher Ben Winstone says he is investigating magnetic resonance induction as a power source for the swallowable robot while it is in the body. Fellow Bristol scientist Paul Bremner is developing a rig that enables remote human control of a robot from a distance, with components that include an Oculus Rift virtual reality headset, a Microsoft Kinect, and an Aldebaran Robotics Nao android. Uses Bremner sees for the rig include virtual conference attendance and interaction via a robot substitute, with Bremner planning to add some semi-autonomy to the robot control, "so that if you're interacting with someone who is themselves an extrovert, when you do a gesture, the robot does a large gesture." Also under development are innovations for surgery, including an exoskeleton worn by a surgeon to control a robot hand with precisely mirrored movement.
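Bremner's example of matching gesture size to an extroverted interlocutor amounts to scaling the operator's motion before it reaches the robot. A minimal sketch of that mapping follows; the joint names, gain range, and extroversion score are invented for illustration.

    def adapt_gesture(joint_angles, extroversion):
        """Scale an operator's gesture for the robot: larger gestures for
        more extroverted interlocutors, per Bremner's description.
        joint_angles: joint name -> angle offset from rest (radians)
        extroversion: 0.0 (introvert) .. 1.0 (extrovert), an assumed scale."""
        gain = 0.8 + 0.6 * extroversion      # illustrative range: 0.8x to 1.4x
        return {joint: angle * gain for joint, angle in joint_angles.items()}

    # A modest wave becomes a sweeping one for an extroverted partner.
    print(adapt_gesture({"shoulder_pitch": 0.5, "elbow_roll": 0.9}, extroversion=0.9))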


How Researchers Are Turning 'Star Wars' Droids Into Reality
Notre Dame News (12/15/15)

University of Notre Dame (UND) researchers are working to make droid-style robots a reality, developing a software system that can detect when a person's focus shifts from the task at hand and then get that person to refocus. UND professor Sidney D'Mello wants to make computer interfaces intelligent enough to spot a user's waning attention and take action. The software tracks the user's eye movements with a commercial eye tracker, the user's facial features with a webcam, and the user's interaction patterns. If the system determines the user is not paying attention, it can pause the interaction, notify the user, plan a different type of interaction, or tag the interaction for future examination. Meanwhile, UND professor Laurel Riek is developing robots that can sense, respond, and adapt to humans. One project examines team coordination, and builds computer-vision and machine-learning algorithms that can sense how people coordinate their behaviors in real time. Meanwhile, researchers at UND's Locomotion and Biomechanics Laboratory are developing techniques to better enable Zero Moment Point-based walking robots, which rely on carefully choreographed walking motions, perfectly flat ground, and large feet to ensure stability. One new approach involves a mathematical technique that generates stable gaits even when the robot lacks ankles and has a foot the size of a single point, similar to a ballerina walking on her toes.
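The attention-monitoring loop D'Mello describes fuses several sensed signals and picks an intervention when attention drops. The sketch below is a conceptual outline only; the weights, threshold, and action names are assumptions, not the Notre Dame system.

    def attention_score(gaze_on_task, face_engagement, interaction_rate):
        """Fuse three normalized (0..1) signals, mirroring the article's
        three inputs: eye tracking, facial features, interaction patterns.
        The weights are illustrative."""
        return 0.5 * gaze_on_task + 0.3 * face_engagement + 0.2 * interaction_rate

    def respond(score, threshold=0.4):
        """On low attention, take one of the actions the article lists:
        pause, notify the user, switch interaction, or tag for review."""
        return "continue" if score >= threshold else "pause_and_notify"

    print(respond(attention_score(0.2, 0.3, 0.5)))   # -> pause_and_notify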


New Wearable Keyboards Could Be Sewn Into Clothing
LiveScience (12/15/15) Charles Q. Choi

Researchers have created wearable keyboards built from electronics sewn together like textiles that could enable a new form of human-machine interaction. The challenges in such a breakthrough include making the keyboards large enough to fit a practical number of keys, yet sufficiently flexible and stretchable to follow the movements of the human body. A new study co-authored by National School of Mines engineer Esma Ismailova details the creation of the wearable keyboards. The researchers first stenciled the outline for an electronic circuit onto polyester fabric using an electrically insulating silicone rubber called PDMS. They then brush-painted an electrically conductive plastic onto the outline, and coated the circuit with additional PDMS. The circuit was connected to a computer via electrodes, with square and rectangular patches functioning as keys. The sleeve-worn, 11-key keyboard can be stretched by up to 30 percent, and the fabric retains about 90 percent of its conductivity after 1,000 cycles of stretching and relaxation. "A wearable keyboard would provide a more intuitive interface for tactile input than the touch-sensitive face of a smartwatch or the hand gestures that control devices such as the Google Glass," Ismailova says.
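The article does not spell out the host-side sensing logic, but a host polling such conductive-fabric keys might register a press as a conductance change above a threshold, with headroom for the wear the study measured. The sketch below is purely illustrative; the sensing scheme, units, and thresholds are assumptions.

    THRESHOLD = 1.5          # assumed press threshold (arbitrary conductance units)

    def pressed_keys(read_conductance, keys, wear_factor=0.9):
        """Return the keys currently pressed. wear_factor ~0.9 mirrors the
        fabric keeping ~90% of its conductivity after 1,000 stretch cycles,
        so the threshold is rescaled as the keyboard ages."""
        limit = THRESHOLD * wear_factor
        return [k for k in keys if read_conductance(k) > limit]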


U Mad Bro? Researchers Measure Emotion With Your Mouse Clicks
BYU News (UT) (12/10/15) Todd Hollingshead

Brigham Young University (BYU) researchers have developed a method of gathering and processing data points from cursor movements to measure a user's emotional state. The researchers found people experiencing anger and other negative emotions such as frustration, confusion, or sadness become less precise in their mouse movements and move the cursor at different speeds. BYU professor Jeffrey Jenkins says the technology will enable websites to "understand not just what you're providing, but what you're feeling." The research shows when users are upset or confused, the mouse no longer follows a straight or gently curving path; instead, movements become jagged and sudden. In addition, a user exhibiting negative emotions moves the mouse more slowly. Jenkins says the greatest application of his research, and of the resulting technology that measures mouse movements, is that Web developers will be able to adapt or fix the sore points in websites that bring out negative emotions. The concept also can be applied to mobile devices, where swipes and taps replace mouse movement. Although the research into mobile devices is still in the early stages, Jenkins is encouraged by the massive amounts of data phones and tablets provide.
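The two cues the study highlights, speed and the jaggedness of the cursor path, are easy to compute from raw pointer samples. The sketch below extracts both; it is a feature-extraction illustration only, and mapping the features to emotions would require a trained model like the one BYU built.

    import math

    def cursor_features(points):
        """points: (x, y, t) samples of a cursor trajectory.
        Returns average speed and mean absolute turn angle ('jaggedness')."""
        speeds, turns = [], []
        for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
            if t1 > t0:
                speeds.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
        for a, b, c in zip(points, points[1:], points[2:]):
            h1 = math.atan2(b[1] - a[1], b[0] - a[0])
            h2 = math.atan2(c[1] - b[1], c[0] - b[0])
            # wrap the heading change into [-pi, pi] before taking |.|
            turns.append(abs(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1))))
        return {"avg_speed": sum(speeds) / len(speeds) if speeds else 0.0,
                "jaggedness": sum(turns) / len(turns) if turns else 0.0}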


Yahoo! Labs Develops Biometric Authentication Method for Touchscreens
Biometric Update (12/14/15) Rawlson King

Yahoo! Labs researchers have developed a biometric authentication technique for touchscreen devices that pairs sensors with low-data-rate transmitters in a wearable wristband to communicate with devices via electrical conductivity. Yahoo! Labs' Christian Holz and Marius Knaust detailed their work at the ACM Symposium on User Interface Software and Technology (UIST) in Charlotte, NC. Their proposed "biometric touch sensing" method folds authentication seamlessly into ordinary touch interaction on mobile devices. With their technique, Holz and Knaust say commodity touchscreens can biometrically identify and authenticate users with every touch, via the touchscreen itself, to the point where password dialogs are unnecessary. "From each touch, the touchscreen senses the [two-dimensional] input coordinates and at the same time obtains biometric features that identify the user," the researchers note. "[The] approach makes authentication during interaction transparent to the user, yet ensures secure interaction at all times. To implement this on today's devices, [the] watch prototype senses the impedance profile of the user's wrist and modulates a signal onto the user's body through skin using a periodic electric signal." The researchers note the signal affects the capacitive value touchscreens measure with each touch, enabling a device to identify the user each time they touch the device.
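Conceptually, the touchscreen sees the wristband's periodic signal as a small oscillation riding on the capacitive reading at the touch point, which the device can match against enrolled users. The sketch below illustrates that matching step only; the code format and correlation scheme are assumptions, not Holz and Knaust's protocol.

    def identify_user(capacitance_samples, user_codes):
        """Match the oscillation observed during a touch against each
        enrolled user's known periodic code; return the best match.
        capacitance_samples: baseline-subtracted readings during a touch.
        user_codes: user name -> expected signal template."""
        def correlate(sig, code):
            n = min(len(sig), len(code))
            return sum(s * c for s, c in zip(sig[:n], code[:n]))
        scores = {user: correlate(capacitance_samples, code)
                  for user, code in user_codes.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]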


As World Crowds In, Cities Become Digital Laboratories
The Wall Street Journal (12/11/15) Robert Lee Hotz

Scientists, civic entrepreneurs, and city managers are conducting experiments to analyze the functions of New York City and other burgeoning metropolitan areas with advanced digital technologies to study growth patterns. In New York, they hope to convert data generated every day by residents into a sustainable design for living that could become a model for digital smart cities worldwide. "We're building more urban infrastructure in the next few decades than over our entire history," observes Santa Fe Institute researcher Luis Bettencourt. Meanwhile, McKinsey & Co. estimates 30 percent of the global economy and most of its innovation are located in only 100 cities. A multiyear study co-conducted by Bettencourt found all cities exhibit a common pattern: each time a city's population doubles, measures of human interaction per individual increase by 15 to 20 percent. To deal with the numerous changes accompanying urban growth, cities such as New York are becoming digital labs to explore more efficient, secure, and sustainable management. For example, astrophysicist Gregory Dobler has been shooting a panorama of Manhattan every 10 seconds for two years to record the city's rhythmic pulse across multiple wavelengths of light. Another experiment seeks to transform the city's archaic pay-phone system into a gigantic, ultra-fast, free municipal Wi-Fi network with the installation of 10,000 curbside transmitters.
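The 15-to-20-percent figure compounds with each doubling, so the gap between small and large cities grows quickly. A short worked example, using the article's range with hypothetical city sizes:

    import math

    def interaction_multiplier(pop_small, pop_large, per_doubling=0.15):
        """Per-individual interaction ratio between two cities under the
        doubling rule reported from Bettencourt's study."""
        doublings = math.log2(pop_large / pop_small)
        return (1 + per_doubling) ** doublings

    # A city of 8 million vs. one of 1 million: three doublings.
    print(f"{interaction_multiplier(1e6, 8e6):.2f}x")        # ~1.52x at 15%
    print(f"{interaction_multiplier(1e6, 8e6, 0.20):.2f}x")  # ~1.73x at 20%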


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.



