Welcome to the November 2016 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Tuesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and joining the ACM, please click here.

The Interactions mobile app is available for free on iOS, Android, and Kindle platforms. Download it today and flip through the full-color magazine pages on your tablet or view it in a simplified low-bandwidth text mode on your phone. And be sure to check out the Interactions website, where you can access current and past articles and read the latest entries in our ever-expanding collection of blogs.

HEADLINES AT A GLANCE


First Steps Towards the Touch Robot
University of Twente (Netherlands) (10/24/16) Joost Bruysters

Researcher Merel Jung of the University of Twente's CTIT Institute in the Netherlands is exploring human-robot social touch interactions. She has engineered a computer-connected mannequin arm equipped with 64 pressure sensors that can recognize 60 percent of nearly 8,000 touches. Jung has outlined four distinct stages a robot must go through to respond to touch in the correct manner--perception, recognition, interpretation, and response. Jung has so far concentrated on the first two stages, and the robot arm's 60-percent recognition rate is significant, given that no social context was imparted and various touches are very similar to each other; examples include the distinction between grabbing and squeezing, or stroking roughly and rubbing gently. Moreover, the people touching the mannequin's arm had received no instructions on how to perform their touches, and the computer system was unable to learn the habits of individual "touchers." Jung's current research focuses on how robots can interpret touch in a social context. She says robots capable of this should be better able to respond to touch correctly, bringing the touch robot one step closer to reality.
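The article does not describe Jung's recognition pipeline in detail. As a rough illustration of the recognition stage only, the sketch below classifies a touch recorded on an 8 x 8 pressure grid (64 sensors) with a nearest-centroid rule; the feature set, gesture labels, and centroid values are hypothetical, not taken from the study.

```python
import numpy as np

def touch_features(frames):
    """frames: (T, 8, 8) array of pressure samples over time for one touch."""
    peak = frames.max()                         # how hard the touch pressed
    mean = frames.mean()                        # average pressure over the gesture
    area = (frames.max(axis=0) > 0.1).sum()     # number of sensors activated
    duration = frames.shape[0]                  # length of the touch in frames
    return np.array([peak, mean, area, duration], dtype=float)

def nearest_centroid(features, centroids):
    """centroids: dict mapping gesture label -> feature centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

# Hypothetical centroids learned from labeled touches (grab, squeeze, stroke, ...)
centroids = {
    "grab":    np.array([0.9, 0.4, 30.0, 40.0]),
    "squeeze": np.array([1.0, 0.6, 25.0, 80.0]),
    "stroke":  np.array([0.3, 0.1, 12.0, 120.0]),
}

touch = np.random.rand(60, 8, 8) * 0.5          # stand-in for one recorded touch
print(nearest_centroid(touch_features(touch), centroids))
```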


Listen to the Music of the Traffic in the City
The Economist (10/22/16)

New York University professor Claudio Silva and colleagues are developing smart city technology that analyzes "urban pulses"--the rhythms and patterns of a metropolitan area extracted from online activity measurements--to help inform the decisions of city planners and architects. Silva says this pattern analysis could be accelerated in one scenario by using datasets currently available from social media platforms. For example, Flickr records the location and time of every photo uploaded to the site, and Silva's program can employ the site's data as a surrogate measure of tourist activity, revealing in minutes how travelers are roving through a district. Silva says his work is unique by virtue of the speed with which his team can analyze large datasets, using computational topology algorithms that enable the rapid generation, analysis, and manipulation of multidimensional shapes. They represented the Flickr data as a topological shape by calculating the level of activity at each place of interest from the number of photos captured there. By plotting the results on a grid, the researchers produced a three-dimensional representation of tourist activity across a city at a given moment. They then added a fourth dimension by repeating the process for each hour of data available.
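The summary only sketches the pipeline; a minimal version of the underlying binning step, assuming a plain list of (latitude, longitude, hour) photo records rather than the team's actual Flickr ingestion or computational-topology code, might look like this:

```python
import numpy as np

def activity_grid(photos, lat_range, lon_range, cells=50):
    """Count geotagged photos per grid cell per hour.

    photos: iterable of (lat, lon, hour) tuples; returns a (24, cells, cells) array
    whose hourly slices correspond to the 3-D activity surfaces described above.
    """
    grid = np.zeros((24, cells, cells))
    for lat, lon, hour in photos:
        i = int((lat - lat_range[0]) / (lat_range[1] - lat_range[0]) * (cells - 1))
        j = int((lon - lon_range[0]) / (lon_range[1] - lon_range[0]) * (cells - 1))
        if 0 <= i < cells and 0 <= j < cells:
            grid[hour % 24, i, j] += 1
    return grid

# Toy example: a burst of photos near one landmark at 3 p.m.
photos = [(40.758 + np.random.randn() * 1e-3, -73.985 + np.random.randn() * 1e-3, 15)
          for _ in range(200)]
grid = activity_grid(photos, lat_range=(40.70, 40.80), lon_range=(-74.03, -73.93))
print(grid[15].max(), "photos in the busiest cell at 15:00")
```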


Eye-Tracking Data Can Improve Language Technology and Help Readers
University of Copenhagen (10/24/16)

Postdoctoral researcher Sigrid Klerke at Denmark's University of Copenhagen has demonstrated that eye-tracking data can show whether a word is causing a reader problems, which could help mitigate reading difficulties through software that offers translations of difficult words or suggests easier texts. Rather than building a general model from many readers and texts, Klerke opted for a model customized to individual readers according to their eye movements as they read an arbitrary text. Klerke says the main benefit of her system over other types of language technology is that it does not require manual annotation of the text. "The reader's gaze may be seen as a kind of annotation in real time, containing information that we just are beginning to understand how to use," Klerke notes. "Also, it is much faster getting someone to simply read a text than hiring experts to annotate the same text." Klerke says the system can gauge the reader's level of difficulty with any text he or she encounters, relying only on feedback from the reader's gaze. Since eye-tracking equipment can now be incorporated into mobile phones and tablets, Klerke says installing reading-assistance software should be relatively simple. "When Google and other major companies can begin to access user gaze data via mobile phones and tablets, they can use the feedback to improve their systems," she says.
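Klerke's actual models are not described here; purely as an illustration of how gaze can act as "annotation," the sketch below flags words a reader dwells on unusually long, using a simple per-reader baseline in place of a trained system. The word list and threshold are invented for the example.

```python
from collections import defaultdict

def difficult_words(fixations, factor=2.0):
    """Flag words a reader dwells on unusually long.

    fixations: list of (word, duration_ms) gaze samples for one reader and text.
    A word is flagged when its total fixation time exceeds `factor` times the
    reader's own median per-word fixation time (a stand-in for a trained model).
    """
    totals = defaultdict(float)
    for word, duration in fixations:
        totals[word] += duration
    baseline = sorted(totals.values())[len(totals) // 2]   # median dwell time
    return [w for w, t in totals.items() if t > factor * baseline]

fixations = [("the", 120), ("ubiquitous", 640), ("sensor", 210),
             ("captures", 180), ("gaze", 150), ("ubiquitous", 300)]
print(difficult_words(fixations))   # likely ['ubiquitous']
```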


Scientists Create 'Floating Pixels' Using Soundwaves and Force Fields
University of Sussex (United Kingdom) (10/13/16) James Hakner

Researchers at the universities of Sussex and Bristol in the U.K. used sound waves to levitate tiny objects before spinning and flipping them by applying electric force fields. The JOLED technology transforms tiny, multicolored spheres into real-life pixels, which can cohere into floating displays or animate physical computer game characters. "We've created displays in mid-air that are free-floating, where each pixel in the display can be rotated on the spot to show different colors and images," says University of Sussex professor Sriram Subramanian. The spheres' levitation is accomplished with a series of miniature ultrasound speakers that generate inaudible sound waves to hold the spheres in place. The pixels are coated with titanium dioxide to produce an electrostatic charge, so they can be manipulated in mid-air by changes to an electrode-supported force field. "JOLED could be like having a floating e-ink display that can also change its shape," says University of Sussex researcher Deepak Sahoo. Subramanian envisions future research exploring how to make the display multicolored and with high color depth, "so we can show more vivid colors. We also want to examine ways in which such a display could be used to deliver media on-demand." The researchers presented JOLED last month at the ACM User Interface Software and Technology (UIST 2016) Symposium in Tokyo, Japan.
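The article does not detail how the ultrasound speakers are driven. As general background only, a standard way to make a phased array concentrate sound at a point is to delay each transducer by the extra distance its wave must travel to that point; the toy calculation below assumes a line of 40 kHz transducers and is not a description of the JOLED hardware.

```python
import numpy as np

SPEED_OF_SOUND = 343.0      # m/s in air
FREQ = 40_000.0             # typical ultrasonic transducer frequency (assumption)
WAVELENGTH = SPEED_OF_SOUND / FREQ

def focus_phases(transducer_positions, focal_point):
    """Phase offset for each transducer so their waves arrive in phase at the focus."""
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    return (-2 * np.pi * distances / WAVELENGTH) % (2 * np.pi)

# Eight transducers spaced 2 cm apart on a line, focusing 5 cm above the array
positions = np.array([[x, 0.0, 0.0] for x in np.arange(8) * 0.02])
focus = np.array([0.07, 0.0, 0.05])
print(np.round(focus_phases(positions, focus), 2))
```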


Indian Researchers Design Smart Wheelchair
Indo-Asian News Service (India) (10/21/16)

Researchers at Maharishi Dayanand University and the National Physical Laboratory in India have designed a "human-inspired cognitive wheelchair navigation system" that enables wheelchairs to avoid obstacles on their own and to detect the user's fatigue or stress levels. The researchers say the smart wheelchair also could monitor the user's heart rate, temperature, or other vital signs for diagnostic purposes. They note the smart wheelchair controller has enhanced safety features and warning systems; its microcontroller runs an algorithm with six levels of testing that assess the surroundings and the user's voice. The system gives the wheelchair a collision-avoidance and warning system, as well as a framework to infer emotional distress or tiredness in the user and to warn the user or a caregiver of problems that might arise in such situations. The researchers also have constructed and demonstrated a prototype of their smart wheelchair controller, with the goal of commercializing it. "The commercial version of the prototyped autonomous wheelchair would reduce the burden on caregiving staff in the healthcare industry and improve the quality of life for disabled persons," they say.
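The six-level algorithm itself is not spelled out in the report; the fragment below is only a schematic of the kind of rule-based safety loop it describes, with made-up thresholds, sensor names, and warning messages.

```python
def wheelchair_step(obstacle_distance_m, heart_rate_bpm, baseline_bpm=75):
    """One iteration of a hypothetical safety loop: avoid obstacles, flag distress.

    Returns (drive_command, warnings); all thresholds are illustrative only.
    """
    warnings = []
    if obstacle_distance_m < 0.5:
        command = "stop"
        warnings.append("obstacle ahead - stopping")
    elif obstacle_distance_m < 1.5:
        command = "slow"
    else:
        command = "cruise"

    if heart_rate_bpm > baseline_bpm * 1.3:
        warnings.append("elevated heart rate - notify caregiver")
    return command, warnings

print(wheelchair_step(obstacle_distance_m=0.4, heart_rate_bpm=104))
```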


Repurposed Sensor Allows Smartwatch to Detect Finger Taps and Other Bio-Acoustic Signals
CMU News (10/18/16) Byron Spice

Researchers at Carnegie Mellon University's (CMU) Human-Computer Interaction Institute (HCII) have repurposed a smartwatch's accelerometer via a software upgrade so the watch can be controlled by the wearer's finger and hand gestures. The smartwatch also would be able to identify objects and activities by monitoring the vibrations that transpire when people hold objects or use tools. "It's as if you're using your hand as a detection device," notes HCII postdoctoral researcher Gierad Laput. "The hand is what people use to interact with the world." Laput, fellow researcher Robert Xiao, and CMU professor Chris Harrison developed the ViBand technology, which was detailed last month at the ACM User Interface Software and Technology (UIST 2016) Symposium in Tokyo, Japan. The researchers boosted the accelerometer's sampling frequency from 100 to 4,000 times a second, and found it can detect bio-acoustic signals when coupled with the body. "ViBand...enables you to augment your arm," Harrison says. "It's a powerful interface that's always available to you." The CMU team increased the accelerometer's sampling rate using a custom kernel, which Laput says is the only modification needed, and can be delivered as a software update.
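ViBand's actual classifier is not described in this summary. A minimal sketch of the signal path it implies (high-rate accelerometer samples, a frequency spectrum, then simple band-energy features) is shown below, using synthetic data in place of the custom-kernel driver; the band boundaries are an assumption for illustration.

```python
import numpy as np

SAMPLE_RATE = 4000  # Hz, the boosted accelerometer rate described above

def bioacoustic_features(samples, bands=((0, 250), (250, 500), (500, 1000), (1000, 2000))):
    """Summarize a 1-D burst of accelerometer data as energy per frequency band."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

# Synthetic stand-in for the vibration produced by a finger flick (no real driver here)
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
flick = np.sin(2 * np.pi * 700 * t) * np.exp(-t * 40)
print(np.round(bioacoustic_features(flick), 1))
```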


Typing While Driving Could Be Safer With Simple Text Entry Technique
Aalto University (10/11/16)

Researchers at Finland's Aalto University and University of Jyvaskyla tested a new text entry method that could help drivers keep their eyes on the road better than when they are tapping a touchscreen. "We were trying to come up with a solution in which the person's driving would not be distracted," says Aalto professor Antti Oulasvirta. The experiments employed a transparent reflective film laid over the windscreen onto which text was projected. Oulasvirta notes it was vital that the text did not obstruct the view of the road, yet was still close enough to the road for the motorist to see it while keeping an eye on the text being typed. The tests used a T9 keypad featuring a 12-key panel linked to predictive text input and affixed to the steering wheel. "The advantage of the T9 keypad...is that it is normally not necessary to press a key more than once to insert a letter," Oulasvirta says. "Everyone who has used these keypads knows that you do not need to focus your vision on the keypad in the same way as you do with touchscreens." Under simulated conditions, the number of instances in which the driver drifted out of the lane was 70 percent lower.
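T9-style lookup itself is a well-known technique; this minimal dictionary-based version (with a tiny word list of our own, not the study's software) shows why one key press per letter suffices, since the ambiguity is resolved against a dictionary rather than by multi-tapping.

```python
# Standard phone keypad letter groups
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def word_to_keys(word):
    """Encode a word as the digit sequence produced by one press per letter."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

def t9_candidates(key_sequence, dictionary):
    """All dictionary words whose keypad encoding matches the typed digits."""
    return [w for w in dictionary if word_to_keys(w) == key_sequence]

dictionary = ["home", "good", "gone", "hood", "lane", "lamp"]
print(t9_candidates("4663", dictionary))   # 'home', 'good', 'gone', 'hood' share 4663
```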


Online Game Invites Public to Fight Alzheimer's
Cornell Chronicle (10/12/16) Syl Kacepyr

Researchers at Cornell University's Human Computation Institute (HCI) have released "Stall Catchers," a new online game that invites the public to contribute to research on Alzheimer's disease. The game's users scroll through brief black-and-white videos and seek clogged blood vessels within a highlighted area, earning points as more vessels are identified. Cornell professor Chris Schaffer says the game ties into research that successfully identified the underlying mechanism for blood flow reduction in Alzheimer's, but which is limited by the difficulty of locating stalled vessels by manual image examination. "While we can acquire enough data to test a new idea about this process or test a potential therapeutic to treat the blood flow reduction in about a week, it takes us a full year to analyze that data," says Cornell professor Nozomi Nishimura. "My hope is that with help from the public, we can dramatically accelerate the pace of our research." Stall Catchers' platform is based on the successful citizen science project Stardust@home, run by University of California, Berkeley professor Andrew Westphal. HCI director Pietro Michelucci says enlisting thousands of people to analyze research data via an online game creates "a huge force multiplier in our fight against this dreadful disease."


WhammyPhone: Bending Sound With a Flexible Smartphone
Queen's University (Canada) (10/14/16)

Researchers at Canada's Queen's University have developed the WhammyPhone, a flexible smartphone whose bendable display generates sound effects on a virtual musical instrument. The device incorporates a 1920 x 1080 full high-definition flexible organic light-emitting diode touchscreen with keys that play sounds via software running on a computer. WhammyPhone's bend sensor lets users flex the phone to manipulate the sound, says Roel Vertegaal, a professor at Queen's University's Human Media Lab. He says the bend input can simulate bending a string on a virtual guitar or bowing a simulated violin. Vertegaal also notes the device can be used to control loops in electronic dance music, offering DJs more intuitive interaction with their instruments. "The real importance of WhammyPhone is that it provides the same kind of kinesthetic feedback that, say, a string provides when it is bent to alter the pitch," Vertegaal says. "This kind of effect is critical for musicians to control their expression, and provides another level of utility for bend input in smartphones." The researchers presented WhammyPhone last month at the ACM User Interface Software and Technology (UIST 2016) Symposium in Tokyo, Japan.
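The release does not specify how bend readings are mapped to sound; as a toy illustration only, the sketch below maps a normalized bend-sensor reading to a MIDI-style 14-bit pitch-bend value, with the bend range and detuning span chosen arbitrarily.

```python
def bend_to_pitch_bend(bend, max_semitones=2.0):
    """Map a normalized bend-sensor reading in [-1, 1] to a 14-bit MIDI pitch-bend value.

    bend = 0 means a flat phone (no pitch change); +/-1 is full flex up or down.
    """
    bend = max(-1.0, min(1.0, bend))
    center = 8192                      # MIDI pitch-bend center (no bend)
    value = int(center + bend * 8191)  # spread across the 0..16383 range
    semitones = bend * max_semitones   # how far the note is detuned
    return value, semitones

for reading in (-1.0, 0.0, 0.35, 1.0):
    print(reading, "->", bend_to_pitch_bend(reading))
```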


Dartmouth-Led Team Develops WristWhirl, a Smartwatch Prototype Using Wrist as a Joystick
Dartmouth College (10/14/16)

Dartmouth College researchers led the development of WristWhirl, a prototype smartwatch employing the wearer's wrist as a joystick to perform common touchscreen gestures with one-handed continuous input. "While other studies have explored the use of one-handed continuous gestures using smartwatches, WristWhirl is the first to explore gestural input," says Dartmouth professor Xing-Dong Yang. "Technology like ours shows what smartwatches may be able to do in the future, by allowing users to interact with the device using one hand [the one that the watch is worn on], while freeing up the other hand for other tasks." The wrist's biomechanical ability was studied with participants conducting eight joystick-like gestures while standing and walking. WristWhirl combines a thin-film-transistor display, a plastic watch strap equipped with infrared proximity sensors, and a vibration sensor within the wrist strap linked to a microcontroller board. The team tested four usage scenarios using off-the-shelf games and Google Maps, including drawing to access gesture shortcuts, a music player app using wrist-swipes and tapping, a panning/zooming map app that relied on where the watch was held in relation to one's body, and game input. The project was presented last month at the ACM User Interface Software and Technology (UIST 2016) Symposium in Tokyo, Japan.
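The sensor geometry and signal processing below are assumptions for illustration, not WristWhirl's published design: the sketch converts readings from proximity sensors spaced around the strap into a 2-D joystick vector by treating each sensor as pushing in a fixed direction.

```python
import math

def wrist_to_joystick(proximities, deadzone=0.1):
    """Convert strap proximity readings (0 = skin far, 1 = skin close) to (x, y).

    Assumes the sensors are evenly spaced around the wrist; each reading pushes
    the joystick toward that sensor's direction, like a weighted compass average.
    """
    n = len(proximities)
    x = sum(p * math.cos(2 * math.pi * i / n) for i, p in enumerate(proximities)) / n
    y = sum(p * math.sin(2 * math.pi * i / n) for i, p in enumerate(proximities)) / n
    if math.hypot(x, y) < deadzone:
        return 0.0, 0.0
    return x, y

# Twelve sensors, with the skin bulging toward sensor 0 (wrist tilted that way)
readings = [0.9, 0.5, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.4, 0.7]
print(wrist_to_joystick(readings))
```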


Full-Circle Viewing: 360-Degree Electronic Holographic Display
ScienceDaily (10/18/16)

Researchers at the 5G Giga Communication Research Laboratory of South Korea's Electronics and Telecommunications Research Institute have described a novel tabletop display system that enables multiple viewers to view a hologram showing a full three-dimensional image as they walk around the tabletop. The researchers say the display system offers complete 360-degree access without visual distortion. Team leader Yongjung Lim says this was accomplished with a unique viewing-window design based on close scrutiny of the optical image system. "With a tabletop display, a viewing window can be created by using a magnified virtual hologram, but the plane of the image is tilted with respect to the rotational axis and is projected using two parabolic mirrors," Lim notes. "But because the parabolic mirrors do not have an optically-flat surface, visual distortion can result. We needed to solve the visual distortion by designing an aspheric lens." The next challenge for Lim's team is to size the viewing window so it closely matches the effective pixel size of the rotating image of the virtual hologram. Other milestones of interest include producing a full-color instead of monochromatic hologram, and addressing issues of undesirable aberration and brightness mismatch among the display's digital micromirror devices.


Moving Toward Computing at the Speed of Thought
The Conversation (10/20/16) Frances Van Scoy

West Virginia University professor Frances Van Scoy is engaged in research she describes as aiming for "the next phase of human-computer interaction." Van Scoy says it involves real-time monitoring of people's brain activity and identifying specific thoughts, which she also calls "computing at the speed of thought." Van Scoy notes the project employs neuroheadsets that can be built using low-cost, open source resources such as OpenBCI. "Ten to 15 years from now, hardware/software systems using those sorts of neuroheadsets could assist me by recognizing the nouns I've thought about in the past few minutes," Van Scoy says. "If it replayed the topics of my recent thoughts, I could retrace my steps and remember what thought triggered my most recent thought." Van Scoy also imagines a system in which a neuroheadset-wearing author conceives of characters, settings, and interactions by thought, which a computer could render as an initial draft for a short story or video. Other breakthrough applications Van Scoy envisions for such technology include new types of digital storytelling. One example is groups of people acting out narratives that are reenacted in virtual environments by three-dimensional avatars matching their body movements, building a video from multiple virtual camera angles.
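No particular headset API is assumed here. With raw EEG samples in hand, however acquired, a common first step toward detecting a coarse mental state is band-power estimation; the sketch below uses only numpy and synthetic data, with the sampling rate and the relaxed/engaged labels chosen purely for illustration.

```python
import numpy as np

SAMPLE_RATE = 250  # Hz, a common EEG sampling rate (assumption, not from the article)

def band_power(samples, low_hz, high_hz):
    """Power of an EEG channel within a frequency band (e.g. alpha = 8-12 Hz)."""
    spectrum = np.abs(np.fft.rfft(samples - samples.mean())) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    return spectrum[(freqs >= low_hz) & (freqs < high_hz)].sum()

# Synthetic 4-second channel with a strong 10 Hz (alpha) rhythm plus noise
t = np.arange(0, 4, 1.0 / SAMPLE_RATE)
channel = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
alpha = band_power(channel, 8, 12)
beta = band_power(channel, 13, 30)
print("relaxed" if alpha > beta else "engaged")
```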


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.



