ACM TechNews SIGCHI Edition
Welcome to the October 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in Human-Computer Interaction (HCI). It serves as a resource for ACM SIGCHI members to keep abreast of the latest news in areas related to HCI and is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in over 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and on joining ACM, please visit the ACM website.

HEADLINES AT A GLANCE


This Sony Researcher Wants to Put You in Someone Else's Head
CNet (09/27/14) Ben Fox Rubin

Sony Computer Science Laboratories' Jun Rekimoto says he seeks to advance augmented reality technology to the point where "we can immersively connect to other humans or drones." Interest in and development of virtual reality technology are accelerating sharply, with Sony and other companies striving to create products that could radically transform how people watch movies, play games, and communicate with each other. "When you combine that viewing experience, it really creates a new way of communicating," says Gartner analyst Brian Blau. "Seeing someone else's viewpoint, I think, is going to be a powerful paradigm." Among Rekimoto's projects is the LiveSphere headset, which features six cameras that capture a full 360 degrees around the wearer. Rekimoto says the device, in conjunction with audio instructions from a remote observer, could help users receive guidance for medical procedures, cooking, and other applications. He also cites Sony's Flying Head project, in which a drone follows an individual's movements; potential uses include assessing athletes' form and style during practice sessions and inspecting areas too dangerous for in-person evaluation. "I think the more important, or more promising, practice is a human augmenting other humans," Rekimoto says.


Reflected Smartphone Transmissions Enable Gesture Control
University of Washington News and Information (09/19/14) Michelle Ma

University of Washington (UW) researchers have developed a method to detect gestures using a smartphone's wireless transmissions. The SideSwipe system relies on the reflection of normal wireless signals to sense and respond to nearby gestures, which enables it to work when a device is stowed away. The researchers say the method could be incorporated into future smartphones and tablets. When the user's hand passes close to the phone, SideSwipe uses antennas to detect the small amount of electromagnetic reflection with sufficient sensitivity to determine which direction the gesture is coming from. It also can differentiate between sliding, tapping, and hovering gestures. "This approach allows us to make the entire space around the phone an interaction space, going beyond a typical touchscreen interface," notes UW professor Shwetak Patel. "You can interact with the phone without even seeing the display by using gestures in the [three-dimensional] space around the phone." UW professor Matt Reynolds notes SideSwipe should not strain the smartphone's battery life because the sensor is based on low-power receivers and simple signal processing. SideSwipe will be presented at ACM's User Interface Software and Technology Symposium, which takes place Oct. 5-8 in Honolulu.
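The article gives no implementation details, but the direction-sensing idea can be illustrated with a toy sketch: if two antennas (say, left and right) each report an amplitude envelope of the reflected signal, the antenna whose perturbation peaks first indicates where the hand entered. The two-antenna setup and function names below are hypothetical illustrations, not the UW implementation.

```python
# Toy illustration of swipe-direction sensing from reflected-signal
# envelopes. Assumes two antennas sampled in lockstep; a passing hand
# perturbs the signal near each antenna in turn, so the earlier peak
# marks the side where the gesture began. (Hypothetical; not the actual
# SideSwipe signal-processing pipeline.)

def peak_index(envelope):
    """Index of the largest perturbation in one antenna's amplitude trace."""
    return max(range(len(envelope)), key=lambda i: envelope[i])

def swipe_direction(left_env, right_env):
    """Return 'left-to-right' or 'right-to-left' from two envelopes."""
    if peak_index(left_env) < peak_index(right_env):
        return "left-to-right"
    return "right-to-left"

# A hand sweeping left to right disturbs the left antenna first:
left = [0.0, 0.1, 0.9, 0.3, 0.1, 0.0]
right = [0.0, 0.0, 0.1, 0.4, 0.8, 0.2]
print(swipe_direction(left, right))  # left-to-right
```

A real system would additionally need to separate these tiny reflections from the much stronger direct transmission, which is where the low-power receivers and signal processing mentioned by Reynolds come in.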


New Lab Focuses on Assistive Technology, Interconnectivity
Texas A&M University (09/30/14)

Researchers at the new Texas A&M (TAM) Embodied Interaction Laboratory (TEIL) are building and testing prototypes of software-based devices designed to form the building blocks of a widespread ecosystem of networked objects, machines, and people. The lab is funded by a U.S. National Science Foundation (NSF) grant under the leadership of TAM professor Francis Quek. He and his students are developing sensors and other technologies to facilitate interconnectivity among the many elements composing the "Internet of everything." Their intended applications range from education and learning to smart-home technology to assistive devices for hearing- and vision-impaired people. For example, Quek is using another NSF grant to develop a tactile screen overlay for the iPad to help sight-impaired readers. "It's hard to answer the question 'what is the one thing we are going to make,'" Quek notes. "The lab is going to be a wild and woolly place, a space for students to imagine, build, and test anything." The TEIL space is inspired by the maker movement, which brings people engaged in engineering-oriented projects into a social environment centered on prototyping, invention, and creativity, and which aims to overturn the idea that only large companies can produce ubiquitous computing devices.


MIT Groups Develop Smartphone System THAW That Allows for Direct Interaction Between Devices
Phys.Org (09/18/14) Bob Yirka

Researchers in the Massachusetts Institute of Technology's Tangible Media and Fluid Interfaces groups have developed THAW, a smartphone system that enables seamless onscreen interaction with other computing devices. THAW projects a grid onto an underlying video screen and uses it to orient itself. Imagery enters the smartphone through its camera, and the software then takes control, recognizing what is happening and activating a companion application that can manipulate objects on the underlying device. THAW team members say the system demonstrates a broader effort to integrate users' various devices for numerous tasks, such as transferring files without menus or Bluetooth pairing, or allowing activities to continue uninterrupted across devices. During testing, THAW enabled a smartphone holder to control action on a computer screen, much like a mouse. Other tests focused on video-game applications--in one, a smartphone captured the game action occurring on a fixed device in real time, allowing the player to walk off with the game in progress.
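The description suggests a simple localization scheme: the host screen displays a grid whose cells encode their own indices, and the phone's camera decodes the cell under its center to recover its position on the screen. The cell size and coordinate convention below are assumptions for illustration, not details of the MIT system.

```python
# Toy sketch of grid-based localization in the spirit of THAW: the host
# screen renders a grid of coded cells; the phone decodes the (column, row)
# indices of the cell under its camera center and converts them back to
# host-screen pixel coordinates. Cell size and origin are illustrative
# assumptions, not values from the MIT prototype.

CELL_PX = 40  # assumed size of one grid cell on the host screen, in pixels

def cell_to_screen(col, row, cell_px=CELL_PX):
    """Screen-pixel coordinates of the center of grid cell (col, row)."""
    return (col * cell_px + cell_px // 2, row * cell_px + cell_px // 2)

def phone_position(decoded_cell):
    """Phone position on the host screen, given the decoded cell indices."""
    col, row = decoded_cell
    return cell_to_screen(col, row)

print(phone_position((3, 5)))  # (140, 220)
```

Once the phone knows where it sits on the host display, mouse-like control and object hand-off follow naturally: the host simply treats the decoded position as a pointer location.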


Teaching Method Is a Recipe for Success
Newcastle University (UK) (09/22/14)

Newcastle University's LanCook project combines cooking, technology, and language learning by arranging students in pairs and guiding them step-by-step through a recipe in English, French, Catalan, Finnish, German, Italian, or Spanish. "LanCook tackles a universal problem of classroom language learning and teaching--that students are rehearsing a language instead of using it," says project leader and Newcastle professor Paul Seedhouse. "This really helps to bring that language to life in an engaging and memorable way and increases the learners' proficiency skills, motivation, and confidence. It also integrates the culture of that particular country more effectively, making it an active part of the learning process." The project employs wireless sensors embedded in cooking utensils and ingredients, enabling the kitchen to detect and assess students' progress as they perform cooking tasks. Assistance can be provided through an array of audio messages, images, or video. LanCook is funded by the European Union Lifelong Learning Program and is designed to teach languages in a real-life setting. Research teams have been using a portable digital kitchen to host cooking sessions across Europe as part of a three-year trial.


Weird Device Helps Long-Distance Lovers 'Teleport' Smooches
LiveScience (10/01/14) Agata Blaszczak-Boxe

Two physically separated people can exchange kisses via the Kissenger, a device with a silicone lip-like orifice that can sense and mimic kiss sensations. The sender kisses the Kissenger orifice, and the kiss is felt by the recipient, who uses a twin device in the same manner. Kissenger inventor and National Taipei University professor Hooman Samani and his research team conducted a study in which the devices were tested on 10 long-distance couples over several weeks. Samani says four couples liked the devices so much they asked to keep them, while other users feared being embarrassed if seen using the Kissengers in public. The researchers say the devices can "transfer emotions to the loved ones," but concede there may be social, emotional, and ethical concerns. "On the legal front, these devices also open a debate related to aspects of adultery in relationships," they note in the study. "For example, would usage of the device with another person constitute infidelity by the partner?" The researchers note that during its development the device evolved from an egg-shaped "head" with dark-red lips, to a teddy-bear-like "head" with ears, a nose, eyes, and heart-shaped lips, and finally to a softball-size version with a pair of silicone lips and bunny-style ears.


Boston U NSF Grant Will Turn Boston Into Smart City
Campus Technology (09/24/14) Dian Schaffhauser

The U.S. National Science Foundation has awarded a grant to Boston University (BU) to research, prototype, and assess "smart city" services for both Boston and Massachusetts. The project's core element is the Smart-City Cloud-Based Open Platform and Ecosystem (SCOPE), developed by the school's Institute for Computing and Computational Science & Engineering. SCOPE will function as a "multisided marketplace" for smart-city services housed on a public open cloud infrastructure, enabling people to access city and state resources, alter behaviors collectively, and support transportation, healthcare, energy distribution, and emergency response advancements. Azer Bestavros, director of the institute and SCOPE's principal investigator, says smart cities are using technology "to connect people with resources, to guide changes in collective behavior, and to foster innovation and economic growth." SCOPE will leverage current BU projects, such as using sensor networking for traffic light control applications. SCOPE is targeting transportation and mobility services to reduce traffic congestion and pollution; energy and environmental services to monitor and measure greenhouse gas emissions; tools for analyzing data and coordinating crowdsourced input for managing city assets; public safety and security services; and incentive programs and community report cards to encourage the adoption of new services and report on outcomes and sustainability.


Remote Healthcare for an Aging Population
ScienceDaily (09/29/14)

Remote monitoring is a way to enhance patient care as well as expedite medical research, according to a multi-university study published in the International Journal of Ad Hoc and Ubiquitous Computing. The researchers see a need for pervasive technologies that facilitate at-home patient monitoring to ease the burden on healthcare workers, which they say is possible using sensors and intelligent computer systems. They argue that developing smart care spaces will improve patients' quality of life, especially for those with ailments that place particular strain on caregivers. The team reviewed systems that employ sensors and health monitors, and suggests that even simple devices such as radio tags, shadow cameras, and electricity consumption monitors can be very cost-effective. They also suggest the benefits to patients and caregivers can be maximized through a holistic approach to smart care space design and technology deployment, instead of relying on the ad hoc use of devices in isolation. "It is apparent that much of the technology required to create smart care spaces already exists, but further research is required to integrate them into a functional whole," the researchers say.


New Study Shows That Yoga and Meditation May Help Train the Brain
University of Minnesota News (09/25/14) Brooke Dillon

University of Minnesota researchers have shown that long-term practitioners of yoga and meditation can control computers by thought faster and better than inexperienced users, a finding that could have significant ramifications for treating people with paralysis or neurodegenerative diseases. The study's 36 participants were split into two groups--one cohort of 12 people with at least 12 months' experience in yoga or meditation, and a second of 24 healthy participants with little to no such experience. Both groups took part in three tests spaced over four weeks in which they wore a cap that picked up brain activity and were asked to move a computer cursor across the screen by imagining left- or right-hand movements. Yoga- and meditation-experienced users were twice as likely to complete the task by the end of 30 trials and learned three times faster than their inexperienced counterparts on the left-right cursor movement tests. "This comprehensive study shows for the first time that looking closer at the brain side may provide a valuable tool for reducing obstacles for brain-computer interface success in early stages," says Minnesota professor Bin He. In the project's next phase, the researchers will follow a group of participants practicing yoga or meditation for the first time to determine whether their brain-computer interface performance improves over time.
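The cursor task described above is a standard motor-imagery brain-computer interface paradigm: power in the mu band (roughly 8-12 Hz) over the motor cortex drops on the side opposite the imagined hand. As a rough illustration, the sketch below estimates 10 Hz power on two EEG channels with the Goertzel algorithm and maps the lateralization to a cursor direction. The channel names, frequency, and decision rule are textbook conventions assumed for the example, not details from the Minnesota study.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at one frequency via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * freq / fs)          # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def cursor_direction(c3, c4, fs=250, mu_hz=10.0):
    """Imagined right-hand movement suppresses mu power over C3 (left
    motor cortex), so lower C3 power steers the cursor right, and vice
    versa. (Textbook rule, used here only for illustration.)"""
    if goertzel_power(c3, fs, mu_hz) < goertzel_power(c4, fs, mu_hz):
        return "right"
    return "left"

# One second of a 10 Hz rhythm, attenuated on C3 (right-hand imagery):
c3 = [0.2 * math.sin(2 * math.pi * 10 * i / 250) for i in range(250)]
c4 = [math.sin(2 * math.pi * 10 * i / 250) for i in range(250)]
print(cursor_direction(c3, c4))  # right
```

Real BCI pipelines use many more channels, artifact rejection, and trained classifiers; the point here is only to make the left/right-imagery mapping in the task concrete.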


Learning With Body Participation Through Motion-Sensing Technologies
OUPblog (Oxford University) (09/22/14) I-Chun Hung; Nian-Shing Chen

To support a more comprehensive learning design, researchers at National Sun Yat-Sen University provided learners with a holistic learning context founded on embodied cognition, which treats mental simulations in the brain, bodily states, the environment, and situated actions as critical elements of cognition. The researchers found motion-sensing devices could be incorporated into learning-by-doing activities to improve comprehension of abstract concepts. For example, they note the use of conventional controllers for fundamental optics simulation exercises may not benefit learners because of the routine-like nature of the operations. However, they reason that if bodily states and situated actions are designed as external information, the extra input can help learners gain a better understanding of concepts through meaningful, educational body involvement. Motion-sensing technology enables younger learners to directly engage with digital-learning content through gestures, and the researchers experimented with this scheme to compare learning performance between an embodiment-based group and a group using a keyboard-mouse combination. The embodiment-based group outperformed the keyboard-mouse group while neither group showed a significantly different cognitive load, although the researchers note the use of new technologies in learning could increase cognitive resource consumption.


Finnish Researchers Developing a Digital Maternity Package
VTT Technical Research Center (09/24/14)

VTT Technical Research Center of Finland is developing a digital maternity package to collect health information from smart devices, electronic services, and guides into a single user interface. The project aims to enable expectant mothers and parents to track their own health and that of their children more easily and comprehensively than current methods permit. One goal is to mitigate concerns about the well-being of mothers and infants and ease families' everyday life, while exploring whether issues related to pregnancy or infant development can be monitored at home or managed through digital services. VTT is developing both new smart products and digital services, and mapping current applications that would align with a digital maternity package concept. For example, a future element of such a package could be an intelligent soother and a bed sensor measuring a baby's sleep pattern, movements, or suction power. Data from these devices would be fed to a display on a tablet, smartphone, or computer, accumulating information on the mother's diet and the progress of the pregnancy, as well as the child's growth and sleep patterns, possibly alongside a chat service with a nurse. "Digital services such as the maternity package under development can help make healthcare more efficient and increase its impact, while improving and expanding the quality of the services offered to people," says VTT's Olli Kuusisto.


How Should We Program Computers to Deceive?
Pacific Standard (09/14) Kate Greene

The question of how deceitful new technologies should be is critical as people's daily lives increasingly revolve around devices containing software designed to adapt to a dynamic environment and potentially predict users' behavior to deliver the best "value." University of Michigan researcher Eytan Adar compiled and analyzed many examples of deceitful design to uncover both benign and problematic deception principles at work. "It's not clear that designers have a good grasp of how to make design decisions that involve transparency or deception," Adar says. He and Microsoft Research scholars Desney Tan and Jaime Teevan authored a paper to highlight and define instances of benevolent and malevolent deceptive technologies and the boundary that separates them. Their research found differing levels of deception in human-computer interaction and in how users react to it: repulsion at software that always deceives in a detectable manner, mistrust and irritation at software whose truthfulness or deceit is inconsistent, and forgiveness for software that deceives in a way that benefits the user. Engineers already are considering advanced artificial-intelligence systems that might sense and intuit a person's state of mind, and such systems will likely need a certain layer of deception to present their conclusions in a sensitive fashion.


Abstract News © Copyright 2014 INFORMATION, INC.

