Welcome to the January 2014 SIGCHI edition of ACM TechNews.


ACM TechNews - SIGCHI Edition is a sponsored special edition of the ACM TechNews news-briefing service focused on issues in human-computer interaction (HCI). The service is a resource that helps ACM SIGCHI members keep abreast of the latest news in areas related to HCI, and it is distributed to all ACM SIGCHI members on the first Wednesday of every month.

ACM TechNews is a benefit of ACM membership and is distributed three times per week, on Mondays, Wednesdays, and Fridays, to over 100,000 ACM members in more than 100 countries around the world. ACM TechNews provides timely coverage of established and emerging areas of computer science, the latest trends in information technology, and related science, society, and technology news. For more information on ACM TechNews and on joining ACM, please visit the ACM website.

HEADLINES AT A GLANCE


Try Mixed Reality, Where the Virtual and the Real Collide
New Scientist (01/02/14) Sandrine Ceurstemont

Mixed reality (MR) is a term for a seamless combination of physical and virtual environments that together form an augmented reality (AR) system. The system pairs a headset that immerses users in a virtual world with an attachment, designed by University College London's Will Steptoe, that feeds in real-time, real-world video. Steptoe notes that with tablet-based AR, "the user holds a window onto the mixed reality so there is a clear disconnect between what is physical and what is virtual," but the new MR system enables users to perceive the real world from their normal, embodied perspective. Steptoe's system mixes the two spaces by applying filters available in image editing software to the integrated world. The mixture conceals the imperfections of virtual objects, making it harder to distinguish real from virtual content. MR enables users to manipulate virtual objects, such as a virtual display that supports Web browsing and simulates other displays, Steptoe says. "You can mimic a tablet or a mobile phone and get it to hover anywhere," he notes. Steptoe says MR could be used in multiplayer gaming as well as telepresence systems, or as part of a lightweight headset such as Google Glass.
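The blending step described above can be pictured with a short sketch: composite the live camera feed with the rendered virtual layer, then run one shared stylization filter over the combined frame so real and virtual pixels take on the same look. This is only an illustration of the idea under stated assumptions (frame sizes, a binary mask for virtual content, a simple posterize filter), not Steptoe's actual pipeline.

```python
# Illustrative sketch, not Steptoe's implementation: composite a real camera
# frame with a rendered virtual frame, then apply one shared filter to the
# combined image so both layers share the same visual "look".
import numpy as np

def composite(real_frame, virtual_frame, virtual_mask):
    """Overlay rendered pixels onto the camera feed where the mask is set."""
    mask = virtual_mask[..., None].astype(real_frame.dtype)
    return virtual_frame * mask + real_frame * (1 - mask)

def posterize(frame, levels=8):
    """Quantize colors; applied to the whole composite, the filter's artifacts
    fall equally on real and virtual content, hiding rendering imperfections."""
    step = 256 // levels
    return (frame // step) * step

real = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)          # camera feed
virtual = np.zeros_like(real); virtual[100:200, 200:300] = (0, 200, 255)  # rendered object
mask = np.zeros((480, 640), dtype=np.uint8); mask[100:200, 200:300] = 1

mixed = posterize(composite(real, virtual, mask))
```

Because the filter is applied after compositing, both layers pick up the same processing artifacts, which is what makes the virtual content harder to single out.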


Making Smart Buildings Smarter
Government Computer News (12/17/13) Patrick Marshall

The concept of implicit occupancy sensing taps existing IT infrastructure to monitor building occupancy in real time and harness that data to manage building services for maximum efficiency. Elements of such a system were tested in a building at the Lawrence Berkeley National Laboratory complex, with the infrastructure consisting of smartphones, networked computers, routers, and other devices. By tracking the network addresses of devices and requests as well as automatic polling sent across the network, the software devised by the team successfully determined the occupancy of any location in the building in real time. Berkeley researcher Bruce Nordman says the data demonstrated that the number of network spikes peaked around noon. Activity increased in the morning and decreased in the afternoon, unveiling patterns of people coming to work, powering up their computers, using them, then powering off. The system also was capable of triangulating the whereabouts of specific cellphones by detecting the wireless access points reported as available to the device along with its actual connections. One key advantage of implicit occupancy sensing is its operation on infrastructure that is already in place, Nordman notes. "There is no cost to install or maintain this network, and you can get highly granular data," he says. In addition, the system could be extended to include data from any peripherals linked to networked devices.
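The core idea of implicit occupancy sensing can be illustrated with a minimal sketch: count the distinct devices generating traffic through each access point during a time window and treat that count as an occupancy estimate for the zone the access point serves. The log format, zone mapping, and function below are illustrative assumptions, not the Berkeley Lab team's software.

```python
# Minimal occupancy-from-network-traffic sketch (illustrative assumptions only).
from collections import defaultdict

def estimate_occupancy(traffic_log, ap_to_zone, window_start, window_end):
    """traffic_log: iterable of (timestamp, device_id, access_point) tuples."""
    devices_per_zone = defaultdict(set)
    for ts, device, ap in traffic_log:
        if window_start <= ts < window_end and ap in ap_to_zone:
            devices_per_zone[ap_to_zone[ap]].add(device)   # count each device once
    return {zone: len(devs) for zone, devs in devices_per_zone.items()}

log = [
    (1200, "aa:bb:cc:01", "ap-2f-east"),
    (1210, "aa:bb:cc:02", "ap-2f-east"),
    (1230, "aa:bb:cc:01", "ap-2f-east"),   # repeat traffic from the same device
    (1300, "aa:bb:cc:03", "ap-3f-west"),
]
zones = {"ap-2f-east": "2F East wing", "ap-3f-west": "3F West wing"}
print(estimate_occupancy(log, zones, 1200, 1400))
# {'2F East wing': 2, '3F West wing': 1}
```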


Robot Scientist Pushes Limits of Virtual Reality
Korea Herald (12/25/13) Oh Kyu-wook

Korea Institute of Science and Technology roboticist You Bum-jae is pushing the boundaries of virtual reality research with a nine-year project to develop technologies to facilitate human interaction that is free of time or space limitations. "The purpose of our research is to enable people to experience virtual and remote worlds as if they were the real world," says You, director of the Center of Human-Centered Interaction for Coexistence. An example of the technology You's project has yielded is Mahru, a network-based humanoid robot that can recognize items and perform tasks such as house cleaning, running a microwave, and doing other household chores. Another example is a robotic surgeon that uses three-dimensional cameras, high-precision sensors, and extremely small instruments to enable a human surgeon to perform operations remotely in conjunction with telepresence technologies. Other technologies You and his colleagues are developing include a three-dimensional teleconferencing system, and a focus of the research is the creation of a "coexistence space" where people can not only communicate visually and audibly, but also touch and feel. "The coexistence space will allow doctors to diagnose patients without actually being there, and will also advance social network services such as Facebook with the physical human senses," You says.


Silicon Valley's New Obsession With Beauty
The New York Times (12/23/13) Claire Cain Miller

Silicon Valley technology companies are vying with each other for design talent in order to ensure that their products are aesthetically appealing as well as intuitive for users. "You can't put out something that's ugly or clunky but functional," notes TalentSky CEO Rick Devine, "so user interface engineers are in high demand." Tech experts say the importance of design in software has grown because of the increased intimacy of software and its proliferation to all aspects of people's lives, while its use on mobile devices requires more thoughtful design due to the smaller display. For example, Google retooled its design over the last several years after co-founder Larry Page became CEO and issued a mandate for cohesive and "beautifully simple" products. "One pillar of great design is beauty--having an emotional quality to our design and an emotional resonance--and that wasn't really baked into the original Google products so much," observes Google's Jon Wiley. Google has expanded its stable of designers, including those with backgrounds in visual, industrial, and animation design. Although designers must have technical understanding, they do not always need engineering skills. "First and foremost I look for empathy, because design is not art, it's actually solving real problems for people," Wiley says.


Getting Your Message Through: Will We Still Be Texting in 2030?
TechRadar (12/31/13) David Nield

New technological breakthroughs are expected to significantly enhance face-to-face communication in the decades ahead, according to experts. "I think we are experiencing a trend toward increased 'layered' communication, with technology being used to manage multiple flows of information and communication at once," says University of Michigan professor Scott Campbell. He expects such layering to grow increasingly seamless and integrated within communications in the future. Meanwhile, analyst Jonathan MacDonald anticipates exponential growth leading to computer chips the size of a blood cell with about 1 billion times their current capability by 2030. "Due to this, it isn't too far-fetched to imagine the ability to communicate via emotion without an intermediary device," he says. "It is highly likely that much of the communication machinery will live under our skin. Literally." In addition, advances in motion and voice controllers could make keyboards mostly superfluous in the coming years. If communication technology is relegated to the background by new innovations, this could favor sustained one-on-one interactions over many ephemeral ones across multiple social media accounts. "So far, most of the evidence suggests that online and mobile channels are not taking away from face-to-face interaction, but complementing it," Campbell says. "In some cases, especially with mobile, technology fuels face-to-face interaction when used to coordinate meet-ups."


Intel Robot Puts Touchscreens Through Their Paces
Technology Review (12/19/13) Tom Simonite

Intel's Oculus robot is designed to empirically assess the responsiveness and feel of a touchscreen to determine if it will find favor with humans. Oculus analyzes how objects on a device's screen respond to its touch, and uses a camera to capture video at 300 frames per second in ultra-high resolution. Software utilizes the camera footage to quantify how a device reacts to the robot--for example, how fast and accurately the line in a drawing program follows Oculus' finger, how an onscreen keyboard responds to typing, or how well the screen scrolls and bounces when the machine navigates a long list. Numerical scores are rendered as a rating between 1 and 5 using data from cognitive psychology experiments conducted by Intel to learn people's preferences in a touch interface. Those scores are valuable to engineers at companies developing touchscreen devices based on Intel chips, as well as to Intel's chip designers. Oculus can be employed on any touchscreen device, and automatically adjusts to new screen sizes using a secondary camera. "We can predict precisely whether a machine will give people a good experience and give them numbers to say what areas need improving," says Intel's Matt Dunford. He notes that Oculus improves on previous industry machines because it compares devices using data on how people actually perceive touchscreens, while other robots usually evaluate devices' performance against fixed technical specifications.
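As a rough illustration of the measurement-to-score mapping described above, the sketch below estimates touch-to-response latency from high-speed camera frame indices and converts it to a 1-to-5 rating. The latency anchor points stand in for Intel's cognitive-psychology calibration data, which is not given in the article, so treat them and the function names as assumptions rather than Intel's Oculus software.

```python
# Illustrative touch-latency scoring sketch, not Intel's Oculus code.

FRAME_RATE = 300.0  # frames per second of the high-speed camera

def latency_ms(touch_frame, response_frame):
    """Latency between the frame where the finger contacts the screen and
    the frame where the UI first visibly reacts."""
    return (response_frame - touch_frame) * 1000.0 / FRAME_RATE

def latency_score(ms, best=50.0, worst=250.0):
    """Linearly map latency to a 1-5 rating: <= best -> 5, >= worst -> 1.
    The anchor points are assumptions, not Intel's calibration."""
    ms = min(max(ms, best), worst)
    return round(5.0 - 4.0 * (ms - best) / (worst - best), 2)

lag = latency_ms(touch_frame=120, response_frame=150)  # 30 frames = 100 ms
print(lag, latency_score(lag))                         # 100.0 4.0
```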


Apps That Can Communicate Touch, Taste, and Smell: A Taste of What's to Come in 2014
Belfast Telegraph (Ireland) (12/27/13) Rhodri Marsden

Researchers are working to enable the Internet to deliver multisensory experiences involving taste, touch, and smell. For example, City University London professor Adrian Cheok has developed haptic pajamas that enable wearers to deliver virtual hugs, and haptic rings that create a sensation of hand-holding. Cheok also is creating a device with electrodes that excite taste receptors on the tongue to generate an artificial sensation of taste in the brain. Meanwhile, National University of Singapore researchers are similarly working on a "digital lollipop" that elicits artificial tastes. To develop artificial smells, Cheok is collaborating with French neuroscientist Olivier Oullier on a device that uses magnetic actuation. Although such technology sounds futuristic, the Massachusetts Institute of Technology's synthetic neurobiology group has already proven that optical fiber can be connected to neurons, paving the way for brains to use digital networks to communicate sensory information directly to other brains. "We will have direct connection to the brain within our lifetime, although what level that will be I'm not sure," Cheok says. "We don't yet have a language of smell, or of touch; exactly the same pressure in terms of a touch can have a completely different response in the brain, depending on context. But combined with emotion and the subconscious, it'll bring a heightened sense of presence."


The City With 20,000 Smart Servants
Mobile News (01/02/14) Samantha Tomaszczyk

More than 12,000 sensors have been installed around and underneath the Spanish city of Santander to supply real-time feedback so the city can address problems as they occur and coordinate services based on demand. The equipment is mainly composed of sensors, repeaters, and gateways, with the sensors positioned under cars or in street lamps. The repeaters read the sensors' transmissions and pass them to gateways, where data from all of the sensors is collated and sent for municipal analysis or used by app developers to supply services. The project is part of an overarching European Commission initiative to use a city as a testbed for computing technologies. The consortium overseeing the project includes Telefonica, Santander, and the University of Cantabria. Most of the sensors are linked to municipality-owned optical fiber, while about 25 percent of the devices run on Telefonica's 3G infrastructure because fiber is missing in certain areas. Telefonica says the sensors send only 5 MB of cloud-stored data each day, and as of October the city was supporting 20,000 smart devices in all. Cantabria professor Luis Munoz and Santander mayor Inigo de la Serna say the data gathered by the smart devices will be used to inform future town planning decisions.
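The sensor-to-repeater-to-gateway flow described above can be sketched as a small collation step: a gateway buffers readings relayed by repeaters and groups them by type before they are sent upstream for municipal analysis or exposed to app developers. The message fields and class names below are illustrative assumptions, not the Santander project's implementation.

```python
# Illustrative gateway collation sketch (assumed message format and names).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Reading:
    sensor_id: str
    kind: str        # e.g. "parking", "light", "temperature"
    value: float
    timestamp: int

@dataclass
class Gateway:
    """Collates readings relayed by repeaters before sending them upstream."""
    buffer: List[Reading] = field(default_factory=list)

    def receive(self, reading: Reading) -> None:
        self.buffer.append(reading)

    def collate(self) -> Dict[str, List[Reading]]:
        by_kind: Dict[str, List[Reading]] = {}
        for r in self.buffer:
            by_kind.setdefault(r.kind, []).append(r)
        self.buffer.clear()
        return by_kind

gw = Gateway()
gw.receive(Reading("parking-042", "parking", 1.0, 1388650000))   # bay occupied
gw.receive(Reading("lamp-117", "light", 230.0, 1388650005))      # lux level
print({k: len(v) for k, v in gw.collate().items()})              # {'parking': 1, 'light': 1}
```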


3D Video Calls a Step Closer
Age (Australia) (12/24/13) Drew Turney

Iowa State University researchers are developing an advanced teleconferencing system designed to scan and record a subject in three dimensions (3D), compress and transmit the video across a wired or wireless network in real time, and display it at the destination as a 3D image on any screen. The technique positions two cameras on either side of a light source, facing the subject, to record light and shadow distortions. It produces a massive volume of stereoscopic video data that is compressed from 700 Mbps to 14 Mbps, which is small enough to stream across any network to any connected device at 30 frames a second. "The redisplay is happening on the holographic display, and the users are establishing correct eye gaze, so if they look at each other they make eye contact," notes system developer Nik Karpinsky. Embedding the technology in mobile device applications and non-video applications is the next step in the project. Karpinsky says the generation of free-standing holograms that do not need to be projected onto a surface is within the realm of possibility. "We're projecting onto glass, which becomes a display based on the angle of projection," he points out. "Assuming we could find a display technology for visible light, we could use it in our system."
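The bandwidth figures quoted above can be sanity-checked with a little arithmetic: squeezing a 700 Mbps raw stream down to 14 Mbps is a 50:1 reduction, and at 30 frames per second the compressed stream leaves a budget of roughly 58 KB per transmitted frame. The per-frame numbers below are derived from the article's rates, not figures reported by the researchers.

```python
# Back-of-the-envelope check of the article's bandwidth figures.
raw_mbps, compressed_mbps, fps = 700.0, 14.0, 30.0

ratio = raw_mbps / compressed_mbps                           # compression ratio
raw_per_frame_mb = raw_mbps / fps / 8                        # megabytes per raw frame
compressed_per_frame_kb = compressed_mbps / fps / 8 * 1000   # kilobytes per sent frame

print(f"compression ratio: {ratio:.0f}:1")                     # 50:1
print(f"raw frame:        ~{raw_per_frame_mb:.2f} MB")         # ~2.92 MB
print(f"compressed frame: ~{compressed_per_frame_kb:.0f} KB")  # ~58 KB
```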


Next Up: Humans, Systems Team in Cognitive Computing
Channelworld-India (12/22/13) Anup Varier

With organizations swamped with data to the degree that it inhibits time-critical decisions, a closer study of human-computer interaction is needed. Gartner fellow Jackie Fenn points to three trends at work, namely "augmenting humans with technology--for example, an employee with a wearable computing device; machines replacing humans--for example, a cognitive virtual assistant acting as an automated customer representative; and humans and machines working alongside each other--for example, a mobile robot working with a warehouse employee to move many boxes." IBM is concentrating on the trend of cognitive computing, which applies to many areas, such as network security. IBM Research's Zachary Lemnios notes that modern-day networks are completely ad hoc and mainly mobile, a combination of consumer- and industrial-grade systems whose changeability leaves them accessible to attackers. IBM wants to extend the lessons learned with analytics, apply that knowledge to human-system interactions using speech as the medium, and eliminate the need to code a software tool. Gartner says the primary advantage of machines working alongside people is the ability to combine the productivity and speed of machines with the emotional intelligence and capacity to contend with unknown factors that humans contribute. "Cognitive systems will assist the humans to manage...disruptions and let them operate in this environment," Lemnios predicts. "This is like the transition from horse-drawn carriages to the automobile."


Computerizing People May Be Next Step in Tech
San Jose Mercury News (12/22/13) Steve Johnson

A movement to equip people with electronic devices that work inside of, or attached to, the human body to control various appliances is gaining momentum, and some researchers and executives foresee a time when such technology will facilitate thought-based control of computers, prosthetics, and many other devices. Google's Motorola Mobility branch recently publicized a patent proposal for an electronic skin tattoo for the throat that would enable the user to operate other devices vocally. Google CEO Larry Page envisions a future in which people will have implants that respond to users' thinking about a fact by providing answers. Meanwhile, University of California, Berkeley researchers have proposed implanting thousands of tiny sensors, called neural dust, into humans' brains to initially collect detailed data on brain functions, but which could later be applied toward mental device control, according to researcher Dongjin Seo. Author Amal Graafstra projects rapid development in bioengineered and man-machine interfaces within the next 10 to 20 years, noting the trend will "push the boundaries of what it means to be human." Intel futurist Brian David Johnson thinks the concept of smart tattoos will initially be more acceptable to the public than the insertion of computerized pills or gadgets, because "something on your skin, that's a baby step" compared to a device that must be swallowed or surgically implanted.


Video Game Feedback May Help the Injured Heal
NYU-Poly News (12/17/13)

Researchers at the Polytechnic Institute of New York University (NYU-Poly) and Sapienza University of Rome have published findings demonstrating that the use of force feedback technology in combination with science learning in a therapeutic environment may help people heal from injuries. The researchers detailed experiments conducted by a group of professors using low-cost haptic devices in physical rehabilitation. Science education elements also were employed within the experimental tasks to measure the content's effectiveness in boosting participants' engagement with the exercises. The scientists designed a series of experiments with 48 healthy test subjects to determine whether force feedback delivered through a haptic gaming joystick would positively affect participants' ability to complete on-screen tasks. They generated a two-dimensional virtual map of New York City's Bronx Zoo and asked participants to use the joystick to move a cursor along a set path while reading short paragraphs of science trivia and information about the zoo. The joystick exerted converging and diverging forces, pushing the cursor either toward or away from the prescribed path. "These conditions meant that subjects had to make continual adjustments to counter the force feedback and keep the cursor on track, which affected speed, smoothness, accuracy, and hand position," notes NYU-Poly professor Maurizio Porfiri. A key finding was that the learning modules raised participants' interest and engagement in the activity.
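The converging and diverging force conditions can be pictured with a one-dimensional sketch in which the joystick force is proportional to the cursor's deviation from the prescribed path and is directed either back toward the path or further away from it. The gain value and function below are illustrative assumptions, not the published experimental parameters.

```python
# Illustrative converging/diverging force-feedback sketch (assumed gain).
def feedback_force(cursor_y, path_y, gain=0.8, mode="converging"):
    """Return the lateral force applied by the joystick for a 1-D deviation."""
    deviation = cursor_y - path_y
    if mode == "converging":
        return -gain * deviation   # push back toward the prescribed path
    if mode == "diverging":
        return gain * deviation    # push further away from the path
    raise ValueError("mode must be 'converging' or 'diverging'")

# Cursor has drifted 12 px above the path:
print(feedback_force(112.0, 100.0, mode="converging"))  # -9.6 (corrective)
print(feedback_force(112.0, 100.0, mode="diverging"))   #  9.6 (perturbing)
```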


Abstract News © Copyright 2014 INFORMATION, INC.



