Robots Are Becoming Ready to Work Among Us

Robots will be more useful when they can work alongside us, says Julie Shah.

By Julie Shah on December 17, 2013

Traditionally, robots were designed to work separately from people. That is starting to change as robots begin working alongside humans to courier medicine in hospitals and assemble complex machinery. New legged robots could soon accompany soldiers across treacherous terrain or perform rescue missions at stricken nuclear power facilities. But for the most part, robots still can’t function in human environments without requiring costly changes to people’s own working patterns.

Researchers are now beginning to understand how to build robots that can integrate seamlessly and safely into human spaces. One approach is to give them more humanlike physical capabilities. A human-size robot with legs, arms, and hands can use the same pathways, doors, and tools that we do, so the environment need not be laboriously retrofitted. Of course, a robot does not have to do a job the same way as a person. The Roomba vacuum cleaner appears to bounce randomly around the room, while we would employ a more efficient and methodical approach. However, the Roomba, unlike us, has only one job to do and does not get bored or impatient. In designing a robot’s physical capabilities, we must think carefully about the context in which it will be deployed and remember it isn’t necessarily bound by the considerations guiding the way people work.

The same applies as we begin to design robots intelligent enough to work alongside people. It is as impractical to redesign our work practices for robots as it is to redesign our physical world for them. We must instead build robots capable of doing their jobs with only minimal disruption to the people they work with or near.

This will require them to have mental models of what governs our actions. Robots can build these models the same ways people do: through communication, experience, and practice. We do not require that robots have our full human capabilities for decision-making, communication, or perception. Through careful study of effective human work practices, my own research group is designing robots with planning, sensing, and communication capabilities suited to their contexts. For example, our assembly-line robot learns when to retrieve the right tool by observing its human coworkers, without necessarily having to ask. Robots like this one work seamlessly with people and reduce the economic overhead of deploying new systems. As a result, it will soon be practical to extend human capability through human-robot teamwork.
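
As a toy illustration of the flavor of such observational learning (a generic sketch, not the MIT group’s actual method; the assembly steps and tool names are invented), a robot could simply tally which tool its human coworkers reach for after each step it observes, and pre-stage the most likely one:

# Toy sketch of learning tool-retrieval timing from observation; not Shah's system.
from collections import Counter, defaultdict

# Hypothetical observation log: (assembly step just completed, tool the worker reached for next).
observations = [
    ("mount_bracket", "torque_wrench"),
    ("mount_bracket", "torque_wrench"),
    ("route_cable", "zip_tie_gun"),
    ("mount_bracket", "socket_driver"),
    ("route_cable", "zip_tie_gun"),
]

tool_counts = defaultdict(Counter)
for step, tool in observations:
    tool_counts[step][tool] += 1          # after this step, which tool is usually needed?

def tool_to_prefetch(step_just_finished):
    """Predict which tool to stage next, based purely on observed coworker behavior."""
    counts = tool_counts.get(step_just_finished)
    return counts.most_common(1)[0][0] if counts else None

print(tool_to_prefetch("mount_bracket"))  # -> 'torque_wrench'

Real human-aware planning is far richer than a frequency count, but the sketch captures the point: the robot adapts to the team rather than the other way around.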

Brain-Inspired Chips Will Allow Smartphones to Understand Us

We should look to biology to figure out how to make smartphones more helpful, says M. Anthony Lewis.

By M. Anthony Lewis on December 17, 2013

A modern smartphone is the most powerful information portal the world has known, integrating a traditional telephone with a powerful Internet-connected computer capable of navigating, playing multimedia, and taking photos. I think the next major step in smartphone evolution is obvious: the devices will become intelligent assistants that can perceive the environment and follow our commands. This will become possible thanks to progress in building chips inspired by the functioning of mammalian brains (see “Thinking in Silicon”).

We hope to achieve what I call embedded cognition—intelligence that resides on the mobile handset itself rather than on a distant server. We want devices that are always listening, watching, and paying attention to us, without compromising battery life. We need new kinds of algorithms to process streams of sensory data from sights, sounds, physical sensations, and more. We need our phones to be capable of learning so that they can come to understand their owner. And we need to stuff this intelligence inside compact, power-efficient hardware because we don’t want to transmit data off the smartphone for processing—a requirement that causes delays for users of Apple’s Siri and the Google Now app for Android phones.

A team of engineers and neuroscientists at Qualcomm Research is working on a new type of processor to meet those challenges. It takes design cues from the human brain, which despite using only about 20 watts of power is the most impressive and efficient “computer” that we know of at processing data from the real world—the kind we want smartphones to handle too.

The Zeroth processor, as it is called, works on data using silicon “neurons” that are linked into networks and communicate via electrical spikes. A system with a Zeroth processor can learn. In one test, researchers trained a wheeled robot to favor certain areas of a room by rewarding it when it was in the correct place. We also envision sensors modeled on the nervous system. They would conserve energy by reporting only when the environment had changed, instead of transmitting data constantly.
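
To give a flavor of how spiking “neurons” and reward signals can interact, here is a generic leaky integrate-and-fire sketch with a crude reward-modulated weight update. It is a textbook-style illustration only; the constants, the single-synapse setup, and the reward rule are assumptions of mine, not details of Qualcomm’s Zeroth design.

# Generic leaky integrate-and-fire neuron with a crude reward-modulated weight update.
# Illustrative sketch only; not Qualcomm's Zeroth architecture.
import random

THRESHOLD = 1.0     # membrane potential at which the neuron emits a spike
LEAK = 0.9          # fraction of potential retained each time step
LEARN_RATE = 0.05

weight = 0.3        # strength of the single input synapse
potential = 0.0

for t in range(200):
    input_spike = random.random() < 0.4          # Bernoulli input spike train
    potential = potential * LEAK + weight * input_spike
    fired = potential >= THRESHOLD
    if fired:
        potential = 0.0                          # reset after spiking

    if fired and random.random() < 0.5:          # hypothetical external reward signal
        weight += LEARN_RATE * (1.0 - weight)    # rewarded spike: strengthen the synapse
    elif fired:
        weight -= LEARN_RATE * weight            # unrewarded spike: weaken it slightly

print(f"learned synaptic weight after 200 steps: {weight:.2f}")

The ingredients here (integrate, leak, spike, reward-driven adjustment) are the same ones neuromorphic designs implement in silicon at vastly larger scale.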

This biologically inspired approach to computing should pave the way for the next major upgrade to the 130-gram marvel we call the smartphone.

Facebook’s Two Faces

By David Talbot

Last spring, Facebook founder Mark Zuckerberg invested in an impressive domain name: internet.org. Then, in August, he posted a video featuring snippets of John F. Kennedy’s “Strategy of Peace” speech and blogged that he would “share a rough proposal for how we can connect the next 5 billion people and a rough plan to work together as an industry to get there.” With that, Facebook and six corporate partners—including Nokia, Samsung, Qualcomm, and Ericsson—became part of a swelling movement of tech companies declaiming a commitment to connectivity, seemingly moved by the fact that only 2.7 billion of the world’s seven billion people have Internet access. In October, Google launched the Alliance for Affordable Internet (whose members include Facebook and Ericsson). It is pushing for cheaper Internet access through policy and regulatory reforms.

Behind the focus on the world’s unconnected lie some complicated realities. The companies involved tend to emphasize delivering more data to people who already have network access rather than extending communications connectivity to people who have none. And despite Zuckerberg’s lofty statements, Facebook in particular is falling short of some of Internet.org’s goals: the company isn’t investing in network extensions in developing countries, and its business practices, in many cases, have obligated Internet service providers in such places to incur extra costs.

Internet.org is still more of a press release than a plan. But its first formal statement, a 74-page white paper cosigned by base station maker Ericsson and chipset maker Qualcomm, is telling: it sets a goal of delivering data 100 times more efficiently to mobile phones, the devices most Internet newcomers will use to link to the Net.

Casting Facebook’s data efficiency plan as “the savior of the developing world” is “hard to swallow.”

Increasing efficiency is a perennial goal. And if it makes it possible for ISPs to offer broadband more cheaply, it could make people better off. (Research from the World Bank says that increasing broadband penetration in developing countries by 10 percent boosts their annual economic growth by 1.4 percentage points.) But getting people more data faster is quite a different objective from introducing connectivity in the first place.

Ground truths

Facebook is a major online presence around the world. Take Africa, where it often ranks first or second in popularity among websites. Yet Facebook doesn’t have data centers there, which means content generated by Facebook members in Kenya, for example, has to traverse undersea fiber-optic cables to data centers on other continents. That costs local ISPs at least $100 per month for each megabit of traffic. This charge wouldn’t apply if Facebook stored user content locally.

The ISPs pass those extra costs on to consumers—which surely can’t help Internet expansion efforts on a continent where only 16 percent of people have Internet access, compared with 39 percent worldwide. “It’s a bit disingenuous,” says Phares Kariuki, who runs Angani, a cloud computing startup in Nairobi. “On the one hand, Facebook claims to want to give Africa access through Internet.org, but when it comes to the business decisions they are making, as far as Africans are concerned, I have not seen anything that reflects that value yet.” (It is worth noting, however, that Akamai, the Web optimization service, is establishing infrastructure in more and more African locations. To the extent that Facebook uses Akamai’s service, it reduces the extra costs that ISPs in those regions would incur.)

As part of Internet.org, Zuckerberg published a white paper titled “Is Connectivity a Human Right?” in which he wrote that the company has “invested more than $1 billion to connect people in the developing world over the past few years.” But the details were absent: spent on what, to connect whom, and to what? Through a spokesman, Zuckerberg turned down an interview request. But on closer inspection, that statement apparently means “connect people to Facebook.”

Facebook spokesman Derick Mains e-mailed a clarification: the company, he wrote, hasn’t invested in any “physical buildout of infrastructure” to connect people. He declined to say where the $1 billion went, giving only one example: Facebook’s $70 million purchase of Snaptu, whose technology makes it possible for apps like Facebook’s to run on the basic phones that are common in developing countries.

Such acquisitions, of course, are meant to improve Facebook’s own operations: the company, like others, is keenly interested in having its service accessible on as many phones as possible. Facebook is also doing important work to develop ways of delivering information more efficiently to smartphones that run the dominant Android operating system, says Jay Parikh, Facebook’s vice president for infrastructure.

Facebook will surely come up with technologies that are useful on all kinds of mobile phones. But Ethan Zuckerman, who has helped lead several Web projects in poor countries, says that “to wrap that into a press release that turns Facebook into the savior of the developing world is hard to swallow.”

Tapping the airwaves

Other Internet companies have gone much further, funding Internet infrastructure projects that also happen to advance their own interests in getting more people to use their services.

One is in the capital city of Kampala, Uganda, a metropolis where you can get relatively slow connectivity from any of about 10 mobile carriers or Internet service providers. In November, Google announced that it had installed 170 kilometers of fiber-optic lines in Kampala, a major step forward that could enable local carriers and ISPs to provide faster speeds at lower prices. (Fewer than 1 percent of sub-Saharan Africans have fixed broadband, defined by the U.N.’s International Telecommunication Union as a data rate of two megabits per second; 11 percent have mobile broadband, defined as 3G or similar service.)

If Facebook really wants to connect more people, it should support cutting-edge wireless networks.

A handful of other projects are meant to provide Internet access where none previously existed at all. One is unfolding in the region around Nanyuki, Kenya, a town at the foot of Mount Kenya. In poor and sparsely populated areas like this, extending fiber makes no sense economically—wireless carriers often fail to recoup their investments in even conventional cellular base stations powered by diesel generators. But in Nanyuki, an experimental low-cost wireless Internet system is radically altering the economics.

It works like this: first, a powerful microwave transmitter delivers a high-bandwidth connection from a fiber terminus to several fixed wireless base stations over tens of kilometers. These base stations retransmit data on unused television frequencies—called “white spaces”—to 40 solar-powered Wi-Fi routers and phone-recharging stations in schools, clinics, businesses, and community centers. The Nanyuki apparatus already serves 20,000 people, and this capacity is set to triple. Most important, it does so for less than $5 per user per month—5 percent of the region’s average annual income of $1,200.
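
The arithmetic behind that affordability claim, spelled out:
\$5/\mathrm{month} \times 12\ \mathrm{months} = \$60/\mathrm{year} = 0.05 \times \$1{,}200.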

The company behind this effort is Microsoft, but Google has just completed a similar trial to provide bandwidth to schools in Cape Town, South Africa. Companies are testing many other white-space efforts around the world. The impact could be large: what many places need is simple access to the airwaves, which is frequently restricted by national governments. “If you look around the world—whether in the U.S. or the Philippines—the issues around digital inclusion and universal access are mainly policy challenges,” says Paul Garnett, director of Microsoft’s technology policy group.

Reaching the remotest

But white spaces are only as good as the base stations and power supplies at the farthest endpoints. Vanu Bose, CEO of a company called Vanu that develops cheap cellular base stations, tells the story of an enterprising man in Zambia who collects cell phones every morning from his fellow villagers. He then drives three hours to a spot where he can get a signal from a cellular tower—and switches on all the phones so they can ingest all the text messages and voice mails that have accumulated since the previous day’s excursion.

This workaround is a reminder that there are still more than 200 million people in Africa alone who don’t even have the most basic cellular phone service. For Zambia, Bose has developed what he claims is the lowest-power base station on the market: a rugged unit that can connect to the Internet in a number of ways, including microwave links, satellite links, and white spaces, and serve up access to 1,000 villagers per node. All it needs is 50 watts of power from solar panels, with a few watts left over for a communal phone-charging dock. This provides very basic voice and data service and maybe one low-bandwidth Wi-Fi hot spot.

Broadband it ain’t. But such service can be transformational—enabling families to stay in touch, emergency medical aid to be summoned, educational materials to be delivered. “Internet.org is all about higher-capacity networks and more bandwidth,” Bose says. “But we shouldn’t think about bandwidth first but connectivity of any kind first. They are very different things. One communications transaction per day is infinitely better than zero.”

Beyond hyperefficient setups like Bose’s, Google has prototyped a new concept: fleets of solar-powered balloons in the stratosphere, networking among one another and beaming Internet connectivity to far-flung rural areas at speeds comparable to 3G. It’s been criticized as a marketing stunt, and it may not even work. But in contrast to Facebook’s effort to increase data efficiency, “at least it’s funky and new, at least it’s interesting, at least it’s ambitious,” says Ethan Zuckerman, who today is director of the Center for Civic Media at MIT’s Media Lab.

Facebook says its focus is in the right place, and that helping more people who already own phones to afford data plans is a crucial job. That’s why the broad outlines of Internet.org involve figuring out how to deliver data more efficiently, in part through new business models. “A good way to look at it is that it’s a first step, and a really hard problem to solve,” says Aaron Bernstein, a former Qualcomm executive who is now a director of mobile partnerships at Facebook. And all the companies and organizations promoting and working toward Internet connectivity agree that there will be no silver bullet. “Only a lot of lead bullets,” as Facebook’s Parikh puts it.

But Facebook must shoot those bullets at the right targets. If the company really wants to make access more affordable, it can make sure its data is in the countries where people are using the service. If it really wants to connect more people, it can fund and support cutting-edge wireless networks. As John F. Kennedy said about the Peace Corps, 24 years before Zuckerberg was born: “Americans are willing to contribute. But the effort must be far greater than we have ever made in the past.”

Should Energy Startups Be Funded as Charities?

Investments in energy startups can count as charitable grants. Could this funding approach help get new technologies to market?

Kevin Bullis

Just a few years ago, the conference rooms of the Marriott Hotel in Kendall Square, just across the street from MIT, were abuzz with venture capitalists who were going to solve climate change by funding startups with novel solar cell and biofuel technologies. I remember John Doerr, of Kleiner Perkins, tearing up as he talked about the importance of investing in technology to stop global warming.

Those same conference rooms were far more staid this year, at the annual MIT Venture Capital conference. Indeed, energy startups were talked about as charity cases. Literally.

In the last couple of years, funding for new energy companies has dried up or has shifted to companies with more modest goals, such as crunching data to cut energy bills. VCs got burned after several solar companies and advanced battery companies went bankrupt or were acquired for pennies on the dollar (see “For Energy Startups, a Glass Half Full or Empty?” and “A123 Systems Files for Bankruptcy”).

At the conference I learned that the Will and Jada Smith Foundation, along with four other foundations, is funding an effort to develop a new way for energy startups to get funded—by treating them as charities that no ordinary investor would touch.

As it turns out, there’s a provision in the tax code that says that investments in startups can be counted as charitable grants—even if those investments could in the long term bring in huge returns. The catch? The startups have to be ones that are too risky for ordinary investors.

That’s an increasingly easy requirement for many energy companies to meet these days. VCs are wary of energy startups that require large investments and take over a decade to provide returns.

Private foundations, however, don’t mind waiting around for returns. Indeed, they’re used to giving away a lot of their money.

In the United States, tax-exempt private foundations make grants totaling $50 billion a year—that could go a long way to helping energy startups. The donations are part of a requirement that such foundations donate at least five percent of their money to charitable causes. The Smiths’ foundation is providing seed money for a new organization—called PRIME—that is helping private foundations take advantage of a part of the tax code that will allow them to make investments in startups, but count them as grants that help them meet that five percent requirement.

According to Sarah Kearney, the executive director of PRIME, private foundations can make investments, and even make big returns on those investments, and still count them as charitable grants as long as they meet two requirements. The first is that the startup serves a philanthropic role. “Advancing science” counts as philanthropy. Addressing climate change and helping poor countries with their energy needs could also count. The second requirement is that ordinary investors wouldn’t touch the startup.

Private foundations don’t often take advantage of this opportunity. One reason is that the people in charge of grant making don’t know how to make investments. Another is that there is some risk that the IRS won’t accept the rationale that the investment fits the requirements for being counted as a grant, and getting a confirmation from the IRS can take a year, Kearney says, a long time for a struggling startup to wait for funding.

PRIME is designed to act as an intermediary that takes on that risk, and that has the necessary expertise to make investments. It is registering as a charitable organization with the IRS, so foundations can make ordinary grants to it that they know will count toward that five percent requirement. PRIME will then invest that money in startups that its experts determine count as charities. The organization is at a very early stage, having raised just $180,000 in seed money to get going. It’s in the process of raising funds for its first investments. Much of the early work is being done as “in-kind donations” by energy VCs who want to see more money going to risky energy companies.

Matthew Nordan, a vice president at the venture firm Venrock and one of the most upbeat energy-focused venture capitalists you can find these days (see “A Solar Survivor Has High Hopes”), is one of those VCs. He makes the point that energy is still a huge market, and fortunes are waiting to be made. But he sees the need for new funding models for risky companies. He singled out PRIME as one of the most promising.

PRIME’s approach is certainly an interesting model, and one that sounds like it could get more funding flowing to energy companies. But ironically, by revealing these companies as being in need of charity, it also highlights just how difficult it is to bring new energy technologies to market, and how hard it is these days to get ordinary investors interested.

1-Femtojoule Optical Interconnect Paves the Way for Low-Power, High-Performance Chips

The wires that ferry electronic data across microchips are too power hungry for the chips of the future. But photons can now do the job instead, thanks to a new generation of ultralow-power optical interconnects.

Emerging Technology From the arXiv

Moore’s law is the observation that the number of transistors on integrated circuits doubles every two years or so. When Gordon Moore spelt out his eponymous law in 1965, the numbers involved were measured in thousands. Today, microchips contain billions of transistors, with the smallest having dimensions just a few nanometres wide.

All that makes for fantastically fast processing at huge data rates. But there’s another increasingly important factor in chip design: power consumption. Not only must the transistors themselves operate at low power, but the power consumed in transmitting data from one part of a chip to another must also be low.

And therein lies a serious problem. An interconnecting wire on a chip just 1 millimetre long consumes about 100 femtojoules for every bit it carries. That may sound small: a femtojoule is 10^-15 joules. But with data rates now hitting petabits per second (1 petabit = 10^15 bits), a large chip will eat about 100 watts. And that’s just for the interconnecting wires. The power that transistors use up is on top of this.
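
Spelled out, that back-of-the-envelope calculation is
100\,\mathrm{fJ/bit} \times 10^{15}\,\mathrm{bit/s} = 10^{-13}\,\mathrm{J/bit} \times 10^{15}\,\mathrm{bit/s} = 100\,\mathrm{W}.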

The bottom line is that conventional interconnecting wires are at least ten times more power hungry than the next generation of chips can handle.

So the designers of future chips are turning from electrons to photons to do this job. The idea is to convert the electronic signals from transistors into photons and beam them around the chip at ultralow power.

That requires lasers to create the photons, modulators to encode the photons with data, and detectors to receive the photons. But here’s the thing: all this has to be done with a power budget measured in just a few femtojoules, something that hasn’t been possible.

Until now. Today Michael Watts and pals at the Massachusetts Institute of Technology in Cambridge say they’ve designed and built the first photonic modulator that operates at this ultralow power level. “We propose, demonstrate, and characterize the first modulator to achieve simultaneous high-speed (25 gigabits per second), low-voltage (0.5 volts peak-to-peak), and efficient 1-femtojoule-per-bit error-free operation,” they say.
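
A quick sanity check on those figures: at 1 femtojoule per bit and 25 gigabits per second, the modulator itself dissipates only about
10^{-15}\,\mathrm{J/bit} \times 25 \times 10^{9}\,\mathrm{bit/s} = 2.5 \times 10^{-5}\,\mathrm{W} = 25\,\mu\mathrm{W},
a vanishingly small slice of a chip’s power budget.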

The new device is a hollow silicon cylinder that acts as a cavity for trapping light waves. It modulates this light thanks to a phenomenon known as the electro-optic effect in which the refractive index of silicon can be changed by modifying the voltage across it.

The modulator solves a number of problems that electronics engineers have been wrestling with. First, it is entirely compatible with the CMOS (complementary metal oxide semiconductor) process used for manufacturing chips and so can be made inside any existing fabrication plant. Previous attempts to make devices of this kind relied on indium, which is not compatible with CMOS.

Next, the device is unaffected by the kind of temperature changes that occur in a chip. That’s been a problem for previous modulators of this type since their critical dimensions, and therefore the control they have over light, have always changed with temperature.

Watts and co fix this by exploiting the fact that the electro-optic effect is also temperature dependent. In the new design, this cancels out any changes in dimension, ensuring that a temperature change has no overall effect.

And finally, they’ve got all this working with a measly 1-femtojoule power budget.

Impressive stuff. Watts and co are justifiably pleased with the result. “The results represent a new paradigm in modulator development,” they say.

At the very least, it should make possible a new generation of powerful chips operating at lower power than ever before.

New Interfaces Inspire Inventive Computer Games

Novel modes of interaction are inspiring independent games companies to come up with completely new types of games.

By Simon Parkin

The cliché is that technological innovation in video game development is the domain of the blockbuster studios. These are companies with the requisite manpower and cash reserves to explore new ways for players to interact with digital games, or to ever more closely replicate the detail and texture of reality on screen. The indie developers, meanwhile, innovate in the area of game design, where they are small and agile enough to take creative risks.

There is some truth to this, and the interplay of technological progress and creativity between the two loose groups has produced a healthy ecosystem. Technological innovations made by the blockbuster developers are passed on to indie devs, while new forms of gameplay uncovered by the indies routinely make their way into the mainstream big-budget releases.

But in 2013 many indie devs have shown their willingness to work with emerging technologies, discovering new ways to interact with games. Indie developers in particular are exploring the new design territory opened by Oculus Rift, the virtual reality headset due for public release next year (see “Can Oculus Rift Turn Virtual Wonder into Commercial Reality?”). These studios are uncovering new types and styles of play that are uniquely suited to the hardware. Here are some of the most interesting examples of technological invention in recent indie games.

Johann Sebastian Joust (PC, PlayStation 3)

J.S. Joust is the brainchild of academic and designer Douglas Wilson. Designed for anywhere from two to seven players, the game is played away from a TV screen and uses the PlayStation Move controllers—Sony’s wand-like batons that have a ping-pong ball-size light at one end. Each player attempts to jolt an opponent’s controller while keeping his or hers steady. If a controller moves too rapidly, the light turns red and its holder is out of the game. The aim is to be the last player standing.

During play, music from Johann Sebastian Bach’s Brandenburg Concertos is played at a greatly slowed tempo. At regular intervals the music’s speed increases, signalling that the controller’s sensitivity to movement has been decreased, allowing for more vigorous action and interplay between players. A staple of indie game gatherings in beta version since 2012, J.S. Joust is a parlor game that breaks down social barriers by encouraging physical contact—and without a TV screen as a focal point, it allows for thrilling eye contact between players.

The game was slated for commercial release this fall on PC and PlayStation 3 as part of the Sports Friends compendium, but it has yet to surface.

Spin the Bottle: Bumpie’s Party (Wii U)

Another game that eschews the TV screen, Spin the Bottle: Bumpie’s Party is an indie game for Nintendo’s Wii U console. Players compete in teams across a variety of riotously creative mini-games, which use a combination of the Wii’s motion controllers and the Wii U’s tablet-like touch-screen controller.

In one of the included games, Rabbit Hunt, players split into two teams. One group leaves the room while the other must hide the Wii remote controllers as best they can, trying to confound the expectations of the opposing team. The other team is then invited back into the room and has a minute or so to find the remotes, which intermittently make the sound of a snuffling rabbit.

As with J.S. Joust, this game transforms the room in which it is played into part of the set; the mundane contours of a living space are made into an actualized video game level, where routine objects become potentially treacherous concealers of tiny rabbits. It’s an inspiring demonstration of how video game hardware can be used in creative ways that the original designers never intended.

DropChord (PC/Mac, iOS, Android, Ouya)

Created by Double Fine, the studio best known for humorous, narrative-led games (such as Psychonauts, Sesame Street: Once Upon a Monster, and Brütal Legend), DropChord is an abstract music game for tablets that also offers one of the best uses of the Leap Motion controller, a computer hardware sensor device that supports hand and finger motions as input (see “Leaping Into the Gesture-Control Era”).

The game’s rules are simple: hold two thumbs to the screen and light streaks between them, joining to create a throbbing beam that bisects the darkness. During play you must position the beam in order to strike through the “notes” that appear within the circle. Peril is introduced by way of “scratches,” red dots that you must avoid touching with your beam. Strike all of the notes in a section without a scratch to score a perfect pattern. Your successful swipes trigger firework explosions that fill the screen with light and particles.

While Sony and Microsoft continue to pursue motion-controlled games (the latter’s Kinect 2.0 camera, launched alongside the Xbox One last month, is a particularly powerful piece of hardware), players still view these games with suspicion. DropChord is a rare example of a game that is best suited to motion control play, and demonstrates the powerful potential of this form of interaction where most others have failed.

Private Eye (PC & Oculus Rift)

In this striking detective game for the Oculus Rift virtual reality headset, you play as a wheelchair-bound detective spying on a building through his binoculars. Clearly influenced by Alfred Hitchcock’s Rear Window, Private Eye re-creates the sense of being a largely helpless voyeur with style.

Your only interaction with the game world is via head movements. By surveying the scene and catching important details you must work to solve the mysteries of the neighborhood, from finding a lost football for a group of children to uncovering the local Mafia’s plans.

The ultimate goal, however, is to catch a murderer who you know will kill at 10 p.m. It is a wonderful and inspiring exploration of the power of the new hardware.

Tenya Wanya Teens (PC)

Eccentric auteur Keita Takahashi is best known for his surreal PlayStation series Katamari Damacy, in which players guide an alien prince who rolls a sticky ball through the world, picking up the detritus of modern living as he goes. Tenya Wanya Teens is an equally unconventional proposition, a party game in which two players race to be the first to perform a range of random on-screen actions, from shouting “I love you” at a pretty girl, to successfully peeing into a urinal.

The technological invention comes from the game’s two giant 16-button controllers. Each of the game’s on-screen actions is tied to a different color button (so, for example, you might need to press a red button to order your character to brush his teeth in front of a bathroom mirror).

Humor derives from the fact that the colors of the buttons change at random (via in-button lighting), fooling your brain into pressing the wrong button at the most inopportune moments. In this way you may accidentally cause your character to tell a urinal that he desperately loves it, or pee on a bear. Hilarious and memorable, even if, thanks to the bespoke hardware required, it is scarce at the moment.

Escape (Google Glass)

Escape wasn’t the first game to be announced for Google Glass, the wearable computer due for release next year, but it was the first to be completed and released (albeit only to those few who own development kits).

A far cry from the real-world first-person shooters that some video-game makers have envisioned, Escape is a simple puzzle game that plays out on the surface of one of the device’s lenses. In the game you guide a stick character around a path of dots; it’s the kind of game you’d expect to idly play in your browser to while away the time, but the Google Glass context elevates it to a new level of interest, and the reception to the game demonstrates consumers’ hunger for new game experiences on emerging wearable hardware.

The Emerging Technologies Shaping Future 5G Networks

The fifth generation of mobile communications technology will see the end of the “cell” as the fundamental building block of communication networks.

It may seem as if the fourth generation of mobile communications technology has only just hit the airwaves. But so-called 4G technology has been around in various guises since 2006 and is now widely available in metropolitan areas of the US, Europe and Asia.

It’s no surprise then that communications specialists are beginning to think about the next revolution. So what will 5G bring us?

Today we get some interesting speculation from Federico Boccardi at Alcatel-Lucent’s Bell Labs and a number of pals. These guys have focused on the technologies that are most likely to have a disruptive impact on the next generation of communications tech. And they’ve pinpointed emerging technologies that will force us to rethink the nature of networks and the way devices use them.

The first disruptive technology these guys have fingered will change the idea that radio networks must be made up of “cells” centered on a base station. In current networks, a phone connects to the network by establishing an uplink and a downlink with the local base station.

That looks likely to change. For example, an increasingly likely possibility is that 5G networks will rely on a number of different frequency bands that carry information at different rates and have wildly different propagation characteristics.

So a device might use one band for its uplink at a high rate and another band for its downlink at a low rate, or vice versa. In other words, the network will change according to a device’s data demands at that instant.

At the same time, new classes of devices are emerging that communicate only with other devices: sensors sending data to a server, for example. These devices will have the ability to decide when and how to send the data most efficiently. That changes the network from a cell-centric one to a device-centric one.

“Our vision is that the cell-centric architecture should evolve into a device-centric one: a given device (human or machine) should be able to communicate by exchanging multiple information flows through several possible sets of heterogeneous nodes,” say Boccardi and co.

Another new technology will involve using millimetre wave transmissions, in addition to the microwave transmission currently in use. Boccardi and co say that the microwave real estate comes at a huge premium. There is only about 600 MHz of it. And even though the switch from analogue to digital TV is freeing up some more of the spectrum, it is relatively little, about 80 MHz, and comes at a huge price.

So it’s natural to look at the shorter wavelengths and higher frequencies of millimetre wave transmissions, ranging from 3 to 300 GHz. This should provide orders of magnitude increases in bandwidth.

But it won’t be entirely smooth going. The main problem with these frequencies is their propagation characteristics—the signals are easily blocked by buildings, heavy weather and even by people themselves as they move between the device and the transmitter.

But it should be possible to mitigate most problems using advanced transmission technologies, such as directional antennas that switch in real time as signals become blocked. “Propagation is not an insurmountable challenge,” they say.

Next is the rapidly developing multiple-input multiple-output (MIMO) technology. Base stations will be equipped with multiple antennas that transmit many signals at the same time. What’s more, a device may have multiple antennas to pick up and transmit several signals at once. This dramatically improves the efficiency with which a network can exploit its frequencies.
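
A rough way to see why, using the standard textbook approximation rather than anything from Boccardi and co’s paper (the symbols are generic notation: N_t and N_r are the transmit and receive antenna counts, B the bandwidth, and SNR the signal-to-noise ratio): under ideal scattering conditions the capacity of a MIMO link grows as

C \approx \min(N_t, N_r)\, B \log_2(1 + \mathrm{SNR}),

so adding antennas at both ends multiplies the achievable rate in the same slice of spectrum, rather than merely nudging it up logarithmically the way extra transmit power does.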

However, it will mean larger antennas, perhaps spread out across the surface of skyscrapers. That’s fine in modern cities with plenty of relatively accessible surface area. It’d be trickier in older cities, where large panels will be harder to conceal.

Smarter devices should also help shape the networks of the future. So instead of signals being routed by the base station, smart devices will do this job instead, choosing between a variety of different options. For modern smartphones, that should be a relatively straightforward task.

And the final disruptive technology these guys identify is the ability for devices to communicate with each other without using the network at all.  Boccardi and co say this will be essential for the ways in which future networks will be used. For example, a sensor network may have ten thousand devices transmitting temperature data. That will be easier if they can send it from one device to the next rather than through a single base station.

Of course, many of these developments pose significant technology challenges but none of these should be showstoppers. “New research directions will lead to fundamental changes in the design of future 5G cellular networks,” say Boccardi and co confidently.

5G is not a question of if but when.

Material Made from Plastic Bottles Kills Drug-Resistant Fungus

IBM researchers have developed a new polymer-like material to treat fungal infections.

By Susan Young

A material made from plastic bottles can knock out a drug-resistant fungal infection that the Centers for Disease Control and Prevention predicts will become a more serious health problem in coming years.

Antibiotic-resistant bacteria and fungi kill at least 23,000 people in the U.S. alone each year, and many of these microbial infections are acquired by people hospitalized for other reasons. Research groups around the world are exploring a variety of ways to address the problem, including hunting for novel kinds of antibiotics (see “Bacteria Battle Generates New Antibiotics”) and creating sutures coated with bacteria-killing viruses (see “Using Viruses to Kill Bacteria”).

Another approach involves using biologically active materials that punch holes in the membranes surrounding each microbial cell. These membrane-attacking compounds mimic one of the body’s natural defenses—antimicrobial peptides that insert themselves into a microbe’s outer membrane and break open the bug. IBM Research has developed such a compound—a small molecule that self-assembles into a polymer-like complex capable of killing Candida albicans fungi infecting the eyes of mice. The work was published today in Nature Communications.

“Usually, it is difficult to make antifungal agents because fungal cells are very similar to human cells,” says Kenichi Kuroda, a materials chemist at the University of Michigan who is also working on antimicrobial materials. The challenge is that many microbe-killing drugs work by sabotaging a molecular process inside the pathogen’s cells. And while the molecular machinery of bacteria is usually sufficiently distinct from human cellular machinery to avoid overlapping effects, fungal cells are much closer.

The new IBM compound has not yet been tested in humans, but the researchers say that in mice with a Candida infection in their eyes, the compound killed the fungus more effectively than a widely used antifungal drug without causing harm. And whereas Candida developed resistance to an existing antifungal drug after six treatments, it did not develop resistance to the new compound even after 11 treatments, the team reports.

That ability to avoid resistance may be thanks to the fact that the compound kills by disrupting the microbes’ outer membrane. Unlike antibiotics, which typically work more slowly and therefore give a population of bacteria time to evolve resistance, “these kinds of biomaterials have a quick action,” says Kuroda, whose own work also focuses on attacking microbial membranes.

IBM developed the polymer-like material using techniques that are well-established in microelectronics but relatively new to biology, says James Hedrick, the IBM Research materials scientist leading the work. The compound belongs to a branch of materials sometimes referred to as molecular glasses. The compound starts off as many individual small molecules, but in water, these individual molecules coalesce into a larger structure that is similar to a polymer, but with weaker bonds between each molecule. This means that the material degrades over time.

“With time it’s going to fall apart, and going to pass through the body,” Hedrick says.  “You want them to do their business and then go away, and you don’t want them to accumulate in the body, in waterways, and in our food.”

The starting material comes from a common plastic known as PET. Hedrick says whenever he needs more starting material, he just goes to the nearest recycling bin in IBM Research’s San Jose building, finds a plastic bottle, and cuts a piece out of it.

Hedrick says that, working with collaborators in Singapore who handle the animal-testing arm of the project, the team has shown that a similar compound can knock out an antibiotic-resistant bacterial infection known as MRSA. By injecting that compound into the tail veins of mice, the researchers have been able to clear an MRSA infection from the animals’ blood.

“We can do many things with [these compounds],” says Hedrick. “We can make them into hydrogels to treat MRSA skin infections and they can go into everything from shampoo to mouthwash.”

SolarCity, Using Tesla Batteries, Aims to Bring Solar Power to the Masses

SolarCity’s new battery system might help solar become a significant source of electricity.

By Kevin Bullis

Today, SolarCity—a company that’s grown quickly by installing solar panels for free and charging customers for the solar power—announced a new business that will extend that model to providing batteries for free, too. SolarCity is a rare success story for investors in clean technology, and its business model has sped the adoption of solar panels.

The batteries could help businesses lower their utility bills by reducing the amount of power they draw from the grid. They could also help address solar power’s intermittency, which could prevent it from becoming a significant source of electricity. The batteries are being supplied by Tesla Motors, whose CEO, Elon Musk, is SolarCity’s chairman.

Other solar companies have failed in recent years. But SolarCity’s business model has helped it grow quickly. It had a successful IPO a year ago, and its stock price has risen from its IPO price of $8 to over $50 today (see “SolarCity IPO Tests Business Model Innovations in Energy”).

CEO Lyndon Rive says that eight years from now, the company might not be able to continue selling solar panel systems unless it packages them with batteries, because of the strain on the grid that solar power can cause. “It could be that, without storage, you won’t be able to connect solar systems to the grid,” he says.

Solar power intermittency isn’t currently a big problem for utilities, since solar panels generate just a tiny fraction of the total electricity supply. But solar power will become a strain on the power grid as it grows. Power from solar panels can drop in less than a second as clouds pass overhead, before surging back again just as fast. The tools that utilities use now to match supply and demand typically can’t respond that fast. Batteries could be a solution, but they’re too expensive to be used widely now. Rive thinks SolarCity can help drive down their costs by scaling up its use of batteries with the new business model.

Utilities charge companies for their electricity based on two things. The first is the total amount of electricity they use (measured in kilowatt-hours). The second has to do with their peak demand—a company that needs to draw huge amounts of power for industrial equipment will pay more than one that only needs to charge a couple of laptops, since it will need bigger transformers and other equipment. The fee based on that peak electricity demand can be a big chunk of the total bill, typically between 20 and 60 percent, Rive says.

The battery systems—and the software that controls them—are designed to reduce the peak draw from the grid. Batteries charge up using power from solar panels and supplement power from the grid when a company needs to draw its highest levels of power—such as during summer afternoons, when air conditioners are running hard.
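
To make those billing mechanics concrete, here is a minimal peak-shaving sketch. The tariff rates, load profile, and battery size are invented for illustration; they are not SolarCity’s or any utility’s actual figures.

# Illustrative demand-charge arithmetic; every number here is hypothetical.
ENERGY_RATE = 0.12   # dollars per kWh consumed
DEMAND_RATE = 15.0   # dollars per kW of peak demand in the billing month

# One day's grid draw in kW, hour by hour: quiet overnight, a spike in the afternoon.
load_kw = [40] * 8 + [60] * 4 + [120] * 4 + [60] * 4 + [40] * 4

def monthly_bill(load):
    energy_kwh = sum(load) * 30          # assume the same profile every day of the month
    peak_kw = max(load)
    return energy_kwh * ENERGY_RATE + peak_kw * DEMAND_RATE

def shave(load, cap_kw, battery_kwh):
    """Discharge the battery (assumed charged from solar) whenever draw would exceed cap_kw."""
    shaved, stored = [], battery_kwh
    for kw in load:
        discharge = min(max(0.0, kw - cap_kw), stored)
        stored -= discharge
        shaved.append(kw - discharge)
    return shaved

print(f"without battery: ${monthly_bill(load_kw):,.0f}")
print(f"with peak shaving: ${monthly_bill(shave(load_kw, cap_kw=80, battery_kwh=200)):,.0f}")

In this toy example the demand charge is roughly a quarter of the bill, consistent with the 20 to 60 percent range Rive describes, and capping the peak at 80 kW instead of 120 kW cuts that portion by a third.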

SolarCity is also testing battery systems with residential customers, who typically don’t pay demand charges. The main draw for homeowners would be the batteries’ ability  to provide backup power if the grid fails. But eventually regulators could adopt rules that allow homeowners to reap profits from allowing utilities to use their batteries to help manage electricity load on the grid.

Rive says SolarCity spent three and a half years developing the battery system and the last year testing it. Because batteries are expensive, it’s ideal to use ones as small as possible. Algorithms try to predict when to charge and discharge the batteries, a decision based partly on forecasts of how much solar power is going to be available and when demand will be greatest.

The batteries use the same technology Tesla uses in its electric cars. But the packs could be far larger, depending on the size of the solar panel system each is paired with.

SolarCity isn’t the only company looking to use batteries to reduce electricity costs (see “A Startup’s Smart Batteries Reduce Buildings’ Electricity Bills”). Nissan recently announced that it had used the batteries inside several plugged-in Nissan Leaf electric vehicles to reduce electricity costs for one building in Japan, as part of a test of a concept called vehicle-to-grid (see “Recharging the Grid with Electric Cars”).

Indoor Imagery Shows Mobile Devices the Way

Street View-style imagery of interior spaces lets mobile devices locate themselves more accurately than is possible with GPS.

By Tom Simonite on December 10, 2013

Smartphones locate themselves outdoors using a GPS sensor, but those signals are blocked indoors. A new technique uses a device’s camera to get an indoor location fix to an accuracy of within a meter. The technique could enable new kinds of apps, and may be particularly valuable for wearable computers such as Google Glass.

The new location-fixing method is being developed at the University of California, Berkeley. It uses a photo from a device’s camera to work out the location and orientation of the device. It does this by matching the photo against a database of panoramic imagery of a building’s interior, similar to the outside views offered by Google’s Street View. The system can deduce the device’s location because it knows the position of every image in that database.
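
The retrieval step can be sketched with off-the-shelf feature matching. The snippet below is only an illustration of the general idea using OpenCV, not the Berkeley group’s code; the panorama file names and their poses are made-up placeholders, and a real system would refine the matched image into a precise position and orientation rather than simply reporting its stored pose.

# Illustrative image-retrieval localization using OpenCV (opencv-python); all data is hypothetical.
import cv2

# Each database entry: (panorama image from the mapping pass, its known (x metres, y metres, heading degrees)).
DATABASE = [
    ("pano_001.jpg", (2.0, 5.0, 90.0)),
    ("pano_002.jpg", (8.0, 5.0, 180.0)),
    ("pano_003.jpg", (8.0, 12.0, 270.0)),
]

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def localize(query_path):
    """Return the pose of the database image that best matches the phone's snapshot."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, query_desc = orb.detectAndCompute(query, None)
    best_pose, best_score = None, 0
    for path, pose in DATABASE:
        ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, ref_desc = orb.detectAndCompute(ref, None)
        if query_desc is None or ref_desc is None:
            continue
        matches = matcher.match(query_desc, ref_desc)
        score = sum(1 for m in matches if m.distance < 40)   # count close descriptor matches
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose

print(localize("snapshot_from_phone.jpg"))

Matching a snapshot against a pose-tagged image database is the core idea; everything else in the Berkeley pipeline builds on top of it.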

The researchers used a special backpack that captures Street View-style imagery indoors as the wearer carries it around. It has two fisheye cameras, laser scanners, and other sensors. Software uses the data collected to generate a map of the building’s interior, a stitched-together set of panoramas, and a database of individual images that can be used for location lookups.

“You can provide that blue dot you see on a mobile map when out-of-doors for interiors,” says Avideh Zakhor, who leads the Berkeley group developing the system. Zakhor previously sold a 3-D city mapping company to Google that became a major part of the company’s Google Earth 3-D virtual globe.

Zakhor and colleagues have tested their system in buildings on the Berkeley campus and in a mall in Fremont, California. In tests at the mall, they successfully matched more than 96 percent of images taken by a smartphone’s camera against the database of images. When the matches were turned into location fixes, most came out with an error of less than a meter from the device’s true location.

Zakhor says her approach compares favorably with competing methods of determining location indoors in terms of accuracy and the cost of deployment. Alternative methods include using Bluetooth “beacons” or fingerprinting the pattern of Wi-Fi signals inside a building.

Jonathan Ventura, senior researcher at Graz University of Technology, Austria, agrees. “The major advantage of image-based localization is that it works almost everywhere and doesn’t require changing the environment in any way,” he says.

Zakhor’s group isn’t the only one capturing such data: Google has begun taking its Street View product inside and announced last month that it had documented the interiors of 16 airports and over 50 train stations.

Ventura’s own research focuses on augmented reality. He says that if devices can be located very accurately it will allow for virtual and real worlds to be closely aligned. “If we want to render a rich and complex virtual world into a high-resolution image,” he says, “we need to have much more accurate positioning than a consumer GPS receiver can deliver.”

Zakhor is planning tests of her method on computerized glasses, with the intention of having the devices use snapshots to track their location, making it possible to provide a map of an interior space in a person’s field of vision. The Berkeley research group is also working on using data from Wi-Fi signals collected by their backpack to provide a secondary method of deducing a device’s indoor location.