Audi Bets on Bio Gasoline Startup

Startup Global Bioenergies uses genetic engineering to avoid one of the costliest steps in biofuel production.

By Kevin Bullis

Audi is investing in a startup, Paris-based Global Bioenergies, that says it can make cheap gasoline from sugar and other renewable sources. The strategic partnership includes stock options and an unspecified amount of funding.

As with conventional biofuel production, Global Bioenergies’ technology uses microorganisms to ferment sugars into fuel. But its process eliminates the second most costly part of producing biofuels—the energy-intensive distillation step. And by making gasoline instead of ethanol, the startup skirts a major problem hampering growth in biofuels: the market for ethanol is saturated.

Global Bioenergies has demonstrated its technology in the lab and is building two pilot facilities to produce isobutene, a hydrocarbon that a partner will convert into gasoline through an existing chemical process. The larger of the two pilot facilities will be big enough to support the production of over 100,000 liters of gasoline a year.

The process addresses one of the key challenges with conventional biofuels production—the fuel can kill the microorganisms that make it. In a conventional fermentation process, once the concentration of ethanol gets to about 12 percent, it starts to poison the yeast so that it can’t make any more ethanol.

Global Bioenergies has genetically engineered E. coli bacteria to produce a gas (isobutene) that bubbles out of solution, so its concentration in the fermentation tank never reaches toxic levels. As a result the bacteria can go on producing fuel longer than in the conventional process, increasing the output of a plant and reducing capital costs.

The isobutene still needs to be separated from other gases such as carbon dioxide, but Global Bioenergies says this is much cheaper than distillation.

The new process doesn’t address the biggest cost of biofuels today—the cost of the raw materials. It’s designed to run on glucose, the type of sugar produced from corn or sugarcane. But the company is adapting it to work with sugars from non-food sources such as wood chips, which include glucose but also other sugars such as xylose.


Audi’s partnership with Global Bioenergies is part of a push by the automaker to reduce greenhouse gas emissions in the face of tightening regulations. Audi recently announced two other investments in cleaner fuels. It funded a project to make methane using renewable energy—the methane can be used to run Audi’s natural-gas-fueled cars (see “Audi to Make Fuel Using Solar Power”). And it funded Joule Unlimited, which is using photosynthetic microorganisms to make ethanol and diesel (see “Audi Backs a Biofuels Startup”).

Is Google Cornering the Market on Deep Learning?

A cutting-edge corner of science is being wooed by Silicon Valley, to the dismay of some academics.

By Antonio Regalado

How much are a dozen deep-learning researchers worth? Apparently, more than $400 million.

This week, Google reportedly paid that much to acquire DeepMind Technologies, a startup based in London that had one of the biggest concentrations of researchers anywhere working on deep learning, a relatively new field of artificial intelligence research that tackles tasks like recognizing faces in video or words in human speech (see “Deep Learning”).

The acquisition, aimed at adding skilled experts rather than specific products, marks an acceleration in efforts by Google, Facebook, and other Internet firms to monopolize the biggest brains in artificial intelligence research.

In an interview last month, before the DeepMind acquisition, Peter Norvig, a director of research at Google, estimated that his company already employed “less than 50 percent but certainly more than 5 percent” of the world’s leading experts in machine learning, the wider discipline of which deep learning is the cutting edge.

Companies like Google expect deep learning to help them create new types of products that can understand and learn from the images, text, and video clogging the Web. And to a significant degree, leading academic scientists have embraced Silicon Valley, where they can command teams of engineers instead of students and have access to the largest, most interesting data sets. “It’s a combination of the computing resources we have and the headcounts we can offer,” Norvig said. “At Google, if you want a copy of the Web, well, we just happen to have one sitting around.”

Yoshua Bengio, an AI researcher at the University of Montreal, estimates that there are only about 50 experts worldwide in deep learning, many of whom are still graduate students. He estimated that DeepMind employed about a dozen of them on its staff of about 50. “I think this is the main reason that Google bought DeepMind. It has one of the largest concentrations of deep learning experts,” Bengio says.

Vying with Google for talent are companies including Amazon, Microsoft, and Facebook, which in September created its own deep learning group (see “Facebook Launches Advanced AI Effort to Find Meaning in Your Posts”). It recruited perhaps the world’s best-known deep learning scientist, Yann LeCun of New York University, to run it. His NYU colleague, Rob Fergus, also accepted a job at the social network.


As advanced machine learning transitions from a primarily scientific pursuit to one with high industrial importance, Google’s bench is probably deepest. Names it has lured from academia into full-time or part-time roles include Sebastian Thrun (who has worked on the company’s autonomous car project); Fernando Pereira, a onetime University of Pennsylvania computer scientist; Stanford’s Andrew Ng; and Singularity University boss Ray Kurzweil.

Last year, Google also grabbed renowned University of Toronto deep-learning researcher Geoff Hinton and a passel of his students when it acquired Hinton’s company, DNNresearch. Hinton now works part-time at Google. “We said to Geoff, ‘We like your stuff. Would you like to run models that are 100 times bigger than anyone else’s?’ That was attractive to him,” Norvig said.

Not everyone is happy about the arrival of the proverbial Google Bus in one of academia’s rarefied precincts. In December, during a scientific meeting in Lake Tahoe, Mark Zuckerberg, the founder and CEO of Facebook, made a surprise appearance accompanied by uniformed guards, according to Alex Rubinsteyn, a bioinformatics researcher at Mount Sinai Medical Center, who complained in a blog post that a cultural “boundary between academia and Silicon Valley” had been crossed.

“In academia, status is research merit, it’s what you know,” Rubinsteyn says. “In Silicon Valley, it’s because you run a company or are rich. And then people around those people also think about getting rich.”

Peter Lee, head of Microsoft Research, told Bloomberg Businessweek that deep learning experts were in such demand that they command the same types of seven-figure salaries as some first-year NFL quarterbacks.

Some have resisted industry’s call. Of the three computer scientists considered among the originators of deep learning—Hinton, LeCun, and Bengio—only Bengio has so far stayed put in the ivory tower. “I just didn’t think earning 10 times more will make me happier,” he says. “As an academic I can choose what to work on and consider very long-term goals.” Plus, he says, industry grants have started to flow his way as companies realize they’ll soon run out of recruits. This year, he’s planning to increase the number of graduate students he’s training from four to 15.

DeepMind was cofounded two years ago by Demis Hassabis, a 37-year-old described by The Times of London as a game designer, neuroscientist, and onetime chess prodigy. The DeepMind researchers were well known in the scientific community, attending meetings and publishing “fairly high-level” papers in machine learning, although they had not yet released a product, says Bengio.

DeepMind’s expertise is in an area called reinforcement learning, which involves getting computers to learn about the world even from very limited feedback. “Imagine if I only told you what grades you got on a test, but didn’t tell you why, or what the answers were,” says Bengio. “It’s a difficult problem to know how you could do better.”

But in December, DeepMind published a paper showing that its software could do that by learning how to play seven Atari 2600 games using as inputs only the information visible on a video screen, such as the score. For three of the games, the classics Breakout, Enduro, and Pong, the computer ended up playing better than an expert human. It performed less well on Q*bert and Space Invaders, games where the best strategy is less obvious.
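
For a concrete, if drastically simplified, picture of learning from reward alone, the short Java sketch below implements tabular Q-learning on a toy problem (DeepMind’s system instead pairs reinforcement learning with deep neural networks and learns from raw screen pixels). The agent is told only a score when it reaches the goal, never which moves were correct, yet its learned values come to favor walking right in every state.

    import java.util.Random;

    // Toy tabular Q-learning, for illustration only: an agent on a six-cell track
    // learns to walk right to the goal, seeing nothing but a numeric reward (its
    // "score"), never the correct moves.
    public class TinyQLearning {
        public static void main(String[] args) {
            int states = 6, actions = 2;                    // actions: 0 = left, 1 = right
            double[][] q = new double[states][actions];     // learned value of each move
            double alpha = 0.5, gamma = 0.9, epsilon = 0.1; // learning rate, discount, exploration
            Random rng = new Random(0);

            for (int episode = 0; episode < 500; episode++) {
                int s = 0;                                  // start at the left end of the track
                while (s != states - 1) {                   // the goal is the rightmost cell
                    int a;                                  // explore occasionally, or when tied
                    if (rng.nextDouble() < epsilon || q[s][0] == q[s][1]) a = rng.nextInt(actions);
                    else a = q[s][1] > q[s][0] ? 1 : 0;
                    int next = Math.max(0, Math.min(states - 1, s + (a == 1 ? 1 : -1)));
                    double reward = (next == states - 1) ? 1.0 : 0.0;      // the only feedback
                    double best = Math.max(q[next][0], q[next][1]);
                    q[s][a] += alpha * (reward + gamma * best - q[s][a]);  // Q-learning update
                    s = next;
                }
            }
            for (int s = 0; s < states; s++)                // learned values favor "right" everywhere
                System.out.printf("state %d: left=%.2f right=%.2f%n", s, q[s][0], q[s][1]);
        }
    }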


Such skilled computer programs could have important commercial applications, including improving search engines (see “How a Database of the World’s Knowledge Shapes Google’s Future”), and might be particularly useful in helping robots learn to navigate the human world. Google last year acquired several leading robotics companies, including the makers of various types of humanoid robots (see “Google’s Latest Robot Acquisition Is the Smartest Yet”).

Certainly, large companies wouldn’t be spending so heavily to monopolize talent in artificial intelligence unless they believed that these computer brains will give them a powerful edge. It may sound like a movie plot, but perhaps it’s even time to wonder what the first company in possession of a true AI would do with the power that it provided.

Bengio says not to worry about that. “Industry is interested in applying machine learning, and especially deep learning, to the tasks that they want to solve,” he says. “Those [efforts] are on the way towards AI, but still far from it.”

“Honey Encryption” Will Bamboozle Attackers with Fake Secrets

A new approach to encryption beats attackers by presenting them with fake data.

By Tom Simonite

Ari Juels, an independent researcher who was previously chief scientist at computer security company RSA, thinks something important is missing from the cryptography protecting our sensitive data: trickery.

“Decoys and deception are really underexploited tools in fundamental computer security,” Juels says. Together with Thomas Ristenpart of the University of Wisconsin, he has developed a new encryption system with a devious streak. It gives encrypted data an additional layer of protection by serving up fake data in response to every incorrect guess of the password or encryption key. If the attacker does eventually guess correctly, the real data should be lost amongst the crowd of spoof data.

The new approach could be valuable given how frequently large encrypted stashes of sensitive data fall into the hands of criminals. Some 150 million usernames and passwords were taken from Adobe servers in October 2013, for example.

After capturing encrypted data, criminals often use software to repeatedly guess the password or cryptographic key used to protect it. The design of conventional cryptographic systems makes it easy to know when such a guess is correct or not: the wrong key produces a garbled mess, not a recognizable piece of raw data.

Juels and Ristenpart’s approach, known as Honey Encryption, makes it harder for an attacker to know if they have guessed a password or encryption key correctly or not. When the wrong key is used to decrypt something protected by their system, the Honey Encryption software generates a piece of fake data resembling the true data.

If an attacker used software to make 10,000 attempts to decrypt a credit card number, for example, they would get back 10,000 different fake credit card numbers. “Each decryption is going to look plausible,” says Juels. “The attacker has no way to distinguish a priori which is correct.” Juels previously worked with Ron Rivest, the “R” in RSA, to develop a system called Honey Words to protect password databases by also stuffing them with false passwords.
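
The idea can be seen in miniature in the toy Java sketch below. It is not Juels and Ristenpart’s construction (their scheme builds on what they call a distribution-transforming encoder), but it shows the essential property: because the message space is “all 16-digit card numbers” and encryption is just a keyed shift within that space, every password, right or wrong, decrypts to something that looks like a valid card number. A real scheme would also have to mimic finer structure, such as issuer prefixes and checksums.

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Toy illustration of the honey-encryption idea, not the authors' construction.
    public class ToyHoneyEncryption {
        static final BigInteger SPACE = BigInteger.TEN.pow(16);   // all 16-digit numbers

        // Stand-in for a real key-derivation function: hash the password to a number.
        static BigInteger keyNumber(String password) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(password.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest);
        }

        // "Encrypt": shift the card number within the space of 16-digit numbers.
        static BigInteger encrypt(String cardNumber, String password) throws Exception {
            return new BigInteger(cardNumber).add(keyNumber(password)).mod(SPACE);
        }

        // "Decrypt": reverse the shift. A wrong password still lands on a 16-digit
        // number, so the attacker cannot tell a failed guess from a successful one.
        static String decrypt(BigInteger ciphertext, String password) throws Exception {
            return String.format("%016d", ciphertext.subtract(keyNumber(password)).mod(SPACE));
        }

        public static void main(String[] args) throws Exception {
            BigInteger ct = encrypt("4716000011112222", "correct-password");
            System.out.println(decrypt(ct, "correct-password")); // the real number
            System.out.println(decrypt(ct, "guess1"));           // a plausible fake
            System.out.println(decrypt(ct, "guess2"));           // another plausible fake
        }
    }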

Juels and Ristenpart will present a paper on Honey Encryption at the Eurocrypt cryptography conference later this year. Juels is also working on building a system based on it to protect the data stored by password manager services such as LastPass and Dashlane. These services store all of a person’s different passwords in an encrypted form, protected by a single master password, so that software can automatically enter them into websites.

Password managers are a tasty target for criminals, says Juels. He believes that many people use an insecure master password to protect their collection. “The way they’re constructed discourages the use of a strong password because you’re constantly having to type it in—also on a mobile device in many cases.”


Juels predicts that if criminals got hold of a large collection of encrypted password vaults they could probably unlock many of them without too much trouble by guessing at the master passwords. But if those vaults were protected with Honey Encryption, each incorrect attempt to decrypt a vault would yield a fake one instead.

Hristo Bojinov, CEO and founder of mobile software company Anfacto, who has previously worked on the problem of protecting password vaults as a security researcher, says Honey Encryption could help reduce their vulnerability. But he notes that not every type of data will be easy to protect this way since it’s not always possible to know the encrypted data in enough detail to produce believable fakes. “Not all authentication or encryption systems yield themselves to being ‘honeyed.’”

Juels agrees, but is convinced that by now enough password dumps have leaked online to make it possible to create fakes that accurately mimic collections of real passwords. He is currently working on creating the fake password vault generator needed for Honey Encryption to be used to protect password managers. This generator will draw on data from a small collection of leaked password manager vaults, several large collections of leaked passwords, and a model of real-world password use built into a powerful password cracker.

A 96-Antenna System Tests the Next Generation of Wireless

Rice University is testing a highly efficient wireless communications system.

By David Talbot

Even as the world’s carriers build out the latest wireless infrastructure, known as 4G LTE, a new apparatus bristling with 96 antennas taking shape at a Rice University lab in Texas could help define the next generation of wireless technology.

The Rice rig, known as Argos, represents the largest such array yet built and will serve as a test bed for a concept known as “Massive MIMO.”

MIMO, or “multiple-input, multiple-output,” is a wireless networking technique aimed at transferring data more efficiently by having several antennas work together to exploit a natural phenomenon that occurs when signals are reflected en route to a receiver. The phenomenon, known as multipath, can cause interference, but MIMO alters the timing of data transmissions in order to increase throughput using the reflected signals.

MIMO is already used for 4G LTE and in the latest version of Wi-Fi, called 802.11ac, but it typically involves only a handful of transmitting and receiving antennas. Massive MIMO extends this approach by using scores or even hundreds of antennas. It increases capacity further by effectively focusing signals on individual users, allowing numerous signals to be sent over the same frequency at once. Indeed, an earlier version of Argos, with 64 antennas, demonstrated that network capacity could be boosted by more than a factor of 10.
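
A rough numerical sketch in Java conveys the intuition; it is not the Argos software, and it uses real numbers where real radio channels are complex-valued. With the simplest “matched-filter” beamforming, each user’s own signal adds up coherently across the base station’s antennas while other users’ signals do not, so the average signal-to-interference ratio grows roughly in proportion to the antenna count.

    import java.util.Random;

    // Cartoon of multi-user beamforming: each user's beam is simply its own
    // (randomly drawn) channel vector, normalized. More antennas means each
    // user's signal power grows while leakage between users stays roughly flat.
    public class MimoBeamformingSketch {
        static double dot(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += a[i] * b[i];
            return s;
        }

        public static void main(String[] args) {
            Random rng = new Random(1);
            int users = 4;
            for (int antennas : new int[]{4, 16, 64, 96}) {
                // Random channel vector from the base station to each user.
                double[][] h = new double[users][antennas];
                for (double[] row : h)
                    for (int i = 0; i < antennas; i++) row[i] = rng.nextGaussian();

                double sirSum = 0;
                for (int k = 0; k < users; k++) {
                    double signal = dot(h[k], h[k]);   // coherent gain of user k's own beam
                    double interference = 0;
                    for (int j = 0; j < users; j++) {
                        if (j == k) continue;
                        double leak = dot(h[k], h[j]) / Math.sqrt(dot(h[j], h[j]));
                        interference += leak * leak;   // power leaking from user j's beam
                    }
                    sirSum += signal / interference;
                }
                System.out.printf("%3d antennas, %d users: average signal/interference ~ %.1f%n",
                        antennas, users, sirSum / users);
            }
        }
    }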

“If you have more antennas, you can serve more users,” says Lin Zhong, associate professor of computer science at Rice and the project’s co-leader. And the architecture allows it to easily scale to hundreds or even thousands of antennas, he says.

Massive MIMO demands more processing power: directing radio signals narrowly at the phones intended to receive them takes extra computation to pull off. The point of the Argos test bed is to see how much benefit can be obtained in the real world. Processors distributed throughout the setup allow it to test different network configurations, including how it would work alongside another emerging class of base stations, known as small cells, which serve small areas.

“Massive MIMO is an intellectually interesting project,” says Jeff Reed, director of the wireless research center at Virginia Tech. “You want to know: how scalable is MIMO? How many antennas can you benefit from? These projects are attempting to address that.”


An alternative, or perhaps complementary, approach to an eventual 5G standard would use extremely high frequencies, around 28 gigahertz. Wavelengths at this frequency are only about a centimeter, far shorter than those of the frequencies that carry cellular communications today, allowing more antennas to be packed into the same space, such as within a smartphone. But since 28 gigahertz signals are easily blocked by buildings, and even by foliage and rain, they’ve long been seen as unusable except in special line-of-sight applications.
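
The arithmetic behind that claim is simple: wavelength equals the speed of light divided by frequency, and antennas in an array are typically spaced about half a wavelength apart. The short Java snippet below runs the numbers for a few illustrative frequencies.

    // Back-of-the-envelope check: wavelength = c / frequency.
    public class WavelengthCheck {
        public static void main(String[] args) {
            double c = 3.0e8;                              // speed of light, meters per second
            double[] frequenciesHz = {700e6, 1.9e9, 28e9}; // typical cellular bands vs. 28 GHz
            for (double f : frequenciesHz) {
                double wavelengthCm = c / f * 100;
                System.out.printf("%5.1f GHz: wavelength %5.1f cm, half-wave antenna spacing %4.1f cm%n",
                        f / 1e9, wavelengthCm, wavelengthCm / 2);
            }
        }
    }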

But Samsung and New York University have collaborated to solve this, also by using multi-antenna arrays. They send the same signal over 64 antennas, dividing it up to speed up throughput, and dynamically changing which antennas are used and the direction the signal is sent to get around environmental blockages (see “What 5G Will Be: Crazy Fast Wireless Tested in New York City”).

Meanwhile, some experiments have been geared toward pushing existing 4G LTE technology further. The technology can, in theory, deliver 75 megabits per second, though real-world speeds are lower. But some research suggests it can go faster by stitching together streams of data from several wireless channels (see “LTE Advanced Is Poised to Turbocharge Smartphone Data”).

Emerging research done on Argos and in other wireless labs will help to define a new 5G phone standard. Whatever the specifics, it’s likely to include more sharing of spectrum, more small transmitters, new protocols, and new network designs. “To introduce an entirely new wireless technology is a huge task,” says Thomas Marzetta, the Bell Labs researcher who originated the Massive MIMO concept.

Android App Warns When You’re Being Watched

Researchers find a way to give Android users prominent warnings when apps are tracking their location.

By David Talbot

A new app notifies people when an Android smartphone app is tracking their location, something not previously possible without modifying the operating system on a device, a practice known as “rooting.”

The new technology comes amid revelations that the National Security Agency seeks to gather personal data from smartphone apps (see “How App Developers Leave the Door Open to NSA Surveillance”). But it may also help ordinary people better grasp the extent to which apps collect and share their personal information. Even games and dictionary apps routinely track location, as collected from a phone’s global positioning system (GPS) sensors.


Existing Android interfaces do include a tiny icon showing when location information is being accessed, but few people notice it or understand what it means, according to a field study done as part of a new research project led by Janne Lindqvist, an assistant professor at Rutgers University. Lindqvist’s group created an app that puts a prominent banner across the top of the screen saying, for example, “Your location is accessed by Dictionary.” The app is being readied for release on Google Play, the Android app store, within two months.

Lindqvist says Android phone users who used a prototype of his app were shocked to discover how frequently they were being tracked. “People were really surprised that some apps were accessing their location, or how often some apps were accessing their location,” he says.

According to one Pew Research survey, almost 20 percent of smartphone owners surveyed have tried to disconnect location information from their apps, and 70 percent wanted to know more about the location data collected by their smartphone.

The goal of the project, Lindqvist says, is to goad Google and app companies into providing more prominent disclosures, collecting less personal information, and allowing users to select which data they will allow the app to see. A research paper describing the app and the user study was recently accepted for an upcoming computer security conference.

In many cases, location information is used by advertisers to provide targeted ads. But information gained by apps often gets passed around widely to advertising companies (see “Mobile-Ad Firms Seek New Ways to Track You” and “Get Ready for Ads That Follow You from One Device to the Next”).

Google, which maintains the Android platform, has engineered it to block an app from gaining information about other apps. So Lindqvist’s team relied on an indirect method: a function within Android’s location application programming interface (API) that signals when any app requests location information. “People have previously done this with platform-level changes—meaning you would need to ‘root’ the phone,” says Lindqvist. “But nobody has used an app to do this.”
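
The article doesn’t spell out which function the team used, but Android’s location API does expose this kind of system-wide signal. The sketch below is a plausible approach rather than the Rutgers implementation: it registers the platform’s GpsStatus listener, which fires whenever any app on the device starts or stops using GPS (it does not cover network-based location), and an app holding the ordinary location permission can do this without rooting the phone. Drawing the banner and attributing the access to a specific app would take additional work not shown here.

    import android.content.Context;
    import android.location.GpsStatus;
    import android.location.LocationManager;

    // Illustrative only: watch for any app on the device engaging the GPS by
    // registering Android's GpsStatus listener (part of the standard location
    // API, no rooting required). Requires the ACCESS_FINE_LOCATION permission
    // in the app's manifest.
    public class GpsUseWatcher implements GpsStatus.Listener {
        public void start(Context context) {
            LocationManager lm =
                    (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
            lm.addGpsStatusListener(this);
        }

        @Override
        public void onGpsStatusChanged(int event) {
            if (event == GpsStatus.GPS_EVENT_STARTED) {
                // Some app has just started using GPS: show a prominent warning here.
            } else if (event == GpsStatus.GPS_EVENT_STOPPED) {
                // The GPS has been released: the warning can be dismissed.
            }
        }
    }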

Google has flip-flopped on how much control it gives users over the information apps can access. In Android version 4.3, available since July of last year, users gained the ability to disable and enable apps’ “permissions” individually, but Google reversed course in December 2013, removing the feature in an update numbered 4.4.2, according to the Electronic Frontier Foundation.

The new app and study from Lindqvist’s team could help push Google back toward giving users more control. “Because we know how ubiquitous NSA surveillance is, this is one tool to make people aware,” he says.

The work adds to similar investigative work about Apple’s mobile operating system, iOS. Last year different academic researchers found that Apple wasn’t doing a good job stopping apps from harvesting the unique ID numbers of a device (see “Study Shows Many Apps Defy Apple’s Privacy Advice”). Those researchers released their own app, called ProtectMyPrivacy, that detects what data other apps on an iPhone try to access, notifies the owner, and makes a recommendation about what to do. However, that app requires users to first “jailbreak” or modify Apple’s operating system. Still, unlike Android, Apple allows users to individually control which categories of information an app can access.

“Telling people more about their privacy prominently and in an easy-to-understand manner, especially the location, is important,” says Yuvraj Agarwal, who led that research at the University of California, San Diego, and has since moved on to Carnegie Mellon University. Ultimately, though, Agarwal believes users must be able to take action on an app’s specific permissions. “If my choice is to delete Angry Birds or not, that’s not really a choice,” he says.