French online start-up Criteo shares pop in market debut

By Leila Abboud and Jennifer Saba.

(Reuters) – Shares in French online advertising firm Criteo rose more than 30 percent in the company’s stock market debut on Nasdaq on Wednesday, showing investor appetite for technology start-ups and delivering a payday to its venture capital backers.

Shares in the company, which uses tracking technology to target ads at consumers surfing the web, opened at $31 and were at $41.40 by 1625 GMT, giving the eight-year-old start-up a market capitalization of roughly $2.3 billion.

The sale of 8.08 million shares raised $250 million for the Paris-based company, money that will be used to fuel its international expansion and growth.

The size of the sale and the initial price were raised twice because of investor demand.

The success of Criteo’s share sale is a sign of investor interest in technology listings against the backdrop of a broader rally of the S&P 500 information technology index and just weeks before the much-anticipated market debut of social network Twitter.

Criteo is one of a number of companies, including Google and Facebook, to benefit from the online ad boom, the result of major companies following their audience to the web and away from newspapers and magazines.

Founded in Paris by Jean-Baptiste Rudelle in 2005, the start-up became a darling among online advertisers by boosting the rate at which Internet surfers click on display ads.

The company developed a technology known as “re-targeting,” which catches users who have visited a shopping website without buying anything and then shows them ads for similar items on other sites to tempt them back.
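The retargeting logic described above can be sketched in a few lines of Python. This is only an illustration of the idea: the site, the product catalog, and the category-based notion of “similar items” are all invented here, and Criteo’s actual system is far more sophisticated.

```python
# Toy sketch of "re-targeting": remember which products a visitor browsed
# without buying, then pick a similar item to show in ad slots elsewhere.
# All names and data are hypothetical.

from collections import defaultdict

# Hypothetical catalog: product -> category
CATALOG = {
    "red-sneakers": "shoes",
    "blue-sneakers": "shoes",
    "leather-boots": "shoes",
    "wool-scarf": "accessories",
}

browsed = defaultdict(set)    # user_id -> products viewed but not purchased
purchased = defaultdict(set)  # user_id -> products bought

def record_view(user, product):
    browsed[user].add(product)

def record_purchase(user, product):
    purchased[user].add(product)
    browsed[user].discard(product)  # no need to retarget a completed sale

def pick_ad(user):
    """Return a similar, not-yet-bought item to advertise, or None."""
    for viewed in browsed[user]:
        category = CATALOG[viewed]
        for candidate, cat in CATALOG.items():
            if (cat == category and candidate != viewed
                    and candidate not in purchased[user]):
                return candidate
    return None

record_view("u1", "red-sneakers")  # browsed, didn't buy
print(pick_ad("u1"))               # -> blue-sneakers (another shoe)
```

Criteo’s pay-per-click pricing fits naturally on top of such a loop: the advertiser is charged only when `pick_ad`’s output is actually clicked.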

Criteo’s customers, including travel website Hotels.com, telecom operator Orange, and retailer Macy’s, pay only when a web surfer actually clicks on the ad.

In a rare move among French start-up founders, Rudelle moved to Silicon Valley to expand the company, which now operates in 37 countries.

“The U.S. is our number one market today, and a very strategic market for us,” said Rudelle, explaining the choice of listing in New York instead of Paris.

“Being listed on the Nasdaq says that we are here to stay and committed to our clients and partners.”

Criteo has roughly doubled its revenues every year since 2010, reaching 271.9 million euros in 2012. It made a profit of 800,000 euros last year but swung to a loss of 4.9 million euros in the first six months of 2013 because of increased investments.

There have been 26 U.S. technology listings this year, according to Thomson Reuters data, compared with 30 in 2012.

The sale could herald a payday for venture capital firms, which have ploughed some $64 million into Criteo.

Geneva-based Index Ventures was the largest shareholder with a 23.4 percent stake before the share sale. Others include Idinvest Partners with 22.6 percent, Elaia Partners with 13.5 percent and Bessemer Venture Partners with 9.5 percent.

All the funds will be selling relatively small portions of their stakes in the listing, according to the offer documents.

Rudelle will own 8.4-8.6 percent of the group.

JP Morgan, Deutsche Bank Securities and Jefferies are the lead underwriters for the issue.

The Clever Circuit That Doubles Bandwidth

A Stanford startup’s new radio can send and receive information on the same frequency—an advance that could double the speed of wireless networks.

By David Talbot

A startup spun out of Stanford says it has solved an age-old problem in radio communications with a new circuit and algorithm that allow data to be sent and received on the same radio frequency—thus doubling wireless capacity, at least in theory.

The company, Kumu Networks, has demonstrated the feat in a prototype and says it has agreed to run trials of the technology with unspecified major wireless carriers early next year.

The underlying technology, known as full-duplex radio, tackles a problem known as “self-interference.” As radios send and receive signals, the ones they send are billions of times stronger than the ones they receive. Any attempt to receive data on any given frequency is thwarted by the fact that the radio’s receiver is also picking up its own outgoing signal.

For this reason, most radios—including the ones in your smartphone, the base stations serving them, and Wi-Fi routers—send information out on one frequency and receive on another, or use the same frequency but rapidly toggle back and forth. Because of this inefficiency, radios use more wireless spectrum than is necessary.

To solve this, Kumu built an extremely fast circuit that can predict, moment by moment, how much interference a radio’s transmitter is about to create, and then generates a compensatory signal to cancel it out. The circuit generates a new signal with each packet of data sent, making it possible for the cancellation to work even in mobile devices, where the process is more complex because the objects signals bounce off are constantly changing. “This was considered impossible to do for the past 100 years,” says Sachin Katti, assistant professor of electrical engineering and computer science at Stanford, and Kumu’s chief executive and cofounder.
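The arithmetic behind self-interference cancellation can be shown in a toy NumPy simulation: estimate how your own transmit signal leaks back into the receiver, synthesize that copy, and subtract it so the far weaker wanted signal survives. This is a heavily simplified baseband model with an invented, static leakage channel; Kumu’s real system cancels in analog and digital stages and must track a constantly changing channel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

tx = rng.standard_normal(n)               # our own transmitted signal
wanted = 1e-5 * rng.standard_normal(n)    # distant signal, ~100 dB weaker

# Toy self-interference channel: the transmit signal leaks into the
# receiver through a few delayed, attenuated paths (a short FIR model).
h = np.array([1.0, 0.3, 0.1])
self_interference = np.convolve(tx, h)[:n]

received = self_interference + wanted     # what the receiver actually sees

# Cancellation: estimate the leakage channel from the known transmit
# waveform via least squares, regenerate the interference, subtract it.
X = np.column_stack(
    [np.concatenate([np.zeros(k), tx[:n - k]]) for k in range(len(h))]
)
h_est, *_ = np.linalg.lstsq(X, received, rcond=None)
residual = received - X @ h_est           # dominated by the wanted signal

def power_db(x):
    return 10 * np.log10(np.mean(x ** 2))

print(f"interference power: {power_db(self_interference):6.1f} dB")
print(f"after cancellation: {power_db(residual):6.1f} dB")
```

The roughly 100 dB gap between the two printed powers is the “five orders of magnitude” scale of the problem: the cancelled residual sits at the level of the wanted signal, which is what lets transmit and receive share one frequency.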

Other companies, including satellite modem maker Comtech, previously used self-cancellation to boost bandwidth on satellite communications. But the Stanford team is the first to demonstrate it in the radios used in networks such as LTE and Wi-Fi, which required cancelling signals that are five orders of magnitude stronger. (More details can be found in this paper.)

Jeff Reed, director of the wireless research center at Virginia Tech, says the new radio rig appears to be a major advance, but he’s awaiting real-world results. “If their claims are true, those are some very impressive numbers,” Reed says. “It requires very precise timing to pull this off.”

This full-duplex technology isn’t the only trick that can seemingly pull new wireless capacity out of thin air. New ways of encoding data stand the chance of making wireless networks as much as 10 times more efficient in some cases (see “A Bandwidth Breakthrough”). Various research efforts are honing new ultrafast sensing and switching tricks to change frequencies on the fly, thus making far better use of available spectrum (see “Frequency Hopping Radio Wastes Less Spectrum”). And emerging software tools allow rapid reconfiguration of wired and wireless networks, creating new efficiencies (see “TR10: Software-Defined Networking”). “A lot of the spectrum is massively underutilized, and this is one of the tools to throw in there to make better use of spectrum,” says Muriel Medard, a professor at MIT’s Research Laboratory of Electronics, and a leader in the field of network coding.

Kumu’s technology—even if it works perfectly—won’t provide a big benefit in all situations. In cases where most traffic is going in one direction—such as during a video download—full-duplex technology opens up capacity that you don’t actually need, like adding inbound lanes during evening outbound rush-hour traffic. Nonetheless, Katti sees benefits “on every wireless device in existence from cell phones and towers to Wi-Fi to Bluetooth and everything in between.” Kumu Networks has received $10 million from investors, including Khosla Ventures and New Enterprise Associates.

Startup Gets Computers to Read Faces, Seeks Purpose Beyond Ads

A technology for reading emotions on faces can help companies sell candy. Now its creators hope it also can take on bigger problems.

Last year more than 1,000 people in four countries sat down and watched 115 television ads, such as one featuring anthropomorphized M&M candies boogying in a bar. All the while, webcams pointed at their faces and streamed images of their expressions to a server in Waltham, Massachusetts.

In Waltham, an algorithm developed by a startup company called Affectiva performed what is known as facial coding: it tracked the panelists’ raised eyebrows, furrowed brows, smirks, half-smirks, frowns, and smiles. When this face data was later merged with real-world sales data, it turned out that the facial measurements could be used to predict with 75 percent accuracy whether sales of the advertised products would increase, decrease, or stay the same after the commercials aired. By comparison, surveys of panelists’ feelings about the ads could predict the products’ sales with 70 percent accuracy.
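The shape of such a pipeline can be sketched simply: per-ad facial-coding metrics are reduced to a three-way prediction about sales, then scored against what actually happened. Everything below is invented for illustration (the metrics, the thresholds, and the data); Affectiva’s actual model is not public.

```python
# Hypothetical per-ad facial-coding summaries and sales outcomes:
# (mean smile intensity, mean brow furrow, actual sales direction)
ads = [
    (0.8, 0.1, "increase"),
    (0.2, 0.7, "decrease"),
    (0.5, 0.4, "same"),
    (0.4, 0.5, "increase"),  # viewers looked flat, but sales still rose
]

def predict(smile, furrow):
    """Toy rule: net positive expression -> sales up, net negative -> down."""
    score = smile - furrow
    if score > 0.3:
        return "increase"
    if score < -0.3:
        return "decrease"
    return "same"

correct = sum(predict(s, f) == actual for s, f, actual in ads)
accuracy = correct / len(ads)
print(f"accuracy: {accuracy:.0%}")  # -> accuracy: 75%
```

The fourth ad is deliberately misclassified here: like any statistical predictor, facial coding is scored on aggregate accuracy, not on getting every ad right.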

Although this was an incremental improvement statistically, it reflected a milestone in the field of affective computing. While people notoriously have a hard time articulating how they feel, it is now clear that machines can not only read some of their feelings but also go a step further and predict the statistical likelihood of later behavior.

Given that the market for TV ads in the United States alone exceeds $70 billion, insights from facial coding are “a big deal to business people,” says Rosalind Picard, who heads the affective computing group at MIT’s Media Lab and cofounded the company; she left the company earlier this year but is still an investor.

Even so, facial coding has not yet delivered on the broader, more altruistic visions of its creators. Helping to sell more chocolate is great, but when will facial coding help people with autism read social cues, boost teachers’ ability to see which students are struggling, or make computers empathetic?

Answers may start to come next month, when Affectiva launches a software development kit that will let its platform be used for approved apps. The hope, says Rana el Kaliouby, the company’s chief science officer and the other cofounder (see “Innovators Under 35: Rana el Kaliouby”), is to spread the technology beyond marketing. While she would not name the actual or potential partners, she said that “companies can use our technology for anything from gaming and entertainment to education and learning environments.”

Applications such as educational assistance—informing teachers when students are confused, or helping autistic kids read emotions on other people’s faces—figured strongly in the company’s conception. Affectiva, which launched four years ago and now has 35 employees and $20 million in venture funding, grew out of the Picard lab’s manifesto declaring that computers would do society a service if they could recognize and react to human emotions.

Over the years, the lab mocked up prototype technologies. These included a pressure-sensing mouse that could feel when your hand clenched in agitation; a robot called Kismet that could smile and raise its eyebrows; the “Galvactivator,” a skin conductivity sensor to measure heartbeat and sweating; and the facial coding system, developed and refined by el Kaliouby.

Affectiva bet on two initial products: a wrist-worn gadget called the Q sensor that could measure skin conductance, temperature, and activity levels (which can be indicators of stress, anxiety, sleep problems, seizures, and some other medical conditions); and Affdex, the facial coding software. But while the Q sensor seemed to show early promise (see “Wrist Sensor Tells You How Stressed Out You Are” and “Sensor Detects Emotions through the Skin”), in April the company discontinued the product, seeing little potential market beyond researchers working on applications such as measuring physiological signs that presage seizures. That leaves the company with Affdex, which is mainly being used by market research companies, including Insight Express and Millward Brown, and consumer product companies like Unilever and Mars.

Now, as the company preps its development kit, the market research work may provide an indirect payoff. After spending three years convening webcam-based panels around the world, Affectiva has amassed a database of more than one billion facial reactions. The accuracy of the system could pave the way for applications that read the emotions on people’s faces using ordinary home computers and portable devices. “Affectiva is tackling a hugely difficult problem, facial expression analysis in difficult and unconstrained environments, that a large portion of the academic community has been avoiding,” says Tadas Baltrusaitis, a doctoral student at the University of Cambridge, who has written several papers on facial coding.

What’s more, by using panelists from 52 countries, Affectiva has been teasing out lessons specific to gender, culture, and topic. Facial coding has particular value when people are unwilling to self-report their feelings. For example, el Kaliouby says, when Indian women were shown an ad for skin lotion, every one of them smiled when a husband touched his wife’s midriff—but none of the women would later acknowledge or mention that scene, much less admit to having enjoyed it.

Education may be ripe for the technology. A host of studies have shown the potential; one by researchers at the University of California, San Diego—who have founded a competing startup called Emotient—showed that facial expressions predicted the perceived difficulty of a video lecture and the student’s preferred viewing speed. Another showed that facial coding could measure student engagement during an iPad-based tutoring session, and that these measures of engagement, in turn, predicted how the students would later perform on tests.

Such technologies may be particularly helpful to students with learning disabilities, says Winslow Burleson, an assistant professor at Arizona State University and author of a paper describing these potential uses of facial coding and other technologies. Similarly, the technology could help clinicians tell whether a patient understands instructions. Or it could improve computer games by detecting player emotions and using that feedback to change the game or enhance a virtual character.

Taken together, the insights from many such studies suggest a role for Affdex in online classrooms, says Picard. “In a real classroom you have a sense of whether the students are actively attentive,” she says. “As you go to online learning, you don’t even know if they are there. Now you can measure not just whether they are present and attentive, but if you are speaking—if you crack a joke, do they smile or smirk?”

Nonetheless, Baltrusaitis says many questions remain about which emotional states in students are relevant, and what should be done when those states are detected. “I think the field will need to develop a bit further before we see this being rolled out in classrooms or online courses,” he says.

The coming year should reveal a great deal about whether facial coding can have benefits beyond TV commercials. Affdex faces competition from other apps and startups, and even some marketers remain skeptical that facial coding is better than traditional methods of testing ads. Not all reactions are expressed on the face, and many other measurement tools claim to read people’s emotions, says Ilya Vedrashko, who heads a consumer intelligence research group at Hill Holliday, an ad agency in Boston.

Yet with every new face, the technology gets stronger. That’s why el Kaliouby believes it is poised to take on bigger problems. “We want to make facial coding technology ubiquitous,” she says.

AI Startup Says It Has Defeated Captchas

Brain-mimicking software can reliably solve a test meant to separate humans from machines.

Captchas, those hard-to-read jumbles of letters and numbers that many websites use to foil spammers and automated bots, aren’t necessarily impossible for computers to handle. An artificial-intelligence company called Vicarious says its technology can solve numerous types of Captchas more than 90 percent of the time.

It’s not the first time that computer scientists have managed to fool this method of separating man from machine. But Vicarious says its technique is more reliable and more useful than others because it doesn’t require mountains of training data to recognize letters and numbers consistently. Nor does it take a lot of computing power. Vicarious does it with a visual perception system that mimics the brain’s ability to process visual information and recognize objects.

The purposes go well beyond Captchas: Vicarious hopes to eventually sell systems that can easily extract text and numbers from images (such as in Google’s Street View maps), diagnose diseases by checking out medical images, or let you know how many calories you’re about to eat by looking at your lunch. “Anything people do with their eyes right now is something we aim to be able to automate,” says cofounder D. Scott Phoenix.

Vicarious expands on an old idea of using an artificial neural network that is modeled on the brain and builds connections between artificial neurons (see “10 Breakthrough Technologies: Deep Learning”). One big difference in Vicarious’s approach, says cofounder Dileep George, is that its system can be trained with moving images rather than only static ones.

Vicarious set its cognition algorithms to work on solving Captchas as a way of testing its approach. After training its system to recognize numbers and letters, it could solve Captchas from PayPal, Yahoo, Google, and other online services. The company says its average accuracy rate ranges from 90 to 99 percent, depending on the type of Captcha (for example, some feature characters arranged within a grid of rectangles, while others might have characters in front of a wavy background). The system performed best with Captchas composed of letters that look like they’re made out of fingerprints.

“Captcha” stands for “completely automated public Turing test to tell computers and humans apart.” Captchas were created in 2000 by researchers at Carnegie Mellon University and are solved by millions of Web users daily.

That’s not about to change: Vicarious isn’t going to release its system publicly. And besides, as Luis von Ahn, one of the creators of the Captcha, points out, many people have shown evidence of computerized Captcha-solving over the years. Von Ahn even helpfully passed along a link to a list of such instances.

With Firefox OS, an $80 Smartphone Tries to Prove Its Worth

Despite limitations, the Firefox OS-running ZTE Open shows promise for low-cost smartphones.

While the word “smartphone” usually evokes images of pricey iPhones and Android handsets, plenty of inexpensive smartphones are also hitting the market—ripe for the millions of cell phone owners who want a smartphone, but can’t (or don’t want to) pay hundreds of dollars for one.

For Mozilla, which makes the popular Firefox Web browser, this looks like the most promising target market for its recently released Firefox OS, an open-source, largely Web-based mobile operating system intended to run on lower-cost smartphones. The first phones running the OS began selling this summer in several markets around the world.

The company is taking on an audacious challenge, going up against established operating systems like Google’s Android, as well as a slew of less well-known mobile operating systems. And if it wants to succeed, Mozilla has to ensure that those making Firefox OS-running phones—which include ZTE and LG—build products that consumers actually want to use, regardless of how much less they cost than many others on the market.

Curious to see how Mozilla’s efforts are playing out, I decided to check out one of these phones just after the release of a significant update to the Firefox OS this month: the ZTE Open ($80, unlocked, and available in the U.S. on eBay), which the Chinese smartphone maker undoubtedly sees as a way to grow sales by offering an inexpensive handset that uses an alternative OS. I tried to test it while keeping in mind how I might feel if this were not only my first smartphone, but also my first computer, which will undoubtedly be the case for some buyers.

My initial verdict? The Firefox OS is off to a good start, and for $80, the ZTE Open is an okay handset. With many improvements over time—some of which will presumably come from the developer community, which Mozilla hopes will build a slew of Web-based apps for the platform—the OS and smartphones like the ZTE Open could be an excellent choice for those who want basic smartphone capabilities but are not going to pay for a high-end handset.

The handset’s price tag is considerably lower than those of similar devices. Buying the ZTE Open through Telefonica’s Movistar in Spain, for example, cost 49 euros (about $68) when I last checked; you’d have to pay more than twice that—116 euros, or about $160—for the next cheapest available prepaid smartphone, a Sony Xperia E that runs Android and has similar specifications. Through Movistar in Colombia, the device costs about $80 (U.S.), while a Samsung Galaxy Young Android smartphone costs about $158.

That low price shows in various ways. The first thing you may notice is that the ZTE Open could use some help in the fashion department. It looks a lot more like a smartphone from a couple of years ago than the hottest new handset. It’s squat and chunky, with a soft-feeling plastic back and display frame in pearly Firefox orange (that said, it feels good and solid in your hand, and I wasn’t afraid it would break if I dropped it). Its face is dominated by a touch screen that measures 3.5 inches at the diagonal, with a capacitive “home” button centered below it.

The Firefox OS is extremely intuitive and easy to navigate, clearly taking many cues from iOS and Android. When you unlock the phone, you see a row of rounded app icons at the bottom of the screen for easy access to top functions (such as making calls, sending messages, and opening the Firefox Web browser—you can change these to suit your habits). There’s a swipe-down notification screen that also gives easy access to wireless and other device settings, and a Marketplace app that allows you to download Web apps (apps built using Web technologies like HTML5) from some big names including Facebook, Twitter, and YouTube.

Perhaps the most interesting thing about the Firefox OS is the way it tries to blur the divide between native and Web software. Atop the phone’s main home screen is a handy search bar; whatever you type in there will bring up results both on and off the phone. Search for “dinner,” for example, and you’ll get a list of round icons corresponding with dinner-related apps already installed on your device, as well as recipe- and dining-related websites shown as though they, too, were apps. Click on a result, such as Yelp, and it will automatically search for restaurants serving dinner near you. You can add this specific Yelp search to your home screen for future easy access. This is a clever way around the problem of a lack of apps (Yelp, for instance, has no Firefox OS app), but you’re really just adding a link to the Yelp mobile site to your phone in the form of a rounded icon. The fact that apps are built using Web technologies—such as HTML, CSS, and JavaScript—may also entice Web developers to build their first mobile apps.

There are a number of apps included on the handset, such as Nokia’s Here Maps, Facebook, and AccuWeather. And while some, like Here Maps, felt like self-contained app experiences, others (like the New York Times app) looked more like mobile websites. There also didn’t seem to be any notifications for any of the apps I had on the phone (there was an option in the phone’s settings to show notifications on the phone’s lock screen, which I enabled, so presumably Mozilla is still working on that).

The handset hardware is decidedly low-end, but not entirely no-frills, with a three-megapixel rear camera, rear speaker, and Bluetooth, as well as the ability to function as a Wi-Fi hotspot. There’s hardly any storage space on the phone itself; if you want to take and use photos, videos, and music, you’ll need to pop in a microSD memory card.

One of the phone’s biggest issues seems to be its speed (or lack thereof), which is limited in part by its processor and memory (a one-gigahertz Qualcomm CPU and 256 megabytes of RAM—the same as the Galaxy Young but a measly amount compared to the latest iPhones and high-end Android handsets) and wireless network capabilities (2G and 3G, not LTE). In the U.S., you’ll need to use it with either T-Mobile’s or AT&T’s network; I tested it on T-Mobile’s network and found it somewhat pokey, especially when loading media-heavy Web pages, but generally okay. This is expected, given the low price, but I’m hopeful that improvements to the OS can help in the near term. In fact, I already noticed a bit of a speed difference between using a phone running the most recent Firefox OS and the last version, which is a good sign.

I also had problems with its touch capabilities, which often seemed unwilling to do what I wanted. Tapping app icons and virtual buttons often took several tries, as did tapping a field to enter text (such as a username and password or a URL). Numerous times I swiped right or left to move between the phone’s virtual home screens without seeing a change, or thought I was tapping one button and somehow hit another. It’s irritating, but hopefully the abundance of touch screens in mobile devices and improvements in the technology will soon make it affordable to add better touch screens to low-end phones, too.

The display itself isn’t great either, with 480 x 320 pixel resolution, which gives videos and still images a washed-out, less-than-sharp appearance. But it’s good enough for watching some YouTube clips and basic Web surfing, social networking, and messaging, as well as doing some simple photo editing (you get a few built-in options like filters, though nothing fancy).

Phone calls sounded decent, but somewhat fuzzy, and I am a bit concerned about the battery life, which I was able to run down to 50 percent in about three and a half hours of heavy usage.

Since phones running the Firefox OS are heavily Web-dependent—the search feature, for instance, customizes its results and backgrounds with the aid of the Internet—I’m also worried about how they will function in the absence of reliable networks. Even with a strong Wi-Fi network and access to a fairly dependable T-Mobile 3G network, the phone was prone to stuttering on the Web and having trouble loading pages or conducting searches in the included Nokia Here Maps app. This could be a big problem in less-developed areas, where wireless networks and Wi-Fi hotspots are less abundant and functional, and could make users extremely frustrated.

Presumably, as with the other shortcomings, Mozilla has this in mind as it moves forward with its OS development. There’s still a lot of work to be done, but I’m excited to see the results.

New Gene Therapy Company Launches

Spark Therapeutics hopes to commercialize multiple gene-based treatments developed at the Children’s Hospital of Philadelphia.

A new biotechnology company will take over human trials of two gene therapies that could offer one-time treatments for a form of childhood blindness and hemophilia B.

The gene therapies were developed by researchers at the Children’s Hospital of Philadelphia, which has committed $50 million to the new company, called Spark Therapeutics. The launch is the latest hint that after decades of research and some early setbacks, gene therapy may be on its way to realizing its potential as a powerful treatment for inherited disease.

In December 2012, the European Union gave permission to Dutch company Uniqure to sell its gene therapy for a fat-processing disorder, making Glybera the first gene therapy to make its way into a Western market (see “Gene Therapy on the Mend as Treatment Gets Western Approval”). However, Glybera has not been approved by the U.S., nor has any other gene therapy.

Spark has a chance to be the first gene-therapy company to see FDA approval. Results for a late-stage trial of a gene therapy for Leber’s Congenital Amaurosis, an inherited condition that leads to a loss of vision and eventually blindness, are expected by mid-2015. That treatment is one of several gene therapies in or nearing late-stage testing contending to be the first gene therapy approved by the FDA for sale in the U.S. (see “When Will Gene Therapy Come to the U.S.”).

In addition to taking the reins for two ongoing human trials, Spark will also work on gene therapies for other eye and blood conditions as well as neurodegenerative diseases, says CEO Jeff Marrazzo. The gene therapy technology developed at the Children’s Hospital has been “speeding down the tracks,” he says, and the company will provide the “vehicle to get these therapies to the people who need them.”

Flame-Shaping Electric Fields Could Make Power Plants Cleaner

ClearSign’s pollution-reducing technology could help power plants burn less fuel and make more money.

By Kevin Bullis on October 23, 2013

A Seattle company called ClearSign Combustion has developed a trick that it says could nearly eliminate key pollutants from power plants and refineries, and make such installations much more efficient. The technique uses electric fields to control the combustion of fuel by manipulating the shape and brightness of flames.

The technology could offer a cheaper way to reduce pollution in poor countries. And because ClearSign’s approach to reducing pollution also reduces the amount of fuel a power plant consumes, it can pay for itself, the company says. The need for better pollution controls is clear now in China, where hazardous pollution has been shutting down schools and roads this week.

The company claims that its technology could reduce fuel consumption by as much as 30 percent. Some outside experts say that in practice the likely improvement would be far less, possibly only a few percent, although even that would still result in large savings.

Much of the pollution from a power plant is the result of problems with combustion. If parts of a flame get too hot, it can lead to the formation of nitrogen oxides, which contribute to smog. Similarly, incomplete burning, which can result from the poor mixing of fuel and air, can form soot (see “Cheaper, Cleaner Combustion”).

ClearSign uses high-voltage electric fields to manipulate the electrically charged molecules in a combustion flame. This can improve the way air and fuel mix together, and can spread out a flame to prevent hot spots that cause pollution.

The idea of using electricity to shape flames has been around for decades. But conventional approaches typically involve plasma, and the plasma needs large amounts of energy. ClearSign says its technology uses only one-tenth of 1 percent of the energy in the fuel that a power plant consumes. It works using electrodes within the flame, which produce high voltages that influence the movement of ions; by varying the voltage, it’s possible to control the way the flame forms. The technology is particularly effective at reducing smog-forming NOx emissions, carbon monoxide, and soot.

“There’s been interest in electric fields for some time, but nothing with as strong an effect as they’ve demonstrated,” says Michael Frenklach, a professor of mechanical engineering at the University of California, Berkeley.

In addition to reducing pollution, the technology can improve the efficiency of a power plant or a refinery in several ways. Improved mixing of fuel and air means less fuel is wasted by incomplete combustion; the technology can also improve heat transfer from the flame to the water in a boiler, so less fuel is needed to make steam, which is used to drive turbines in a power plant. But the biggest potential for fuel savings could be in reducing or eliminating the need for conventional pollution controls, which can consume significant amounts of energy, and can be expensive.

A Successful Moon Shot for Laser Communications

A test of high-bandwidth optical communications from lunar orbiter to Earth stations succeeds.


There was no “Mr. Watson—come here—I want to see you” moment. But a pioneering space-based optical communications test has been a big success. And that means optical systems stand a higher chance not only of dominating future space data transmissions (with radio systems serving as a backup) but also of enabling new satellite networks that would boost the capacity of the terrestrial Internet.

The Lunar Laser Communications Demonstration (see “NASA Moonshot Will Test Laser Communications”), aboard a probe in lunar orbit, is working just as planned, delivering download speeds six times faster than the fastest radio system used for moon communications. Don Boroson, the researcher at MIT’s Lincoln Lab who led the project, says, “We have successfully hit all our marks—all the downlink rates up to 622 Mbps [and] our two uplink rates up to 20 Mbps.”
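Those figures imply the radio baseline being beaten. A quick calculation from the numbers in the article:

```python
# The article's numbers imply the radio baseline: a 622 Mbps laser
# downlink described as six times the fastest radio system's rate.
laser_downlink_mbps = 622
speedup = 6
implied_radio_mbps = laser_downlink_mbps / speedup
print(f"Implied fastest radio downlink: about {implied_radio_mbps:.0f} Mbps")
```

In other words, the best radio link used for moon communications tops out at roughly 104 Mbps.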

One of the toughest parts of the task: aligning ground telescopes to continually see the incoming infrared laser beam dispatched from a probe whizzing around the moon. This “signal acquisition” was “fast and reliable,” he added. His team even transmitted high-definition video of “shuttle launches, space station antics, and Earth images,” he said. “Also, some little videos we took of ourselves in the operations center.”

Ground-based detectors were set up in California, New Mexico, and on one of the Canary Islands. The big difficulty with sending optical signals through the air is that they can be blocked by clouds. Still, in the future, networks of satellites could transmit data to one another and then to ground stations in various places, giving a bandwidth boost to the ground-based fiber network.


A Lifeline for a Cellulosic-Biofuel Company

$100 million in new funding will keep the woodchip-to-gasoline company Kior afloat, for now.

Yesterday Kior, a company that turns wood chips into gasoline and diesel fuel, announced that it had raised $100 million, which should be enough to keep it in business for another year or so and help it build a new biorefinery. The funding is a lifeline for a business that just a couple of months ago looked close to failure. But the company, which operates the largest U.S. refinery for converting cellulosic biomass into fuel (see “Kior ‘Biocrude’ Plant a Step Toward Advanced Biofuels”), is still a long way from being profitable.

Cellulosic biofuels could, at least in theory, reduce oil imports and greenhouse-gas emissions, and the U.S. Congress has required fuel companies to buy billions of gallons of such fuels. But in spite of this mandate, very little is produced. Although dozens of companies have trotted out lab-scale technologies for breaking down recalcitrant biomass and turning it into fuel, they’ve struggled to commercialize these systems, in part because it’s been difficult to raise funds to build large refineries and in part because the methods often fail to perform as well at a large scale as they do in the lab. (For example, one company, Range Fuels, found that its system became clogged up with tar.) As a result, the government mandate has repeatedly been waived (see “The Death of Range Fuels Shouldn’t Doom All Biofuels” and “The Cellulosic Industry Faces Big Challenges”).


Kior itself has run into technical difficulties that have kept it from running its huge biofuel plant at full scale. The plant is designed to produce 13 million gallons of fuel per year and started producing its first fuel—diesel—in March 2013. The company said it would ship a total of 300,000 to 500,000 gallons by midyear, but it only managed to ship 75,000 gallons. The shortfall in production resulted in lower-than-expected revenue and a loss of $38.5 million in the second quarter, up from $23 million for the same quarter a year before. With little revenue and high costs, some analysts started to worry that the company would run out of money.
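The size of that shortfall is stark when put against the company's own guidance, using the figures in the article:

```python
# Scale of Kior's production shortfall, using the article's figures.
target_low, target_high = 300_000, 500_000   # gallons promised by midyear
shipped = 75_000                             # gallons actually shipped
print(f"{shipped / target_low:.0%} of the low end of the target")
print(f"{shipped / target_high:.0%} of the high end of the target")
```

Kior delivered only 15 to 25 percent of what it had told investors to expect, which goes a long way toward explaining the worries about its cash position.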

The $100 million investment buys the company time, and by some measures it’s making good progress, says Mike Ritzenthaler, a senior research analyst at Piper Jaffray. For example, he notes that production levels are increasing, and the company looks on track to produce a million gallons of fuel by the end of the year. Kior also has the advantage of making gasoline rather than ethanol, the market for which is saturated in the United States.

But big challenges remain. If Kior hopes to break even and eventually turn a profit, it needs the economies of scale that come from even bigger refineries, and building those will require more funding. Funding for cellulosic plants has been particularly hard to come by, since investors are reluctant to take a risk on the new technology.

Anonymity Network Tor Needs a Tune-up to Protect Users from Surveillance

Fixes are planned for Internet anonymity tool Tor after researchers showed that national intelligence agencies could plausibly unmask users.

By Tom Simonite on October 25, 2013

The Tor Project is developing critical adjustments to how its tool works to strengthen it against potential compromise. Researchers at the U.S. Naval Research Laboratory have discovered that Tor’s design is more vulnerable than previously realized to a kind of attack the NSA or government agencies in other countries might mount to deanonymize people using Tor.

Tor prevents people using the Internet from leaving many of the usual traces that can allow a government or ISP to know which websites or other services they are connecting to. Users of the tool range from people trying to evade corporate firewalls to activists, dissidents, criminals, and U.S. government workers with more sophisticated adversaries to avoid.

When people install the Tor client software, their outgoing and incoming traffic takes an indirect route around the Internet, hopping through a network of “relay” computers run by volunteers around the world. Packets of data hopping through that network are encrypted so that relays know only their previous and next destination (see “Dissent Made Safer”). This means that even if a relay is compromised, the identity of users, and details of their browsing, should not be revealed.
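The layered design described above can be sketched in a few lines. This is a toy illustration only: XOR stands in for Tor's real symmetric encryption, and the keys are invented.

```python
# Toy sketch of onion layering: the client wraps each packet in one
# encryption layer per relay, and each relay can strip exactly one
# layer. XOR stands in for real symmetric encryption; keys are invented.
def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so this both adds and strips a layer."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"guard-key", b"middle-key", b"exit-key"]
payload = b"GET http://example.com/"

# Client: apply layers innermost-first, so the exit's layer is deepest.
cell = payload
for key in reversed(relay_keys):
    cell = xor_layer(cell, key)

# The wrapped cell reveals nothing readable to an observer.
assert cell != payload

# Relays: each strips its own layer in turn; the plaintext appears
# only after the exit relay removes the final layer.
for key in relay_keys:
    cell = xor_layer(cell, key)
assert cell == payload
```

Because each relay holds only one key, a compromised relay can remove only its own layer and learns nothing about the payload or the full path.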

However, new research shows how a government agency could work out the true source and destination of Tor traffic with relative ease. Aaron Johnson of the U.S. Naval Research Laboratory and colleagues found that the network is vulnerable to a type of attack known as traffic analysis.

This type of attack involves observing Internet traffic data going into and out of the Tor network and looking for patterns that reveal the Internet services that a specific Internet connection, and presumably its owner, is using Tor to access. Johnson and colleagues showed that the method could be very effective for an organization that both contributed relays to the Tor network and could monitor some Internet traffic via ISPs.

“Our analysis shows that 80 percent of all types of users may be deanonymized by a relatively moderate Tor-relay adversary within six months,” the researchers write in a paper on their findings. “These results are somewhat gloomy for the current security of the Tor network.” The work of Johnson and his colleagues will be presented at the ACM Conference on Computer and Communications Security in Berlin next month.
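The core idea of the attack can be illustrated with a toy example. An observer who sees packet timestamps both entering and leaving the network looks for the flow whose timing lines up; all timestamps, flow names, and thresholds below are invented, and real attacks use far more sophisticated statistics:

```python
# Toy traffic-correlation sketch: match entry and exit flows by timing.
def correlate(entry_times, exit_times, delay=0.03, skew=0.05):
    """Fraction of entry packets with an exit packet at the expected delay."""
    hits = sum(
        any(abs((t + delay) - u) < skew for u in exit_times)
        for t in entry_times
    )
    return hits / len(entry_times)

user_entry = [0.00, 0.41, 0.97, 1.52]        # packets seen entering Tor
candidate_exits = {
    "flow_a": [0.03, 0.44, 1.00, 1.55],      # same rhythm, shifted ~30 ms
    "flow_b": [0.20, 0.60, 0.80, 1.90],      # unrelated traffic
}
scores = {name: correlate(user_entry, ts)
          for name, ts in candidate_exits.items()}
# flow_a matches on every packet, so the observer links the user to it
```

Note that the encryption is never broken; the attacker wins on timing alone, which is why adding relays or stronger ciphers does not help against it.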

Johnson told MIT Technology Review that people using the Tor network to protect against low-powered adversaries such as corporate firewalls aren’t likely to be affected by the problem. But he thinks people using Tor to evade the attention of national agencies have reason to be concerned. “There are many plausible cases in which someone would be in a position to control an ISP,” says Johnson.

Johnson says that the workings of Tor need to be adjusted to mitigate the problem his research has uncovered. That sentiment is shared by Roger Dingledine, one of Tor’s original developers and the project’s current director (see “TR35: Roger Dingledine”).

“It’s clear from this paper that there *do* exist realistic scenarios where Tor users are at high risk from an adversary watching the nearby Internet infrastructure,” Dingledine wrote in a blog post last week. He notes that someone using Tor to visit a service hosted in the same country—he gives the example of Syria—would be particularly at risk. In that situation traffic correlation would be easy, because authorities could monitor the Internet infrastructure serving both the Tor user and the service he or she is connecting to.

Dingledine is considering changes to the Tor protocol that might help. In the current design, the Tor client selects three entry points into the Tor network and uses them for 30 days before choosing a new set. But each time new “guards” are selected, the client runs the risk of choosing one that an attacker using traffic analysis can monitor or control. Setting the Tor client to select fewer guards and to change them less often would make traffic correlation attacks less effective. But more research is needed before such a change can be made to Tor’s design.
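A back-of-the-envelope calculation shows why rotating guards less often helps: every fresh selection is another chance to pick a relay the attacker controls. The 5 percent adversary share below is an illustrative assumption, not a figure from the research:

```python
# Why guard rotation matters: each new selection is another draw from
# a pool the attacker partly controls. The 5% share is an assumption.
adversary_share = 0.05    # fraction of guard capacity the attacker runs
guards_per_set = 3        # entry points chosen by the client
rotations_per_year = 12   # a fresh set roughly every 30 days

selections = guards_per_set * rotations_per_year
p_compromised = 1 - (1 - adversary_share) ** selections
print(f"P(at least one bad guard in a year): {p_compromised:.2f}")

# With one selection per year instead of twelve, exposure drops sharply.
p_fewer = 1 - (1 - adversary_share) ** guards_per_set
print(f"P with a single yearly selection:    {p_fewer:.2f}")
```

Under these assumptions, monthly rotation gives the attacker an 84 percent chance of landing in a user's guard set within a year, against 14 percent for a single yearly selection, which is the intuition behind choosing fewer guards and keeping them longer.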

Whether the NSA or any other country’s national security agency is actively trying to use traffic analysis against Tor is unclear. This month’s reports, based on documents leaked by Edward Snowden, didn’t say whether the NSA was doing so. But a 2007 presentation released by the Guardian and a 2006 NSA research report on Tor released by the Washington Post did mention such techniques.

Stevens Le Blond, a researcher at the Max Planck Institute for Software Systems in Kaiserslautern, Germany, says that by now the NSA and equivalent agencies could likely use traffic correlation should they want to. “Since 2006, the academic community has done much work on traffic analysis and has developed attacks that are much more sophisticated than the ones described in this report.” Le Blond calls the potential for attacks like those detailed by Johnson “a big issue.”

Le Blond is working on the design of an alternative anonymity network called Aqua, which is intended to protect against traffic correlation. Traffic entering and exiting an Aqua network is made indistinguishable through a mixture of careful timing and the blending in of fake traffic. However, Aqua’s design has yet to be implemented in usable software and can so far protect only file sharing rather than all types of Internet usage.

In fact, despite its shortcomings, Tor remains essentially the only practical tool available to people who need or want to anonymize their Internet traffic, says David Choffnes, an assistant professor at Northeastern University who helped design Aqua. “The landscape right now for privacy systems is poor because it’s incredibly hard to put out a system that works, and there’s an order of magnitude more work that looks at how to attack these systems than to build new ones.”