Facebook Launches Advanced AI Effort to Find Meaning in Your Posts

A technique called deep learning could help Facebook understand its users and their data better.

By Tom Simonite.

Facebook is set to get an even better understanding of the 700 million people who share details of their personal lives using the social network each day.

A new research group within the company is working on an emerging and powerful approach to artificial intelligence known as deep learning, which uses simulated networks of brain cells to process data. Applying this method to data shared on Facebook could allow for novel features, and perhaps boost the company’s ad targeting.

Deep learning has shown potential to enable software to do things such as work out the emotions or events described in text even if they aren’t explicitly referenced, recognize objects in photos, and make sophisticated predictions about people’s likely future behavior.

The eight-strong group, known internally as the AI team, only recently started work, and details of its experiments are still secret. But Facebook’s chief technology officer, Mike Schroepfer, says that one obvious place to use deep learning is to improve the news feed, the personalized list of recent updates he calls Facebook’s “killer app.” The company already uses conventional machine learning techniques to prune the roughly 1,500 updates the average Facebook user could possibly see down to the 30 to 60 judged most likely to matter to that person. Schroepfer says Facebook needs to get better at picking the best updates because of the growing volume of data its users generate and changes in how people use the social network.

“The data set is increasing in size, people are getting more friends, and with the advent of mobile, people are online more frequently,” Schroepfer told MIT Technology Review. “It’s not that I look at my news feed once at the end of the day; I constantly pull out my phone while I’m waiting for my friend, or I’m at the coffee shop. We have five minutes to really delight you.”
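
In essence, the pruning Schroepfer describes is a ranking problem: score each candidate update and keep the top few dozen. The sketch below is only an illustration of that idea, with invented feature names and weights; Facebook’s actual ranking system is far more elaborate and is not public.

```python
# Illustrative sketch of ranking candidate feed updates by a predicted
# relevance score and keeping the top handful. The feature names, weights,
# and scoring model are invented for illustration; Facebook's real system
# is far more sophisticated and is not public.

def score_update(update, weights):
    """Combine simple engagement signals into a single relevance score."""
    return sum(weights[name] * update.get(name, 0.0) for name in weights)

def rank_feed(candidates, weights, keep=40):
    """Return the `keep` highest-scoring updates out of ~1,500 candidates."""
    return sorted(candidates, key=lambda u: score_update(u, weights), reverse=True)[:keep]

if __name__ == "__main__":
    weights = {"closeness_to_author": 2.0, "likes": 0.5, "comments": 1.0, "recency": 1.5}
    candidates = [
        {"id": 1, "closeness_to_author": 0.9, "likes": 3, "comments": 1, "recency": 0.8},
        {"id": 2, "closeness_to_author": 0.1, "likes": 40, "comments": 0, "recency": 0.2},
        {"id": 3, "closeness_to_author": 0.7, "likes": 5, "comments": 4, "recency": 0.9},
    ]
    for update in rank_feed(candidates, weights, keep=2):
        print(update["id"], round(score_update(update, weights), 2))
```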

Schroepfer says deep learning could also be used to help people organize their photos, or choose the best one to share on Facebook.

Facebook’s foray into deep learning sees it following its competitors Google and Microsoft, which have used the approach to impressive effect in the past year. Google has hired and acquired leading talent in the field (see “10 Breakthrough Technologies 2013: Deep Learning”), and last year created software that taught itself to recognize cats and other objects by reviewing stills from YouTube videos. The underlying deep learning technology was later used to slash the error rate of Google’s voice recognition services (see “Google’s Virtual Brain Goes to Work”).

Researchers at Microsoft have used deep learning to build a system that translates speech from English to Mandarin Chinese in real time (see “Microsoft Brings Star Trek’s Voice Translator to Life”). Chinese Web giant Baidu also recently established a Silicon Valley research lab to work on deep learning.

Less complex forms of machine learning have underpinned some of the most useful features developed by major technology companies in recent years, such as spam detection systems and facial recognition in images. The largest companies have now begun investing heavily in deep learning because it can deliver significant gains over those more established techniques, says Elliot Turner, founder and CEO of AlchemyAPI, which rents access to its own deep learning software for text and images.

“Research into understanding images, text, and language has been going on for decades, but the typical improvement a new technique might offer was a fraction of a percent,” he says. “In tasks like vision or speech, we’re seeing 30 percent-plus improvements with deep learning.” The newer technique also allows much faster progress in training a new piece of software, says Turner.

Conventional forms of machine learning are slower because before data can be fed into learning software, experts must manually choose which features of it the software should pay attention to, and they must label the data to signify, for example, that certain images contain cars.

Deep learning systems can learn with much less human intervention because they can figure out for themselves which features of the raw data are most useful to understanding it. They can even work on data that hasn’t been labeled, as Google’s cat recognizing software did. Systems able to do that typically use software that simulates networks of brain cells, known as neural nets, to process data, and require more powerful collections of computers to run.
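
The difference can be sketched in a few lines of code: in the conventional route a person decides which features to compute, while in the deep-learning route a small neural network (here, a toy autoencoder written with numpy) adjusts its own internal features so it can reconstruct unlabeled data. This is a minimal illustration of the principle, not a depiction of any system mentioned above.

```python
import numpy as np

# Toy contrast between hand-chosen features and features learned from
# unlabeled data. Purely illustrative; real deep-learning systems are
# vastly larger and run on clusters of machines.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # unlabeled "raw" data

# Conventional route: a human decides which features matter.
hand_features = np.stack([X.mean(axis=1), X.std(axis=1)], axis=1)

# Deep-learning route (one tiny layer): an autoencoder learns its own
# compressed features by trying to reconstruct the raw input.
n_hidden = 5
W_enc = rng.normal(scale=0.1, size=(20, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 20))
lr = 0.01

for _ in range(200):
    H = np.tanh(X @ W_enc)              # learned features
    X_hat = H @ W_dec                   # reconstruction of the raw input
    err = X_hat - X
    # Gradient descent on the squared reconstruction error.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H ** 2)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("hand-designed features:", hand_features.shape)
print("learned features:      ", np.tanh(X @ W_enc).shape)
```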

Facebook’s AI group will work on both applications that can help the company’s products and on more general research on the topic that will be made public, says Srinivas Narayanan, an engineering manager at Facebook helping to assemble the new group. He says one way Facebook can help advance deep learning is by drawing on its recent work creating new types of hardware and software to handle large data sets (see “Inside Facebook’s Not-So-Secret New Data Center”). “It’s both a software and a hardware problem together; the way you scale these networks requires very deep integration of the two,” he says.

How Twitter Can Cash In with New Technology

Twitter seeks to do better at inferring its users’ consumer and political preferences, gender, age, and more.

By David Talbot.

Twitter began selling promoted tweets in 2010, but it has always faced challenges in knowing which of those ads should be delivered to which Twitter accounts. Most Twitter users don’t give up their locations, and many don’t reveal their identities in their profiles. And mining tweets themselves for insights is hard because they are not only short but also filled with slang and abbreviations.

Now, as Twitter plans to sell shares to the public, its success will depend in part on how much better it can get at deciphering tweets. Solving that technological puzzle would help Twitter get better at selling the right promoted messages at the right times, and it could possibly lead to new revenue-producing services.

Twitter hasn’t done badly so far; the analyst firm eMarketer predicts ad revenue will double this year, to $583 million. But the company is still trying to get smarter about analyzing tweets. It has bought startups such as Bluefin Labs, which can tell which TV show—and even which precise airing of a TV advertisement—people have tweeted about (see “A Social-Media Decoder”). It has also invested in companies such as Trendly, a Web analytics provider that reveals how promoted tweets are being read and shared. And just last week, Twitter blogged that it is continually running experiments on how to do better at tasks such as suggesting relevant content.

For its next steps, Twitter might consider tapping the latest academic research. Here are some areas it could concentrate on.

Location

Fewer than 1 percent of tweets are “geotagged,” or voluntarily labeled by users with location coördinates. Much of the time, Twitter can use your computer’s IP address and get a good approximation. But that’s not the same as knowing where you are. In mobile computing, IP addresses are reassigned frequently—and some people take steps to obscure their true IP address.

But recent research has shown that the locations of friends—defined as people you follow on Twitter who are also following you—can be used to infer your location to within 10 kilometers half the time. It turns out that many Twitter friends live near one another, says David Jurgens, a computer scientist at Sapienza University of Rome, who did this research while at HRL Laboratories in Malibu, California. If some of your friends have made geotagged tweets or revealed their location in a Twitter profile, Jurgens says, that may be enough to show where you probably are.
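
The gist of that inference can be illustrated simply: take the friends whose coordinates are known and pick a robust central point. The snippet below is a deliberately crude stand-in, with an invented helper function; Jurgens’s actual method is considerably more sophisticated.

```python
import numpy as np

def infer_location(friend_coords):
    """Guess a user's location as the component-wise median of the
    latitude/longitude pairs of friends whose locations are known.
    A crude, illustrative stand-in for the published inference methods."""
    coords = np.asarray(friend_coords, dtype=float)
    if coords.size == 0:
        return None
    return tuple(np.median(coords, axis=0))

# Example: three friends with geotagged tweets near Rome.
friends = [(41.90, 12.49), (41.85, 12.55), (41.95, 12.45)]
print(infer_location(friends))   # -> approximately (41.90, 12.49)
```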

Demographics

Natural-language processing gets better all the time. Hundreds of markers—word choices, abbreviations, slang terms, and letter and punctuation combinations—signify ever-finer strata of demographic groups and their interests.

Some things, like political leanings, are often not hard to figure out from the right hashtags or from sentiments associated with terms like “Obamacare,” says Dan Weld, a computer scientist at the University of Washington.

Meanwhile, Derek Ruths, a computer scientist who explores natural-language processing at McGill University, has recently shown that linguistic cues can identify U.S. Twitter users’ political orientation with 70 to 90 percent accuracy and can even identify their age (within five years) with 80 percent accuracy. For example, words that most strongly suggest someone is between the ages of 25 and 30 include “for,” “on,” “photo,” “I’m,” and “just,” he says. Generally, these users have a somewhat stronger allegiance to grammar than younger, slang-loving users, he says. And as with location, the profiles of the people they follow provide clues to their demographics.
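
A rough illustration of the word-choice approach: train a bag-of-words classifier on labeled example tweets and use it to guess an age band. The handful of training tweets and labels below are invented, and real studies rely on far larger datasets and richer linguistic features.

```python
# Minimal bag-of-words age-band classifier in the spirit of the
# linguistic-cue studies described above. The tiny training set and
# labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "Just posted a photo from the conference, more on that soon",
    "I'm heading out for coffee before the meeting",
    "omg thats so lit lol cant even",
    "ikr lmaooo no way fr fr",
]
ages = ["25-30", "25-30", "under-25", "under-25"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)        # word-count features

model = LogisticRegression(max_iter=1000)
model.fit(X, ages)

print(model.predict(vectorizer.transform(["just got a photo for the blog"])))
```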

But even if Twitter can make pretty good guesses about 90 percent of its users, “even missing 10 percent means you miss a lot of people,” says Ruths. “If I were Twitter, I’d want to close that 10 percent gap. And you’d want to find out real details like who someone’s mother is. If it’s Mom’s birthday, you want to tell those people how to order flowers. Twitter can’t do that—yet.”

Making Sense of Breaking News

One of the major uses of Twitter is to report on breaking news events (see “Can Twitter Make Money?”). With so many people tweeting little nuggets of news and other current information, tools have even been built to tease out play-by-play sports action (see “Researchers Turn Twitter into Real-Time Sports Commentator”).

But in major emergencies—like a terrorist attack or earthquake—so many tweets are generated that making sense of them in real time is tricky. Twitter might highlight the most meaningful ones, to cement itself as a must-visit service, but how?

A group at the University of Colorado, Boulder, is using natural-language processing to highlight the most relevant tweets in a disaster. Recent research shows significant progress in differentiating tweets about personal reflections, emotional expressions, or prayers from ones containing hard information about where a fire is burning or whether medical supplies are needed.

In one project, the group was able to identify valuable, news-containing tweets with 80 percent accuracy; these tend to contain language that is formal, objective, and lacking in personal pronouns. Last year they extended that work to classify the important tweets by categories such as damage reports, requests for aid, and advice. “We are trying to figure out which tweets have the most useful information to the people on the ground,” says Martha Palmer, a professor of linguistics and computer science at Boulder.
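
The cues the researchers describe, formal and objective language without personal pronouns, can be mimicked by a crude rule-of-thumb filter like the one below. The word lists and threshold are invented for illustration; the Colorado classifiers are trained statistically on labeled disaster tweets rather than hand-written rules.

```python
# Rough heuristic in the spirit of the Colorado work: tweets that avoid
# personal pronouns and exclamation-heavy, subjective language are more
# likely to carry actionable information. The cue lists and threshold
# are invented for illustration.
PERSONAL_PRONOUNS = {"i", "me", "my", "we", "our", "you"}
SUBJECTIVE_CUES = {"pray", "praying", "hope", "omg", "scared", "terrible"}

def looks_informative(tweet: str) -> bool:
    words = tweet.lower().replace("!", " ! ").split()
    pronouns = sum(w in PERSONAL_PRONOUNS for w in words)
    subjective = sum(w in SUBJECTIVE_CUES for w in words)
    exclamations = words.count("!")
    return pronouns == 0 and subjective == 0 and exclamations <= 1

print(looks_informative("Fire reported at 5th and Main, road closed, crews on scene"))  # True
print(looks_informative("omg I hope everyone is ok, praying for you all!!!"))            # False
```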

The Evolution of Ad Tech

Going from Mad Men to Math Men. How technology has fundamentally changed the art and science of advertising.

Once upon a time the marriage of advertising to media was a simple party for two.  And even when the traditional media landscape expanded to online, marketers continued to work directly with publishers’ sales teams to buy advertising space. After all, the golden rule was “media as proxy for audience.”

But then the scale of the Internet exploded exponentially. One hundred billion ad impressions (each time an online ad is displayed is an impression) reach the market every single day, presenting 100 billion opportunities to place those ads. According to comScore, that added up to nearly 6 trillion display ad impressions delivered in 2012.

Something else happened as a result of the Internet’s growth: voluminous amounts of data appeared, and with them the opportunity to find and target specific online consumers. Marketers were delighted: at last, the right ads could be delivered to the “right” people, wherever they appeared online. To do this, marketers would analyze the data to find patterns of consumer behavior and pinpoint which products or services a user was most likely to respond to, and so influence sales.

With all this new online advertising inventory inevitably came unsold ad space, so-called “remnant inventory.” Around 2001, ad networks emerged to facilitate the purchase of that remnant space in bulk from publishers and its resale to marketers. But there were problems. The networks offered little transparency upfront, making it harder for marketers to determine who was really seeing their ads. When some of the weaknesses of the network model started to become exposed, the marketplace reacted by introducing ad exchanges and real-time bidding (RTB) in 2007. Ad exchanges and RTB allowed advertisers to bid for advertising space via an auction model and deliver the winning ad impressions in milliseconds, all behind the scenes, in the time it took the online user to load a webpage. This also created new opportunities for targeting, as more data about the audience viewing the ad was shared with the marketer to create demand and thus determine a fair market price for the ad space.
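
The mechanics of an exchange auction can be pictured with a few lines of code: a simplified second-price auction for a single impression, the mechanic most exchanges used at the time. The bidder names and prices below are invented, and real exchanges layer on floor prices, targeting filters, fees, and strict millisecond timeouts.

```python
# Simplified real-time-bidding auction for one ad impression. Bidder
# names and prices are invented; real exchanges also apply floor prices,
# targeting filters, fees, and strict millisecond timeouts.

def run_auction(bids):
    """Second-price auction: the highest bidder wins but pays the
    second-highest bid (plus, in practice, a small increment)."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    clearing_price = ranked[1][1]
    return winner, clearing_price

bids = {"dsp_alpha": 2.40, "dsp_beta": 1.95, "dsp_gamma": 3.10}  # CPM bids in dollars
winner, price = run_auction(bids)
print(f"{winner} wins the impression and pays ${price:.2f} CPM")
```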

The big promise of real-time bidding in online advertising is increased efficiency, increased effectiveness, and ultimately, increased profits for the advertiser and a tidy sum for the publisher as well. And it’s that promise that has poured so much cash and attention into the ad tech space.

“It’s about getting marketers closer to their customers. The ability to give them more information about their audience so they can make more informed decisions, both with regard to when and where they deliver their message, and at what price,” said Edward Montes, CEO of Digilant.

Kirk McDonald, president of PubMatic, added, “It’s not art or science or going from ‘Mad Men’ to ‘Math Men.’ It’s about balancing art and science for balanced decisions.”

Both PubMatic and Digilant are players in the complex new system of ad tech companies facilitating RTB. With the scale of online advertising and the volume of data growing so dramatically, competing has become a technically intensive game: companies need better, faster machine learning, smarter people, and a solid backing of cash just to get up and running. Even so, beckoned by the potential opportunity, new companies keep entering an increasingly crowded field, eager to claim a share of an annual pie worth roughly $2 billion and growing.

“The difficulty is everyone jumping on the bandwagon at the same time. There is a beguiling set of companies you have to be familiar with,” said Jon Slade, Commercial Director for Digital Advertising at The Financial Times.

The competition “is almost like an arms race,” said Scott Neville, Chief Marketing Officer at IPONWEB.

In a new industry with little in the way of standards and great variation among companies supposedly in the same category, a clear picture of the space can be elusive. In particular what can get lost is who gets paid for what, who does what, and which are the most effective and honest players currently operating in the market.

Tom Hespos, founder of Underscore Marketing and among the more critical voices of the industry, said, “For many, digital advertising has become a black box where they dump money and hope for the best.”

And with so many hands trying to get a cut of the industry, there are growing calls for more consolidation and more transparency.

Historically, trends tend to swing from one extreme to the other before settling back in the middle. For now, the industry is still swinging toward the machines. But there is a drag pulling the pendulum back toward humans who interpret the data, because ultimately ad placement is not about machines selling to machines but about humans selling products to humans.

Twitter Plans to Go Public

Twitter is the next giant social network with plans to cash in.

Twitter today said it had officially submitted paperwork for a planned public offering of stock, disclosing the filing in a tweet sent at 6 p.m.

A Twitter IPO could be the most anticipated technology stock offering since Facebook went public in May 2012, and things could get just as complicated.

Facebook’s stock sagged, then clawed back up, as the company grappled with whether it could successfully advertise on mobile devices (see “How Facebook Slew the Mobile Monster”). Facebook is worth $108 billion today.

Earlier this year, Twitter was valued by some investors at $9.8 billion. But it could be worth much more than that now.

In the lead-up to its IPO plans, Twitter has become more aggressive about advertising on the site. For instance, in July, Twitter announced a new product called TV ad targeting, which lets advertisers aim messages at users who mention certain TV programs or ads (see “Now Television Advertisers Know You’re Tweeting” and “A Social-Media Decoder”).

Twitter has played an increasingly important role as a source of news and information, including in countries roiled by protests and uprisings, where the service is used by organizers (see “Streetbook”). It is blocked in China.

An IPO will increase pressure on Twitter to raise revenues from advertising—and use technologies to track what people are doing, saying, and watching. That could bring it into conflict with some users, including those who switched to the site because it seemed less commercial.

Earlier this year, Twitter’s advertising revenues were estimated at $582 million with half from people accessing the site from mobile devices. Alexa ranks Twitter in 10th position among the most popular websites.

NASA Moonshot Will Test Laser Communications

NASA launches a moon satellite this week that will test ultrafast optical data transmission.

A new communications technology slated for launch by NASA this Friday will provide record-smashing 600-megabit-per-second downloads. The system will fly aboard a probe that will orbit the moon and send communications back to Earth via lasers.

The plan hints at how lasers could give a boost to terrestrial Internet coverage, too. Within a few years, commercial Internet satellite services are expected to use optical connections—instead of today’s radio links—providing far greater bandwidth. A Virginia startup, Laser Light Communications, is in the early stages of designing such a system and hopes to launch a fleet of 12 satellites in four years.

Already, some companies provide short-range through-the-air optical connections for tasks such as connecting campus or office buildings when an obstruction such as a river or road makes laying fiber infeasible. “There are a bunch of technologies that all come together for new applications and improved service, not just one,” says Heinz Willebrand, president and CEO of Lightpointe, a San Diego-based company whose technology provides up to 2.5 gigabits per second for a few hundred meters.

One new technology figuring in the mission is a superconducting nanowire detector, cooled to three kelvins. That gadget, developed at MIT and its Lincoln Laboratory, is designed to detect single photons sent nearly a quarter of a million miles by infrared lasers aboard an orbiting lunar probe, which is being launched Friday to measure dust in the lunar atmosphere.

The new communications system, dubbed Lunar Laser Communications Demonstration, will deliver six times greater download speeds compared to the fastest radio system used for moon communications. It will use telescopes that are just under one meter in diameter to pick up the signal. But it could be reëngineered to provide 2.5 gigabits per second, if the ground telescope designed to detect the signals were enlarged to three meters in diameter, says Don Boroson, the Lincoln Lab researcher who led the project. “This is demonstrating the first optical data transmission for a deep-ish space mission. If you resize it and partly reëngineer it, you could potentially do it to Mars,” he says.
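
As a back-of-the-envelope check on that claim, assume (as a rough simplification) that the achievable data rate scales with the receiver’s light-collecting area, which grows with the square of the telescope’s diameter; the quoted 2.5 gigabits per second sits comfortably inside that bound.

```python
# Back-of-the-envelope check of the telescope upgrade, assuming (as a
# rough simplification) that achievable data rate scales with the
# receiver's collecting area, which grows as the square of its diameter.
current_rate_mbps = 600.0
current_diameter_m = 1.0      # "just under one meter" in the article
proposed_diameter_m = 3.0

area_gain = (proposed_diameter_m / current_diameter_m) ** 2   # ~9x more light
print(f"collecting-area gain: {area_gain:.0f}x")
print(f"naive rate ceiling:   {current_rate_mbps * area_gain / 1000:.1f} Gbps")
# The quoted 2.5 Gbps sits well under this naive ~5.4 Gbps ceiling,
# leaving margin for pointing losses, weather, and receiver limits.
```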

Because clouds block photons, detectors are being installed at three spots: one each in California and New Mexico, and a third on the Canary Islands. On this mission, though, the system will merely be tested. Most operations will be handled by radio technologies—upgraded versions of the system that delivered Neil Armstrong’s “One small step for man” transmission in 1969. But if all goes well, optical systems will likely dominate space transmissions in the future, with radio systems serving as a backup.

In addition to the nanowire detector, the system depends on high-speed encoding and decoding of data, and a separate set of calculations and adjustments to keep the telescopes pointed at each other. “There are a bunch of technologies that are new and exciting,” Boroson says.

But what may be even more exciting for bandwidth-hungry Earthlings is the prospect of a satellite-based all-optical network to augment the ground-based one.

Laser Light Communications is putting together components for a commercial system that would provide all-optical satellite-to-ground and satellite-to-satellite communications. The company aims to supercharge Internet bandwidth around the world with a space-based optical network to complement the global fiber one (see “New Oceans of Data”).

The idea is that the system will often create shorter continent-spanning links than are available on the ground while bypassing any bottlenecks. What’s more, in the case of failures—such as the severed undersea fiber cable that blacked out much of the Middle East and parts of India in 2008 (see “Analyzing the Internet Collapse”)—it would offer alternative routes and greater resiliency.

The company is planning an initial 48 ground stations for its system. If clouds block downlinks or uplinks at one site, it can dump the data at a different receiver—perhaps just a few hundred miles away—achieving very high reliability, says Robert Brumley, CEO of Pegasus Global Holdings, which is launching the company based on federally funded defense research in the area of optical communications.

Many more could be installed: the detector units would be small enough to be fitted atop an office building or even a truck, for instance to handle feeds for live television, Brumley adds.

Under the system, eight satellites whizzing around the planet at an altitude of about 12,000 kilometers would create a total system capacity of six terabits per second—and download speeds of 200 gigabits per second, about 100 times faster than today’s radio links. “We’re aiming for worldwide coverage at service levels and connectivity options previously unattainable by other satellite platforms,” says Brumley. But the company’s main aim is to become a wholesale supplier of bandwidth to other carriers, possibly even including other satellite services, rather than a competitor, he added.

The recently launched satellite company O3B—which stands for “the other three billion”—provides between 150 megabits per second and two gigabits per second using radio frequencies. Other companies, Intelsat and Inmarsat, also deliver speeds in that ballpark.

Another Internet-boosting idea, Google’s “Project Loon,” envisions balloons circling the Earth in the stratosphere to provide coverage to underserved areas. But that would also use radio signals (see “African Entrepreneurs Deflate Google’s Internet Balloon Idea”), Google says.

India lures chip makers, says IBM and STMicro interested

By Devidutta Tripathy and Harichandan Arakali

NEW DELHI/BANGALORE | Fri Sep 13, 2013 1:59pm EDT

(Reuters) – Two consortiums, including IBM and STMicroelectronics, have proposed building semiconductor wafer plants in India costing a total of $8 billion, a minister said after the government approved concessions to lure chipmakers.

India, which wants local production of chips to cut long-term import bills, has renewed a drive to attract investments after a previous attempt failed.

The government hopes other chipmakers will show interest in building further plants after the federal cabinet on Thursday approved concessions including subsidies on capital spending, interest-free loans and tax breaks, Communications and Information Technology Minister Kapil Sibal said.

“India needs not less than 15 fabs (fabrication units),” Sibal told reporters on Friday.

He said that given the huge investments, the long build-up period for plants, and the low freight costs of importing chips from abroad, there is “no great interest”, and the only way to attract investments was through offering such major concessions.

Ganesh Ramamoorthy, a research director at Gartner, said there was little incentive for chipmakers to come to India.

“Globally there are established fabs that are struggling to maintain their profitability,” Ramamoorthy said.

“Will they be exporting it, will they be competing with other global fabs, or will India be generating enough demand … these are difficult questions,” the analyst added.

DETAILED REPORTS IN TWO MONTHS

The minister said the two consortiums would be asked to submit within two months their detailed project reports, including the production mix and marketing plans. The detailed project reports would be evaluated by a third party, he said.

One of the consortiums is made up of India’s Jaiprakash Associates and Israel’s TowerJazz with IBM as technology partner. It has proposed a plant near New Delhi at a cost of 263 billion rupees ($4 billion), the government said.

The second comprises Hindustan Semiconductor Manufacturing Corp and Malaysia’s Silterra with STMicroelectronics as the technology partner. The proposed investment is 252.5 billion rupees for a plant in the western state of Gujarat.

A Jaiprakash spokesman declined comment. The Indian units of IBM and STMicroelectronics said they would not give an immediate comment.

The technology providers must take at least a 10 percent stake in the projects, while the Indian government would get an 11 percent stake in each project. The government would part-fund the investments through interest-free loans for 10 years.

India’s demand for electronics products is forecast to rise nearly 10 times during this decade to reach $400 billion by 2020, causing policy makers to worry that electronics imports, with no major local manufacturing, could exceed those of oil.

As sales of smartphones, computers and television sets surge, annual imports of semiconductors are expected to touch $50 billion by 2020 from $7 billion in 2010, according to an Indian government presentation.

Typically, semiconductor foundries take about two years to be up and running, Gartner’s Ramamoorthy said. Meanwhile global companies such as Taiwan Semiconductor Manufacturing Co are already exploring wafer technologies that are much more advanced than those India is proposing to make, he said. ($1 = 63.3925 Indian rupees)

Dell to focus on expanding sales capacity, emerging markets

(Reuters) – Dell Inc Chief Executive Michael Dell said in an interview with CNBC on Friday the focus of the company, which he is taking private, will include expanding sales capacity and growing in emerging markets and tablets.

Dell, who prevailed in a battle with activist investor Carl Icahn for control of the computer company, also said he will shift from a quarterly focus to a “five-year, ten-year focus.”

He does not foresee a Dell entry into the cell phone market.

J&J kicks off $5 billion clinical diagnostics unit sale: sources

By Soyoung Kim and Greg Roumeliotis and Jessica Toonkel

NEW YORK | Fri Sep 6, 2013 6:20pm

(Reuters) – Johnson & Johnson has launched a sale process for its Ortho Clinical Diagnostics unit, which makes blood screening equipment and laboratory blood tests and could fetch around $5 billion, three people familiar with the matter said on Friday.

J&J has asked JPMorgan Chase & Co to run the sale and is preparing to send detailed financial information in coming weeks to potential buyers, including some of the world’s largest private equity firms and a number of healthcare companies, the people said.

Early estimates suggest the unit’s earnings before interest, tax, depreciation and amortization are between $400 million and $500 million, suggesting a possible valuation of roughly $5 billion, the people said.

The unit, whose tests are considered older and less profitable than modern molecular diagnostics that examine gene mutations for signs of disease, has annual sales of about $2 billion.

The people asked not to be identified discussing details of the process. J&J declined to comment, while a JPMorgan spokeswoman had no immediate comment.

Healthcare conglomerate J&J said in January it would explore strategic alternatives for the unit and cautioned that the process could take anywhere from about 12 to 24 months.

Industrial and healthcare conglomerates General Electric and Danaher Corp. are likely to take a serious look at bidding for the J&J business, said one of the sources and another person who had heard about the sale process.

GE declined to comment. A call to Danaher was not immediately returned.

J&J’s decision to divest the division comes as drugmakers are shedding businesses and cutting costs due to overseas price controls and pressure on payments from insurers and the government. Pfizer Inc, for instance, just spun off its animal health products business, and Abbott Laboratories split off its branded drugs unit early this year.

Ortho Clinical Diagnostics, whose revenue growth has been relatively flat, is No. 5 in the clinical diagnostics market, as measured in sales. Typically, J&J’s businesses rank first or second in their respective markets.

Clinical diagnostics are less attractive than molecular diagnostics, which could see strong revenue growth in coming years as examination of genes helps doctors steer patients to appropriate treatments.

But some analysts, including Les Funtleyder of Poliwogg, have said private equity buyers might be interested in the stable cash flow the J&J unit could provide.

(Reporting by Soyoung Kim, Greg Roumeliotis, Jessica Toonkel and Ransdell Pierson in New York; Editing by Gerald E. McCormick, Bernard Orr)

System Lets Surgeons Image the Brain While They Operate on It

A real-time MRI system can help surgeons perform faster and safer brain operations.

By Susan Young

A new system for visualizing the brain during surgery is helping neurosurgeons more accurately diagnose and treat patients and is even allowing them to perform some procedures that until now have been extremely difficult or even impossible.

Neurosurgeons can use the imaging technology during surgeries that require small objects—biopsy needles, implants, or tubes to deliver drugs—to be placed at precise locations in the brain. The system provides live magnetic resonance images (MRI) that allow surgeons to monitor their progress during the operation.

Typically, neurosurgeons use an MRI before a surgery to plan the trajectory of the operation, based on the brain’s position relative to a guidance frame that’s screwed onto the patient’s skull, says Robert Gross, a neurosurgeon at Emory University. But the brain can shift before the actual surgery takes place, he says, rendering that MRI inaccurate. To check on what’s happening inside a patient’s skull, doctors have to stop the surgery and perhaps even move the patient out of the operating room.

To address these issues, researchers have been developing new neurosurgical guidance systems that can work with the strong magnets and electronic signals used by MRI scanners. The medical-device company Medtronic, for example, offers a real-time MRI system for neurosurgery. But Gross says the most useful system on the market is offered by MRI Interventions, a medical device company based in Memphis, Tennessee.

How Microsoft Might Benefit from the Nokia Deal

If it can cleverly blend hardware and software in new ways, reach new markets, and take advantage of Nokia’s patent portfolio, Microsoft’s billions could be well spent.

By David Talbot

Nokia might have gotten the better of Microsoft this week, selling its once-dominant handset business to the software giant and entering into a broad patent agreement in a deal worth $7.1 billion. Microsoft’s stock price took a big hit. And no wonder: given the declining state of Nokia’s business, the deal seemed like a desperate attempt to prop up the largest manufacturer of phones that run Windows before it went under or switched to Google’s Android system.

But there are at least four ways Microsoft might come out a winner in the long run. With less than 4 percent of the global market for smartphone operating systems, Microsoft has little more to lose and a lot, potentially, to gain as it tries to claw market share from Android and Apple.

Here’s how Microsoft could benefit significantly.

1. Skype, the dominant voice-over-Internet service owned by Microsoft, could become more powerful. Microsoft can now push Skype across its Xbox gaming/TV console, Nokia devices, Surface tablets, all PCs, and Android and Apple phones. That’s more of the world than Apple or Google can address with their FaceTime or Hangouts chat services. Skype is being steadily integrated more deeply into Windows; it will be preinstalled in Windows 8.1 on the desktop. It could become a way for Microsoft to compete with conventional cellular carriers on voice and messaging, where there’s money to be made.

More generally, Microsoft might now be able to do something that Apple and Google haven’t done or can’t: integrate mobile devices and desktops into a more seamless experience. Google is limited in this regard because it doesn’t control PCs, although it is doing things like putting Google Now, the company’s intelligent personal assistant, into its main website. Apple has required users to use iTunes, and more recently iCloud, to sync their phones with their laptops, but perhaps Microsoft can use Skype and other apps as the basis of a simpler and more compelling multi-device experience.

2. Leaving aside the patents that Microsoft acquired, Nokia retains ownership of some of the most valuable and fundamental patents—known as “utility patents” in the wireless industry. While Microsoft didn’t buy those, it did license all of them for 10 years, giving it free rein that rival phone makers won’t necessarily have.

Over the last two decades or so, Nokia spent more than $55 billion on R&D and made acquisitions that gave it a war chest of 30,000 patents. Many of these cover fundamental operations, including ones underpinning wireless standards like GSM. One of Nokia’s most valuable patents is one describing a “method for mapping, translating, and dynamically reconciling data.” This is now fundamental to syncing calendars on different devices. And now that it is free of its handset business, Nokia can focus more on monetizing this fundamental IP—in court, if necessary. Nokia will no longer have to worry about countersuits alleging infringements by technologies in its phones, since it will no longer be making or selling any. And if Nokia does decide to go for the jugular, Microsoft’s neck will be protected.

Nokia has shown aggressiveness before. In a 2009 suit against Apple, Nokia claimed that the iPhone maker had violated 46 Nokia patents, on everything from wireless standards to touch-screen controls. Apple agreed to settle two years ago. Nokia gets more than $600 million every year in patent-related revenue.

Nokia’s announcement included this clue: the company plans to “expand its industry-leading technology licensing program, spanning technologies that enable mobility today and tomorrow.”

3. Microsoft may gain a deeper store of research knowledge to draw from. Nokia spent lavishly on R&D—including more than $5 billion last year alone—and had 27,551 R&D employees at the end of 2012. It’s true that the value of their collective output is dubious: Nokia R&D failed to produce technologies that could dent the dominance of Apple and Samsung in the smartphone business.

Oskar Sodergren, a Nokia spokesman, says that while the Nokia Research Center stays with Nokia, all R&D staff related to mobile products and smartphones will transfer to Microsoft. Presumably, these are the people who gave us things like “Morph Concept” technologies—in which a phone or watch can be made flexible and transparent, with built-in solar-power recharging and integrated sensors.

A Microsoft research spokeswoman, Chrissy Vaughn, says the company was not elaborating on how the research units might merge. But Steve Ballmer, Microsoft’s outgoing CEO, said in a press call yesterday that “Finland will become the hub and the center for our phone R&D.” The two companies have said that all of Nokia’s 4,700 Finnish employees who now work in devices and services will become Microsoft employees.

4. The smartphone business is still ramping up quickly, which means there remains a lot of opportunity, especially in international markets that are far from saturated. Nokia sells more than 200 million phones annually—and most of them are not in Europe or North America. Although Microsoft will have to compete with legions of low-cost manufacturers, it might be able to use Nokia’s international manufacturing and distribution to its advantage—assuming, of course, that it can do something truly novel on the phones themselves.