After Watson, IBM Looks to Build ‘Brain in a Box’

By  Jennifer Booton

Imagine Watson with reason and better communication skills.

The Watson supercomputer may be able to beat reigning Jeopardy champions, but scientists at IBM (IBM) are developing new, super-smart computer chips modeled on the human brain — and those might ultimately prove much more impressive.

These new silicon “neurosynaptic chips,” which will run on about the same amount of energy it takes to power a light bulb, will fuel a software ecosystem that researchers hope will one day enable a new generation of apps mimicking the human brain’s abilities of sensory perception, action and cognition.

It’s akin to giving sensors like microphones and speakers brains of their own, letting them process the data they consume through trillions of synapses and neurons and draw intelligent conclusions from it.

IBM’s ultimate goal is to build a chip ecosystem with ten billion neurons and a hundred trillion synapses that consumes just a kilowatt of power and occupies less space than a two-liter soda bottle.

“We are fundamentally expanding the boundary of what computers can do,” said Dharmendra Modha, principal investigator of IBM’s SyNAPSE cognitive computing project. “This could have far reaching impacts on technology, business, government and society.”

The researchers envision a wave of new, innovative “smart” products derived from these chips that would alter the way humans live in virtually all walks of life, including commerce, logistics, location, society, even the environment.

“Modern computing systems were designed decades ago for sequential processing according to a pre-defined program,” IBM said in a release. “In contrast, the brain—which operates comparatively slowly and at low precision—excels at tasks such as recognizing, interpreting, and acting upon patterns.”

These chips would give way to a whole new “cognitive-type of processing,” said Bill Risk, who works on the IBM Research SyNAPSE Project, marking one of the most dramatic changes to computing since the traditional von Neumann architecture, with its sequential processing of binary data, was adopted in the mid-1940s.

“These operations result in actions rather than just stored information, and that’s a whole different world,” said Roger Kay, president of Endpoint Technologies Associates, who has written about the research. “It really allows for a human-like assessment of problems.”

It is quite a complex system, and it is still in the early stages of development. But IBM researchers have rapidly completed the first three phases of what will likely be a multi-stage project, collaborating with a number of academic partners and collecting some $53 million in funding. They are hopeful the pace of advancement will continue.

Modha cautioned, however, that this new type of computing wouldn’t serve as a replacement for today’s computers but as a complementary sibling, with traditional digital architecture serving as the left brain, with its speed and analytic ability, and the next era of computing acting as the right hemisphere, operating much more slowly but more cognitively.

“Together, they help to complete the computing technology we have,” Modha said.

Providing a real-life example of how their partnership might one day work, Kay imagined a medical professional performing triage on a patient.

Digital computers would provide basic functions such as the patient’s vitals, while the cognitive computer would cross reference data collected at the scene in real-time with stored information on the digital computer to assess the situation and provide relevant treatment recommendations.

“It could be a drug overdose or an arterial blockage, a human might not know which is which [from the naked eye],” explains Kay. “But a [cognitive] computer could read the symptoms, reference literature, then vote using a confidence level that can kind of infer which one is more likely the case.”
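Kay’s description of a computer that can “vote using a confidence level” amounts to scoring competing hypotheses against observed evidence. Below is a minimal sketch of that idea; the condition names, symptom profiles and scoring rule are invented for illustration and reflect no actual cognitive system:

```python
# Hypothetical sketch: score competing diagnoses by how well the observed
# symptoms match each condition's known symptom profile, then report the
# winner with a normalized confidence. All data here is invented.

def confidence_vote(observed, profiles):
    """Return (best_condition, confidence) given a set of observed symptoms
    and a dict mapping condition -> set of characteristic symptoms."""
    scores = {}
    for condition, symptoms in profiles.items():
        overlap = len(observed & symptoms)
        scores[condition] = overlap / len(symptoms)
    total = sum(scores.values()) or 1.0
    best = max(scores, key=scores.get)
    return best, scores[best] / total

profiles = {
    "drug overdose":     {"dilated pupils", "slow breathing", "drowsiness"},
    "arterial blockage": {"chest pain", "slow breathing", "sweating"},
}
observed = {"chest pain", "sweating", "slow breathing"}
print(confidence_vote(observed, profiles))
```

With the observations above, the blockage profile matches fully while the overdose profile matches only one symptom, so the vote favors “arterial blockage” with roughly 75% of the total score.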

Endless Possibilities Seen

The IBM researchers have put together building blocks of data to make cognitive applications easier to build and to create an ecosystem for developers. The data come in the form of “corelets” that each serve a particular function, such as the ability to perceive sound or colors.

So far they have developed 150 corelets, with the intention of eventually allowing third parties to submit more after rigorous testing. Eventually, corelets could be used to build “real-life cognitive systems,” researchers hope.

To help get the ball rolling, the researchers envisioned a slew of product ideas that would make perfect use of these genius chips in real-world functions.

Here are just a few:

-An autonomous robot dubbed “Tumbleweed” could be deployed for search and rescue missions in emergency situations. Researchers picture the sphere-shaped device, outfitted with “multi-modal sensing” via 32 mini cameras and speakers, surveying a disaster and identifying people in need. It might be able to communicate with them, letting them know help is on its way or directing them to safety.

-For personal use, low-power, lightweight glasses could be designed for the near blind. Using these chips, which would recognize and analyze objects through cameras, the glasses could plot a route through a crowded room full of obstacles, directing the visually impaired through speakers.

-Putting these chips to use in a business function, the researchers foresee a product they’ve dubbed the “conversation flower” that could process audio and video feeds on conference calls to identify specific people by their voice and appearance while automatically transcribing the conversation.

-Giving a glimpse into its potential use in the medical world, a thermometer could be developed that not only measures temperature but also detects smells, recognizing the presence of certain bacteria by their unique odor and giving an alert if medical attention is needed.

-In an environmental function, researchers could see this technology being outfitted on sensor buoys, monitoring shipping lanes for safety and environmental protection.

Given the fluid nature of the project, it’s unclear how long it will take for the first generation of cognitive computers to find real-world applications, but Modha and his team are optimistic it will be sooner rather than later.

“We need cognitive systems that understand the environment, can deal with ambiguity and can act in a real-time, real-life context,” Modha said. “We want to create a brain in a box.”

Report: Samsung Smart Watch Coming in Three Weeks

By Jennifer Booton

Samsung is expected to unveil its Galaxy Gear smartwatch powered by Google’s (GOOG) Android operating system on Sept. 4 just ahead of the IFA consumer electronics show in Berlin, according to a report by Bloomberg.

The watch, which would be one of the first wearable technologies unveiled, is expected to allow users to make calls, access emails and surf the web.

The Galaxy Gear won’t use flexible display technology, according to one of Bloomberg’s anonymous sources, but the company is working on developing a “bendable screen.”

The device will likely be unveiled at the same time as Samsung’s next-generation tablet-phone hybrid the Galaxy Note 3.

Samsung told FOX Business it doesn’t comment on “rumors or speculation.”

Device makers, including Apple (AAPL), are racing to release the first wristwatch-like device, which many believe will be one of the first major steps into a new generation of wearable mobile devices.

Google Glass is also a type of wearable technology.


Facebook testing one-click checkout for mobile shopping

By Julianne Pepitone

NEW YORK (CNNMoney)

Add Facebook to the large list of companies hoping to crack the mobile payments nut.

Facebook said Thursday that it will launch an experimental program that allows users to store their credit card information on the site. The social network will then automatically fill in relevant billing information when users buy products on partners’ mobile apps.

Facebook (FB) described the experiment as “a very small test” with one or two initial merchant partners. Tech blog AllThingsD was the first to report on the program.

For now, Facebook made clear that its aspirations are limited to providing one-click access for checkout. It won’t actually process payments like eBay’s (EBAY, Fortune 500) PayPal or Google Checkout.

Still, by wading into mobile payments, Facebook is tackling a hot but somewhat inscrutable space. Everyone seems to agree that customers don’t want to type out a bunch of billing information on teeny phone screens, but creating a frictionless alternative has remained elusive.

“It’s a very competitive space,” said Evercore Partners analyst Ken Sena. “I think there is the potential to get excited about it too early, but we’re still very much in the experimental phase.”

Those experimenters include PayPal, Amazon (AMZN, Fortune 500), Google, Square and others — and even the biggest of those companies have had trouble gaining mainstream interest.

Google (GOOG, Fortune 500) launched its Wallet service in 2011, and it hasn’t grown much since. Amazon Payments, a system similar to Facebook’s experiment, is popular with small businesses and the crowdfunding site Kickstarter, but it doesn’t necessarily target mobile payments or consumers specifically. Square is the most successful of many startups in the field, dabbling in everything from payments processing to special cash registers — but Square has failed to take off broadly with consumers.

A huge amount of online retail is still routed through Amazon, and PayPal continues to dominate processing for non-Amazon purchases. The mobile payments revolution has yet to come.

Facebook is starting out slow, but it has a massive user base of more than a billion — and a ton of data about them. Although Facebook didn’t preview whether it has any future plans to expand its mobile payments offering, the opportunity could be big.

Sony-Viacom deal won’t make a la carte TV a reality

By Julianne Pepitone

NEW YORK (CNNMoney)

Attention cable and satellite subscribers: A new option for your TV service could be coming soon.

Sony has reportedly struck a preliminary deal to carry Viacom (VIA) content on its upcoming pay-TV offering — and the twist is that the shows would air at the same time that traditional cable or satellite customers can view them.

If the tentative deal, reported by the Wall Street Journal, goes through, it would help Sony (SNE) stand out from other companies trying to launch Internet-based TV subscription services, such as Intel (INTC, Fortune 500). Apple has also long been rumored to be releasing its own “iTV” television, but its negotiations with cable providers have reportedly stalled. Becoming a cable competitor, as Sony appears to be doing, could be a path forward for Apple (AAPL, Fortune 500).

But a content coup for Sony doesn’t necessarily mean consumers will win out.

Unless Sony is able to upend the cable industry completely (good luck with that), you still won’t be able to start picking and choosing channels a la carte rather than paying for a raft of unwatched networks. Your cable bills won’t go down.

Instead, Sony and other Internet-based pay-TV services will probably look a lot like … regular pay-TV.

Sony, whose service would stream through its smart TVs and PlayStation gaming consoles, would simply become another alternative to Comcast, Dish Network and DirecTV. That’s because the TV business is deeply entrenched. Networks and cable providers are dependent on each other to survive, and they have no financial incentive to stop making a ton of money.

While networks like Viacom have proven willing to play ball with new entrants on the pay-TV scene, they have no motivation to offer Sony anything other than what they give regular cable providers.

Viacom offering Sony or Intel an un-bundled set of channels, for example, would alienate its bigger partners in the cable industry. Plus, it would be costly: a recent report from Needham Insights showed that switching to an a la carte model would cut cable and network revenue in half, to about $70 billion.

And so Sony is highly unlikely to cut special or cheap deals that will make a serious difference for TV customers. While more choice is always a good thing for consumers, it’s likely this choice will look very similar to what’s already out there.

Forget the Hyperloop, Brace for Supersonic Travel

By Jennifer Booton

SpaceX and Tesla (TSLA) founder Elon Musk seems pretty optimistic that a train will one day be able to travel at Mach speeds for short distances, but what about those longer trips that take many hours?

For New Yorkers who want to take the quick 3,000-mile trip to Los Angeles or the 7,000-mile journey to Tokyo, a solution faster than the speed of sound might soon be on the horizon.

Quiet supersonic jets, perhaps ones Musk builds himself one day, have attracted fresh attention and investment over the last few years from some major players, including NASA, Gulfstream, Boeing (BA) and Lockheed Martin (LMT).

“A quiet supersonic plane immediately solves every long distance city pair without the need for a vast new worldwide infrastructure,” says Musk, who’s responsible for Tesla electric cars and the Dragon capsule, the first commercial spacecraft to visit the International Space Station.

While Musk’s Hyperloop vacuum-train idea revealed earlier this week might solve travel problems between high-traffic cities located less than 900 miles apart, the entrepreneur says supersonic air travel will be “much faster and cheaper” for longer distances, knocking several hours from thousand-mile trips and making long-haul business travel more feasible than ever before.

No commercial supersonic flight has operated since the Concorde’s final run in 2003, after 27 years of transporting people across the ocean from London’s Heathrow and Paris’s Charles de Gaulle to New York’s JFK in half the time it takes today.

But NASA is focused more than ever on reducing the sonic boom – one of the main impediments to supersonic over-land air travel because of the nuisance it poses to people on the ground – while others are working on making supersonic jets more efficient and less dependent on fuel.

“I think one day we will see more supersonic aircraft,” says Dimitri Papamoschou, a professor of mechanical and aerospace engineering at the University of California, Irvine.

Money Problems

Of course, overcoming the looming hurdles of next-generation supersonic transport requires billions of dollars, and a lack of funds has killed a number of these projects over the last 40 years.

A “cheap supersonic transport” between L.A. and N.Y. may not be available until “very far in the future,” meaning 25 to 50 years out, estimates Papamoschou, who has studied the sound produced by propulsion engines.

If money weren’t a problem, prototypes would begin popping up within the decade for larger supersonic transports, says Dr. Gecheng Zha, the director of the Aerodynamics and Computational Fluid Dynamics (CFD) Lab at the University of Miami.

Actual operation of a supersonic passenger jet big enough to seat as many as Boeing’s carbon composite 787 Dreamliner would take at least another decade beyond that, says Zha, whose “Supersonic Bidirectional Flying Wing” idea was awarded a $100,000 grant in 2012 from NASA.

“We can build one now but who is going to buy it?” asks Papamoschou. “The economics right now just don’t work out.”

The World’s Fastest Business Trip

But there are companies, including Gulfstream, Aerion and Supersonic Aerospace International, as well as NASA engineers and scientists, who are working on concepts for much smaller supersonic business jets, and those could be available much sooner.

Aerion, which has been partnering with NASA’s Dryden Flight Research Center for the last decade, defines its next-generation business jet with Mach 1.6 capabilities as an “evolutionary solution with revolutionary results.”

The “virtual visualization” of Aerion’s supersonic business jet – which it says would feature demonstrated wing technology and proven Pratt & Whitney propulsion systems — is based on “concrete data collected over years of testing and revision of the model.” The manufacturer hopes its development will influence “all general aviation flight to come – supersonic and subsonic.”

The 8-to-12-passenger Aerion SBJ is expected to be brought to market through a joint venture with an “established aircraft manufacturer” by the end of the decade, Aerion says.

While Aerion didn’t say which jet maker would be chosen — both Boeing and Airbus have done work on supersonic jets in the past — Boeing over the last two years has produced design studies of a concept it has dubbed Icon II, which it says can carry 120 passengers at up to Mach 1.8 (more than 1,000 miles per hour) for 5,000 nautical miles.

Boeing did not immediately respond to a request for comment regarding this article, but in an earlier study, the Chicago-based jet maker acknowledged that “advanced technologies can reduce fuel burn enough that a supersonic aircraft could be viable economically and environmentally, in multiple markets.”

Meanwhile, Supersonic Aerospace International’s 20-passenger QSST-X “virtually boomless” supersonic aircraft designed by Lockheed Martin is expected to have a range of more than 5,000 miles at Mach 1.6. The company touts both its speed and quietness, claiming it can catapult passengers from New York to Moscow in 4.5 hours, about half the time it takes today.

Of course, corporate and wealthy customers looking to travel at supersonic speeds will have to pay a pretty penny. Papamoschou sees supersonic transport selling at a premium to subsonic flights even if the costs to operate them come down as technologies improve.

“For them, time is money and they’re probably willing to pay a premium,” he said. “But for ordinary passengers, it will take quite a bit of time until we see something that’s considerably faster than today’s airplanes.”

Aerion says it has received letters of intent for roughly 50 aircraft from a mix of individuals and corporations around the world. The orders remained “largely intact through the recession,” which Aerion says reinforces “the market demand.”

IBM Scientists Show Blueprints for Brainlike Computing

IBM researchers unveil TrueNorth, a new computer architecture that imitates how a brain works

By Aviva Hope Rutkin

To create a computer as powerful as the human brain, perhaps we first need to build one that works more like a brain. Today, at the International Joint Conference on Neural Networks in Dallas, IBM researchers will unveil a radically new computer architecture designed to bring that goal within reach. Using simulations of enormous complexity, they show that the architecture, named TrueNorth, could lead to a new generation of machines that function more like biological brains.

The announcement builds on IBM’s ongoing projects in cognitive computing. In 2011, the research team released computer chips that use a network of “neurosynaptic cores” to manage information in a way that resembles the functioning of neurons in a brain (see “IBM’s New Chips Compute More Like We Do”). With TrueNorth, the researchers demonstrate a way to use those chips for specific tasks, and they show that the approach could be used to build, among other things, a more efficient biologically inspired visual sensor.

“It doesn’t make sense to take a programming language from the previous era and try to adapt it to a new architecture. It’s like a square peg in a round hole,” said Dharmendra Modha, lead researcher. “You have to rethink the very notion of what programming means.”


In a series of three papers released today, Modha’s team details the TrueNorth system and its possible applications.

Most modern computer systems are built on the von Neumann architecture—with separate units for storing information and processing it sequentially—and they use programming languages designed specifically for that architecture. Instead, TrueNorth stores and processes information in a distributed, parallel way, like the neurons and synapses in a brain.

Modha’s team has also developed software that runs on a conventional supercomputer but simulates the functioning of a massive network of neurosynaptic cores—with 100 trillion virtual synapses and two billion neurosynaptic cores.

Each core of the simulated neurosynaptic computer contains its own network of 256 “neurons,” which operate using a new mathematical model. In this model, the digital neurons mimic the independent nature of biological neurons, developing different response times and firing patterns in response to input from neighboring neurons.
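The digital neurons described behave much like the classic leaky integrate-and-fire model from computational neuroscience. The following is a minimal sketch of that standard model, with assumed leak and threshold parameters; IBM’s actual neuron equations are more elaborate:

```python
# Minimal leaky integrate-and-fire neuron: a textbook simplification of
# the kind of spiking model described, not IBM's actual neuron math.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input current each tick, decay ("leak") the membrane
    potential, and emit a spike (1) whenever it crosses threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0          # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady input of 0.4 charges the neuron until it fires, then resets.
print(simulate_lif([0.4] * 6))   # → [0, 0, 1, 0, 0, 1]
```

Changing the leak rate or threshold produces the different response times and firing patterns the researchers describe.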

“Programs” are written using special blueprints called corelets. Each corelet specifies the basic functioning of a network of neurosynaptic cores. Individual corelets can be linked into more and more complex structures—nested, Modha says, “like Russian dolls.”

TrueNorth comes with a library of 150 pre-designed corelets, each for a particular task. One corelet can detect motion, for example, while another can sort images by color. Also included with TrueNorth is a curriculum to help academics and, eventually, customers learn to use the system.
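The nesting of corelets “like Russian dolls” can be illustrated with plain function composition. In this sketch the corelet names and behaviors are invented; real corelets specify networks of neurosynaptic cores, not Python callables:

```python
# Illustrative sketch of corelet-style composition: each toy "corelet"
# is one function, and corelets nest into larger ones.

def compose(*corelets):
    """Chain corelets so the output of one feeds into the next."""
    def pipeline(signal):
        for corelet in corelets:
            signal = corelet(signal)
        return signal
    return pipeline

def detect_motion(samples):
    """Toy corelet: threshold raw sensor samples into spikes."""
    return [1 if s > 0.5 else 0 for s in samples]

def count_events(spikes):
    """Toy corelet: count active spikes."""
    return sum(spikes)

# Nest the two corelets into a larger one, "like Russian dolls."
motion_counter = compose(detect_motion, count_events)
print(motion_counter([0.1, 0.7, 0.9, 0.3]))   # → 2
```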

Karlheinz Meier, co-director of the European Union’s Human Brain Project, says that untraditional computing architectures like TrueNorth aren’t meant as a replacement for existing devices but as gateways into entirely new markets for technology. They might, for example, be used to solve some problems involving big data that the traditional von Neumann approach cannot untangle.

“If you look at which architecture can already [solve these problems] today, it’s the brain,” says Meier. “We learn from data. We do not have predetermined algorithms. We are able to make predictions and causal relationships even in situations we have never seen before.”

For example, the researchers hope to use TrueNorth to develop systems as powerful as human vision. The brain sorts through more than one terabyte of visual data each day but requires little power to do so. IBM and iniLabs, a partner company in Zurich, plan to involve TrueNorth in the development of a visual sensor.

The team envisions the technology one day making its way into everyday machines like smartphones and automobiles. They plan to continue refining the software, which is derived from a basic model of how the brain functions and is not restricted by enduring questions about how the brain really works.

“At this point, we are not wanting for more insights from neuroscience today. We are not limited by it,” says Modha. “We are extending the boundaries of what computers can do efficiently.”

Electric Therapy for Medical-Device Malware

Researchers show how to spot viruses on equipment like drug mixers and pregnancy monitors: by examining their power usage.

By David Talbot on August 9, 2013


Hospital rooms beep and flash with many devices that are increasingly getting infected with malware (see “Computer Viruses Are ‘Rampant’ on Medical Devices in Hospitals”). But for several reasons, these gadgets are often incompatible with commercial security software.

Now, new technology developed by academic researchers could catch most malware on the devices just by noting subtle changes in their power consumption. This could give hospitals a quick way to spot equipment with dangerous vulnerabilities and take the machines offline. The technology could also apply to computer workstations used in industrial control settings such as power plants.

The system, dubbed WattsUpDoc, is based on work involving Kevin Fu, who heads a research group on medical-device security at the University of Michigan and has uncovered several vulnerabilities in medical equipment. The research group tested WattsUpDoc on an industrial-control workstation and on a compounder, a machine commonly used in hospitals to mix drugs. In both cases the devices ran on modified versions of the Windows operating system.

The malware detector first learned the devices’ normal power-consumption patterns. Then it was tested on machines deliberately infected with malware. It was able to detect abnormal activity more than 94 percent of the time when it had been trained to recognize that malware, and between 84 and 91 percent of the time with previously unseen malware.
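The train-then-detect procedure described can be sketched as a simple statistical anomaly detector over power readings. This is a heavily simplified illustration, assuming a z-score test on average draw; the real WattsUpDoc system uses far richer features and trained classifiers:

```python
# Heavily simplified sketch of the WattsUpDoc idea: learn a device's
# normal power-draw statistics, then flag traces that deviate from them.
# The z-score test and threshold here are illustrative assumptions.
import statistics

def train_profile(normal_traces):
    """Summarize normal behavior as mean/stdev of average power draw."""
    averages = [sum(t) / len(t) for t in normal_traces]
    return statistics.mean(averages), statistics.stdev(averages)

def is_anomalous(trace, profile, z_threshold=3.0):
    """Flag a trace whose average draw sits far outside the normal range."""
    mean, stdev = profile
    z = abs(sum(trace) / len(trace) - mean) / stdev
    return z > z_threshold

normal = [[5.0, 5.1, 4.9], [5.2, 5.0, 5.1], [4.8, 5.0, 4.9]]
profile = train_profile(normal)
print(is_anomalous([5.0, 5.1, 5.0], profile))   # normal-looking trace
print(is_anomalous([7.5, 8.0, 7.8], profile))   # malware-like extra load
```

The appeal of the approach is exactly what the paragraph above describes: nothing runs on the medical device itself, only its power line is observed.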

The technology, which is scheduled to be presented at a conference next week, “highlights a novel way of monitoring,” says John Halamka, CIO of Beth Israel Deaconess Medical Center in Boston.


The next step, says Fu, is to do far more field testing. It is likely to be a year or more before the device could be commercialized, he adds.

The eventual goal is for the technology to alert hospital IT administrators that something is amiss, even if the exact virus is never identified. That’s important, because there are hundreds of thousands of medical devices in the field that probably won’t get changed to address their underlying vulnerabilities, says Shane Clark, a grad student at the University of Massachusetts, who works with Fu and developed the prototype. “This is about ‘We’ve got a problem right now, and it’s hard to get any weight behind policy and design changes for everything out there. So what can we do right now to improve the situation?’” Clark says.

Hospital devices such as pregnancy monitors, compounders, and picture-storage systems for MRI machines are vulnerable to infection because they are typically connected to an internal network that is, in turn, connected to the Internet. In June the U.S. Food and Drug Administration warned that malware was a growing problem and encouraged device makers to update software.

The FDA said that no known injuries had resulted from medical malware and that the computer infections were not known to be deliberately targeting medical equipment. But Clark says viruses can still inhibit medical care: “You need to mix a solution, but the compounder is running slow and keeps rebooting, or is unresponsive.”

Unfortunately, he adds, “you can’t just slap a copy of McAfee antivirus on your medical device.” That’s because even though many medical devices run Windows, they often use custom versions of the operating system that are incompatible with conventional antivirus software. And some machines can’t be loaded with these protections because their manufacturers prohibit third-party applications.

Other computer security researchers have been working on detecting malware by using power consumption as a proxy for unusual behavior (see “Tiny Changes in Energy Use Could Mean Your Computer Is Under Attack”). The key with hospital equipment is getting a very detailed profile of normal usage and being able to both detect changes and avoid false alarms.

Consumer Genetic Test Can Predict Your Drug Response

A startup called Genome Liberty is developing a consumer genetics test to gauge an individual’s ability to metabolize prescription drugs.
By Susan Young on August 13, 2013

A personal genetics startup thinks that there is one set of DNA variants that everyone should know: the ones that help determine how you respond to drugs.

Genome Liberty, a New Jersey-based startup, wants to provide a $99 test that will tell customers, based on their genetics, if they should take a nonstandard dose of a drug because their body will break it down faster or slower than most people. In some cases the test might suggest a particular drug someone should take or avoid. “The idea is to give you a card to keep in your wallet, or an iPhone app, which says which medications you shouldn’t take,” says cofounder Jeffrey Rosenfeld, a genome scientist at Rutgers University.


The company would offer these tests directly to consumers, who could then relay any relevant information to their doctors, Rosenfeld says.

Genome Liberty isn’t the first company to offer such tests. The consumer genetics company 23andMe also offers some drug response tests in its genome scan, alongside tests for things like eye color and the genetic risk for developing serious diseases. Rosenfeld says Genome Liberty wanted a more focused test. “The idea is to provide information that is usable, that you can act on,” he says.

Consumers who get the Genome Liberty drug response test would send a sample of saliva to the company’s lab. The company scans the genome for DNA variations in 11 liver enzyme genes, which are a subset of the dozens of genes encoding enzymes for drug metabolism. Enzymes in the liver process drugs and can either deactivate or activate drugs, depending on the compound. Different people carry different versions or amounts of many of these enzymes, which can affect how they respond to drugs. Some patients may process a drug more quickly, more slowly, or perhaps not at all.

Genome Liberty says that variants in those 11 enzymes can affect the activity of nearly 80 drugs in the body. The test “will tell people which medications they should take and which they should avoid based on markers in their DNA,” says Rosenfeld.
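The wallet-card idea described above boils down to a lookup from genotype to metabolizer status. The sketch below is a hypothetical illustration: the gene names are real pharmacogenes, but the variant table and dosing notes are invented and do not represent Genome Liberty’s actual test:

```python
# Hypothetical drug-response lookup: map variants in liver-enzyme genes
# to a metabolizer status and a dosing note. The table contents are
# invented for illustration only.

METABOLIZER_TABLE = {
    ("CYP2D6", "*4/*4"):    ("poor", "may need a lower dose or a different drug"),
    ("CYP2D6", "*1/*1"):    ("normal", "standard dosing"),
    ("CYP2C19", "*17/*17"): ("ultrarapid", "drug may clear too fast at standard doses"),
}

def drug_response(gene, genotype):
    """Return a wallet-card style summary line for one gene/genotype pair."""
    status, note = METABOLIZER_TABLE.get(
        (gene, genotype), ("unknown", "no guidance available"))
    return f"{gene} {genotype}: {status} metabolizer ({note})"

print(drug_response("CYP2D6", "*4/*4"))
```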

The company is using a crowdfunding site to raise money to develop its test, which is available for pre-order. The recent U.S. Supreme Court decision that limited the patent claims that companies can make on genes (see “U.S. Supreme Court Says ‘Natural’ Human Genes May Not Be Patented”) helped spur Genome Liberty to launch. “We were worried about whether we could start this company or not,” says Rosenfeld. Until that decision, he says, most genes were covered by a patent.

It is still not clear whether genetic tests sold directly to consumers will come under regulatory scrutiny. In 2010, the FDA warned 23andMe and other consumer genetics companies that their services amount to medical devices and thus need regulatory approval. But since then, the U.S. government has not come up with clear rules for these companies. Nevertheless, 23andMe applied for regulatory approval for portions of its test last year (see “Personal Genetics Company Seeks Regulatory Approval”).

Another question is whether doctors will make use of information from a consumer genetics test. Physicians don’t always trust the results of direct-to-consumer tests and may not have clear medical guidelines for how to use them (see “Why We Have a Right to Consumer Genetics”). But the connections between the liver enzyme variants and drug response are well-supported, says Rosenfeld. If a doctor doesn’t want to accept these results, he says, then “find a different doctor.”

Better Weather Analysis Could Lead to Cheaper Renewables

Predictive analytics can lower the costs associated with connecting wind and solar to the grid, says IBM.

By Martin LaMonica on August 13, 2013

Because the output from wind and solar power plants varies, they need backup—either fossil fuel plants or energy storage—to compensate for dips and spikes. But it’s rarely clear just how much the output will vary, so that backup power is often on standby even when it’s not needed.

Now IBM has developed software to address this problem. The software performs advanced data analysis that IBM hopes can improve predictions of renewables’ power output, and thus reduce the need for backup power. Using multiple data sources, including wind turbine sensors, weather forecasts, and images of clouds, the software can forecast power output as little as 15 minutes and as much as a month in advance. It’s now operating at a combined solar and wind demonstration project in Zhangbei, China.
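One simple way such a forecast can be formed is by blending persistence (recent measured output) with a weather-driven estimate. The toy sketch below uses invented weights and a crude wind/solar model; it does not represent IBM’s system:

```python
# Toy sketch of blending data sources into a short-horizon power
# forecast. The 50/50 weights, 12 m/s rated wind speed, and even
# wind/solar capacity split are all illustrative assumptions.

def forecast_output(recent_output_mw, forecast_wind_ms, cloud_cover_pct,
                    capacity_mw, w_persist=0.5, w_weather=0.5):
    """Blend persistence (recent output) with a weather-driven estimate."""
    wind_factor = min(forecast_wind_ms / 12.0, 1.0)   # 12 m/s ~ rated speed
    solar_factor = 1.0 - cloud_cover_pct / 100.0      # clouds cut solar share
    weather_est = capacity_mw * 0.5 * (wind_factor + solar_factor)
    return w_persist * recent_output_mw + w_weather * weather_est

print(forecast_output(recent_output_mw=40.0, forecast_wind_ms=9.0,
                      cloud_cover_pct=30.0, capacity_mw=100.0))   # → 56.25
```

Short horizons lean on persistence and sensor data; longer horizons lean on the weather model, which is why the system described draws on both.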


If a plant’s operators could more accurately forecast the output of renewable power sources, they’d have less reason to rely on energy storage, which is typically needed now to provide a smooth flow of power into the transmission grid. “In the industry, storage is seen as the next disruptive technology,” says Michael Valocchi, vice president in IBM’s energy and utilities consulting business. “[But] if I can really predict in this manner, it’s not that I don’t need storage, but it makes storage less important.”

Utilities often rely on specialized companies to produce wind and solar forecasts based on weather models and other meteorological data, including readings from anemometers mounted on wind turbines. But wind measurements taken at the turbines themselves are often unreliable, both because energy has already been extracted from the incoming wind and because vibrations affect the readings, says IBM researcher Lloyd Treinish, the chief scientist of IBM’s weather modeling system. For its project in China, IBM analyzed data from all the turbines together to come up with a more accurate representation of actual wind speed and direction, he says.
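One way to see why pooling data across turbines helps: any single anemometer can be thrown off by wakes or vibration, but a robust statistic over the whole farm suppresses those outliers. A minimal sketch (the field name and sample values are hypothetical, not from IBM’s data):

```python
import statistics

def estimate_wind_speed(turbine_readings):
    """Combine anemometer readings from many turbines into one
    site-wide wind-speed estimate. Individual readings are noisy
    (wake effects, vibration), so the median is more robust than
    a plain mean."""
    speeds = [r["speed_mps"] for r in turbine_readings if r["speed_mps"] >= 0]
    return statistics.median(speeds)

# One wake-shadowed turbine (2.3 m/s) barely moves the estimate:
readings = [{"speed_mps": s} for s in [7.9, 8.1, 8.0, 2.3, 8.2]]
print(estimate_wind_speed(readings))  # 8.0
```

A mean over the same readings would have been dragged down to about 6.9 m/s by the single bad sensor, which is the kind of error the cross-turbine analysis is meant to avoid.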

IBM also built a meteorological model specific to this site in northern China and installed video cameras to track the movements of clouds to inform solar forecasts. The entire data set is fed into a supercomputer to generate the forecasts.

There are a number of other efforts to improve weather forecasting via better data collection and analysis. The latest generation of wind turbines from General Electric, for example, features a control system designed to better predict power output by analyzing tens of thousands of sensor data points a second. The U.S. Department of Energy has funded a few research projects, including one at the University of California, San Diego, to capture images of clouds with special devices featuring fish-eye cameras. The project analyzes those images with algorithms to produce a prediction of how much solar power a plant might produce over the next 15 minutes.
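The fish-eye-camera approach can be caricatured in a few lines: estimate how much of the sky is cloud, then attenuate the plant’s clear-sky output accordingly. Real systems track cloud motion and fit far richer models; the linear attenuation constant below is purely illustrative:

```python
def solar_forecast_kw(clear_sky_kw, cloud_fraction, attenuation=0.75):
    """Toy 15-minute solar forecast: scale the clear-sky output by the
    fraction of sky the camera classifies as cloud. `attenuation` (how
    much output a fully overcast sky costs) is a made-up constant."""
    return clear_sky_kw * (1.0 - attenuation * cloud_fraction)

print(solar_forecast_kw(100.0, 0.0))  # 100.0 (clear sky)
print(solar_forecast_kw(100.0, 0.4))  # 70.0 (40% cloud cover)
```

The hard research problem is not this arithmetic but the inputs: classifying cloud pixels in distorted fish-eye images and projecting where the clouds will be a quarter-hour later.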

Ela notes that while utilities’ forecasts of electricity demand have grown sophisticated over the decades, their forecasts of renewable power supply remain immature.

Drawing the Line on Altering Human Minds

By NICK BILTON
In my column this week, “Computer-Brain Interfaces Making Big Leaps,” I noted that a number of researchers and scientists were coming closer to technology usually reserved for science fiction: hacking our brains to remove unwanted and sad memories.

Although the idea of deleting a memory might sound appealing to some — who doesn’t want to forget that first heartbreak? — it might have disastrous consequences for our brains. It’s one thing to digitally augment our memories with gadgets like iPhones and Google Glass; it’s something entirely different to delete or change past memories using technology.

Some readers asked if this was taking technology too far, saying such advancements cross a moral or ethical line that science should not pass.

“The human brain is intricate and a lot of damage can occur,” warned Jolan from Brooklyn in a comment on the column.

“If science wants to play with people’s thinking, then they ought to first decide about moral and ethical values of who they work for and the consequences of their actions,” wrote Mr. Magoo 5 from North Carolina.

Given today’s surveillance society, where the National Security Agency, Federal Bureau of Investigation and countless foreign governments monitor communications, connecting our brains and thoughts to the Internet might be asking for even more government trouble.

“What a mess that would be. Can you imagine N.S.A. hoovering up your thoughts from the Internet?” wrote Maurie Beck, from Encino, Calif. “You would need encryption software, but that might not be any different from software used today.”

“A hacker’s dream?” wrote another commenter. These types of hacks could start to resemble the government surveillance by “Big Brother” in George Orwell’s novel “1984.”

But beyond the surveillance and ethical implications of hacking our brains and our memories, the biggest outcry from readers came in the form of philosophical worry.

“Forgetting your mistakes can be fatal,” wrote John B, a reader from Virginia.

“If our brains are wired like computers are, then our minds will no longer have privacy,” wrote an anonymous reader. “The person I just met will be able to enter my head and know what I am thinking, possibly without me knowing. My joys and phobias would be public domain. That would make life very, very unpleasant for everyone.”

“A pacemaker is one thing. A cochlear implant sounds useful,” wrote SRSwain from Costa Rica. “A spinal cord bypass to operate prosthetic limbs, or superacute hearing and vision, but magical transformation of memories and sensoria: No thanks.”