


But there are limits that could hold wind back from growing

A new study from Harvard University's School of Engineering and Applied Sciences says that the generating capacity of large-scale wind farms isn't quite as high as scientists previously thought.

The study was led by Harvard applied physicist David Keith, who showed that we may not have access to as much wind power as once thought. Keith is an internationally renowned expert on climate science.

According to Keith's study, each individual wind turbine creates a "wind shadow," a region where the air is slowed by the drag of the turbine's blades. Wind farms are laid out to pack as many turbines as possible into an area while leaving just enough spacing between them to limit the effect of these wind shadows.

However, the larger these wind farms become, the more the turbines' wind shadows interact, and the more regional-scale wind patterns come into play. Keith said previous estimates of the generating capacity of large-scale wind farms ignored both this added drag and these wind patterns.

Keith's study said that the generating capacity of large-scale wind farms larger than 100 square kilometers could peak at anywhere from 0.5 to 1 watt per square meter. Prior estimates put the figure at 2 to 7 watts per square meter.
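To put those power densities in perspective, here is a rough back-of-the-envelope sketch in Python. The only inputs are the figures quoted above plus an assumed 100-square-kilometer farm (the size at which the study says the new limits apply); it is an illustration, not a calculation from the paper.

```python
# Illustrative only: total output of a hypothetical 100 km^2 wind farm
# at the power densities quoted in the article.

AREA_KM2 = 100                      # assumed farm size for illustration
AREA_M2 = AREA_KM2 * 1_000_000      # 1 km^2 = 1,000,000 m^2

for label, w_per_m2 in [("new estimate, low", 0.5),
                        ("new estimate, high", 1.0),
                        ("prior estimate, low", 2.0),
                        ("prior estimate, high", 7.0)]:
    total_mw = w_per_m2 * AREA_M2 / 1e6   # watts -> megawatts
    print(f"{label:>20}: {w_per_m2} W/m^2 -> {total_mw:,.0f} MW")
```

Under these assumptions the same 100 km^2 farm goes from a prior ceiling of roughly 200 to 700 MW down to roughly 50 to 100 MW, which is why the revised densities matter at the terawatt scales Keith discusses below.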


“If wind power’s going to make a contribution to global energy requirements that’s serious, 10 or 20 percent or more, then it really has to contribute on the scale of terawatts in the next half-century or less,” said Keith.

But there are limits that could hold wind power back from growing. Keith said that if wind generation were to exceed 100 terawatts, it would have a huge impact on global winds and, eventually, the climate -- potentially affecting the climate more than a doubling of atmospheric CO2.

“Our findings don't mean that we shouldn’t pursue wind power—wind is much better for the environment than conventional coal—but these geophysical limits may be meaningful if we really want to scale wind power up to supply a third, let’s say, of our primary energy,” said Keith. 

“It’s clear the theoretical upper limit to wind power is huge, if you don't care about the impacts of covering the whole world with wind turbines. What’s not clear—and this is a topic for future research—is what the practical limit to wind power would be if you consider all of the real-world constraints. You'd have to assume that wind turbines need to be located relatively close to where people actually live and where there's a fairly constant wind supply, and that they have to deal with environmental constraints. You can’t just put them everywhere.”

Keith concluded that we'll need to find sources for tens of terawatts of carbon-free power "within a human lifetime" in order to stabilize the Earth's climate.

“It’s worth asking about the scalability of each potential energy source—whether it can supply, say, 3 terawatts, which would be 10 percent of our global energy need, or whether it’s more like 0.3 terawatts and 1 percent," said Keith.
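As a rough illustration of what that scalability question means for wind in particular, the sketch below combines Keith's 3-terawatt figure with the study's roughly 1 watt per square meter large-farm estimate. This is an assumption-driven illustration, not a result from the paper.

```python
# Rough illustration: land area needed to generate 3 TW of wind power
# at the ~1 W/m^2 large-farm limit suggested by the study.

TARGET_TW = 3.0                     # Keith's "10 percent of global energy" figure
POWER_DENSITY_W_PER_M2 = 1.0        # upper end of the study's large-farm estimate

target_w = TARGET_TW * 1e12
area_m2 = target_w / POWER_DENSITY_W_PER_M2
area_km2 = area_m2 / 1e6

print(f"Area needed at {POWER_DENSITY_W_PER_M2} W/m^2: {area_km2:,.0f} km^2")
# Roughly 3,000,000 km^2 -- on the order of the land area of India -- which is
# why the "practical limit" question Keith raises is not an academic one.
```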

Source: Harvard University



Comments



By rudolphna on 2/28/2013 1:57:06 AM , Rating: 3
Ahhh, something I very much enjoy talking about. That is to say, why the Chernobyl disaster happened.

As this guy said.

The Soviet RBMK1000 reactors, like those used at Chernobyl, were actually quite impressive in many ways. They were cheap to build and easy to operate compared to most other reactors at the time. However, the design had some major flaws that, coupled with operators who weren't aware of those flaws because they were kept hidden, made the disaster all but inevitable.

Prior to the accident, the operators of the Chernobyl NPP were planning to run a test on Unit #4 to see how long the turbines would continue to generate enough power to run the reactor's main coolant pumps before the diesel generators kicked in. For the test, they wanted the reactor to be in a certain power range. As they decreased power by inserting the control rods, a side effect of the reactor's design meant that at lower power a neutron-absorbing fission product (xenon-135) built up and "poisoned" the core, further lowering output.

After a time, the reactor reached a dangerously low ~30 MW output, practically a shutdown state. To compensate for this, bring it back up to the roughly 700 MW they needed for the test, and combat the core poisoning, they fully retracted almost all of the 200+ control rods. The reactor's design was unusual: it used graphite as a neutron moderator, along with light water, to keep the reaction under control. It operates similarly to a boiling water reactor, but not all of the steam goes to the turbines; some of the steam and water passing through the steam separator goes back through the main coolant pumps and into the reactor.

The coolant inlet temperature greatly affected the power output of the reactor. So at the time the test began, the reactor was very unstable: the control rods were almost fully retracted, and the reaction was being kept under control mainly by the cooler inlet water, since the steam and heat were being used to drive the turbines.

When they began the test and shut down the turbine, heat was no longer being dissipated through the turbines, and the reactor's coolant inlet temperature increased dramatically, which in turn increased the reactivity. As this happened, the water in the core started to boil, generating steam bubbles, or "voids."

This is where the Soviet design was quite bad. The RBMK reactors had a very, very high positive void coefficient, unlike pretty much all reactors elsewhere, including our own here in the States. What that means is that as bubbles, or "voids," are created, the rate of reaction increases; in a reactor with a negative void coefficient, voids DECREASE the reaction. So as the water boiled, the reaction increased, which increased steam generation: a positive feedback loop, as the toy sketch below illustrates.
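To illustrate that point about the sign of the void coefficient, here is a purely toy numerical sketch. The coefficients, the linear feedback law, and the step count are all invented for illustration and are not a model of real RBMK physics; the sketch only shows how the sign of the feedback decides between damping and runaway.

```python
# Toy feedback-loop sketch: the sign of the "void coefficient" decides whether a
# small power excursion damps out or runs away. All numbers are invented.

def simulate(void_coefficient, steps=20):
    power = 1.0      # normalized power
    history = []
    for _ in range(steps):
        voids = 0.1 * power                 # more power boils more water -> more voids
        power += void_coefficient * voids   # voids feed back into the reaction rate
        power = max(power, 0.0)
        history.append(power)
    return history

positive = simulate(+0.5)   # RBMK-like sign: voids increase the reaction
negative = simulate(-0.5)   # negative-coefficient sign: voids decrease the reaction

# In this toy model the positive case grows without bound while the negative
# case damps back down after the initial perturbation.
print("positive void coefficient:", [round(p, 2) for p in positive[:6]], "...")
print("negative void coefficient:", [round(p, 2) for p in negative[:6]], "...")
```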

The power output of the reactor increased to ridiculous levels, and the unit operators SCRAMed the reactor, inserting the control rods. This is where the other major flaw of the RBMK reactors lay. The control rods had graphite tips, which cleared water out of the channels as they were inserted. This had the unintended effect of actually increasing the reaction rate in the bottom of the core, which was what caused the initial steam explosion inside the core.

That explosion damaged and cracked the control rod channels, preventing the rods from fully inserting; they were only partway in at the time. The reaction continued to increase as the water boiled away. The pressure inside the vessel grew and grew until the 1,000-ton reactor vessel head literally blew off, damaging the building and exposing the reactor core to the outside. In the explosion, tons of highly radioactive fuel and graphite moderator were scattered and caught fire, sending radiation into the atmosphere.

If they had figured out what happened sooner, and had evacuated sooner, it wouldn't have been so bad. But the dosimeters/Geiger counters they had on hand at the time only measured up to 3.6 roentgens/hr. So they decided that was as high as the radiation was and chose not to inform the public. It wasn't until later, after firemen and others fighting the fire had received lethal doses of radiation, that they found out the radiation levels were actually over 10,000 roentgens/hr.

And thus you have the Chernobyl disaster. I'm not a nuclear engineer myself, but I have a very good ability to understand and comprehend how things work by reading up on them. It wouldn't have been nearly so bad if the reactor operators had known about the flaws, but they were told the design was perfect and that there couldn't be any flaws. The RBMK1000 was perfect, as far as they knew.


"If they're going to pirate somebody, we want it to be us rather than somebody else." -- Microsoft Business Group President Jeff Raikes













