
Producing steam to drive a turbine and generator is relatively easy, and a light water reactor running at 350°C does this readily. As the above section and Figure show, other types of reactor are required for higher temperatures. A 2010 US Department of Energy document quotes 500°C for a liquid metal cooled reactor (FNR), 860°C for a molten salt reactor (MSR), and 950°C for a high temperature gas-cooled reactor (HTR). Lower-temperature reactors can be used with supplemental gas heating to reach higher temperatures, though employing an LWR would not be practical or economic.

The DOE said that high reactor outlet temperatures in the range 750 to 950°C were required to satisfy all end user requirements evaluated to date for the Next Generation Nuclear Plant.
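For a sense of why outlet temperature matters so much, here is a back-of-envelope sketch (my numbers, not the DOE's): the ideal Carnot efficiency of any heat engine rises with the hot-side temperature. The 30°C cold sink is an assumption, and real plants run well below the Carnot limit, but the trend is the point.

```python
# Hedged sketch: ideal (Carnot) efficiency vs. reactor outlet temperature.
# The 30 C cold-sink temperature is an assumption; actual plant
# efficiencies are substantially lower than these ideal figures.

def carnot_efficiency(t_hot_c, t_cold_c=30.0):
    """Ideal efficiency of a heat engine between two temperatures (Celsius)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Outlet temperatures from the DOE figures quoted above.
for label, t in [("LWR", 350), ("FNR", 500), ("MSR", 860), ("HTR", 950)]:
    print(f"{label} at {t} C: ideal efficiency ~ {carnot_efficiency(t):.0%}")
```

Even in this idealized form, going from 350°C to 950°C raises the theoretical ceiling from roughly half to roughly three-quarters of the heat converted.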



I noticed this last night as I was looking for the other nuclear reactor form in use at some research facilities. But I had just read a very interesting article about the size, scale and weight of the turbines being run by the nuclear power industry’s large-scale reactors currently in common use. And it occurred to me that 40 tons is a lot of weight to be moving for a turbine blade, which could very well be the reason many ideas are shelved, and why a temperature of 750 – 950 degrees C is required, as noted above in the DOE’s guidelines.

That is like moving an entire rocket, within the dense space requirements of a nuclear power plant, just to turn the turbines with steam to make electricity. It was also built on a design, and on materials choices, from a time when robotic manufacturing was not operational and materials science had not yet created many of the new high-strength, lower-weight materials we can choose from now (even at those massive scales of size and extreme conditions of heat and pressure).

Here is the link to that article about the size of the turbine blades and components being placed in nuclear plant systems (and probably other power generating systems), which the industry’s fuel systems and power workhorse “sources” are required to move:


A generator rotor weighs in excess of 200 tons, according to Craig Hanson, vice president and product line manager for nuclear plant builder Babcock & Wilcox. And, for each nuclear plant, there are three to four turbine rotors. ( . . . )

In the late 1960s, designers discovered that larger forgings had better mechanical properties, requiring less welding and therefore less inspection requirements over the life of a plant. These larger forgings became a signature of Generation II plants and all others that have followed.

But, by choosing larger forgings, even the most powerful domestic steel producers, such as U.S. Steel and the now-defunct Bethlehem Steel, were shut out of the supply chain.

“In the interest of efficiency, the companies that built nuclear reactors made their reactors bigger,” says Mike Kamnikar, senior vice president for marketing and business development at The Ellwood Group, a forging group. “The biggest ingot that could be made by Bethlehem Steel or U.S. Steel in the 1970’s was roughly 380 tons. Bethlehem and U.S. Steel each had 8,000- ton presses, but the presses didn’t have enough clearance to make these big rings, which were over 200 inches in diameter.”

Four of the most complex parts of a nuclear power plant — the containment vessel, the reactor vessel components, the turbine rotors and steam generators — are made from over 4,000 tons of steel forgings, and almost none of those components are manufactured in the United States.




My Note –

No wonder it has to be 950 degrees Celsius to move the damn turbine rotors – Damn.

So, what dingleberry made the decision that we must move 200-ton rotors to make electricity? That is like making a massive flywheel out of the densest, heaviest material, such as lead, and then demanding excessive power simply to get its motion started, for no other reason than the material used for it. Maybe that made sense for 1930’s designs that were being used for 1960’s decisions and scale-ups, which took no consideration of the alloys, unique materials and composites, or manufacturing process choices we have today. If that material strength and durability could be created without weighing 200 tons – what temperature range to move it could be opened up for those systems? As if it isn’t bad enough that nuclear power is no more than a $10 billion steam kettle, the fact is – choices being made about some components are driving the requirements for its throughput power. That doesn’t even make any good sense. These people have a lot more money and intelligent resources than I have; why haven’t they redesigned the rotor materials to accommodate new choices available in the marketplace today? I don’t understand.

There are potentially system choices that could be made using other novel approaches, from geothermal sources to nuclear fusion – but not if the only temperature range accepted for them, in order to move 200-ton rotors and turbine blades, runs over 750 degrees Celsius. And I’m guessing it is the top of that range which is more desirable for those massive constructs to move efficiently in producing electricity. That is insane. The only way that would make sense would be if there were no other materials available to do that work without the weight inherent in steel. And steel isn’t the strongest material we have today, nor the least costly to produce either.
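For scale, here is a rough sketch of the energy stored in a spinning 200-ton rotor. The radius, the 1800 rpm speed (typical for a half-speed turbine on a 60 Hz grid), and the solid-cylinder shape are all illustrative assumptions of mine, not plant specifications.

```python
import math

# Hedged back-of-envelope: rotational kinetic energy of a large rotor.
# All figures below are illustrative assumptions, not plant data.
mass_kg = 200_000.0    # "200 tons", treated as roughly 200,000 kg
radius_m = 1.0         # assumed effective radius of the rotor body
rpm = 1800             # 4-pole synchronous speed on a 60 Hz grid

omega = rpm * 2 * math.pi / 60          # angular speed in rad/s
inertia = 0.5 * mass_kg * radius_m**2   # solid-cylinder approximation
energy_j = 0.5 * inertia * omega**2     # stored rotational kinetic energy

print(f"omega = {omega:.1f} rad/s")
print(f"stored energy ~ {energy_j / 1e9:.1f} GJ (~{energy_j / 3.6e6:.0f} kWh)")
```

Under these assumptions the stored energy is on the order of a couple of gigajoules, i.e. a few hundred kilowatt-hours; the weight matters less for spin-up energy than for forging, shipping, bearings and balancing.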

– cricketdiane


Well. How about that?

No wonder it is costing so much to produce these power plants, and so much to create electricity with them as well. There have to be better answers than that. And on top of it – as much as I do want the global economy stimulated – I’m an American first, and there is no advantage to American economic foundations when the large steel forgings for these items are made elsewhere, shipped by companies based elsewhere, and support every other economy besides our own as these power plants are built using unnecessary material requirements and constraints. So, with nuclear power plants, not only are we moving 200-ton rotors to get electricity, we have all the other drawbacks of the system as well – and it utilizes our money and funding to do it. Why don’t the engineers and scientists simply redesign it in a form that is more appropriate to today’s materials science menu?



This was the nifty reactor design I was looking up last night which I had found earlier (and there is another one that I remember too, which I still want to find) –

MIT Reactor Core

Nov 26th 2009 – alm11961HW

Although it seems strange, the Massachusetts Institute of Technology, in the city of Cambridge, has on its campus a small nuclear reactor core surrounded on the outside by concrete. It was built in 1958, then renovated in 1975. This reactor runs on enriched Uranium-235 and is used to generate neutrons. It does not generate pressures or temperatures high enough to produce useful heat energy.



And this one – General Fusion has a very nifty device already designed – (they’re working on it now) – Need to tell them to redesign the system’s harness and rotor materials to make it viable, obviously – I mean in the turbine system it will be required to run. Damn ridiculous – 200-ton rotors; at 2,000 pounds per ton (2,240 for a long ton), that is around 400,000 pounds – what kind of math is that?? Superfluid transport, and then have to move flywheels of lead (or actually something massively worse and more constrained than that).

Magnetized target fusion (MTF) is a relatively new approach to producing fusion power that combines features of the more widely studied magnetic confinement fusion (MCF) and inertial confinement fusion (ICF) approaches. Like the magnetic approach, the fusion fuel is confined at lower density by magnetic fields while it is heated into a plasma. Like the inertial approach, fusion is initiated by rapidly squeezing the target to greatly increase fuel density, and thus temperature. Although the resulting density is far lower than in traditional ICF, it is thought that the combination of longer confinement times and better heat retention will let MTF yield the same efficiencies, yet be far easier to build.

MTF is currently being studied mostly by the Los Alamos National Laboratory (LANL) and Air Force Research Laboratory (AFRL), and by Canadian startup company, General Fusion.
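As a rough illustration of the trade-off the excerpt above describes: the density and confinement-time values below are order-of-magnitude figures I am assuming for the sketch, not measured data. The point is that the product of density and confinement time, a key ingredient of the Lawson criterion, lands in a similar ballpark for all three approaches, which is why MTF can sit between the other two.

```python
# Hedged, order-of-magnitude sketch of the three confinement regimes.
# n and tau values are assumed illustrative magnitudes, not experiment data.
regimes = {
    # name: (plasma density n [particles/m^3], confinement time tau [s])
    "MCF (tokamak-like)": (1e20, 1.0),
    "MTF (intermediate)": (1e25, 1e-5),
    "ICF (laser-driven)": (1e31, 1e-10),
}

# A scheme can trade density against confinement time; the product is
# what the Lawson criterion constrains (very roughly ~1e20 s/m^3).
products = {name: n * tau for name, (n, tau) in regimes.items()}

for name, p in products.items():
    print(f"{name}: n*tau ~ {p:.0e} s/m^3")
```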



(I’m still thinking about the nuclear power industry’s insistence on massively scaled turbines with weight configurations discussed in the industry article near the top of this post.)

I bet there have been lots of scientists and engineers who didn’t understand why their work was being considered less than desirable by the DOE and the energy industry, when the real target being missed was this 950 degree Celsius mark required to turn rotors with the weight of a small skyscraper each. The decisions made against certain energy forms and choices would have been decided (by the DOE et al.) on the idea that everything had to fit into that existing system application (and its constraints) in order to be viable. That means geothermal sources wouldn’t have even been in the playbook, and neither would a multitude of other choices. And rather than redesign the system constraints – those massive forgings of appreciable weight in steel – into something more applicable to today’s materials, every other source possibility was simply treated as some bastardized child wasting the taxpayers’ money, even as they allocated some pittance to it.

It seems we could take the same “system” but choose a geothermal source and a manner of access to it – and make the turbine components of new materials with lower weight-to-strength ratios, durability, high structural integrity and (tested) long-term reliability characteristics – and within the next three years, place it online to provide electricity much faster than trying to create the temperatures of the sun through fusion and harness it for the power to turn 200-ton turbine rotors. Honestly.
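To put rough numbers on the weight-to-strength idea, here is a sketch using generic handbook-order magnitudes that I am assuming, not vendor data:

```python
# Hedged sketch: specific strength (strength per unit density) of a few
# candidate materials. Figures are generic illustrative magnitudes.
materials = {
    # name: (density [kg/m^3], tensile strength [MPa])
    "forged steel":        (7850, 500),
    "titanium alloy":      (4430, 900),
    "carbon-fiber (CFRP)": (1600, 600),
}

# MPa / (kg/m^3) * 1000 gives specific strength in kN*m/kg (= kJ/kg).
ratios = {name: s / rho * 1000 for name, (rho, s) in materials.items()}

for name, r in ratios.items():
    print(f"{name}: ~{r:.0f} kN*m/kg specific strength")
```

Of course specific strength is only one requirement; creep resistance, fatigue and long-term behavior at steam temperatures are part of why large steel forgings have persisted, so this is an argument for re-evaluation, not a drop-in substitution.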

And it also seems this would be the best time to consider redesigning the rotor materials, in light of the fact that companies all over the US and the world are begging for business and contracts to use the wonderful new things they know how to do now. Given the materials that are available, the new manufacturing processes and the new carbon nanomaterials, companies are desperate for the opportunity to put what they have into these applications. And they would do it right now.

I’m so sick of hearing throughout my adult life that we are forever 30 years away from doing anything. Maybe that works to get funding for more research and more research and more research, but at what point is that costing us far more than the time we continue to wait for any of that research to be available to make our lives better? I can understand why the nuclear industry may not want to make any changes to the system they have in place right now and the ways they are doing it. My guess is that they make money at every single stage of the process and may even own part of the profits of the large forging manufacturers, the mine operations that provide the raw materials, and the shippers that ship these things at every stage of the process. I wouldn’t doubt it. But some of these decisions need to be re-analyzed in light of what we know now. And many of these decisions, including cost-to-benefit ratios, have changed significantly. Probabilistic assessments failed to accurately compute even the scenarios that were contemplated – scenarios which reality has now shown us could, and in fact would, be likely – let alone to accurately depict the drawbacks and dangers that have recently been discovered.

We have at least 8 million businesses involved, easily, in some form of something to do with the energy industry. We have plenty of money throughout those industry sources, which the energy sector businesses enjoy almost without reserve or even further consideration of what they are asking to do. Surely some of those funds and that intelligent brain power could be used to resolve these issues for them – issues which include decisions made on power systems, energy sources and designs using the facts of some earlier time rather than the facts of today.

Even a decision made today that failed to account for those changes in information would be faulty. In 1960 and in 1970, the software didn’t exist to do the things we can do today, the equipment available for testing and modeling did not exist in its current forms, robotic manufacturing with computer software control systems did not exist in the same range of possibilities, and the raw materials costs, along with shipping and processing costs, were of a completely different scale when those decisions were made than they are today.

It isn’t enough to have added a few new figures to the analysis to explain the difference and make an adjusted comparison. The entire supply chain is different now; what may not have been viable in the past is, in many cases, more viable today, and existing systems, in the manner they were originally designed and costed, may be far more costly than anticipated. Those original 40-year-old facts and figures for comparison simply need to be re-analyzed – and what is remarkable is that it shouldn’t take years upon years upon years of manhours to do that. (Although I’m sure there will be a way to do it like that, where in fifty more years we are still waiting for those results – much as we are today on some things.)


950 Degrees Celsius to make a power system that works . . .

So it can turn four 200 ton turbine blade rotors with the weight of a mid-sized skyscraper each – in order to make electricity.

And, anything that can’t do that isn’t even considered with any appreciable respect and funding . . .




“They” (in business, the energy industry and government energy agencies) want scientists and engineers to create a small sun on earth using fusion, so they can power a steam kettle to make electricity, much as they are doing now through nuclear fission to heat water.

Yep, that’s about it . . .

That’s ridiculous.

That is wrong on so many levels and in so many ways as to be unbelievably misguided.

But then, who am I to say – I’m sure it must be me that is misguided about it. Having a small contained sun on our planet thirty years from now, driving a fusion reactor to make steam for some massively weighted system of components that drive turbines also massively weighted – is probably the “right way to do it” in their estimation.

Well, obviously.

And in the meantime, as our planet’s population hits the 7 billion mark, with its increased need for power generating capacity, electricity in general, fuel sources and more extensive power grids – and even as raw materials become scarcer and scarier in the harvesting of them – we are supposed to say nothing and wait another thirty years for these power options to become available, while enduring the nuclear fission based steam kettle systems we have or are building now, with all their dangers and drawbacks.

Hmmm… I don’t think so. That would have to be wrong.

– cricketdiane


Okay – on to other things –

This article (linked below) is very interesting, about a nuclear powered bomber program the US ran from the mid-1940s into the late 1950s / early 1960s. It was apparently successful, but the idea of having a flying nuclear generator overhead was unappealing – as I can imagine. Although that doesn’t seem to matter with planes that are armed with nuclear missiles or bombs on board. Hmmm. Interesting technology notes on the nuclear power systems considered and tested during the program can be found in this wonderful article about the plane –


In 1949, the program ran a series of tests, known as the Heat Transfer Reactor Experiment (HTRE), involving three types of reactors, with the purpose of determining the most efficient method of transferring energy from the reactor. After an extensive trial series, the HTRE-3 emerged as the selected transfer system. The HTRE-3 was a Direct-Cycle Configuration. In a direct cycle system, the air enters the engine through the compressor of the turbojet, then moves to a plenum intake that directs it to the core of the reactor.

At this point the air, serving as the reactor coolant, is super-heated as it travels through the core. After that stage, it goes to another plenum intake; from there the air is directed to the turbine section of the engine and eventually to the tailpipe. This configuration allowed the aircraft engine to start on chemical power and then switch to nuclear heat as soon as the core reached optimized operational temperatures, thus providing the proposed aircraft the ability to take-off and land on conventional power.

Another system considered was the Indirect-Cycle Configuration (not shown here, my note). In this configuration, the air did not go through the reactor core; instead it passed through a heat exchanger. The heat generated by the reactor is carried by liquid metal, or highly pressurized water, to the heat exchanger, heating the air on its way to the turbine.


And this one –


Neutron Activated Graphite
Lorraine McDermott, School of Materials, Manchester
Autoradiographic image of neutron activated graphite from the British Experimental Pile Zero (BEPO) nuclear reactor core. The core was operational from 1948-1968 with a final decommissioning date scheduled for 2022. Autoradiography produces a visual distribution pattern of radiation, where the specimen is the source of the radiation. Autoradiography therefore provides information on the distribution of radioactivity within a sample. This information is being used to understand how thermal and leaching treatments may reduce the activity of nuclear graphite waste. The area of this autoradiography image is 6 x 9 mm. Hot (i.e. red) colours indicate higher activity.

(and other nifty stuff)


A reminder that any major critical incident at a nuclear power plant is not an isolated, contained event – it affects the entire world, along with the food sources required to serve populations –

Early projections of fallout dispersal from Fukushima – (radioactive materials have now been found in food sources including beef and seafood, and in various measures in drinking water, milk, vegetables, etc. in Japan) –


A challenge that lies ahead will be how to clean up massive amounts of debris from the tsunami and quake, some of which may now be radioactive. “They’re going to have to come up with a plan and a repository,” Jemmex said, adding that includes creating designated clean-up zones to allow materials to cool down.

(etc. – includes world map with expected contamination effects – regardless of the degree – this means it is not simply the business of the energy industry, the nuclear industry owners, the individual nation involved or the moneyed decision makers in some isolation – considering the damage possible is extensively life altering for both neighbors and those far removed from the location of the event. – my note)

Core damage confirmed at 3 reactors; spent fuel rods a rising concern at 4th;
U.S. urges evacuation within 80 kilometers (50 Miles) around stricken plants

March 16, 2011 (San Diego) – The United Nations has released a forecast indicating a radioactive plume from the damaged Japanese nuclear reactors at Fukushima Daiichi could reach the Aleutian Islands off Alaska on Thursday and Southern California late on Friday, then move east to Nevada, Utah, Arizona, and likely points beyond.

The U.N. has not issued a statement on how much radiation the plume could contain; however, numerous other experts have indicated that amounts are expected to be small and below levels likely to harm human health. The U.S. Environmental Protection Agency is setting up additional radiation monitors on the West Coast as a precaution. An existing monitor in San Diego is currently non-operational, according to the EPA’s RadNet real-time radiation monitoring database online. ( . . . )


This wikipedia page explains the various states of matter and has really nifty pictures, too. It presents an overview and a new explanation of the definitions that have come to be accepted. (this is just a little of it – well worth reading through all of it).


Under extremely high pressure, ordinary matter undergoes a transition to a series of exotic states of matter collectively known as degenerate matter. In these conditions, the structure of matter is supported by the Pauli exclusion principle. These are of great interest to astrophysicists, because these high-pressure conditions are believed to exist inside stars that have used up their nuclear fusion “fuel”, such as white dwarfs and neutron stars.

Electron-degenerate matter is found inside white dwarf stars. Electrons remain bound to atoms but are able to transfer to adjacent atoms. Neutron-degenerate matter is found in neutron stars. Vast gravitational pressure compresses atoms so strongly that the electrons are forced to combine with protons via inverse beta-decay, resulting in a superdense conglomeration of neutrons. (Normally, free neutrons outside an atomic nucleus decay with a mean lifetime of just under 15 minutes, but in a neutron star, as in the nucleus of an atom, other effects stabilize the neutrons.)


Main article: Supersolid

A supersolid is a spatially ordered material (that is, a solid or crystal) with superfluid properties. Similar to a superfluid, a supersolid is able to move without friction but retains a rigid shape. Although a supersolid is a solid, it exhibits so many characteristic properties different from other solids that many argue it is another state of matter.[12]




Brief explanation of nuclear propulsion used mainly in submarines (fission heat products to steam process) –


In the early 1950s work was initiated at the Idaho National Engineering and Environmental Laboratory to develop reactor prototypes for the US Navy. The Naval Reactors Facility, a part of the Bettis Atomic Power Laboratory, was established to support development of naval nuclear propulsion. The facility is operated by Westinghouse Electric Corporation under the direct supervision of the DOE’s Office of Naval Reactors. The facility supports the Naval Nuclear Propulsion Program by carrying out assigned testing, examination, and spent fuel management activities.

The facility consists of three naval nuclear reactor prototype plants, the Expended Core Facility, and various support buildings. The submarine thermal reactor prototype was constructed in 1951 and shut down in 1989; the large ship reactor prototype was constructed in 1958 and shut down in 1994; and the submarine reactor plant prototype was constructed in 1965 and shut down in 1995. The prototypes were used to train sailors for the nuclear navy and for research and development purposes. The Expended Core Facility, which receives, inspects, and conducts research on naval nuclear fuel, was constructed in 1958 and is still operational.

The initial power run of the prototype reactor (S1W) for the first nuclear submarine, the Nautilus, was conducted at the INEEL in 1953. The A1W prototype facility consists of a dual-pressurized water reactor plant within a portion of the steel hull designed to replicate the aircraft carrier Enterprise. This facility began operations in 1958 and was the first designed to have two reactors providing power to the propeller shaft of one ship. The S5G reactor is a prototype pressurized water reactor that operates in either a forced or natural circulation flow mode. Coolant flow through the reactor is caused by thermal circulation rather than pumps. The S5G prototype plant was installed in an actual submarine hull section capable of simulating the rolling motions of a ship at sea. The unique contributions of these three reactor prototypes to the development of the United States Nuclear Navy make them potentially eligible for nomination to the National Register of Historic Places.

The Test Reactor Area (TRA) occupies 102 acres in the southwest portion of the INEL. The TRA was established in the early 1950s with the development of the Materials Test Reactor. Two other major reactors were subsequently built at the TRA: the Engineering Test Reactor and the Advanced Test Reactor. The Engineering Test Reactor has been inactive since January 1982. The Materials Test Reactor was shut down in 1970, and the building is now used for offices, storage, and experimental test areas. The major program at the TRA is now the Advanced Test Reactor. Since the Advanced Test Reactor achieved criticality in 1967, it’s been used almost exclusively by the Department of Energy’s Naval Reactors Program. After almost 30 years of operation, this reactor is still considered a premier test facility. And it’s projected to remain a major facility for research, radiation testing, and isotope production into the next century.



Federation of American Scientists website – entry found here -(well worth reading all of it – has great diagrams too.) –


Here is their main page link –