The nucleus of an atom, like most everything else, is more complicated than we first thought. Just how much more complicated is the subject of a Petascale Early Science project led by Oak Ridge National Laboratory’s David Dean.
According to findings outlined by Dean and his colleagues in the May 20, 2011, edition of the journal Physical Review Letters, researchers who want to understand how and why a nucleus hangs together as it does and disintegrates when and how it does have a very tough job ahead of them.
Dean’s team, however, determined that the two-body force (the interaction between pairs of nucleons – proton–proton, neutron–neutron, or proton–neutron) is not enough; researchers must also tackle the far more difficult challenge of calculating combinations of three particles at a time (three protons, three neutrons, or two of one and one of the other). This approach yields results that are both different from and more accurate than those of the two-body force alone.
(definitely read this one – it includes a brief explanation of the forces known to be at work.)
DOE Energy Files Access Portal –
A note on one of my cards says –
Germany looking for 10 GW (to replace nuclear power facilities)
Oh, and the best choice from the DOE Energy Portal (in my opinion) is this one –
(which offers multi-disciplinary tools) – and this one especially –
Federal R&D Project Summaries – Descriptions, awards, and summaries of federally funded research
And these –
Argonne Library’s Resources on the Internet – A repository of Internet sites for scientific research created and maintained by the library staff at Argonne National Laboratory
National Academies Press – The National Academies Press (NAP) was created by the National Academies to publish the reports issued by the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council, all operating under a charter granted by the Congress of the United States. The NAP publishes more than 200 books a year on a wide range of topics in science, engineering, and health.
Code of Federal Regulations – Government Printing Office (GPO) database containing text of public regulations issued by the agencies of the U.S. government
AND especially this one –
National Institute of Standards and Technology (NIST) – Information on products and services including reference materials and data, calibrations, standards information, and other services
AND this one –
Oak Ridge National Laboratory Technical Reports – Full text technical reports from Oak Ridge National Laboratory
These two software packages are interesting – however the first is from 2000 and the second from 2005 – there are probably better ones now – (11 years old and 6 years old, respectively) –
DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.
Package ID: 000138MLTPL01 DYNA3D2000*
KWIC Title: Explicit 3-D Hydrodynamic FEM Program
CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
Package ID: 000663SUN0002 CFDLIB05
KWIC Title: Computational Fluid Dynamics Library
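Out of curiosity, here is a minimal sketch of the cell-centered Finite-Volume idea described above – a toy 1-D linear advection problem on a periodic mesh of my own invention, not CFDLib05’s actual code:

```python
import numpy as np

# Toy 1-D finite-volume solver: cell-averaged state, fluxes through cell
# faces, explicit time stepping (a hypothetical stand-in for the scheme
# described above, not CFDLib05 itself).
nx, length, a = 100, 1.0, 1.0       # number of cells, domain length, wave speed
dx = length / nx
dt = 0.5 * dx / a                   # CFL-limited time step
x = (np.arange(nx) + 0.5) * dx      # cell centers
u = np.exp(-200 * (x - 0.3) ** 2)   # initial cell-averaged state

for _ in range(100):
    flux = a * u                              # upwind flux (a > 0: use left cell)
    u -= dt / dx * (flux - np.roll(flux, 1))  # conservative update, periodic domain
```

Each cell only gains what its neighbor loses, which is the integral-form conservation property the description above is talking about.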
Arrangements for them have to be made through the Dept. of Energy Resource Portal here – or from the pages linked above –
However, there are probably better choices that are newer and handle information more effectively and efficiently.
Be aware that some software programs will average values as part of their processing approach. Whether or not this is a typical way to handle data, doing so alters the facts upon whose integrity nearly all of our scientific and engineering theories are built.
There was a nifty 3-D modeling software program being pushed through a lot of physics and science websites to researchers starting a few years ago. I was very excited about its ability to take large data arrays and model them – until I discovered, during a live web presentation by the company’s technical reps, that it averaged the values within the arrays as part of its paradigm for processing the information. So, behind the scenes – in the subroutines of the program – the data was actually being altered and then presented visually. I hated that about it and stopped having any interest in it.
And, here’s why –
First, I’ve found that it is a more common practice than I expected – both in culling real-world results data in science and in displaying those results using some of the modeling software that has been available over the last however many years.
And, second – I considered what that would mean in even the simplest scenario I could think of – for instance, the cohesion values of concrete and cement. And, when I think of the impacts that averaging would have on that in particular, it is rather horrifying.
Third, that is probably part of what has resulted in unnecessary dangers to human life and safety in some construction choices. In the example of cohesion factors for cement and concrete, those values (which might have been altered by averaging the data array in a lab, science or engineering environment for ease of handling, etc.) support decisions made at the design, construction and financial decision-makers’ levels. So, what if a building is being designed, engineered and then built using these altered values (however slight) for the concrete being depended upon for strength and reliability? And then what if the financiers push to have corners cut further, citing as a margin of safety values that, when viewed objectively, were simply altered by averaging? There are then two places where the margin of safety supposedly built into the engineering and construction of these projects is narrowed – possibly to the point of exceeding the very range of the original margin of safety. Cohesion, for instance, is not a small thing with no impact, whether in the explanations of what is happening at the atomic level within molecules or within the structural materials that make up most of our living and working structures, dams, levees and other high-priority projects for the public good.
I’m not naming the software in my description above, because they would probably frown upon it. And it is apparently an all too common manner of handling large data sets of specific experimental real-world results in order to model them, compute models of them or give visual interpretations of what the data is suggesting. However, the people choosing the software to be used by our labs would have to look specifically for whether a program does this averaging of values in its routines – my guess is that some do and some don’t. Some results are visually stunning, but obviously wrong. And how much of that is a result of this particular practice, despite the equations being used having measured accuracy?
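To make the concern concrete, here is a minimal sketch with made-up cohesion numbers (hypothetical values, not from any real dataset) showing how a pre-averaging step in a display pipeline erases exactly the low outlier a safety margin has to respect:

```python
import numpy as np

# Hypothetical cohesion measurements (kPa) from a batch of concrete samples.
# The low outlier is the value a safety margin must actually respect.
cohesion = np.array([410.0, 405.0, 398.0, 402.0, 310.0, 407.0])
print("true minimum:", cohesion.min())  # 310.0 -> the governing value

# A pipeline that averages neighboring values before display (the behavior
# described above) smooths the outlier away:
smoothed = np.convolve(cohesion, np.ones(3) / 3, mode="valid")
print("smoothed minimum:", round(float(smoothed.min()), 1))  # 370.0 -> looks safer
```

The display would suggest nothing dips below about 370, while the actual batch contains a sample at 310 – exactly the kind of quiet alteration described above.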
I noticed this – very interesting, too. –
Consequences are expressed numerically (e.g., the number of people potentially hurt or killed) and their likelihoods of occurrence are expressed as probabilities or frequencies (i.e., the number of occurrences or the probability of occurrence per unit time). The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities.
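(My note – a tiny worked example of that “sum of consequences times probabilities” definition, with made-up numbers; the point worth noticing is that a rare catastrophic scenario can contribute just as much “expected loss” as a frequent small one:)

```python
# Expected loss = sum over scenarios of (consequence x probability).
# Hypothetical scenarios: (people harmed, probability per year).
scenarios = [(10, 1e-2), (100, 1e-3), (10_000, 1e-5)]

total_risk = sum(consequence * prob for consequence, prob in scenarios)
print(total_risk)  # 0.1 + 0.1 + 0.1 = 0.3 expected people harmed per year
```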
In the case of many accidents, probabilistic risk assessment models do not account for unexpected failure modes:
At Japan’s Kashiwazaki Kariwa reactors, for example, after the 2007 Chuetsu earthquake some radioactive materials escaped into the sea when ground subsidence pulled underground electric cables downward and created an opening in the reactor’s basement wall. As a Tokyo Electric Power Company official remarked then, “It was beyond our imagination that a space could be made in the hole on the outer wall for the electric cables.”
When it comes to future safety, nuclear designers and operators often assume that they know what is likely to happen, which is what allows them to assert that they have planned for all possible contingencies. Yet there is one weakness of the probabilistic risk assessment method that has been emphatically demonstrated with the Fukushima I nuclear accidents — the difficulty of modeling common-cause or common-mode failures:
And in its “References” section – it lists these two of importance, certainly –
- Centrale Nucléaire de Fessenheim : appréciation du risque sismique, RÉSONANCE Ingénieurs-Conseils SA, published 2007-09-05, accessed 2011-03-30
- M. V. Ramana (19 April 2011). “Beyond our imagination: Fukushima and the problem of assessing risk”. Bulletin of the Atomic Scientists. http://thebulletin.org/web-edition/features/beyond-our-imagination-fukushima-and-the-problem-of-assessing-risk
AND This –
(in another entry)
Cost–benefit analysis is often used by governments and others, e.g. businesses, to evaluate the desirability of a given intervention. It is an analysis of the cost effectiveness of different alternatives in order to see whether the benefits outweigh the costs (i.e. whether it is worth intervening at all), and by how much (i.e. which intervention to choose). The aim is to gauge the efficiency of the interventions relative to each other and the status quo.
The costs of an intervention are usually financial. The overall benefits of a government intervention are often evaluated in terms of the public’s willingness to pay for them, minus their willingness to pay to avoid any adverse effects. The guiding principle of evaluating benefits is to list all parties affected by an intervention and place a value, usually monetary, on the (positive or negative) effect it has on their welfare as it would be valued by them. Putting actual values on these is often difficult; surveys or inferences from market behavior are often used.
One source of controversy is placing a monetary value on human life, e.g. when assessing road safety measures or life-saving medicines. However, this can sometimes be avoided by using the related technique of cost-utility analysis, in which benefits are expressed in non-monetary units such as quality-adjusted life years. For example, road safety can be measured in terms of ‘cost per life saved’, without placing a financial value on the life itself.
CBA usually tries to put all relevant costs and benefits on a common temporal footing using time value of money formulas. This is often done by converting the future expected streams of costs and benefits into a present value amount using a suitable discount rate.
Risk associated with the outcome of projects is also usually taken into account using probability theory.
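(My note – a quick sketch of that discounting step, with made-up cash flows and an assumed 5% discount rate:)

```python
# Present value of a stream of net benefits: PV = sum_t b_t / (1 + r)**t.
# Hypothetical project: a year-0 cost, then four years of benefits.
r = 0.05                                    # assumed discount rate
net_benefits = [-1000, 300, 300, 300, 300]  # years 0, 1, 2, 3, 4

pv = sum(b / (1 + r) ** t for t, b in enumerate(net_benefits))
print(round(pv, 2))  # ~63.79 -> positive, so benefits outweigh costs here
```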
A peer-reviewed study of the accuracy of cost estimates in transportation infrastructure planning found that for rail projects actual costs turned out to be on average 44.7 percent higher than estimated costs, and for roads 20.4 percent higher (Flyvbjerg, Holm, and Buhl, 2002). For benefits, another peer-reviewed study found that actual rail ridership was on average 51.4 percent lower than estimated ridership; for roads it was found that for half of all projects estimated traffic was wrong by more than 20 percent (Flyvbjerg, Holm, and Buhl, 2005). Comparative studies indicate that similar inaccuracies apply to fields other than transportation. These studies indicate that the outcomes of cost–benefit analyses should be treated with caution because they may be highly inaccurate. Inaccurate cost–benefit analyses are likely to lead to inefficient decisions, as defined by Pareto and Kaldor–Hicks efficiency. These outcomes (almost always tending to underestimation unless significant new approaches are used) are to be expected because such estimates:
- Rely heavily on past like projects (often differing markedly in function or size and certainly in the skill levels of the team members)
- Rely heavily on the project’s members to identify (remember from their collective past experiences) the significant cost drivers
- Rely on very crude heuristics to estimate the money cost of the intangible elements
- Are unable to completely dispel the usually unconscious biases of the team members (who often have a vested interest in a decision to go ahead) and the natural psychological tendency to “think positive” (whatever that involves)
Another challenge to cost–benefit analysis comes from determining which costs should be included in an analysis (the significant cost drivers). This is often controversial because organizations or interest groups may think that some costs should be included or excluded from a study.
In the case of the Ford Pinto (where, because of design flaws, the Pinto was liable to burst into flames in a rear-impact collision), the Ford company’s decision was not to issue a recall. Ford’s cost–benefit analysis had estimated that based on the number of cars in use and the probable accident rate, deaths due to the design flaw would run about $49.5 million (the amount Ford would pay out of court to settle wrongful death lawsuits). This was estimated to be less than the cost of issuing a recall ($137.5 million). In the event, Ford overlooked (or considered insignificant) the costs of the negative publicity so engendered, which turned out to be quite significant (because it led to the recall anyway and to measurable losses in sales).
In the case of environmental and occupational health regulation, it has been argued that if modern cost-benefit analyses had been applied prospectively to proposed regulations such as removing lead from gasoline, not turning the Grand Canyon into a hydroelectric dam, and regulating workers’ exposure to vinyl chloride, these regulations would not have been implemented even though they are considered to be highly successful in retrospect. The Clean Air Act has been cited in retrospective studies as a case where benefits exceeded costs, but the knowledge of the benefits (attributable largely to the benefits of reducing particulate pollution) was not available until many years later.
My Note –
First, it looks like at one time these analysis forms were used to actually weigh the various choices, and then at some point they became an intentional method to support or undermine certain predetermined choices. That is backwards, but seems to be common now and over the last thirty years, particularly in the last twenty-five or so – and especially in America.
Second, it appears to me that the course taken for choices by industry, businesses and government – often, too, by lobbies and industry-serving groups – detracts from “fixing” a known problem which could negatively affect people’s lives and safety and is known to be a continuing risk to them.
And, third, – I find no excuse for doing it that way.
And, fourth – It is not in the best interest of our society to do it that way, regardless of the cost to benefit analysis that supports doing it that way.
So, you might be wondering how the things at the beginning of this post have anything to do with the decision-making analysis forms that appear next –
The question that I was trying to answer about nuclear fusion, use of other power source alternatives such as geothermal power, and a number of other things – finally came down to – upon what basis are the decisions being made, who is making them, why are they believing those are the best choices for them to make and why are these decision-makers not considering any other choices as viable and appropriate?
And, secondly, the moment when a system is known to have a risk of causing massive harm, permanent harm and even, loss of life to people, why isn’t it changed immediately and appropriately? And, why does it take so long to change a known danger, once it is known? And, why isn’t something else adopted in a timely manner, once a known risk higher than anticipated is defined, studied, recognized and accepted?
(Okay – the question included a number of related questions. However, the impacts of the answers I found touch every single part of our modern society’s set of wonders – and the dangers of them – from buildings and homes to airplanes and nuclear power, etc. ad infinitum, along with the decisions being made that impact all of us. When a civil engineer and the corporate contract holders directing decisions toward their desired outcome make choices, for whatever reason, they impact what will happen to the people walking by their project for many years after it is completed, they impact the people living and working in and around it, and they impact the health and safety of every life negatively touched by what they’re building.
If a bridge falls, people can be permanently maimed which impacts not only the community where they live, but each of their family members and their children for the remaining course of their lives. So, my question was – why would the civil engineers, local governments and businesses involved with construction and design of a bridge or the repair and replacement of a bridge treat it as nobody’s business but their own in how they go about it? (Just one example of many.)
When nuclear power is the only choice and it isn’t done safely, there is no way for the mind to grasp how many generations of people are impacted by it. That isn’t only the business of those involved with it as a business, nor simply for the regulators to serve the desires of that industry without further consideration of its potential costs to human lives and our society’s best interest. The same is true for the safety of our planes, our airline industry, our construction industry’s choices, our financial backers’ insistence on cutting corners in all sorts of things, and a multitude of other things. (The auto industry, for instance, and others can be included as examples of the same as well.)
The cost-to-benefit decision tree models are effectively removing projects that would be beneficial to mankind and re-routing funding based upon those findings – regardless of how inappropriate it has become to do so and at what costs to all of us. These formats are also being used to support the continued use, in the same manner, of things that do need to be changed and are known to need to be changed. We have had countless situations where human lives were lost or permanently altered in the most horrific and negative ways as a result of not making changes in a timely manner where there were known dangers that had been identified (and often even where there were known solutions that could have been economically, effectively and appropriately applied in a timely manner).
US Energy Information – Total Energy Used, Resourced, being Developed, in Reserve, etc Monthly Data – EIA – Total Energy
Just found this – from a twitter –
A vast fan-shaped compound in China has officially taken the title of “largest solar-powered office building in the world“. Located in Dezhou in northwestern Shandong Province, China
Producing steam to drive a turbine and generator is relatively easy, and a light water reactor running at 350°C does this readily. As the above section and Figure show, other types of reactor are required for higher temperatures. A 2010 US Department of Energy document quotes 500°C for a liquid metal cooled reactor (FNR), 860°C for a molten salt reactor (MSR), and 950°C for a high temperature gas-cooled reactor (HTR). Lower-temperature reactors can be used with supplemental gas heating to reach higher temperatures, though employing an LWR would not be practical or economic.
The DOE said that high reactor outlet temperatures in the range 750 to 950°C were required to satisfy all end user requirements evaluated to date for the Next Generation Nuclear Plant.
I noticed this last night as I was looking for the other nuclear reactor form in use at some research facilities. But, I had just read a very interesting article about the size, scale and weight currently in use for turbines that are being run by the nuclear power industry’s large scale reactors, such as commonly in use. And, it occurred to me that 40 tons is a lot of weight to be moving for a turbine blade which could very well be the reason many ideas are shelved, and why a temperature of 750 – 950 degrees C is required as noted above by the DOE’s guidelines.
That is the same as moving an entire rocket, in size and scale, within the dense space requirements of a nuclear power plant – just to turn the turbines from the steam to make electricity. It was also built upon a design, and materials choices, made in a time when robotic manufacturing was not operational and materials science had not yet created many of the new high-strength, lower-weight materials we can choose from now (even at those massive scales of size and extreme conditions of heat and pressure).
Here is the link to that article about the size of the turbine blades and components being placed in nuclear plant systems (and probably other power-generating systems), which the industry’s fuel systems and power workhorse “sources” are required to move:
A generator rotor weighs in excess of 200 tons, according to Craig Hanson, vice president and product line manager for nuclear plant builder Babcock & Wilcox. And, for each nuclear plant, there are three to four turbine rotors. ( . . . )
In the late 1960s, designers discovered that larger forgings had better mechanical properties, requiring less welding and therefore less inspection requirements over the life of a plant. These larger forgings became a signature of Generation II plants and all others that have followed.
But, by choosing larger forgings, even the most powerful domestic steel producers, such as U.S. Steel and the now-defunct Bethlehem Steel, were shut out of the supply chain.
“In the interest of efficiency, the companies that built nuclear reactors made their reactors bigger,” says Mike Kamnikar, senior vice president for marketing and business development at The Ellwood Group, a forging group. “The biggest ingot that could be made by Bethlehem Steel or U.S. Steel in the 1970’s was roughly 380 tons. Bethlehem and U.S. Steel each had 8,000- ton presses, but the presses didn’t have enough clearance to make these big rings, which were over 200 inches in diameter.”
Four of the most complex parts of a nuclear power plant — the containment vessel, the reactor vessel components, the turbine rotors and steam generators — are made from over 4,000 tons of steel forgings, and almost none of those components are manufactured in the United States.
My Note –
No wonder it has to be 950 degrees Celsius to move the damn turbine rotors – Damn.
So, what dingleberry made the decision that we must move 200-ton rotors to make electricity? That is like making a massive flywheel out of the densest, heaviest material, such as lead, and then demanding excessive power simply to get its motion started, for no other reason than the material used for it. Maybe that made sense in designs from the 1930s which were being used for 1960s decisions and scale-ups, which took no consideration of the alloys, unique materials and composites or manufacturing process choices we have today. If that material strength and durability could be created without weighing 200 tons – what temperature range to move it could be opened up for those systems? As if it isn’t bad enough that nuclear power is no more than a $10 billion steam kettle, the fact is, choices being made about some components are driving the requirements for its throughput power. That doesn’t even make any good sense. These people have a lot more money and intelligent resources than I have – why haven’t they redesigned the rotor materials to accommodate new choices available in the marketplace today? I don’t understand.
There are potentially system choices that could be made using other novel approaches from geothermal sources to nuclear fusion, but not if the only temperature range to be required for them in order to move 200 ton rotors / turbine blades runs over 750 degrees Celsius. And, I’m guessing it is the top of that range which is more desirable for those massive constructs to move efficiently for producing electricity. That is insane. The only way that would make sense would be if there were no other choices of materials available to do that work without the weight inherent in steel. And, steel isn’t the strongest material we have today, nor the least costly to produce either.
Well. How about that?
No wonder it is costing so much to produce these power plants and costing so much to create electricity with them as well. There have to be better answers than that. And, on top of it – as much as I do want the global economy stimulated as well – I’m an American first, and there is no advantage to American economic foundations when the large steel forgings for these items are made elsewhere, shipped by companies based elsewhere, and support every other economy besides our own as these power plants are built using unnecessary material requirements and constraints. So, with nuclear power plants, not only are we moving 200-ton rotors to get electricity, we have all the other drawbacks of the system as well – and it utilizes our money and funding to do it. Why don’t the engineers and scientists simply redesign it in a form that is more appropriate to today’s materials science menu?
This was the nifty reactor design I was looking up last night which I had found earlier (and there is another one that I remember too, which I still want to find) –
Although it seems strange, the Massachusetts Institute of Technology, in the city of Cambridge, has on its campus a small nuclear reactor core surrounded on the outside by concrete. It was built in 1958, then renovated in 1975. This reactor runs on enriched uranium-235 and is used to generate neutrons. It does not generate pressures or temperatures high enough to produce usable heat energy.
And this one – for which General Fusion has a very nifty device already designed (they’re working on it now). Need to tell them to redesign the system’s harness and rotor materials to make it viable, obviously – I mean in the turbine system it will be required to run. Damn ridiculous – 200-ton rotors; at 2,000 pounds per short ton (or 2,240 per long ton), that is 400,000-plus pounds each – what kind of math is that?? Superfluid transport, and then it has to move what amounts to flywheels of lead (or actually something massively worse and more constrained than that).
Magnetized target fusion (MTF) is a relatively new approach to producing fusion power that combines features of the more widely studied magnetic confinement fusion (MCF) and inertial confinement fusion (ICF) approaches. Like the magnetic approach, the fusion fuel is confined at lower density by magnetic fields while it is heated into a plasma. Like the inertial approach, fusion is initiated by rapidly squeezing the target to greatly increase fuel density, and thus temperature. Although the resulting density is far lower than in traditional ICF, it is thought that the combination of longer confinement times and better heat retention will let MTF yield the same efficiencies, yet be far easier to build.
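(My note – a rough way to see the “squeeze it to heat it” step, assuming the fuel behaves like an ideal monatomic gas with γ = 5/3 compressed adiabatically: T · V^(γ−1) = constant, so T₂/T₁ = (V₁/V₂)^(2/3). A thousand-fold compression in volume then raises the temperature roughly a hundred-fold, since 1000^(2/3) = 100.)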
(I’m still thinking about the nuclear power industry’s insistence on massively scaled turbines with weight configurations discussed in the industry article near the top of this post.)
I bet there have been lots of scientists and engineers who didn’t understand why their work was being considered less than desirable by the DOE and the energy industry, when the real target being missed was this 950 degrees Celsius mark required to turn rotors with the weight of a small skyscraper each. The decisions made against certain energy forms and choices would have been decided (by DOE et al.) based on the idea that everything had to fit into that existing system application (and its constraints) in order to be viable. That means geothermal sources wouldn’t have even been in the playbook, and neither would a multitude of other choices. And rather than redesign the system constraints – with those massive forgings of appreciable weight in steel remade into something more applicable to today’s materials – every other source possibility was simply treated as some bastardized child wasting the taxpayers’ money, even as they allocated some pittance to it.
It seems we could take the same “system” and choose a geothermal source and manner of access to it – but make the turbine components of new materials with lower weight to strength ratios with durability, high structural integrity and (tested) long term reliability characteristics and within the next three years, place it online to provide electricity much faster than trying to create the temperatures of the sun through fusion and harness it for the power to turn 200 ton turbine rotors. Honestly.
And it also seems that this would be the best time to consider redesigning the rotor materials, in light of the fact that companies all over the US and the world are begging for business and contracts to use the wonderful new things they know how to do now. The companies with the new materials, the new manufacturing processes and the new carbon nanomaterials are desperate for opportunities to make these applications of the things they have available. And they would do it right now.
I’m so sick of hearing throughout my adult life that we are forever 30 years away from doing anything. Maybe that works to get funding for more research and more research and more research, but at what point is that costing us far more than the time we continue to wait for any of that research to be available to make our lives better? I can understand why the nuclear industry may not want to make any changes to the system they have in place right now and the ways they are doing it. My guess is that they make money at every single stage of the process and may even own part of the profits of the large forging manufacturers, the mine operations that provide the raw materials and the shippers that ship these things at every stage of the process. I wouldn’t doubt it. But some of these decisions need to be re-analyzed in light of what we know now. And many of these decisions, including cost-to-benefit ratios, have changed significantly. Probabilistic assessments failed to accurately compute even the scenarios that were contemplated – scenarios reality has now shown us could, and in fact did, occur – let alone to accurately depict the drawbacks and dangers that have recently been discovered.
We easily have at least 8 million businesses involved in some way with the energy industry. We have plenty of money throughout those industry sources, which the energy sector businesses enjoy almost without reserve or even much further scrutiny of what they are asking to do. Surely some of those funds and that intelligent brain power could be used to resolve these issues for them – issues which include decisions about power systems, energy sources and designs that were made with the facts of some earlier time rather than the facts of today.
If a decision were made even today without accounting for those changes in information, it would be faulty. In 1960 and in 1970, the software didn’t exist to do the things we can do today, the equipment available for testing and modeling did not exist in its current forms, robotic manufacturing with computer software control systems did not exist in the same range of possibilities, and the raw materials costs along with shipping and processing costs were of a completely different scale when those decisions were made than they are today.
It isn’t enough to have added a few new figures to the analysis to explain the difference and make an adjusted comparison. The entire supply chain is different now, and what may not have been viable in the past is, in many cases, more viable today – and existing systems, in the manner they were originally designed and costed, may be far more costly than anticipated. Those original 40-year-old facts and figures for comparison simply do need to be re-analyzed, and what is remarkable is that it shouldn’t take years upon years of man-hours to do that. (Although I’m sure there will be a way to do it like that, where in fifty more years we are still waiting for those results – much as we are today on some things.)
950 Degrees Celsius to make a power system that works . . .
So it can turn four 200-ton turbine rotors with the weight of a mid-sized skyscraper each – in order to make electricity.
And, anything that can’t do that isn’t even considered with any appreciable respect and funding . . .
“they” (in business, energy industry and government energy agencies) want scientists and engineers to create a small sun on earth using fusion so they can power a steam kettle to make electricity much as they are doing now through nuclear fission to heat water.
Yep, that’s about it . . .
That is wrong on so many levels and in so many ways as to be unbelievably misguided.
But, then who am I to say – I’m sure it must be me that is misguided about it. Having a small contained sun on our planet thirty years from now driving a fusion reactor to put steam into some massively weighted system of components to make steam to drive turbines also massively weighted – is probably the “right way to do it” in their estimation.
And in the meantime, as our planet’s population hits the 7 billion mark with its increased need for power-generating capacity, electricity in general, fuel sources and more extensive power grids – and even as raw materials get scarcer and harder to harvest – we are supposed to say nothing and wait another thirty years for these power options to be available, while enduring the nuclear-fission-based steam kettle systems we have or are building now, with all their dangers and drawbacks.
Hmmm… I don’t think so. That would have to be wrong.
Okay – on to other things –
This article (linked below) is very interesting – about a nuclear-powered bomber program the US pursued in the 1944 – 1957/1960s time period. It was apparently successful, but the idea of having a flying nuclear generator overhead was unappealing – as I can imagine. Although that doesn’t seem to matter with planes that are armed with nuclear missiles or bombs on board. Hmmm. Interesting technology notes on the nuclear power systems considered and tested during the program can be found in this wonderful article about the plane –
In 1949, the program ran a series of tests, known as the Heat Transfer Reactor Experiment (HTRE), involving three types of reactors, with the purpose of determining the most efficient method of transferring energy from the reactor. After an extensive trial series, the HTRE-3 emerged as the selected transfer system. The HTRE-3 was a Direct-Cycle Configuration. In a direct cycle system, the air entered the engine through the compressor of the turbojet, it then moved to a plenum intake that directs the air to the core of the reactor.
At this point the air, serving as the reactor coolant, is super-heated as it travels through the core. After that stage, it goes to another plenum intake; from there the air is directed to the turbine section of the engine and eventually to the tailpipe. This configuration allowed the aircraft engine to start on chemical power and then switch to nuclear heat as soon as the core reached optimized operational temperatures, thus providing the proposed aircraft the ability to take off and land on conventional power.
Another system considered was the Indirect-Cycle Configuration (not shown here, my note). In this configuration, the air did not go through the reactor core; instead it passed through a heat exchanger. The heat generated by the reactor is carried by liquid metal or highly pressurized water to the heat exchanger, heating the air on its way to the turbine.
And this one –
Neutron Activated Graphite.
Lorraine McDermott, School of Materials, Manchester
Autoradiographic image of neutron activated graphite from the British Experimental Pile Zero (BEPO) nuclear reactor core. The core was operational from 1948-1968 with a final decommissioning date scheduled for 2022. Autoradiography produces a visual distribution pattern of radiation, where the specimen is the source of the radiation. Autoradiography therefore provides information on the distribution of radioactivity within a sample. This information is being used to understand how thermal and leaching treatments may reduce the activity of nuclear graphite waste. The area of this autoradiography image is 6 x 9 mm. Hot (i.e. red) colours indicate higher activity.
(and other nifty stuff)
A reminder that any major critical incident at a nuclear power plant is not an isolated contained event – it affects the entire world along with food sources available that are required to serve populations –
Early projections of fallout dispersal from Fukushima – (radioactive materials have now been found in food sources including beef and sea-sourced foods, and in various measures in drinking water, milk, vegetables, etc. in Japan) –
A challenge that lies ahead will be how to clean up massive amounts of debris from the tsunami and quake, some of which may now be radioactive. “They’re going to have to come up with a plan and a repository,” Jemmex said, adding that includes creating designated clean-up zones to allow materials to cool down.
(etc. – includes world map with expected contamination effects – regardless of the degree – this means it is not simply the business of the energy industry, the nuclear industry owners, the individual nation involved or the moneyed decision makers in some isolation – considering the damage possible is extensively life altering for both neighbors and those far removed from the location of the event. – my note)
Core damage confirmed at 3 reactors; spent fuel rods a rising concern at 4th;
U.S. urges evacuation within 80 kilometers (50 Miles) around stricken plants
March 16, 2011 (San Diego) – The United Nations has released a forecast indicating a radioactive plume from the damaged Japanese nuclear reactors at Fukushima Daiichi could reach the Aleutian Islands off Alaska on Thursday and Southern California late on Friday, then move east to Nevada, Utah, Arizona, and likely points beyond.
The U.N. has not issued a statement on how much radiation the plume could contain, however numerous other experts have indicated that amounts are expected to be small and below levels likely to harm human health. The U.S. Environmental Protection Agency is setting up additional radiation monitors on the West Coast as a precaution. An existing monitor in San Diego is currently non-operational, according to the EPA’s RadNet real-time radiation monitoring database online. ( . . . )
This wikipedia page explains the various states of matter and has really nifty pictures, too. It presents an overview and a new explanation of the definitions that have come to be accepted. (this is just a little of it – well worth reading through all of it).
Under extremely high pressure, ordinary matter undergoes a transition to a series of exotic states of matter collectively known as degenerate matter. In these conditions, the structure of matter is supported by the Pauli exclusion principle. These are of great interest to astrophysicists, because these high-pressure conditions are believed to exist inside stars that have used up their nuclear fusion “fuel”, such as white dwarfs and neutron stars.
Electron-degenerate matter is found inside white dwarf stars. Electrons remain bound to atoms but are able to transfer to adjacent atoms. Neutron-degenerate matter is found in neutron stars. Vast gravitational pressure compresses atoms so strongly that the electrons are forced to combine with protons via inverse beta-decay, resulting in a superdense conglomeration of neutrons. (Normally free neutrons outside an atomic nucleus will decay with a half life of just under 15 minutes, but in a neutron star, as in the nucleus of an atom, other effects stabilize the neutrons.)
Supersolid
A supersolid is a spatially ordered material (that is, a solid or crystal) with superfluid properties. Similar to a superfluid, a supersolid is able to move without friction but retains a rigid shape. Although a supersolid is a solid, it exhibits so many characteristic properties different from other solids that many argue it is another state of matter.
Brief explanation of nuclear propulsion used mainly in submarines (fission heat products to steam process) –
In the early 1950s work was initiated at the Idaho National Engineering and Environmental Laboratory to develop reactor prototypes for the US Navy. The Naval Reactors Facility, a part of the Bettis Atomic Power Laboratory, was established to support development of naval nuclear propulsion. The facility is operated by Westinghouse Electric Corporation under the direct supervision of the DOE’s Office of Naval Reactors. The facility supports the Naval Nuclear Propulsion Program by carrying out assigned testing, examination, and spent fuel management activities.
The facility consists of three naval nuclear reactor prototype plants, the Expended Core Facility, and various support buildings. The submarine thermal reactor prototype was constructed in 1951 and shut down in 1989; the large ship reactor prototype was constructed in 1958 and shut down in 1994; and the submarine reactor plant prototype was constructed in 1965 and shut down in 1995. The prototypes were used to train sailors for the nuclear navy and for research and development purposes. The Expended Core Facility, which receives, inspects, and conducts research on naval nuclear fuel, was constructed in 1958 and is still operational.
The initial power run of the prototype reactor (S1W) for the first nuclear submarine, the Nautilus, was conducted at the INEEL in 1953. The A1W prototype facility consists of a dual-pressurized water reactor plant within a portion of the steel hull designed to replicate the aircraft carrier Enterprise. This facility began operations in 1958 and was the first designed to have two reactors providing power to the propeller shaft of one ship. The S5G reactor is a prototype pressurized water reactor that operates in either a forced or natural circulation flow mode. Coolant flow through the reactor is caused by thermal circulation rather than pumps. The S5G prototype plant was installed in an actual submarine hull section capable of simulating the rolling motions of a ship at sea. The unique contributions of these three reactor prototypes to the development of the United States Nuclear Navy make them potentially eligible for nomination to the National Register of Historic Places.
The Test Reactor Area (TRA) occupies 102 acres in the southwest portion of the INEL. The TRA was established in the early 1950s with the development of the Materials Test Reactor. Two other major reactors were subsequently built at the TRA: the Engineering Test Reactor and the Advanced Test Reactor. The Engineering Test Reactor has been inactive since January 1982. The Materials Test Reactor was shut down in 1970, and the building is now used for offices, storage, and experimental test areas. The major program at the TRA is now the Advanced Test Reactor. Since the Advanced Test Reactor achieved criticality in 1967, it’s been used almost exclusively by the Department of Energy’s Naval Reactors Program. After almost 30 years of operation, this reactor is still considered a premier test facility. And it’s projected to remain a major facility for research, radiation testing, and isotope production into the next century.
Federation of American Scientists website – entry found here -(well worth reading all of it – has great diagrams too.) –
Here is their main page link –
Describes a bladeless turbine designed by Nikola Tesla.
List of Tesla Patents –
Explains the algorithms used in the stock market for trades that most people don’t know are running in the background for every exchange.
Lists guidelines for nuclear power to be operated more safely given the recent Fukushima meltdown and release of radioactive materials that has occurred. (from the Union of Concerned Scientists).
Just really nifty stuff.
Also very nifty.
X-ray scattering is one of the most effective methods for determining the structure of materials on the nanoscale. Scattering is favored because it can reveal both the structure and chemical composition of solids or liquids without destroying the sample. Small-angle x-ray scattering (SAXS) is a widely used variation of this technique that fires a monochromatic beam through a sample. Most of the x-rays pass through the sample, but some x-rays scatter as they encounter inhomogeneities in the material.
For porous materials, SAXS is especially useful because x-rays are scattered as they pass through interfaces of domains within the sample. These domains can be solid, another type of liquid, or even a gas within the sample.
Very nifty – but I suppose it can’t be taken out to a building, a bridge, a dam or a levee to check the integrity of those materials. But, it sure needs to – or something that is based on the same principles. (my note)
Absolutely stunning stuff.
About a teacher who taught Einstein –
Aurel Stodola – From Wikipedia, the free encyclopedia
Born: 10 May 1859, Liptovský Mikuláš, Austro-Hungarian Empire
Died: 25 December 1942 (aged 83), Zürich, Switzerland
Resting place: Liptovský Mikuláš, Slovakia. Residence: Slovakia, Switzerland
Known for: technical thermodynamics; the gas turbine-powered electric generator
Awards: honorary degree of Leibniz University Hannover; Grashof medal of the Verein Deutscher Ingenieure; honorary degree of the German Technical University of Brno; honorary degree of Charles University of Prague; James Watt International Medal
Aurel Stodola was an engineer, physicist, and inventor. He was an ethnic Slovak. He was a pioneer in the area of technical thermodynamics and its applications and published his book Die Dampfturbine (The Steam Turbine) in 1903. In addition to the thermodynamic issues involved in turbine design, the book discussed aspects of fluid flow, vibration, stress analysis of plates, shells and rotating discs, and stress concentrations at holes and fillets.
Stodola was a professor of mechanical engineering at the Swiss Polytechnic Institute (now ETH) in Zurich. One of his students was Albert Einstein. In 1892, Stodola founded the Laboratory for Energy Conversion.
Steam and Gas Turbines – Stodola’s book Steam and Gas Turbines was cited by Soviet rocket scientist Fridrikh Tsander in the 1920s. Published in English in 1927 and reprinted many times up to 1945, it was a basic reference for engineers working on the first generation of jet propulsion engines in the United States.
Stodola worked closely with industries on the development of the first practical gas turbines, in particular Brown, Boveri & Cie, who built the first gas turbine-powered electric generator in 1939.
Medical equipment – In 1915–1916 Stodola collaborated with Ferdinand Sauerbruch, a German surgeon, to develop an advanced mechanically driven prosthetic arm. This collaboration marked one of the first documented examples of a surgeon and an engineer merging efforts. Sauerbruch said, “Henceforth, surgeon, physiologist, and technician (prosthetist/engineer) will have to work together.”
Honors – 1905: honorary degree of Leibniz University Hannover; 1908: Grashof medal of the Verein Deutscher Ingenieure; honorary degree of the German Technical University of Brno; 1929: honorary degree of Charles University of Prague; 1941: James Watt International Medal. Corresponding member of the French Academy of Sciences.
See also: Ellipse Law
Retrieved from “http://en.wikipedia.org/wiki/Aurel_Stodola”
Relativistic-runaway-electron avalanche – Wikipedia, the free encyclopedia
This one is very important where fusion possibilities are concerned. Well worth studying this page at wikipedia.
Carbon nanotubes offer new way to produce electricity
This is the best one.
Breakthrough in Converting Heat Waste to Electricity : Northwestern University Newscenter
“It has been known for 100 years that semiconductors have this property that can harness electricity,” said Mercouri Kanatzidis, the Charles E. and Emma H. Morrison Professor of Chemistry in The Weinberg College of Arts and Sciences. “To make this an efficient process, all you need is the right material, and we have found a recipe or system to make this material.”
Well worth reading – my note.
Heat to electricity to heat to … « Texas A&M Engineering Works
Silicon Nanowires Turn Heat to Electricity – IEEE Spectrum
02.15.2007 – Researchers convert heat to electricity using organic molecules, could lead to new energy source
Generating ‘green’ electricity: Waste heat converted to electricity using new alloy
A Sound Way To Turn Heat Into Electricity
Turning heat to electricity – MIT News Office
Quantum Ferromagnet Using a Nine Ion Crystal Observed by Researchers
A brief and concise explanation of how the Fleischmann and Pons cold fusion experimental results came to be called quack science despite evidence to the contrary – (with sources) –
Source 9 does not mention “pathological science,” but is a report of the coming 2004 U.S. DoE review. It begins with: Cold fusion, briefly hailed as the silver-bullet solution to the world’s energy problems and since discarded to the same bin of quackery as paranormal phenomena and perpetual motion machines, will soon get a new hearing from Washington. This is a report in a reliable source, all right, but is fluff, general passing hyperbole, passive, with no attribution of who did the discarding.
Source 32 is not a reliable source, it appears to be a single individual’s private account of the meeting, attending with a group from General Electric Research, and does not mention “pathological science” either. It contains the following information about Morrison:
Lightwave electronics at sharp metal tips
Biography James Watt
Amazing. (my note)
Patent analysis and product survey on use of nanomaterials in lithium-ion batteries
Amazing – with a list toward the bottom of the article which describes the patents. Absolutely brilliant.
List of Tesla Patents –
A Fusion Thruster for Space Travel – IEEE Spectrum
This is AMAZING – although noted that it is ten years from being onboard – it is already at a very useful state of design.
Absolutely worth taking a look and reading through the explanation.
Truly amazing work.
Great design – very workable concept.
More fusion notes. Glad to have found it.
Evidence of a new phase in liquid hydrogen
February 25, 2010 By Miranda Marquit
One of the most significant things Tamblyn and Bonev discovered through their simulations, from an astrophysics standpoint, is that equations describing the properties of hydrogen might need to be updated. “This should change the modeling going forward,” Tamblyn insists. “What we found in the liquid suggests what the solid might look like, and that can help determine some of its thermal and electronic properties.”
After running the simulations, Tamblyn and Bonev then had to analyze them. “We discovered an ordering in the liquid that accounts for some of the interesting characteristics of hydrogen, such as the fact that under certain conditions, liquid hydrogen is more dense than the solid. We also found that highly ordered packing explains properties related to dissociation that were previously not well understood.”
Information on the simulation efforts, as well as results and conclusions, are presented in Physical Review Letters: “Structure and Phase Boundaries of Compressed Liquid Hydrogen.”
Therefore, we are all agreed that new ways of developing and harnessing energy resources are desirable and much needed. The idea that the concept of fusion promises viable solutions to these energy needs has been around for a long, long time. The first such candidate for inclusion in this article is Sergei –
I did a google search right quick rather than trying to find anything in my documents which takes forever –
using the search terms –
sergei 1827 engineering fusion
Very interesting results – but especially this one –
That is the google translation page to see it in English – it is a 2005 paper.
And this one – also from those search results –
A quote from the text linked above –
By early August Kapitza, at Rutherford’s suggestion, was studying how the energy of the alpha particle falls off at the end of its range. This project was brought to a successful conclusion with amazing rapidity. (etc.)
And this one –
He had to convince the authorities that his work lay primarily in pure rather than applied physics and that he could do nothing useful unless he had equipment and other facilities comparable with those he had enjoyed in Cambridge. Negotiations were begun to bring him what he required.
So Kapitza settled down to research again and within a year made his greatest discovery, the superfluidity of liquid helium. However, he lacked the freedom he had enjoyed in Cambridge. From the mid 1930s Soviet scientists found themselves increasingly cut off from their colleagues in other countries.
( . . . )
Although he was no longer head of the Institute of Physical Problems, Kapitza retained his position and salary as a full academician and went to live at his country house at Nikolina Gora, where he managed to carry on scientific work while virtually under house arrest. Most of his effort went into building up a laboratory in various outhouses where, aided by his sons, particularly Sergei, he could continue experimental work, albeit only on relatively unexciting projects. While atomic physics elsewhere was moving rapidly ahead using particle accelerators and other new equipment, he was unable to contribute to this. Even after he had been reinstated in 1954 and could return to Moscow, he was still without the facilities he needed for the kind of experimental work at which he excelled. Nevertheless, he began to think about the possibility of developing a defence against atomic bombs using extremely powerful microwave emissions. Later he transferred his attention to the problem of generating energy through nuclear fusion.
Also – this note from the article above – (about who Sergei is – one of two children born to Piotr and his wife, Anna) –
Two children were born in their Cambridge period: in 1928 Sergei, who became a distinguished physicist and successful popularizer of science for Soviet television, and three years later Andrei, who became a well-known Antarctic explorer and geographer.
From the other linked document –
(for which there is a google translated document link above the article about Piotr Kapitza)
Liquid Helium Research from above (my note – and check out the equations he used)
To describe the contributions made by Piotr Kapitza in physics as it pertains to this discussion of fusion –
(from a biographical sketch about him online)
In 1934 he returned to Moscow where he organized the Institute for Physical Problems at which he continued his research on strong magnetic fields, low temperature physics and cryogenics.
In 1939 he developed a new method for liquefaction of air with a low-pressure cycle using a special high-efficiency expansion turbine. In low temperature physics, Kapitsa began a series of experiments to study the properties of liquid helium that led to the discovery of the superfluidity of helium in 1937, and in a series of papers investigated this new state of matter.
During World War II Kapitsa was engaged in applied research on the production and use of oxygen, produced using his low-pressure expansion turbines, and organized and headed the Department of Oxygen Industry attached to the USSR Council of Ministers. Late in the 1940s Kapitsa turned his attention to a totally new range of physical problems.
He invented high power microwave generators – the planotron and the nigotron (1950-1955) – and discovered a new kind of continuous high pressure plasma discharge with electron temperatures over a million K. Kapitsa is director of the Institute for Physical Problems.
Since 1957 he has been a member of the Presidium of the USSR Academy of Sciences. He was one of the founders of the Moscow Physico-Technical Institute (MFTI), and is now head of the department of low temperature physics and cryogenics of MFTI and chairman of the Coordination Council of this teaching Institute. He is the editor-in-chief of the Journal of Experimental and Theoretical Physics and a member of the Soviet National Committee of the Pugwash movement of scientists for peace and disarmament.
And this –
(from the CERN materials found on this page link below it)
HEAVY ION PROGRAM AT BNL: AGS, RHIC*
AGS Department, Brookhaven National Laboratory
Associated Universities, Inc.
Upton, New York 11973
(from the CERN materials found on this page – despite it being an early document among them)
Oh yes, and do be sure and read this –
which explains a lot of it – in about the most current understanding of it with very clear explanations that are easy to follow –
And from this page – pg 25 of the document –
Extreme-Circumstances Structural Materials
Professor Naoaki Yoshida, Associate Professor Hideo Watanabe, Research Associate Hirotomo Iwakiri
Xu, Qiu, N. Yoshida, T. Yoshiie: Dynamic Simulation of Multiplier Effects of Helium Plasma and Neutron Irradiation on Microstructural Evolution in Tungsten, Materials Transactions, Vol. 46, No. 6, pp. 1255-1260, 2005.
Nishijima, D., H. Iwakiri, K. Amano, M.Y. Ye, N. Ohno, K. Tokunaga, N. Yoshida, S. Takamura: Suppression of blister formation and deuterium retention on tungsten surface due to mechanical polishing and helium pre-exposure, Nuclear Fusion 45, pp. 669-674, 2005.
From the other linked document –
(for which there is a google translated document link above the article about Piotr Kapitza)
page 22 – along with some of the non-linear dynamic system materials
High Energy Plasma Physics
Professor Sanae-I. Itoh, Associate Professor Masatoshi Yagi
P. H. Diamond, S.-I. Itoh, K. Itoh, T. S. Hahm: Zonal flows in plasma – a review, Plasma Phys. Control. Fusion, Vol. 47, No. 5 (2005), R35-R161.
K. Itoh, S.-I. Itoh: Two decades of plasma physics – Turbulence and structure formation (in Japanese), Parity, Vol. 20, No. 11 (2005), 36-38.
K. Itoh, S.-I. Itoh: Progress of the theory of zonal flow (in Japanese), J. Plasma and Fusion Research, Vol. 81, No. 12 (2005), 972-977.
(from the “obscurity assured” post I made on 6-27-11)
Japan nano-tech team creates palladium-like alloy: report – MSN Philippines News
from Agence France-Presse, 12-30-10
using rhodium-silver nanoparticles with alcohol stabilization – wowsa. and it works . . .
CERN unveils Open Hardware initiative
The CERN OHL was created to govern the use, copying, modification and distribution of hardware design documentation and the manufacture and distribution of products. Hardware design documentation includes schematic diagrams, designs, circuit or circuit board layouts, mechanical drawings, flow charts and descriptive texts, as well as other explanatory material.
That is truly amazing. It means their research and hardware designs are available for anyone to study and understand better, among other things, and they can be appropriately incorporated into other projects by following the license’s guidelines. Truly wondrous news.
Collections of scientific and technical information from the U.S. Department of Energy (DOE) with a distributed searching capability.
(Note – this was the page that I was talking about earlier in this post with the most wonderful archives of information from the DOE and it is really easy to figure out and use it – cricketdiane)
Johnson Thermo-Electrochemical Converter System
The JTEC is an all solid-state engine that operates on the Ericsson cycle. Equivalent to Carnot, the Ericsson cycle offers the maximum theoretical efficiency available from an engine operating between two temperatures. The JTEC system utilizes the electro-chemical potential of hydrogen pressure applied across a proton conductive membrane (PCM). The membrane and a pair of electrodes form a Membrane Electrode Assembly (MEA) similar to those used in fuel cells.
On the high-pressure side of the MEA, hydrogen gas is oxidized resulting in the creation of protons and electrons. The pressure differential forces protons through the membrane causing the electrodes to conduct electrons through an external load. On the low-pressure side, the protons are reduced with the electrons to reform hydrogen gas. This process can also operate in reverse. If current is passed through the MEA a low-pressure gas can be “pumped” to a higher pressure.
The JTEC uses two membrane electrode assembly (MEA) stacks. One stack is coupled to a high temperature heat source and the other to a low temperature heat sink. Hydrogen circulates within the engine between the two MEA stacks via a counter flow regenerative heat exchanger. The engine does not require oxygen or a continuous fuel supply, only heat.
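To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python – my own illustration, not anything from the JTEC materials – assuming the standard Nernst relation for a hydrogen concentration cell, V = (RT/2F)·ln(p_high/p_low), and the Carnot limit 1 − T_cold/T_hot. The 900 K / 300 K stack temperatures and the 10:1 pressure ratio are made-up operating points for illustration:

```python
# Back-of-the-envelope JTEC numbers (illustrative assumptions, not vendor data).
import math

R = 8.314    # J/(mol*K), universal gas constant
F = 96485.0  # C/mol, Faraday constant

def nernst_voltage(t_kelvin, p_high, p_low):
    """Open-circuit voltage of a hydrogen concentration cell (2 electrons per H2)."""
    return (R * t_kelvin) / (2.0 * F) * math.log(p_high / p_low)

def carnot_limit(t_hot, t_cold):
    """Maximum efficiency shared by ideal Carnot and Ericsson cycles."""
    return 1.0 - t_cold / t_hot

# Assumed operating point: 900 K hot stack, 300 K cold stack, 10:1 H2 pressure ratio.
t_hot, t_cold, ratio = 900.0, 300.0, 10.0
v_hot = nernst_voltage(t_hot, ratio, 1.0)    # hot MEA generates at this voltage
v_cold = nernst_voltage(t_cold, ratio, 1.0)  # cold MEA consumes at this voltage

print(f"hot stack:  {v_hot * 1000:.1f} mV")
print(f"cold stack: {v_cold * 1000:.1f} mV")
print(f"net output: {(v_hot - v_cold) * 1000:.1f} mV per hydrogen pass")
print(f"Carnot/Ericsson limit: {carnot_limit(t_hot, t_cold):.1%}")
```

Because the Nernst voltage is proportional to absolute temperature at a fixed pressure ratio, the ratio (v_hot − v_cold)/v_hot works out to exactly 1 − T_cold/T_hot, which is why the ideal JTEC tracks the Carnot limit.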
(from a post I made July 7, 2010 during the BP oil spill debacle of the Deepwater Horizon as it spewed zillions of cubic meters of crude oil and dispersants into the Gulf of Mexico – damn ridiculous mess they created.)
Nuclear reactors are classified by several methods; a brief outline of these classification schemes follows.
Classification by type of nuclear reaction
- Nuclear fission. All commercial power reactors are based on nuclear fission. They generally use uranium and its product plutonium as nuclear fuel, though a thorium fuel cycle is also possible. Fission reactors can be divided roughly into two classes, depending on the energy of the neutrons that sustain the fission chain reaction:
- Thermal reactors use slowed or thermal neutrons. Almost all current reactors are of this type. These contain neutron moderator materials that slow neutrons until they are thermalized, that is, until their kinetic energy approaches the average kinetic energy of the surrounding particles. Thermal neutrons have a far higher cross section (probability) of fissioning the fissile nuclei uranium-235, plutonium-239, and plutonium-241, and a relatively lower probability of neutron capture by uranium-238 (U-238), compared to the faster neutrons that originally result from fission, allowing use of low-enriched uranium or even natural uranium fuel. The moderator is often also the coolant, usually water under high pressure to increase the boiling point. These are surrounded by a reactor vessel, instrumentation to monitor and control the reactor, radiation shielding, and a containment building. (A short numerical sketch after this list shows just how large the energy gap between fission-born and thermalized neutrons is.)
- Fast neutron reactors use fast neutrons to cause fission in their fuel. They do not have a neutron moderator, and use less-moderating coolants. Maintaining a chain reaction requires the fuel to be more highly enriched in fissile material (about 20% or more) due to the relatively lower probability of fission versus capture by U-238. Fast reactors have the potential to produce less transuranic waste because all actinides are fissionable with fast neutrons, but they are more difficult to build and more expensive to operate. Overall, fast reactors are less common than thermal reactors in most applications. Some early power stations were fast reactors, as are some Russian naval propulsion units. Construction of prototypes is continuing (see fast breeder or generation IV reactors).
- Nuclear fusion. Fusion power is an experimental technology, generally with hydrogen as fuel. While not suitable for power production, Farnsworth-Hirsch fusors are used to produce neutron radiation.
- Radioactive decay. Examples include radioisotope thermoelectric generators as well as other types of atomic batteries, which generate heat and power by exploiting passive radioactive decay.
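As flagged in the thermal-reactor item above, here is a small Python sketch – my own illustration built from textbook constants, not from this document or any reactor design data – comparing a fully thermalized neutron in a room-temperature moderator with a fresh fission neutron at about 2 MeV:

```python
# What "thermalized" means numerically: a neutron in thermal equilibrium with
# a room-temperature moderator. Textbook constants; non-relativistic speeds.
import math

K_B = 8.617e-5         # eV/K, Boltzmann constant
M_NEUTRON = 1.675e-27  # kg, neutron mass
EV_TO_J = 1.602e-19    # J per eV

def thermal_energy_ev(t_kelvin):
    """Most probable kinetic energy of a Maxwellian neutron population, E = kT."""
    return K_B * t_kelvin

def neutron_speed(energy_ev):
    """Speed for a given kinetic energy, v = sqrt(2E/m)."""
    return math.sqrt(2.0 * energy_ev * EV_TO_J / M_NEUTRON)

e_thermal = thermal_energy_ev(293.0)  # room-temperature moderator, ~0.025 eV
e_fission = 2.0e6                     # ~2 MeV, typical fresh fission neutron

print(f"thermal neutron: {e_thermal:.4f} eV, ~{neutron_speed(e_thermal):,.0f} m/s")
print(f"fission neutron: {e_fission:.1e} eV, ~{neutron_speed(e_fission):,.0f} m/s")
```

The moderator’s whole job is to walk each neutron down those roughly eight orders of magnitude in energy through repeated scattering collisions, because the fission cross section of U-235 is far higher at the bottom of that range.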
In nuclear physics, an energy amplifier is a novel type of nuclear power reactor, a subcritical reactor, in which an energetic particle beam is used to stimulate a reaction, which in turn releases enough energy to power the particle accelerator and leave an energy profit for power generation. The concept has more recently been referred to as an accelerator-driven system (ADS).
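To see why there is an energy profit at all, here is a hedged back-of-the-envelope sketch – assuming the textbook subcritical relation that each source neutron sustains a finite chain of about k_eff/(1 − k_eff) descendant neutrons, plus ballpark figures (roughly 30 spallation neutrons per 1 GeV proton, ν ≈ 2.5 neutrons per fission, ~200 MeV per fission); none of these numbers come from this document:

```python
# Rough energy gain of an accelerator-driven system (ADS) / energy amplifier.
# All inputs are ballpark textbook figures, not a specific machine's numbers.

def ads_energy_gain(k_eff, spallation_neutrons=30.0, nu=2.5,
                    mev_per_fission=200.0, beam_mev=1000.0):
    """Fission energy released per unit of beam energy delivered.

    In a subcritical core each source neutron produces a finite chain of
    k + k^2 + ... = k/(1-k) descendant neutrons, i.e. about k/((1-k)*nu)
    fissions, since each fission emits nu neutrons.
    """
    if k_eff >= 1.0:
        raise ValueError("an ADS core must stay subcritical (k_eff < 1)")
    fissions_per_source_neutron = k_eff / ((1.0 - k_eff) * nu)
    fissions_per_proton = spallation_neutrons * fissions_per_source_neutron
    return fissions_per_proton * mev_per_fission / beam_mev

for k in (0.95, 0.97, 0.98):
    print(f"k_eff = {k:.2f}: ~{ads_energy_gain(k):.0f}x the beam power out as heat")
```

With k_eff around 0.98 the gain lands near 100x, roughly the regime the energy-amplifier proposals describe – enough margin to run the accelerator and still generate net power – and the core can never run away, because switching off the beam stops the chain.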
And I wanted to include one other reactor type, which is amazing –
(But those aren’t it. It does have interesting insight into the nuclear fission work underway, though.)
This is the best little search for some good images of some of it – still not what I was trying to find –
It’s this nifty experimental fission-powered reactor – it seems like the pictures of it I’ve seen are from somewhere not in America – Switzerland, Denmark, France, or Russia maybe –
I’ll look through my docs to decipher where it is from and what it is called – very, very nifty, though. You just gotta see it and what it does. Really amazing.
The whole approach is different. Yep, really amazing. And, the results are spectacular, at least I think so.
Later . . .