It is actually the conversion that we want: the transfer of one form of energy into another, such as electricity, heat, light, or motion, along with control of it, which is what our systems are designed to do. The idea that a system must produce more energy than it consumes to be viable is a misleading concept at best. A hydro-electric system does not put out more energy than goes in. In fact, less energy is converted into electrical power than the water's motion would yield if calculated directly, because energy is lost to the conversion at every stage of the process.
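The hydro-electric point can be made concrete with the standard head-and-flow formula. A minimal sketch, with illustrative numbers rather than figures from any particular plant:

```python
# Hydroelectric output: the electrical power recovered is always a
# fraction of the ideal potential-energy rate of the falling water.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_watts(flow_m3_s, head_m, efficiency):
    """Electrical power = efficiency x (rho * g * Q * h)."""
    ideal = RHO * G * flow_m3_s * head_m  # ideal conversion rate, watts
    return efficiency * ideal, ideal

# Illustrative: 100 m^3/s falling through a 50 m head at 90% efficiency.
electrical, ideal = hydro_power_watts(100.0, 50.0, 0.90)
print(f"ideal: {ideal / 1e6:.1f} MW, electrical: {electrical / 1e6:.1f} MW")
# Less comes out as electricity than the water's motion carries in.
```

The 90% efficiency is an assumption on the optimistic end; real plants also lose power in penstocks, transformers, and transmission.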
The systems we have for producing electricity (and other forms of power, such as gasoline to run vehicles and machinery) are literally transferring one type of energy into another. That is true for every "energy source" we currently use, at least in the manner we are currently using them.
At ITER and at Livermore, the energy conversions require mega-power going in, and after their "conceptual" processing, the power coming out is about enough to light a few lightbulbs and run a few toasters. It does so at a cost of several hundred billion (with a "b") dollars across the experimental phases, the startup capital construction, and the operating life. That compares with the $10-billion-plus steam-kettle systems we've designed and called "nuclear power," which use uranium rods to heat water into steam for turning the turbines.
In nature, considering that we haven't had to go to the sun and stir it even once to keep the fusion reactions occurring, at least not in any lifetime found in science or mythic literature, there is something to be said for creating similarly dynamic and robust systems. These, in my opinion, maintain an asymmetrical equilibrium that continuously feeds both the reactions and the physics-based attributes of the system that make the reactions viable within it. We haven't tried to do that in our system and process designs thus far.
As a result of our approach to power "generation," which is simply to convert one potential or kinetic energy source into another (electricity, for instance) using fairly symmetric, stable, forced system dynamics in a linear configuration, the power lost at each stage is tremendous. The system itself depletes some of those energy conversions rather than helping, by its basic design and concept, to sustain the conversion.
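The stage-by-stage loss compounds multiplicatively in a linear chain, which a few lines of arithmetic make plain. The per-stage efficiencies below are illustrative assumptions, not measurements from any specific plant:

```python
from math import prod

# Losses compound: the overall efficiency of a linear conversion chain
# is the product of every stage's efficiency, so it is always worse
# than the weakest single stage.
stages = {
    "boiler (fuel -> steam)": 0.88,
    "turbine (steam -> shaft)": 0.45,
    "generator (shaft -> electricity)": 0.98,
    "transmission (plant -> user)": 0.94,
}

overall = prod(stages.values())
print(f"overall efficiency: {overall:.1%}")  # roughly a third survives
for name, eff in stages.items():
    print(f"  {name}: {eff:.0%}")
```

Under these assumed numbers, nearly two-thirds of the original fuel energy never reaches the user, which is the point being made above about linear conversion chains.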
( – cricketdiane, 06-27-11)
So, what if –
We design a dynamic fusion system based upon a mechanically asymmetric process.
That the attempt is not made to convert through steam, fire, temperature changes, or the kinetic energy of motion from water, air, or steam.
That the conversion be made more directly, since only the stripped electrons at the molecular and atomic level are actually desirable or needed from the system.
And that those harnessing mechanisms be controlled and safely contained along with all possible by-products of the reaction. It isn't safe to have neutrons running around loose. The by-products must be contained within the system and, where possible, harnessed, both to manage them safely and to gain whatever work can be derived from them.
(Note: the word "mechanically" is used in the sense of physics, not in the sense of physical mechanical substrates within the system.)
The list above did not include geothermal power sources, which are both actively generating power today and, it is safe to assume, will be used to generate electric power in the future. These sources simply use a different initializing source for the temperature change to steam that turns the turbines to generate electricity. These systems tap the earth like a hot spring to engineer the steam desired. They are very effective, but not as well funded and pursued as other "energy sources."
As I said, every one of these systems has drawbacks that are not included in this discussion. The idea that we need another option is fairly well grounded in reality, and there are many good reasons for it.
When I first started looking at the options for fusion energy / electricity generation, I thought the first line of options would include radioactive decay from uranium and similar elements, which, I discovered, had been tried with good results but very dangerous drawbacks. There was no control mechanism that could be adequately safe for harnessing those decay particles to gain the electrical "work" available in them. They are dangerous, unstable, and unfriendly to human beings and human life, as well as unpredictable within a very slim margin of error. They were used for some lighthouses and other similar direct-energy products, including onboard power for satellites and the space program's interstellar research missions. These sources rely on a more direct approach to harnessing the energy but are so dangerous as to be impractical on any massive scale. It has been tried.
Meanwhile, our sun keeps making fusion reactions a multitude of times per second across its entire surface and through its gaseous depths, perhaps several Earths deep or more, without one bit of help from us, and its initiation began in the ultra-cold vacuum of space. When a lightning bolt strips through the air on our planet, it is neither in a high-pressure, confined and contained environment nor in a high-temperature one. Its entire system requires neither the extreme pressures used in our labs nor an intensely high-temperature environment to occur. Admittedly, lightning is a direct-current system; however, it fuses, alters, fissions, and changes atoms along its path, as well as impacting the surrounding air molecules in chain reactions as a result.
– ** –
So, on a scale of 1 to 10 – with 10 being the most efficient of the lot –
10 would have to go to the sun and other naturally occurring fusion and energy conversion systems.
and 1 or 2 would have to encompass where our designed systems are operating.
And in the meantime, energy needs are being fulfilled by fairly traditional sources: steam or falling water turning turbines to make electricity; coal-fired and petroleum-fired power plants doing it nearly the same way, with nothing more than a different starting fuel making fire to make steam to turn the turbines; and nuclear rods in nuclear power plants doing basically the same thing.
June 27, 2011
And this from the same post which is a quote from a CNN article about fusion recently –
“Washington is comfortable that this technology provides no opportunities “for nuclear proliferation or advancement of other country’s weapons capability,” said Dunne. The development of commercial fusion, he says, has no defense applications.”
Our engineers, scientists, business people and decision-makers have been passing up tangible choices that work, especially for the last 40 years. Based on profitability analyses, cost-to-benefit ratios, and probabilistic risk assessment scenarios – decisions have been made to exclude, refuse, reject, de-fund and deny funding to possible and viable solutions. That is true with every single area of engineering, every type of science, every product, architecture, civil engineering, construction choice, business and especially the energy industry.
I noticed this (below) yesterday in an article from the Union of Concerned Scientists, among their suggestions to make nuclear power plants safer, and it reminded me of things I had seen in engineering books about the cost-to-benefit choices made constantly throughout the process. It is obvious that those kinds of choices are being made at every level and in every arena, and at the same time they serve to exclude any number of better choices in materials, construction methods, science, avenues of research, and the integrity of engineering as it is applied. They could have set aside numerous methods for fusion that might work, for instance, or stolen funding from those avenues of research and literally shelved those other options. They have certainly done this with geothermal, and hindered solar, wind, and wave/current possibilities.
The NRC should increase the value it assigns to a human life in its cost-benefit analyses so the value is consistent with other government agencies.
The NRC should require plant owners to calculate the risk of fuel damage in spent fuel pools as well as reactor cores in all safety analyses.
The NRC should not make decisions about reactor safety using probabilistic risk assessments (PRAs) until it has corrected its flawed application of this tool.
Union of Concerned Scientists
I was looking at a bladeless turbine invented by Tesla earlier today, and I was thinking: what if these new materials we have today were used for it? Hmmm………
How many things would be like that?
And I noticed this the other day on France24: CO2 capture used for piping into greenhouses by a company named Hortichuelas, for industrial tomato farms and such. That is pretty brilliant. They had a tank of CO2, captured from some other industry, being fed into the greenhouse through pipes running alongside the plants, which made them grow bigger and stronger than they would without it. Amazing.
It remains a possibility that things have already been created which at some point were unfunded and set aside. The analyses that supported those choices at the time didn't have the advantage of the manufacturing possibilities we have now, nor today's processes and materials, nor the higher costs now associated with some of the things they used for comparison. It would be great to have another look at some of those things with a different mindset.
How many of the ways that are in use today have been protected from competition by any other source as well? How often have corners been cut to produce cheaper at the expense of safer? And, how much of that still must be tolerated today?
On my last post, I noted a discussion from wikipedia authors about cold fusion and the Pons and Fleischmann experiment results.
(found here – )
What it means is that, through a sleight of hand by the science community, the entire avenue of research has been denied; yet apparently there were some results worthy of consideration from their experiments.
So, what we have now are research groups following lines of massive projects for fusion, with massive power going into their mechanisms to create that fusion potential, the promise that in another thirty years it will be available, and massive funding wrapped up in them. There is also apparently some measure of disagreement between the fields of chemistry and physics about atomic "stuff". At some point, the different understandings of the scientific specialties cross over the same paths when it comes to atomic and molecular research, knowledge, and funding.
The other thing is that, having found (as I noted in the last post) that universities are teaching even their psychology classes that cold fusion is an inherently delusional popular "myth," while failing to present the actual evidence about it in its entirety, it occurs to me that this has hindered, shunned, and prevented many other avenues of thought about fusion generally, and about generating power from anything other than what is already being used.
Astrophysical reaction chains
The most important fusion process in nature is the one that powers stars. The net result is the fusion of four protons into one alpha particle, with the release of two positrons, two neutrinos (which changes two of the protons into neutrons), and energy, but several individual reactions are involved, depending on the mass of the star. For stars the size of the sun or smaller, the proton-proton chain dominates. In heavier stars, the CNO cycle is more important. Both types of processes are responsible for the creation of new elements as part of stellar nucleosynthesis.
At the temperatures and densities in stellar cores the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm³), the energy release rate is only 276 μW/cm³—about a quarter of the volumetric rate at which a resting human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates strongly depend on temperature (exp(−E/kT)), achieving reasonable energy production rates in terrestrial fusion reactors requires 10–100 times higher temperatures (compared to stellar interiors): T ≈ 0.1–1.0 GK.
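The comparison in the excerpt above is easy to check with rough numbers. A quick sketch, assuming a resting human body dissipates about 100 W over roughly 65 liters (both assumptions of mine, not figures from the excerpt):

```python
# Solar-core volumetric power vs. a resting human body.
SOLAR_CORE_UW_PER_CM3 = 276.0        # from the excerpt, microwatts/cm^3

body_watts = 100.0                    # assumed resting metabolic output
body_volume_cm3 = 65_000.0            # assumed body volume, ~65 liters

body_uw_per_cm3 = body_watts / body_volume_cm3 * 1e6
ratio = SOLAR_CORE_UW_PER_CM3 / body_uw_per_cm3
print(f"body: {body_uw_per_cm3:.0f} uW/cm^3, sun core / body = {ratio:.2f}")
# The core's volumetric power really is a small fraction of a body's:
# the sun shines by sheer bulk, not by an intense local reaction rate.
```

With these assumed body figures the ratio lands near the "about a quarter" quoted above; the exact fraction depends on the body mass and metabolic rate assumed.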
Slide show of saltwater being turned into fire by the invention of a Florida man, from 2007 –
A Florida man may have accidentally invented a machine that could solve the gasoline and energy crisis plaguing the U.S., WPBF News 25 reported.
Fla. Man Invents Machine To Turn Water Into Fire
POSTED: 1:22 pm EDT May 24, 2007
UPDATED: 2:53 pm EDT May 24, 2007
Kanzius said the flame created from his machine reaches a temperature of around 3,000 degrees Fahrenheit. He said a chemist told him that the immense heat created from the machine breaks down the hydrogen-oxygen bond in the water, igniting the hydrogen.
“You could take plain salt water out of the sea, put it in containers and produce a violent flame that could heat generators that make electricity, or provide other forms of energy,” Kanzius said.
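One caution worth keeping in mind with any water-as-fuel scheme: splitting water costs at least as much energy as burning the freed hydrogen gives back, by the first law of thermodynamics. A back-of-envelope check, using the standard enthalpy of formation of liquid water (about 285.8 kJ/mol):

```python
# First-law check on burning hydrogen obtained from water.
# Burning one mole of H2 back into liquid water releases ~285.8 kJ;
# splitting that mole of water costs at least the same 285.8 kJ.
KJ_PER_MOL_H2O = 285.8

energy_to_split = KJ_PER_MOL_H2O       # minimum input, ideal case
energy_from_burning = KJ_PER_MOL_H2O   # maximum recovered, ideal case
net = energy_from_burning - energy_to_split
print(f"best-case net energy per mole: {net} kJ")
# Zero even before real-world losses: the flame can deliver heat where
# it is wanted, but it cannot be a net energy source by itself.
```

So the radio-frequency machine is interesting as a way of releasing and directing combustion, not as a way around the energy books balancing.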
The Farnsworth–Hirsch Fusor, or simply fusor, is an apparatus designed by Philo T. Farnsworth to create nuclear fusion. It has also been developed in various incarnations by researchers including Elmore, Tuck, and Watson, and more lately by George Miley and Robert W. Bussard. Unlike most controlled fusion systems, which slowly heat a magnetically confined plasma, the fusor injects “high temperature” ions directly into a reaction chamber, thereby avoiding a considerable amount of complexity. The approach is known as inertial electrostatic confinement.
Farnsworth’s original fusor designs were based on cylindrical arrangements of electrodes, like the original multipactors. Fuel was ionized and then fired from small accelerators through holes in the outer (physical) electrodes. Once through the hole they were accelerated towards the inner reaction area at high velocity. Electrostatic pressure from the positively charged electrodes would keep the fuel as a whole off the walls of the chamber, and impacts from new ions would keep the hottest plasma in the center. He referred to this as inertial electrostatic confinement, a term that continues to be used to this day. (etc.)
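The "high temperature" in scare quotes above has a simple meaning: an ion falling through the grid potential V gains kinetic energy qV directly, no bulk heating required. A minimal sketch, with the grid voltage as an illustrative assumption (typical of research and hobby fusors):

```python
# Inertial electrostatic confinement: ion energy from the grid voltage.
K_BOLTZMANN = 1.380649e-23   # J/K
E_CHARGE = 1.602176634e-19   # C, charge of a singly ionized fuel ion

grid_voltage = 100_000.0     # volts, assumed grid potential

ion_energy_j = E_CHARGE * grid_voltage          # kinetic energy gained
equivalent_temp_k = ion_energy_j / K_BOLTZMANN  # E = kT equivalent

print(f"ion energy: {grid_voltage / 1000:.0f} keV, "
      f"equivalent temperature: {equivalent_temp_k:.2e} K")
# ~1e9 K equivalent, which is why electrostatically accelerated ions
# count as "high temperature" without heating any bulk plasma.
```

This is the complexity the fusor avoids: instead of slowly heating a confined plasma to such temperatures, it gives each injected ion the full energy in a single electrostatic drop.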
Also from that entry –
Regardless of its possible use as an energy source, the fusor has already been demonstrated as a viable neutron source. Fluxes are not as high as can be obtained from nuclear reactor or particle accelerator sources, but are sufficient for many uses. Importantly, the neutron generator easily sits on a benchtop, and can be turned off at the flick of a switch. A commercial fusor was developed as a non-core business within DaimlerChrysler Aerospace – Space Infrastructure, Bremen between 1996 and early 2001. After the project was effectively ended, the former project manager established a company called NSD-Fusion.
- Bennett, W. H., U.S. Patent 3,120,475, February 1964. (Thermonuclear power)
- P.T. Farnsworth, U.S. Patent 3,258,402, June 1966 (Electric discharge — Nuclear interaction)
- P.T. Farnsworth, U.S. Patent 3,386,883. June 1968 (Method and apparatus)
- Hirsch, Robert, U.S. Patent 3,530,036. September 1970 (Apparatus)
- Hirsch, Robert, U.S. Patent 3,530,497. September 1970 (Generating apparatus — Hirsch/Meeks)
- Hirsch, Robert, U.S. Patent 3,533,910. October 1970 (Lithium-Ion source)
- Hirsch, Robert, U.S. Patent 3,655,508. April 1972 (Reduce plasma leakage)
- P.T. Farnsworth, U.S. Patent 3,664,920. May 1972 (Electrostatic containment)
- R.W. Bussard, “Method and apparatus for controlling charged particles”, U.S. Patent 4,826,646, May 1989 (Method and apparatus — Magnetic grid fields).
- R.W. Bussard, “Method and apparatus for creating and controlling nuclear fusion reactions”, U.S. Patent 5,160,695, November 1992 (Method and apparatus — Ion acoustic waves).
From the “also see” resources list at the bottom of the page –
This was another design that had some merit, but defunding made continued work with it nonexistent –
Migma was a proposed inertial electrostatic confinement fusion reactor designed by Bogdan Maglich in the early 1970s. Migma uses self-intersecting beams of ions from small particle accelerators to force the ions to fuse. It was an area of some research in the 1970s and early 1980s, but lack of funding precluded further development. (etc.)
More about the Migma – (from the same entry)
Two primary approaches have developed to attack the fusion energy problem. In the inertial confinement approach the fuel is quickly squeezed to extremely high densities, increasing the internal temperature in the process. There is no attempt to maintain these conditions for any period of time; the fuel explodes outward as soon as the force is released. The confinement time is on the order of nanoseconds, so the temperatures and density have to be very high in order for any appreciable amount of the fuel to undergo fusion. This approach has been successful in producing fusion reactions, but to date the devices that can provide the compression, typically lasers, require more energy than the reactions produce.
In the more widely studied magnetic confinement approach, the plasma, which is electrically charged, is confined with magnetic fields. The fuel is slowly heated until some of the fuel in the tail of the temperature distribution starts undergoing fusion. At the temperatures and densities that are possible using magnets the fusion process is fairly slow, so this approach requires long confinement times on the order of tens of seconds, or even minutes. Confining a gas at millions of degrees for this sort of time scale has proven difficult, although modern experimental machines are approaching the conditions needed for net power production.
The Migma approach avoided the problem of heating the mass of fuel to these temperatures by accelerating the ions directly in a particle accelerator. Accelerators capable of 100 keV are fairly simple to build, although in order to make up for various losses the energy provided is generally higher. Later Migma testbed devices used accelerators of about 1 MeV, fairly small compared to large research accelerators like the Tevatron, which is a million times more powerful.
The original Migma concept used two small accelerators arranged in a collider arrangement, but this reaction proved to have fairly low cross-sections and most of the particles exited the experimental chamber without colliding. Maglich’s concept modified the arrangement to include a powerful magnetic confinement system in the target area; ions injected into the center would orbit around the center for some time, thereby greatly increasing the chance that any given particle would undergo a collision given a long enough confinement time. It was not obvious that this approach could work, as positively charged ions would all orbit the magnetic field in the same direction. However, Maglich showed that it was nevertheless possible to produce self-intersecting orbital paths in such a system, and he was able to point to experimental results from the intersecting beams at CERN to back up the proposal with real-world numbers.
Several Migma experimental devices were built in the 1970s: the original in 1972, Migma II in 1975, Migma III in 1978, culminating with Migma IV in 1982. These devices were relatively small, only a few meters long along the accelerator beamline, with a disk-shaped target chamber about 2 m in diameter and 1 m thick. This device achieved a record fusion triple product (density × energy-confinement time × mean energy) of 4×10¹⁴ keV·s·cm⁻³ in 1982, a record that was not approached by a conventional tokamak until JET achieved 3×10¹⁴ keV·s·cm⁻³ in 1987.
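The two record figures quoted above are easy to put side by side, along with the commonly quoted ballpark for D-T ignition (roughly 3×10²¹ keV·s·m⁻³, i.e. 3×10¹⁵ in the per-cm³ units used here); the ignition threshold is my addition for context:

```python
# Fusion triple products (density x confinement time x mean energy),
# all in keV * s / cm^3, from the figures quoted in the text.
migma_iv_1982 = 4e14
jet_1987 = 3e14
dt_ignition_rough = 3e15   # commonly quoted D-T ballpark, my assumption

ratio = migma_iv_1982 / jet_1987
print(f"Migma IV / JET triple product: {ratio:.2f}")
print(f"Migma IV vs rough ignition threshold: "
      f"{migma_iv_1982 / dt_ignition_rough:.0%}")
# One caveat: the two devices reached their products with very
# different mixes of density, confinement time, and particle energy,
# so the single number doesn't make them directly comparable machines.
```

On these numbers, both records sat roughly an order of magnitude below the ignition ballpark, which is consistent with the funding fight described next.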
Maglich has been attempting to secure funding for a follow-on version for some time now, unsuccessfully. According to an article in The Scientist, Maglich has been involved in an apparently acrimonious debate with the various funding agencies since the 1980s.
And, a little more about the Cold Fusion experiment of Pons and Fleischmann (1989) –
Cold fusion refers to a proposed nuclear fusion process offered to explain a group of disputed experimental results first reported by electrochemists Martin Fleischmann and Stanley Pons. Proponents may prefer “Low Energy Nuclear Reaction” (LENR) or Chemically Assisted Nuclear Reaction (CANR) to avoid the negative connotations associated with the original name. The field originates with reports of an experiment by Martin Fleischmann, then one of the world’s leading electrochemists, and Stanley Pons in March 1989 where they reported anomalous heat production (“excess heat”) of a magnitude they asserted would defy explanation except in terms of nuclear processes. They further reported measuring small amounts of nuclear reaction byproducts, including neutrons and tritium. The small tabletop experiment involved electrolysis of heavy water on the surface of a palladium (Pd) electrode.
Hopes fell when replication failures were weighed in view of several reasons cold fusion is not likely to occur, the discovery of possible sources of experimental error, and finally the discovery that Fleischmann and Pons had not actually detected nuclear reaction byproducts.
In 1989, the majority of a review panel organized by the US Department of Energy (DOE) found that the evidence for the discovery of a new nuclear process was not persuasive enough to start a special program, but was “sympathetic toward modest support” for experiments “within the present funding system.” A second DOE review, convened in 2004 to look at new research, reached conclusions similar to the first. A small community of researchers continues to investigate cold fusion, claiming to replicate Fleischmann and Pons’ results including nuclear reaction byproducts.
In 1988, Fleischmann and Pons applied to the United States Department of Energy for funding towards a larger series of experiments. Up to this point they had been funding their experiments using a small device built with $100,000 out-of-pocket. The grant proposal was turned over for peer review, and one of the reviewers was Steven E. Jones of Brigham Young University. Jones had worked for some time on muon-catalyzed fusion, a known method of inducing nuclear fusion without high temperatures, and had written an article on the topic entitled “Cold nuclear fusion” that had been published in Scientific American in July 1987.
Materials Science Supplier of Experimental and New Materials –
Muon-catalyzed fusion (from wikipedia entry)
Muon-catalyzed fusion (μCF) is a process allowing nuclear fusion to take place at temperatures significantly lower than the temperatures required for thermonuclear fusion, even at room temperature or lower. Although it can be produced reliably with the right equipment and has been much studied, it is believed that the poor energy balance will prevent it from ever becoming a practical power source. However, if muons (μ−) could be produced more efficiently, or if they could be used as catalysts more efficiently, the energy balance might improve enough for muon-catalyzed fusion to become a practical power source.
Muons are unstable subatomic particles. They are similar to electrons, but are about 207 times more massive. If a muon replaces one of the electrons in a hydrogen molecule, the nuclei are consequently drawn 207 times closer together than they would be in a normal molecule. When the nuclei are this close together, the probability of nuclear fusion is greatly enhanced, to the point where a significant number of fusion events can happen at room temperature. Unfortunately, it is difficult to create large numbers of muons efficiently; moreover, the existence of processes that remove muons from the catalytic cycle mean that each muon can only catalyze a few hundred nuclear fusion reactions before it decays away. These two factors limit muon-catalyzed fusion to a laboratory curiosity, although there is some speculation that an efficient muon source could someday lead to a useful room-temperature fusion reactor.
Except for refinements such as these, little has changed in the half-century since Jackson’s assessment of the feasibility of muon-catalyzed fusion, other than Vesman’s prediction of the hyperfine resonant formation of the muonic (d-μ-t)+ molecular ion, which was subsequently experimentally observed. This helped spark renewed interest in the whole field of muon-catalyzed fusion, which remains an active area of research worldwide among those who continue to be fascinated and intrigued (and frustrated) by this tantalizing approach to controllable nuclear fusion that almost works. Clearly, as Jackson observed in his 1957 paper, muon-catalyzed fusion is “unlikely” to provide “useful power production… unless an energetically cheaper way of producing μ−-mesons can be found.”
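The "207 times closer" figure in the excerpt follows from simple Bohr-model scaling: the orbital radius goes as one over the reduced mass of the orbiting particle. A quick sketch of that scaling (masses from the standard values, in units of the electron mass):

```python
# Bohr-model scaling for muonic hydrogen: orbital radius ~ 1/(reduced
# mass), so replacing the electron with a muon shrinks the molecule by
# roughly the muon/electron mass ratio, corrected for nuclear recoil.
M_E = 1.0          # electron mass (unit)
M_MU = 206.77      # muon mass, in electron masses
M_P = 1836.15      # proton mass, in electron masses

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

shrink = reduced_mass(M_MU, M_P) / reduced_mass(M_E, M_P)
print(f"muonic orbit is ~{shrink:.0f}x smaller than the electronic one")
# ~186x once the reduced mass is used; the "207x" in the excerpt is
# the simpler bare mass ratio, which ignores the recoil correction.
```

Either way, the nuclei end up close enough that tunneling-driven fusion becomes likely at room temperature, which is the entire premise of μCF.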
Department of Energy Tools and Resources –