The nucleus of an atom, like most everything else, is more complicated than we first thought. Just how much more complicated is the subject of a Petascale Early Science project led by Oak Ridge National Laboratory’s David Dean.
According to findings outlined by Dean and his colleagues in the May 20, 2011, edition of the journal Physical Review Letters, researchers who want to understand how and why a nucleus hangs together as it does, and disintegrates when and how it does, have a very tough job ahead of them.
Until now, most nuclear structure calculations have modeled the forces in the nucleus as acting between pairs of particles at a time – the two-body force. Dean’s team, however, determined that the two-body force is not enough; researchers must also tackle the far more difficult challenge of calculating combinations of three particles at a time (three protons, three neutrons, or two of one and one of the other). This three-body approach yields results that are both different from and more accurate than those of the two-body force alone.
(definitely read this one – it includes a brief explanation of the forces known to be at work.)
DOE Energy Files Access Portal –
A note on one of my cards says –
Germany looking for 10GW (to replace nuclear power facilities)
Oh, and the best choice from the DOE Energy Portal – in my opinion – is this one –
(which offers multi-disciplinary tools) – and this one especially –
Federal R&D Project Summaries – Descriptions, awards, and summaries of federally funded research
And these –
Argonne Library’s Resources on the Internet – A repository of Internet sites for scientific research created and maintained by the library staff at Argonne National Laboratory
National Academies Press – The National Academies Press (NAP) was created by the National Academies to publish the reports issued by the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council, all operating under a charter granted by the Congress of the United States. The NAP publishes more than 200 books a year on a wide range of topics in science, engineering, and health.
Code of Federal Regulations – Government Printing Office (GPO) database containing text of public regulations issued by the agencies of the U.S. government
AND especially this one –
National Institute of Standards and Technology (NIST) – Information on products and services including reference materials and data, calibrations, standards information, and other services
AND this one –
Oak Ridge National Laboratory Technical Reports – Full text technical reports from Oak Ridge National Laboratory
These two software packages are interesting – however, the first is from 2000 and the second from 2005 (11 and 6 years old, respectively), so there are probably better ones now –
DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continua. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss, and spring/damper elements to allow maximum flexibility in modeling physical problems. Many material models are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects, and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single-surface contact, and automatic contact generation.
PACKAGE ID: 000138MLTPL01 DYNA3D2000*
KWIC Title: Explicit 3-D Hydrodynamic FEM Program
CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
PACKAGE ID: 000663SUN0002 CFDLIB05
KWIC Title: Computational Fluid Dynamics Library
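The cell-centered finite-volume idea in the CFDLib description above can be sketched in a few lines: cell averages of the conserved quantity are updated by the fluxes through the cell faces. This is only an illustration of the structure of such a scheme – 1-D linear advection with simple first-order upwind fluxes, not CFDLib's time- and space-centered ones – and all the names in it are my own, not CFDLib's:

```python
import numpy as np

def advect_fv(u, c, dx, dt, steps):
    """Cell-centered finite-volume update for 1-D linear advection
    (u_t + c*u_x = 0) on a periodic grid, using first-order upwind
    fluxes (valid for c > 0). Illustration of the structure only."""
    u = u.copy()
    for _ in range(steps):
        # flux entering each cell through its left face (upwind value)
        flux_in = c * np.roll(u, 1)
        # flux leaving each cell through its right face
        flux_out = c * u
        # conservative update of the cell averages
        u = u + dt / dx * (flux_in - flux_out)
    return u

# usage: advect a square pulse to the right (CFL = c*dt/dx = 0.5)
nx = 100
u0 = np.zeros(nx)
u0[10:20] = 1.0
u1 = advect_fv(u0, c=1.0, dx=0.01, dt=0.005, steps=20)
```

Because the same face flux leaves one cell and enters its neighbor, the total of the cell averages is conserved exactly – that is the "integral form of the conservation laws" the CFDLib description refers to.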
Arrangements for them have to be made through the Dept. of Energy Resource Portal here – or from the pages linked above –
However, there are probably better choices that are newer and handle information more effectively and efficiently.
Be aware that some software programs will average values as part of their processing. Whether or not this is a typical way to handle data, doing so alters the very measurements upon whose integrity nearly all of our scientific and engineering conclusions rest.
There was a nifty 3-D modeling software program being pushed to researchers through a lot of physics and science websites starting a few years ago. I was very excited about its ability to take large data arrays and model them – until I discovered, during a live web presentation by the company’s technical reps, that it averaged the values within the arrays as part of its paradigm for processing the information. So, behind the scenes – in the subroutines of the program – the data is actually being altered and then presented visually. I hated that about it and stopped having any interest in it.
And, here’s why –
First, I’ve found that this is a fairly common practice – both in culling real-world results data and in displaying those results with some of the modeling software that has been available over the years.

And, second – I considered what that would mean in even the simplest scenario I could think of – for instance, the cohesion values of concrete and cement. When I think of the impacts that averaging would have on that in particular, it is rather horrifying.
Third, that is probably part of what has resulted in unnecessary dangers to human life and safety in some construction choices. In the example of cohesion factors for cement and concrete, those values (which may have been averaged in a lab, science, or engineering environment for ease of handling) support decisions made at the design, construction, and financial decision-makers’ levels. So what if a building is designed, engineered, and then built using these altered values (however slightly altered) for the concrete being depended upon for strength and reliability? And what if the financiers then push to have corners cut further, citing as a margin of safety values that, viewed objectively, were already altered by averaging? There are then two places where the margin of safety supposedly built into the engineering and construction of these projects is narrowed – possibly to the point of exceeding the original margin entirely. Cohesion is not a small thing with no impact, whether in the explanations of what is happening at the atomic level within molecules or within the structural materials that make up most of our living and working structures, dams, levees, and other high-priority projects for the public good.
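Here is a small sketch of the effect I’m describing, with entirely invented numbers: a hypothetical array of concrete cohesion measurements containing a few weak batches, averaged in blocks the way some visualization software does internally. The averaged data’s minimum is higher than the true minimum, so any safety margin read off the averaged display is thinner than it looks:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical cohesion measurements (kPa) for 1000 concrete batches;
# most are fine, but a few weak outliers are planted in the data
cohesion = rng.normal(500.0, 30.0, size=1000)
cohesion[::100] -= 150.0

# what some visualization tools do behind the scenes:
# average blocks of samples before displaying them
block = 10
averaged = cohesion.reshape(-1, block).mean(axis=1)

print("true minimum:     %.1f kPa" % cohesion.min())
print("averaged minimum: %.1f kPa" % averaged.min())
# the averaged minimum is higher than the true minimum, so a design
# margin based on the displayed data overstates the weakest batch
```

The weak batches don’t disappear from the real world just because they disappear from the display – which is exactly the point about margins of safety above.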
I’m not naming the software in my description above, because the company would probably frown upon it. And it is apparently an all too common way of handling large data sets of specific experimental real-world results in order to model them, compute with them, or give visual interpretations of what the data is suggesting. However, the people choosing the software to be used by our labs would have to look specifically for whether a program does this averaging of values in its routines – my guess is that some do and some don’t. Some results are visually stunning but obviously wrong – and how much of that comes from this particular practice, even when the underlying equations have measured accuracy?
I noticed this – very interesting, too. –
Consequences are expressed numerically (e.g., the number of people potentially hurt or killed) and their likelihoods of occurrence are expressed as probabilities or frequencies (i.e., the number of occurrences or the probability of occurrence per unit time). The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities.
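That definition is just a sum of products – and it is easy to see its weakness, since a frequent minor incident and a rare catastrophe can carry the same expected loss. A tiny sketch, with invented scenario numbers:

```python
# expected loss per the definition above: sum of consequence * probability
# (the scenario numbers here are invented for illustration)
scenarios = [
    # (people potentially harmed, probability of occurrence per year)
    (10,     1e-2),  # frequent minor incident
    (1000,   1e-4),  # rare major accident
    (100000, 1e-6),  # very rare catastrophic failure
]

total_risk = sum(consequence * prob for consequence, prob in scenarios)
print("expected loss: %.2f people harmed per year" % total_risk)
```

All three scenarios contribute identically to the total, even though a society might reasonably treat a catastrophe harming 100,000 people very differently from a stream of minor incidents – which is part of the criticism of probabilistic risk assessment developed below.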
In the case of many accidents, probabilistic risk assessment models do not account for unexpected failure modes:
At Japan’s Kashiwazaki Kariwa reactors, for example, after the 2007 Chuetsu earthquake some radioactive materials escaped into the sea when ground subsidence pulled underground electric cables downward and created an opening in the reactor’s basement wall. As a Tokyo Electric Power Company official remarked then, “It was beyond our imagination that a space could be made in the hole on the outer wall for the electric cables.”
When it comes to future safety, nuclear designers and operators often assume that they know what is likely to happen, which is what allows them to assert that they have planned for all possible contingencies. Yet there is one weakness of the probabilistic risk assessment method that has been emphatically demonstrated with the Fukushima I nuclear accidents — the difficulty of modeling common-cause or common-mode failures:
And in its “References” section – it lists these two of importance, certainly –
- Centrale Nucléaire de Fessenheim : appréciation du risque sismique, RÉSONANCE Ingénieurs-Conseils SA, published 2007-09-05, accessed 2011-03-30
- M. V. Ramana (19 April 2011). “Beyond our imagination: Fukushima and the problem of assessing risk”. Bulletin of the Atomic Scientists. http://thebulletin.org/web-edition/features/beyond-our-imagination-fukushima-and-the-problem-of-assessing-risk
AND This –
(in another entry)
Cost–benefit analysis is often used by governments and others, e.g. businesses, to evaluate the desirability of a given intervention. It is an analysis of the cost effectiveness of different alternatives in order to see whether the benefits outweigh the costs (i.e. whether it is worth intervening at all), and by how much (i.e. which intervention to choose). The aim is to gauge the efficiency of the interventions relative to each other and the status quo.
The costs of an intervention are usually financial. The overall benefits of a government intervention are often evaluated in terms of the public’s willingness to pay for them, minus their willingness to pay to avoid any adverse effects. The guiding principle of evaluating benefits is to list all parties affected by an intervention and place a value, usually monetary, on the (positive or negative) effect it has on their welfare as it would be valued by them. Putting actual values on these is often difficult; surveys or inferences from market behavior are often used.
One source of controversy is placing a monetary value of human life, e.g. when assessing road safety measures or life-saving medicines. However, this can sometimes be avoided by using the related technique of cost-utility analysis, in which benefits are expressed in non-monetary units such as quality-adjusted life years. For example, road safety can be measured in terms of ‘cost per life saved’, without placing a financial value on the life itself.
CBA usually tries to put all relevant costs and benefits on a common temporal footing using time value of money formulas. This is often done by converting the future expected streams of costs and benefits into a present value amount using a suitable discount rate.
Risk associated with the outcome of projects is also usually taken into account using probability theory.
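The discounting step described above is simple arithmetic. A minimal sketch, with an invented intervention and an assumed 5% discount rate:

```python
def present_value(stream, rate):
    """Discount a stream of yearly amounts (year 0 first) to present value."""
    return sum(amount / (1.0 + rate) ** year
               for year, amount in enumerate(stream))

# illustrative intervention: pay 100 now, receive 30 per year for 5 years
costs = [100.0]
benefits = [0.0, 30.0, 30.0, 30.0, 30.0, 30.0]

rate = 0.05  # assumed discount rate
npv = present_value(benefits, rate) - present_value(costs, rate)
print("NPV at 5%%: %.2f" % npv)
```

Note how sensitive the answer is to the assumed discount rate: push the rate high enough and the same project’s NPV turns negative, which is one reason the choice of rate in a cost–benefit analysis is itself contested.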
A peer-reviewed study of the accuracy of cost estimates in transportation infrastructure planning found that for rail projects actual costs turned out to be on average 44.7 percent higher than estimated costs, and for roads 20.4 percent higher (Flyvbjerg, Holm, and Buhl, 2002). For benefits, another peer-reviewed study found that actual rail ridership was on average 51.4 percent lower than estimated ridership; for roads, it was found that for half of all projects estimated traffic was wrong by more than 20 percent (Flyvbjerg, Holm, and Buhl, 2005). Comparative studies indicate that similar inaccuracies apply to fields other than transportation. These studies indicate that the outcomes of cost–benefit analyses should be treated with caution because they may be highly inaccurate. Inaccurate cost–benefit analyses are likely to lead to inefficient decisions, as defined by Pareto and Kaldor–Hicks efficiency. These outcomes (almost always tending to underestimation unless significant new approaches are used) are to be expected because such estimates:
- Rely heavily on past like projects (often differing markedly in function or size and certainly in the skill levels of the team members)
- Rely heavily on the project’s members to identify (remember from their collective past experiences) the significant cost drivers
- Rely on very crude heuristics to estimate the money cost of the intangible elements
- Are unable to completely dispel the usually unconscious biases of the team members (who often have a vested interest in a decision to go ahead) and the natural psychological tendency to “think positive” (whatever that involves)
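As a crude “outside view” correction, one could apply the average biases quoted above directly to a new project’s estimates. This is only a sketch – the uplift percentages are the averages from the studies cited, the project numbers are invented, and a real reference-class forecast would use the full distribution of past outcomes rather than just the mean:

```python
# average biases from Flyvbjerg, Holm, and Buhl (2002, 2005), as quoted
COST_OVERRUN = {"rail": 0.447, "road": 0.204}
RAIL_RIDERSHIP_SHORTFALL = 0.514

def adjusted_estimate(project_type, est_cost, est_ridership=None):
    """Scale a project's own estimates by the historical average bias."""
    cost = est_cost * (1.0 + COST_OVERRUN[project_type])
    ridership = None
    if project_type == "rail" and est_ridership is not None:
        ridership = est_ridership * (1.0 - RAIL_RIDERSHIP_SHORTFALL)
    return cost, ridership

# usage: a hypothetical rail project estimated at 1000 (cost units)
# with 50,000 projected riders
cost, riders = adjusted_estimate("rail", est_cost=1000.0,
                                 est_ridership=50000.0)
print(cost, riders)
```

Even this blunt adjustment nearly halves the projected benefits while raising costs by almost half – enough to flip the sign of many marginal cost–benefit analyses.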
Another challenge to cost–benefit analysis comes from determining which costs should be included in an analysis (the significant cost drivers). This is often controversial because organizations or interest groups may think that some costs should be included or excluded from a study.
In the case of the Ford Pinto (where, because of design flaws, the Pinto was liable to burst into flames in a rear-impact collision), the Ford company’s decision was not to issue a recall. Ford’s cost–benefit analysis had estimated that, based on the number of cars in use and the probable accident rate, deaths due to the design flaw would cost about $49.5 million (the amount Ford would pay out of court to settle wrongful death lawsuits). This was estimated to be less than the cost of issuing a recall ($137.5 million). In the event, Ford overlooked (or considered insignificant) the costs of the negative publicity so engendered, which turned out to be quite significant (because it led to the recall anyway and to measurable losses in sales).
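Reduced to arithmetic, the comparison described above is trivial – which is exactly the problem, because everything left out of it can flip the answer. Here the dollar figures are the ones quoted in the text, while the reputational-damage figure is purely hypothetical:

```python
# the narrow comparison described above, in $ millions (figures as quoted)
expected_settlements = 49.5   # projected wrongful-death payouts, no recall
recall_cost = 137.5           # projected cost of issuing a recall

# the narrow cost-benefit analysis favors not recalling...
no_recall_wins = expected_settlements < recall_cost
print(no_recall_wins)

# ...but add even a rough (invented) figure for reputational damage and
# lost sales, and the ranking can flip
reputational_damage = 100.0   # hypothetical, for illustration only
recall_wins = expected_settlements + reputational_damage > recall_cost
print(recall_wins)
```

The analysis is only as good as its list of cost drivers – omit one large enough and the “rational” choice inverts.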
In the case of environmental and occupational health regulation, it has been argued that if modern cost-benefit analyses had been applied prospectively to proposed regulations such as removing lead from gasoline, not turning the Grand Canyon into a hydroelectric dam, and regulating workers’ exposure to vinyl chloride, these regulations would not have been implemented even though they are considered to be highly successful in retrospect. The Clean Air Act has been cited in retrospective studies as a case where benefits exceeded costs, but the knowledge of the benefits (attributable largely to the benefits of reducing particulate pollution) was not available until many years later.
My Note –
First, it looks like at one time these analysis forms were used to actually weigh the various choices, and at some point they instead became an intentional method to support or undermine predetermined choices. That is backwards, but it seems to have become common over the last thirty years – particularly the last twenty-five or so – and especially in America.
Second, it appears to me that the course taken by industry, businesses, and government – and often, too, by lobbies and industry-serving groups – detracts from “fixing” a known problem that could negatively affect people’s lives and safety and is known to be a continuing risk to them.
And, third, – I find no excuse for doing it that way.
And, fourth – It is not in the best interest of our society to do it that way, regardless of the cost to benefit analysis that supports doing it that way.
So, you might be wondering how the things at the beginning of this post have anything to do with the decision-making analysis forms that appear next –
The question that I was trying to answer about nuclear fusion, use of other power source alternatives such as geothermal power, and a number of other things – finally came down to – upon what basis are the decisions being made, who is making them, why are they believing those are the best choices for them to make and why are these decision-makers not considering any other choices as viable and appropriate?
And, secondly, the moment when a system is known to have a risk of causing massive harm, permanent harm and even, loss of life to people, why isn’t it changed immediately and appropriately? And, why does it take so long to change a known danger, once it is known? And, why isn’t something else adopted in a timely manner, once a known risk higher than anticipated is defined, studied, recognized and accepted?
(Okay – the question included a number of related questions. However, the impacts of the answers I found touch every single part of our modern society’s set of wonders and their dangers – from buildings and homes to airplanes and nuclear power, and so on – along with the decisions being made that impact all of us. When a civil engineer and the set of corporate contract holders directing decisions toward their desired outcome make choices for whatever reason, they impact the people walking by their project for many years after it is completed, they impact the people living and working in and around it, and they impact the health and safety of every life negatively touched by what they’re building.
If a bridge falls, people can be permanently maimed which impacts not only the community where they live, but each of their family members and their children for the remaining course of their lives. So, my question was – why would the civil engineers, local governments and businesses involved with construction and design of a bridge or the repair and replacement of a bridge treat it as nobody’s business but their own in how they go about it? (Just one example of many.)
When nuclear power is the only choice and it isn’t done safely, there is no way for the mind to grasp how many generations of people are impacted by it. That isn’t only the business of those involved with it as a business, nor simply for the regulators to serve the desires of that industry without further consideration of its potential costs to human lives and our society’s best interest. The same is true for the safety of our planes, our airline industry, our construction industry’s choices, our financial backers’ insistence on cutting corners in all sorts of things, and a multitude of other things (the auto industry, for instance, and others can be included as well).
The cost–benefit decision-tree models are effectively removing projects that would be beneficial to mankind and re-routing funding based upon those findings – regardless of how inappropriate it has become to do so, and at what cost to all of us. These formats are also being used to justify continuing, unchanged, things that are known to need to be changed. We have had countless situations where human lives were lost or permanently altered in the most horrific and negative ways as a result of not making timely changes where dangers had been identified (and, often, even when there were known solutions that could have been applied economically, effectively, and in a timely manner).
US Energy Information – Total Energy Used, Resourced, being Developed, in Reserve, etc. – Monthly Data – EIA – Total Energy
Just found this – from a twitter –
A vast fan-shaped compound in China has officially taken the title of “largest solar-powered office building in the world”. It is located in Dezhou, in the northwest of Shandong Province in eastern China.