
Uncertainty touches most aspects of life, especially when we make decisions that have consequences that we cannot predict. Leaving the house without an umbrella carries a risk because it could start to rain; investing in the stock market carries the risk of losing money. It is therefore natural that, whenever we make decisions with unpredictable outcomes, we weigh up the possible results and their risks and benefits. Of course, some decisions carry more severe risks than getting wet or losing money; the decision to approve a new drug or to ban certain chemicals in products can have far-reaching consequences for health, the environment, society and economies. In such cases, where the lives of others are at stake, decision-making and the handling of uncertainties have important ethical dimensions.

A prudent strategy to deal with this ethical challenge is to diminish uncertainty by acquiring knowledge of the issue. When it comes to decisions that affect people's lives and health—the regulation of potentially harmful substances or diagnostic tests to predict an individual's propensity to develop a severe disease—carrying out research to diminish uncertainty and, consequently, risks can become an ethical duty. If this is not possible—because decision-makers cannot wait for the relevant research or because the gaps in our knowledge are not accessible to scientific investigation—the precautionary principle is increasingly advocated and used as an alternative strategy for making decisions in the light of uncertainties. However, the application of the precautionary principle can itself create dangers (Wiedemann & Schütz, 2005)—so-called iatrogenic risks (Wiener, 1998)—that have to be weighed against the benefits of adopting it; the principle therefore also has a serious ethical dimension that needs to be considered.

In this viewpoint, we investigate the role of uncertainty in the field of practical ethics. This is a relatively new issue on the ethical research agenda, which emerged in the early twentieth century when scientists started to evaluate economic judgements and decisions from an ethical perspective (Knight, 1921; Luntley, 2003). However, the concept of uncertainty has been around for much longer; from Socrates and Plato onwards, philosophers have doubted whether scientific knowledge, no matter how elaborate, sufficiently reflects reality (Kant, 1783, 1787; Pörksen, 2002). They realized that the more insight we gain into the mysteries of nature, the more we become aware of the limits of our knowledge about how ‘things in themselves' are (Kant, 1783; Prauss, 1989). These limitations to our understanding also make it impossible to foresee future events or the effects and implications of decisions with certainty.

Any scientist knows that knowledge is never complete and that research can do no more than produce estimates of what we think is happening. Science, at least in part, is not about facts but about odds. Yet accepting and fully grasping this fundamental uncertainty is a conceptual challenge, and it is within this framework that we must make decisions of a moral nature. In his book Risk Society, Ulrich Beck concludes that “[r]isk calculations are the phenotype of the resurrection of ethics […] in economics, natural sciences and technical disciplines” (Beck, 1992). Uncertainty itself has no ethical quality—it is an inherent attribute of a situation. However, in a potentially dangerous situation, uncertainty can trigger ethically adjusted behaviour that aims to avoid dangers and diminish risks. To explain how ethics are relevant to uncertainty in such cases, we can draw a schematic map of the various forms of uncertainty, beginning with a distinction between our knowledge and ignorance of the probabilities of adverse impacts.

When it comes to decisions that affect people's lives and health […] carrying out research to diminish uncertainty and, consequently, risks can become an ethical duty

Our schematic approach, the ‘igloo of uncertainty' (Fig 1), which was partly inspired by Faber and co-workers (1992), mainly distinguishes between open and closed forms of both ignorance and knowledge. Within this framework, dangers are defined in terms of the possible outcomes of a given situation. To understand the potential adverse effects of a decision, we therefore require an estimate of the nature and severity of the dangers involved in any given event. Consequently, a rational approach is to estimate the probability that the respective event will happen, and to assess the hazard and its possible impact. Classical risk assessment then takes the product of the probability and the expected magnitude of the hazard to obtain a quantitative measure of risk. However, decision-making often depends both on such mathematical calculations and on moral considerations or other convictions, which risk assessment does not address. For example, regulations on the use of genetically modified crops in agriculture or on stem-cell research are clearly governed by ethical and societal considerations in addition to quantitative risk assessments.
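To make the arithmetic of classical risk assessment concrete, the sketch below computes such a quantitative risk measure as the product of probability and hazard magnitude; the event names, probabilities and severity scores are hypothetical and chosen purely for illustration.

```python
# Minimal sketch of classical quantitative risk assessment:
# risk = probability of the adverse event x expected magnitude of the hazard.
# All values below are hypothetical and purely illustrative.

events = {
    # event: (annual probability, hazard severity on an arbitrary 0-100 scale)
    "minor chemical spill": (0.20, 5),
    "major chemical spill": (0.01, 80),
}

for name, (probability, severity) in events.items():
    risk = probability * severity  # the product used as the quantitative risk measure
    print(f"{name}: risk score = {risk:.2f}")

# A rare but severe event and a frequent but mild event can receive similar
# scores, which is one reason why moral and societal considerations cannot
# simply be read off the numbers.
```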

Fig 1 | The igloo of uncertainty.

In this regard, it is important to distinguish between dangers and risks. A danger has a definable nature and a definable probability, and can therefore be avoided or counteracted. For example, car accidents that caused severe or deadly injuries prompted regulations mandating the installation and use of safety belts. By contrast, a risk can either be accepted by, or imposed on, a person. Driving without a safety belt is a self-accepted risk, whereas selling cars with faulty safety belts imposes a risk on unsuspecting buyers. This is the decisive difference between danger and risk: a danger is present regardless of our choices, whereas a risk is either voluntarily accepted or imposed (Luhmann, 1993; Bora, 2006).

When we know that a certain situation or decision will involve dangers and risks, it is a proactive and morally justifiable activity to reduce the gaps in our knowledge. However, although such gaps can often be successfully diminished by research, ignorance presents a greater challenge. If ignorance stems from a lack of knowledge that cannot be reduced—owing to the inherent randomness of the matter under study and/or the structure of our cognitive apparatus—it is called closed ignorance or ‘nescience': an absence of knowledge (Gross, 2007). Closed ignorance can also result from rejecting or ignoring available knowledge, which we refer to as the ‘Galileo effect'—inspired by the cardinal in Bertolt Brecht's play Life of Galileo, who refused to look through the telescope in order not to accept the knowledge that the planets revolve around the sun. Not surprisingly, the Galileo effect is itself a risk factor and increases danger, although it can be overcome: a change in attitude would transform closed ignorance into open ignorance, which can, at least in part, be addressed by learning or by research.

Science, at least in part, is not about facts but about odds

A prerequisite for turning danger into risk, either by accepting it or by being subjected to it, is acquiring knowledge about the danger, its nature and its probability. In this context, we can distinguish between closed and open knowledge with respect to risk—analogous to closed and open ignorance with respect to danger. In this case, closed knowledge means comprehensive knowledge or the certainty that the adverse event will happen in any case. For example, driving at 200 km/h without a safety belt generally means death in an accident. Under these circumstances, the most responsible and rational behaviour would be either to use a safety belt or to avoid the situation altogether.

Open knowledge, by contrast, means that there is sufficient information available to perform a risk assessment, and to give rational and responsible advice, such as requiring people to wear safety belts and imposing speed limits. However, a notable amount of ignorance remains that clearly distinguishes a ‘risky' situation from a non-risky one (Fig 1).

An ethically responsible strategy to address gaps in knowledge and, therefore, uncertainties about possible outcomes requires insight into the particular type of uncertainty. We therefore propose a ‘taxonomy of uncertainty' that recognizes two fundamental forms of uncertainty, both of which are divided into two further sub-forms (Fig 2). Each of the sub-forms describes a particular type of mismatch between the knowledge required and the knowledge available for rational decision-making.

The first form of uncertainty in this scheme is objective uncertainty, which can be further divided into epistemological uncertainty and ontological uncertainty (van Asselt & Rotmans, 2002). The former is caused by gaps in knowledge that can be closed by research. In this case, research becomes a moral duty that is required to avoid dangers or risks, to realize possible benefits, or to balance risks and benefits in a rational and responsible way. Still, given the need to make a decision at some point, decision-makers must both rely on existing knowledge and reflect on any remaining uncertainties. One strategy in this regard is a comparative risk assessment of similar situations. For example, the assessment of the health or environmental risks of a new chemical could draw on both existing knowledge about related compounds and information from safety tests.
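As a hedged illustration of such a comparative assessment, the sketch below estimates a toxicity threshold for an untested compound from structurally related compounds, weighted by an assumed similarity score; the compound names, similarity values and thresholds are invented for illustration only.

```python
# Hypothetical sketch of a comparative ('read-across') risk estimate: the
# unknown toxicity threshold of a new compound is approximated from related
# compounds, weighted by an assumed structural similarity score (0-1).
# Compound names, similarities and thresholds are invented for illustration.

related_compounds = [
    # (name, similarity to the new compound, known toxic threshold in mg/kg)
    ("compound A", 0.9, 120.0),
    ("compound B", 0.7, 200.0),
    ("compound C", 0.4, 450.0),
]

weighted_sum = sum(similarity * threshold for _, similarity, threshold in related_compounds)
total_weight = sum(similarity for _, similarity, _ in related_compounds)
estimated_threshold = weighted_sum / total_weight

print(f"Estimated toxic threshold: {estimated_threshold:.0f} mg/kg")
# Such an estimate narrows, but does not eliminate, the epistemological
# uncertainty; it should be refined by actual safety testing wherever possible.
```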

Conversely, ontological uncertainty is caused by the stochastic features of a situation, which usually involves complex technical, biological and/or social systems. Such complex systems are often characterized by nonlinear behaviour, which makes it impossible to resolve the uncertainties by deterministic reasoning and/or research (Shrader-Frechette, 1996). In such cases, fully rational decisions are impossible, and we therefore call the resulting decisions ‘quasi-rational'. The effects of interfering with financial markets or ecosystems, for example, are largely unpredictable; nevertheless, past experience and probabilistic reasoning at least provide some guidance on how such complex systems might react.
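To illustrate why such systems resist deterministic prediction, the sketch below repeatedly simulates a simple nonlinear model with small random shocks; the model and all of its parameters are invented for illustration, and the point is only that a distribution of outcomes, rather than a single prediction, is the best that probabilistic reasoning can offer.

```python
import random

# Illustrative Monte Carlo sketch (hypothetical model and parameters): a
# nonlinear system with small random shocks. Individual runs diverge, so only
# probabilistic statements about the final state are possible.

def simulate(steps=50, growth=3.7, shock=0.02):
    x = 0.5  # initial state, normalised to the range 0..1
    for _ in range(steps):
        x = growth * x * (1.0 - x)  # nonlinear (logistic) dynamics
        x = min(max(x + random.uniform(-shock, shock), 0.0), 1.0)  # random perturbation
    return x

outcomes = [simulate() for _ in range(10_000)]
mean_state = sum(outcomes) / len(outcomes)
p_low = sum(o < 0.2 for o in outcomes) / len(outcomes)

print(f"Mean final state: {mean_state:.2f}")
print(f"Estimated probability that the final state falls below 0.2: {p_low:.1%}")
```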

The second main form of uncertainty in our taxonomy is subjective uncertainty, which is characterized by an inability to apply appropriate moral rules. This type of uncertainty can lead to societal anxiety or conflict, which Émile Durkheim called ‘anomie' (Durkheim, 1996; originally published in 1893). Yet, even within a state of anomie, decisions have to be made. Again, we can distinguish between two sub-forms of subjective uncertainty. The first is uncertainty with respect to rule-guided decisions. It is caused by a lack of directly applicable moral rules, and we call such situations ‘moral uncertainties'. In this case, decision-makers have to fall back on more general moral rules and use them to deduce guidance for the particular situation in question. Examples of such general rules are Immanuel Kant's categorical imperative (1785) or the Hippocratic Oath taken by doctors. Unfortunately, deductions guided by general moral rules often provide the decision-maker with little satisfaction.

A prerequisite for turning danger into risk, either by accepting it or by being subjected to it, is acquiring knowledge about the danger…

The second sub-form is uncertainty with respect to intuition-guided decisions—that is, uncertainty in the moral rules themselves. In specific situations, we can make decisions only by relying on our intuition rather than on knowledge or on explicit or implicit moral rules. This means that we act on the basis of fundamental, pre-formed moral convictions in addition to experiential and internalized moral models. As with rule-guided decisions, a level of deduction is involved here, but in a subconscious and intuitive way. We call the decisions that stem from internalized experiences and moral values ‘intuitional'.
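The taxonomy described above can be condensed into a simple nested structure; the sketch below merely restates the forms and sub-forms of Fig 2 in code, with the wording taken from the text.

```python
# Compact restatement of the proposed taxonomy of uncertainty (Fig 2). Each
# sub-form describes a mismatch between the knowledge required and the
# knowledge available for rational decision-making.

taxonomy_of_uncertainty = {
    "objective uncertainty": {
        "epistemological uncertainty": "gaps in knowledge that can be closed by research",
        "ontological uncertainty": "stochastic features of complex systems; not resolvable by research",
    },
    "subjective uncertainty": {
        "uncertainty in rule-guided decisions ('moral uncertainty')":
            "no directly applicable moral rule; fall back on more general rules",
        "uncertainty in intuition-guided ('intuitional') decisions":
            "reliance on internalized experience and pre-formed moral convictions",
    },
}

for form, sub_forms in taxonomy_of_uncertainty.items():
    print(form)
    for sub_form, description in sub_forms.items():
        print(f"  - {sub_form}: {description}")
```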

The way in which the scientific method deals with knowledge and ignorance, according to the schematic view shown in Fig 1, creates practical ethical problems for making decisions in the face of uncertainty. Much of the research in chemistry, biology and medicine assesses the effects of a certain agent—be it a potentially hazardous substance, a new pharmaceutical or a medical therapy—on humans, animals and the environment. This is usually done in a defined but ultimately limited study, the results of which are extrapolated to the general population. To assess whether the observed effects are ‘real' or just random variation, researchers perform a statistical test of significance that is based on the comparison of a null hypothesis (that there is no effect) with an alternative hypothesis (that there is an effect; Neyman & Pearson, 1928). Although this procedure is strongly formalized and based on mathematical calculations, it can still go wrong in two ways if uncertainties cannot be eliminated or if available knowledge is rejected (Fig 1): a true null hypothesis can be rejected, or a false one accepted. Either error might have dire consequences. A ‘false positive'—wrongly rejecting the null hypothesis—might, for instance, lead to a safe and potentially life-saving drug being refused approval. Conversely, a ‘false negative'—wrongly accepting the null hypothesis—could create severe dangers for human and environmental health in the case of a hazardous chemical.
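The two error types can be made tangible with a small simulation; the effect size, sample size and decision threshold used below are arbitrary, purely illustrative choices rather than values from any particular study.

```python
import random
import statistics

# Illustrative simulation of false positives and false negatives in
# null-hypothesis significance testing. Effect size, sample size and the
# decision threshold are arbitrary, purely illustrative choices.

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

def effect_declared(true_effect, n=30, threshold=2.0):
    """Run one hypothetical study and report whether an effect is declared."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return abs(welch_t(control, treated)) > threshold

runs = 5_000
false_positive_rate = sum(effect_declared(true_effect=0.0) for _ in range(runs)) / runs
false_negative_rate = sum(not effect_declared(true_effect=0.5) for _ in range(runs)) / runs

print(f"False positives (no real effect, but an effect is declared): {false_positive_rate:.1%}")
print(f"False negatives (real effect, but none is declared):         {false_negative_rate:.1%}")
```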

This is where the precautionary principle is applied as a strategy to prevent incalculable possible dangers. As an epistemic principle, the precautionary principle deals with uncertainties in a proactive fashion (Peterson, 2006). It is therefore distinct from quantitative risk assessment, which requires at least open knowledge (Fig 1) to calculate the probabilities of possible adverse effects. Several international agreements, such as the Montreal Protocol on Substances that Deplete the Ozone Layer (1987), the Treaty on European Union (Maastricht Treaty, 1992) and the Stockholm Convention on Persistent Organic Pollutants (2001), therefore regard the precautionary principle as an approach to prevent harm where risk analyses cannot be performed. The Rio Declaration states it as follows: “Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation” (United Nations, 1992).

…deductions guided by general moral rules often provide the decision-maker with little satisfaction

The precautionary principle can help us to cope with open ignorance (Fig 1). However, research has shown that precautionary measures can have negative side effects because they might lower public trust by amplifying unfounded public risk perceptions (Wiedemann & Schütz, 2005). Therefore, in a state of uncertainty, the application of precautionary measures has to be weighed carefully against other possible outcomes, especially spurious anxieties and fears, and a general scepticism towards technological innovations. It is therefore important to keep in mind that the precautionary principle does not (Renn, 2007), and should not (Peterson, 2007), constitute a decision rule—it is instead a ‘state of mind' (Renn, 2007) that helps decision-makers to avoid false negatives and to be more sensitive to uncertainties, ambiguities and ignorance (Stirling, 2007).

In regulatory practice, the implementation of the precautionary principle is often problematic because of the discrepancy between the promise of scientific knowledge and the lack thereof in a specific case. This problem was termed the ‘uncertainty paradox' (van Asselt & Vos, 2006) and refers to the adoption of precautionary action in the light of insufficient scientific evidence with the concomitant request for scientific knowledge. As the precautionary principle is designed to deal with uncertainty, its application demonstrates the limits of science to provide reliable evidence of potential risks. Yet, whenever precautionary action is established, science is called on to deliver knowledge in order to assess potential risks (Weingart, 1999).

The new European chemicals regulation highlights the practical relevance of the uncertainty paradox. In 2007, the European Union regulation on the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) came into effect. It covers about 100,000 chemicals, of which 141 high-volume chemicals have so far been identified as priority substances for risk assessment. The aim of REACH is to improve the protection of human health and the environment through better and earlier identification of the potentially hazardous properties of chemical substances. Risk assessments of the substances that are produced and used must now be undertaken by manufacturers and importers, rather than by the public authorities that were previously obliged to do so.

REACH is an example of precautionary action to reduce potential unknown negative effects. At the same time, the regulatory framework is an example of the uncertainty paradox, because there is a discrepancy between the precautions taken to deal with uncertainties and the demand for more risk analysis of the respective chemicals. For instance, chemicals that must be authorized are substances “identified by scientific evidence as causing probable serious effects to humans or the environment” (European Union, 2006). It is debatable whether it is possible to achieve zero risk for about 100,000 chemicals that are in use in many conceivable combinations. This question belongs to the ethics of uncertainty insofar as the uncertainty paradox might be invoked intentionally to suggest that there are ‘risks' (that is, open knowledge; Fig 1), although the respective cases are still ‘dangers' (that is, open ignorance).

The precautionary principle and the uncertainty paradox share common ground with the so-called Collingridge Dilemma (Collingridge, 1980). This is “a methodological quandary in which efforts to control technology development face a double-bind problem: (1) an information problem: impacts cannot be easily predicted until the technology is extensively developed and widely used, and (2) a power problem: control or change is difficult when the technology has become entrenched” (Collingridge, 1980).

REACH is also an example of the Collingridge Dilemma because it assumes the worst-case scenario until science proves otherwise. At the same time, it has to renounce worst-case scenarios because it is not possible to ban all substances, many of which have been used for decades or are ubiquitous in the environment. One approach to resolving this dilemma is to estimate an expected value of harm or to recommend threshold concentrations. Many regulations of chemical use apply so-called maximum workplace concentration values, which define the highest level of exposure that is assumed to be harmless. This is especially problematic in the case of carcinogens, because even a single molecule might be sufficient to cause cancer.
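The sketch below shows how such a threshold value might be applied in practice, by comparing a time-weighted average exposure over a working shift with an assumed limit; the substance, the readings and the limit are invented for illustration and do not correspond to any actual regulatory value.

```python
# Hypothetical illustration of applying a maximum workplace concentration
# value: compare the time-weighted average (TWA) exposure over a shift with an
# assumed limit. The readings and the limit are invented for illustration only.

limit_mg_per_m3 = 5.0  # assumed maximum workplace concentration

measurements = [
    # (duration in hours, measured concentration in mg per cubic metre)
    (2.0, 3.5),
    (4.0, 6.0),
    (2.0, 2.0),
]

total_hours = sum(hours for hours, _ in measurements)
twa = sum(hours * concentration for hours, concentration in measurements) / total_hours

print(f"{total_hours:.0f}-hour time-weighted average: {twa:.2f} mg/m^3")
print("within the assumed limit" if twa <= limit_mg_per_m3 else "assumed limit exceeded")

# For carcinogens, such thresholds are conceptually problematic, because even
# a single molecule might be sufficient to cause cancer.
```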

In addition to addressing uncertainties in both the theory and practical implementation of the precautionary principle, there are cases in which adverse effects can be scientifically predicted with high probability or even absolute certainty. However, in these cases, people can deliberately reject knowledge and choose to remain ignorant of the dangers—this is the Galileo effect mentioned previously (Fig 1). A current example of this is the use of genetic testing to predict Huntington disease (HD), which is a rare inheritable neurological disorder affecting around eight people in every 100,000. HD results from a genetically programmed degeneration of cells in certain areas of the brain. The disease allele is dominant, which means that a child who has one parent with HD has a 50% chance of inheriting the gene and inevitably developing—and dying from—HD. There is at present no cure and no way to alter the course of HD. Life expectancy is generally between 10 and 25 years after the onset of obvious symptoms. Because the HD gene has been identified, it is possible to test whether individuals who are at risk carry the deleterious allele.
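The 50% figure follows from a simple Mendelian cross with a dominant disease allele; the sketch below enumerates the gamete combinations for an affected heterozygous parent and an unaffected parent, using hypothetical allele labels (H for the disease allele, h for the normal allele).

```python
from itertools import product

# Sketch of the Mendelian cross behind the 50% inheritance risk cited for HD.
# H = dominant disease allele, h = normal allele (hypothetical labels); one
# parent is an affected heterozygote (H h), the other is unaffected (h h).

affected_parent = ["H", "h"]
unaffected_parent = ["h", "h"]

children = ["".join(pair) for pair in product(affected_parent, unaffected_parent)]
p_affected = sum("H" in child for child in children) / len(children)

print(children)             # ['Hh', 'Hh', 'hh', 'hh']
print(f"{p_affected:.0%}")  # 50% -- and, because the allele is dominant, every
                            # carrier will eventually develop the disease
```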

It is difficult to decide whether to take this test. Some choose not to for numerous reasons, including the oppressive and emotional consequences of a positive result. This is justified both by the right to informational self-determination and the right to privacy. However, HD not only affects the individual, but also leads to behaviour that can threaten the health of others—for example, a higher risk of traffic accidents because of the neuromuscular disturbances that are a common symptom. On a side note, health insurance companies also claim that they have the right to know about a client's health status.

Uncertainties challenge the central claim of science: that all problems can, in principle, be solved by research

The individual consequences of knowing therefore support a comprehensive right not-to-know; however, society seems to have an opposing legitimate interest to know about the special medical, financial and social needs of HD-affected persons, according to a functionalist perspective (Parsons, 1951). Any advice based on ethical convictions about genetic testing for HD therefore has to weigh up the rights of the individual to self-determination and privacy, the duty of parents to care for a potentially HD-affected child, and the need for society to optimize medical treatment and minimize the costs of care for affected persons. A morally justified decision therefore requires a toolkit of ethical considerations that are able to handle certainties in such a case.

Uncertainties challenge the central claim of science: that all problems can, in principle, be solved by research. Many social, health and environmental issues, however, have proven so complex that it might never be possible to make reliable predictions about the effects of manipulating these systems.

This viewpoint is intended to highlight some important ethical considerations about the limitations of knowledge in the assessment of human health risks. Clearly, acting in a state of uncertainty can create ethical problems: ignorance caused by rejection of knowledge can lead to danger. However, knowledge can also lead to ethical problems: it can create risks if the exposed person decides to accept the threat, imposes it on another person or accepts that such a threat is imposed.

As we have shown, uncertainties about adverse effects can be categorized in a taxonomy of uncertainty (Fig 2). In some situations, these uncertainties might warrant the implementation of the precautionary principle. However, a responsible application of the precautionary principle in a state of uncertainty has to be considered carefully and specifically in every case with respect to all possible outcomes.

  • Beck U (1992) Risk Society: Towards a New Modernity. London, UK: Sage Publications Ltd
  • Bora A (2006) Risk, risk society, risk behaviour, and social problems. In Ritzer G (ed), The Blackwell Encyclopedia of Sociology, vol VIII, pp 3926–3932. Oxford, UK: Blackwell
  • Collingridge D (1980) The Social Control of Technology. New York, NY, USA: St Martin's Press
  • Durkheim É (1996) Über soziale Arbeitsteilung. Studie über die Organisation höherer Gesellschaften, 2nd edn. Frankfurt am Main, Germany: Suhrkamp (orig. 1893: De la division du travail social)
  • European Union (2006) Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH). Brussels, Belgium: European Union
  • Faber M, Manstetten R, Proops JLR (1992) Humankind and the environment: an anatomy of surprise and ignorance. Environ Values 1: 217–242
  • Gross M (2007) The unknown in process: dynamic connections of ignorance, non-knowledge and related concepts. Curr Sociol 55: 1–20
  • Kant I (1783) Prolegomena zu einer jeden künftigen Metaphysik, die als Wissenschaft wird auftreten können. Riga, Latvia: Friedrich Hartknoch
  • Kant I (1785) Grundlegung zur Metaphysik der Sitten. In Weischedel W (ed; 1968), Opus Edition: Immanuel Kant. Frankfurt am Main, Germany: Suhrkamp
  • Kant I (1787) Kritik der reinen Vernunft. In Weischedel W (ed; 1974), Opus Edition: Immanuel Kant, 2nd edn. Frankfurt am Main, Germany: Suhrkamp
  • Kant I (1800) Immanuel Kants Logik. Ein Handbuch zu Vorlesungen. Königsberg: Friedrich Nicolovius. In Jäsche GB (ed), Immanuel Kant: Gesammelte Schriften, 1923, Band IX: Logik. Physische Geographie. Pädagogik. Berlin, Germany
  • Knight FH (1921) Risk, Uncertainty, and Profit. Boston, MA, USA: The Riverside Press
  • Luhmann N (1993) Risiko und Gefahr. In Krohn W, Krücken G (eds), Riskante Technologien: Reflexion und Regulation. Frankfurt am Main, Germany: Suhrkamp
  • Luntley M (2003) Ethics in the face of uncertainty: judgement not rules. Bus Ethics Eur Rev 12: 325–333
  • Neyman J, Pearson ES (1928) On the use and interpretation of certain test criteria for purposes of statistical inference. Biometrika 20A: 175–240
  • Parsons T (1951) The Social System. New York, NY, USA: The Free Press
  • Peterson M (2006) The precautionary principle is incoherent. Risk Anal 26: 595–601
  • Peterson M (2007) The precautionary principle should not be used as a basis for decision-making. EMBO Rep 8: 305–308
  • Pörksen B (ed; 2002) Die Gewissheit der Ungewissheit. Heidelberg, Germany: Carl-Auer-Systeme
  • Prauss G (1989) Kant und das Problem der Dinge an sich. Bonn, Germany: Bouvier
  • Renn O (2007) Precaution and analysis: two sides of the same coin? EMBO Rep 8: 303–304
  • Shrader-Frechette KS (1996) Science versus educated guessing: risk assessment, nuclear waste, and public policy. BioScience 46: 488–490
  • Stirling A (2007) Risk, precaution and science: towards a more constructive policy debate. EMBO Rep 8: 309–315
  • United Nations (1992) Rio Declaration on Environment and Development. United Nations Conference on Environment and Development. Geneva, Switzerland: United Nations
  • Van Asselt MBA, Rotmans J (2002) Uncertainty in integrated assessment modelling: from positivism to pluralism. Clim Change 54: 75–105
  • Van Asselt MBA, Vos E (2006) The precautionary principle and the uncertainty paradox. J Risk Res 9: 313–336
  • Weingart P (1999) Scientific expertise and political accountability: paradoxes of science in politics. Sci Public Policy 26: 151–161
  • Wiedemann PM, Schütz H (2005) The precautionary principle and risk perception: experimental studies in the EMF area. Environ Health Perspect 113: 402–405
  • Wiener JB (1998) Managing the iatrogenic risks of risk management. Risk Health Saf Environ 9: 39–82

