For a complex problem, such as analysing the future evolution, performance and safety of a deep geological repository and its environment, it is impossible to make exact predictions. Some level of uncertainty can, however, be accepted, provided the uncertainties do not compromise the demonstration of safety. As explained in Section 4.1, rigour in the consideration and treatment of uncertainty is one of the basic principles underlying the safety assessment methodology. Furthermore, ENSI Guideline G03 (ENSI 2023) states that, for the safety case, data, processes, and model concepts shall be used that are in accordance with the state of the art in science and technology, and that their uncertainties shall be identified.¹²

Uncertainty management in the safety assessment is discussed at length in Chapter 3 of NTB 24‑19 (Nagra 2024t) and covers, in particular, the following aspects.

  • Types of uncertainty

In line with common practice, the safety assessment categorises uncertainty based on its source as data uncertainty (also termed parameter uncertainty), model uncertainty (also termed conceptual uncertainty), and uncertainty in the broad evolution of the safety functions (sometimes termed scenario uncertainty). In some instances, uncertainty may also be classified by its nature as epistemic (caused by deficiencies in knowledge) or aleatory (due to the apparent randomness of relevant phenomena). In principle, the former can be avoided, reduced, or eliminated, whereas the latter cannot. In practice, it can be difficult to make a clear distinction between aleatory and epistemic uncertainty.

  • Management of uncertainty in the assessment basis

The assessment basis, i.e., the evidence, knowledge, assessment tools, and methodologies developed or acquired by Nagra in support of the safety assessment, is described in Chapter ‎5. The body of information that is contained within the assessment basis includes information gathered from a variety of sources, and the associated uncertainty represents a mixture of epistemic and aleatory types. Epistemic uncertainty is reduced, as far as reasonably possible, e.g., by site characterisation, research, and design considerations. Remaining uncertainties are then identified and, if possible, quantified, e.g., by specifying either likely values and ranges or probability density functions (PDFs) for associated parameters. Where different types or sources of information are available, an ensemble of information, possibly with different representations of uncertainty, is integrated to provide, as far as possible, a coherent, logical, realistic, and defensible description of the disposal system and its environment, including statements of uncertainty.
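Purely as an illustration of how such quantified uncertainties might be represented, the following Python sketch shows one possible way of holding either a best estimate with a plausible range or a full PDF for an input parameter. The class, the parameter name, and all numerical values are hypothetical and are not taken from the Nagra datasets.

```python
# Hypothetical sketch of one way to represent an uncertain input parameter,
# either as a best estimate with a plausible range or with a full probability
# density function (PDF); names and values are illustrative only.
from dataclasses import dataclass
from typing import Optional

import numpy as np
from scipy import stats


@dataclass
class UncertainParameter:
    name: str
    best_estimate: float
    lower: float                  # lower end of the plausible range
    upper: float                  # upper end of the plausible range
    pdf: Optional[object] = None  # optional frozen scipy.stats distribution

    def sample(self, n: int, rng: Optional[np.random.Generator] = None) -> np.ndarray:
        """Draw n realisations: from the PDF if one is given, else uniformly from the range."""
        if self.pdf is not None:
            return self.pdf.rvs(size=n, random_state=rng)
        return stats.uniform(self.lower, self.upper - self.lower).rvs(size=n, random_state=rng)


# Example: a hypothetical effective diffusion coefficient [m²/s] described by a
# log-uniform PDF spanning one order of magnitude around the best estimate.
De = UncertainParameter(
    name="effective_diffusion_coefficient",
    best_estimate=1.0e-11,
    lower=5.0e-12,
    upper=5.0e-11,
    pdf=stats.loguniform(5.0e-12, 5.0e-11),
)
samples = De.sample(1000, rng=np.random.default_rng(42))
```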

  • Uncertainty in the broad evolution of the repository system and its components and how they contribute to the safety functions

Uncertainty in the broad evolution is handled in performance assessment and safety scenario development (Chapters 6 and 7), resulting in a set of safety scenarios, namely the reference safety scenario, a set of alternative safety scenarios, and a set of future human action (FHA) safety scenarios. The latter consider potential future actions undertaken by human society that could impact the repository and are treated as a separate class of safety scenarios, due to their particularly speculative nature. Furthermore, “what-if?” cases, which involve extreme and hypothetical assumptions and primarily aim to demonstrate the robustness of the repository system, contribute to bounding the consequences of uncertainty in the broad evolution (see also Section 4.5).
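To make this classification concrete, the following Python sketch shows one hypothetical way of book-keeping the classes of safety scenarios and the “what-if?” cases in an assessment workflow. The class and attribute names simply mirror the terms used in the text; the identifiers and descriptions are invented examples, not Nagra's actual case definitions.

```python
# Hypothetical book-keeping structure for the scenario classes described above;
# names mirror the text, identifiers and descriptions are invented examples.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ScenarioClass(Enum):
    REFERENCE = "reference safety scenario"
    ALTERNATIVE = "alternative safety scenario"
    FHA = "future human action safety scenario"


@dataclass(frozen=True)
class AssessmentCase:
    identifier: str
    scenario_class: Optional[ScenarioClass]  # None for "what-if?" cases
    is_what_if: bool = False                 # extreme, hypothetical robustness case
    description: str = ""


cases = [
    AssessmentCase("REF-1", ScenarioClass.REFERENCE,
                   description="expected broad evolution of the safety functions"),
    AssessmentCase("ALT-1", ScenarioClass.ALTERNATIVE,
                   description="less likely but plausible deviation from the reference"),
    AssessmentCase("FHA-1", ScenarioClass.FHA,
                   description="inadvertent human action using present-day technology"),
    AssessmentCase("WI-1", None, is_what_if=True,
                   description="hypothetical degraded performance of a main barrier"),
]
```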

  • Management of uncertainty in modelling studies of performance assessment and analysis of radiological consequences

All models used in safety assessment incorporate a substantial degree of simplification. Simplification is needed because of the complexity of the disposal system, the impossibility of complete system characterisation, and the limited understanding of some processes. Simplification can include, for example, the omission of some less well characterised phenomena. Where omission of a phenomenon cannot be justified, the approach is often to incorporate the complex or poorly understood phenomenon in a relatively simple form in the mathematical models and codes, while applying relatively pessimistic ranges to the parameter values to address the resulting uncertainty. Simplifications and assumptions that are subject to uncertainty can be justified on the grounds that they either have a negligible impact on the calculation endpoints (e.g., performance and safety indicators) or that they are conservative.

Uncertainty in model output arises from the uncertainty in input data as well as from the model uncertainties handled as described above. Two complementary techniques are employed to quantify the uncertainty in model output due to the uncertainty in the input data. The first is deterministic uncertainty analysis, in which uncertainty in model output is explored by defining and testing specific input parameter values. The second is probabilistic uncertainty analysis, in which the input data are sampled according to the probabilities assigned to them (e.g., using the Monte Carlo method) and the model output is analysed using standard graphical and statistical methods. Finally, there are irreducible, poorly quantifiable or unquantifiable uncertainties associated with the evolution of the biosphere and with future human lifestyles and actions; stylised approaches are adopted to deal with these. For example, for the purpose of the assessment, possible future human actions that could affect the repository are constrained to those that are possible using present-day technology or moderate developments thereof.
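The following Python sketch illustrates, in schematic form, the two complementary techniques described above, applied to a simple stand-in function rather than a real assessment model; the function, parameters, distributions and values are purely hypothetical.

```python
# Schematic illustration of deterministic and probabilistic uncertainty analysis
# applied to a stand-in model; all parameters and distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)


def model(transit_time_a, release_rate):
    """Stand-in for an assessment model returning a dose-like indicator."""
    half_life_a = 3.0e5  # hypothetical radionuclide half-life [a]
    return release_rate * np.exp(-np.log(2.0) * transit_time_a / half_life_a)


# Deterministic uncertainty analysis: evaluate the model for specific,
# deliberately chosen parameter combinations (e.g. reference and bounding values).
for transit_time, release in [(1.0e5, 1.0e-6), (1.0e4, 1.0e-6), (1.0e5, 1.0e-5)]:
    print(f"t = {transit_time:.1e} a, q = {release:.1e}  ->  {model(transit_time, release):.3e}")

# Probabilistic uncertainty analysis: sample the inputs from their assigned PDFs
# (Monte Carlo) and summarise the resulting distribution of the model output.
n = 10_000
transit_times = rng.lognormal(mean=np.log(1.0e5), sigma=0.5, size=n)
release_rates = rng.uniform(1.0e-7, 1.0e-5, size=n)
outputs = model(transit_times, release_rates)

print("mean:", outputs.mean(), "median:", np.median(outputs),
      "95th percentile:", np.percentile(outputs, 95))
```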

  • Quality assurance and control

The possibility of human error in applying the methodologies presented in this report represents an additional source of uncertainty. To minimise this type of uncertainty, quality assurance and control measures must be applied to all activities that use or produce models and data. Specific measures to ensure that the models and databases used in safety assessment are fit for purpose, that the mathematical models are implemented correctly in computational models, and that the computational models are reliable and are applied correctly and without error are presented in underlying reports (Nagra 2024s, Nagra 2024u, Nagra 2024k, Nagra 2024o, Nagra 2024p). These measures include comparison of model outputs with the results of experiments covering a range of spatial and temporal scales and with observations of natural systems, as well as the verification of numerical codes, e.g., by benchmarking against analytical solutions and against other codes that can address the same or similar problems.
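As a simple illustration of verification by benchmarking against an analytical solution, the sketch below compares an explicit finite-difference solver for one-dimensional diffusion with the closed-form erfc solution for a semi-infinite medium and a constant-concentration boundary. The parameter values are arbitrary and serve only to demonstrate the comparison; this is not one of the verification cases reported in the references above.

```python
# Illustrative verification exercise: benchmark an explicit finite-difference
# solver for 1D diffusion against the analytical erfc solution for a
# semi-infinite medium with a fixed-concentration boundary at x = 0.
import numpy as np
from scipy.special import erfc

D = 1.0e-9          # diffusion coefficient [m²/s] (hypothetical)
L = 0.05            # domain length [m], large enough to mimic a semi-infinite medium
nx = 201
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D   # respects the explicit stability limit D*dt/dx² <= 0.5
t_end = 1.0e5          # simulated time [s]
C0 = 1.0               # boundary concentration [-]

x = np.linspace(0.0, L, nx)
c = np.zeros(nx)
c[0] = C0              # fixed concentration at x = 0; far boundary stays at zero

t = 0.0
while t < t_end:
    # update interior nodes only, so the boundary values remain fixed
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    t += dt

c_analytical = C0 * erfc(x / (2.0 * np.sqrt(D * t)))
print("maximum absolute deviation:", np.max(np.abs(c - c_analytical)))
```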

The potential for human error during the construction, operation, and closure of the repository cannot be entirely ruled out. Although appropriate quality assurance and control measures are foreseen, uncertainties, such as those arising from a failure to meet design requirements, remain a concern. These uncertainties are currently covered mainly by the definition and analysis of “what-if?” cases that assume a hypothetical degraded performance for each of the main repository barriers (see Section 7.4). To enhance safety and reliability, these aspects may be evaluated more thoroughly in future assessments, contributing to the ongoing development and revision of design requirements.

¹² For the safety case, data, processes and model concepts in accordance with the state of the art in science and technology shall be used, and their uncertainties shall be identified.