Model Predictions Questioned
by Alan Rose
Davis noted that computer modelling is "in" these days and that, as a consequence, model validation is also "in". AECL has promised the AECB that it will validate some 15 safety codes. He commented that if computer predictions are to be used for decision making, it is important to know how reliable the predictions are.
Davis went through the process of model validation to demonstrate that the development of the codes is a reasonably well understood process, and that the codes themselves are generally sound. The main problem in using them is the subjectivity of the exercise. He pointed out that the user must interpret the information available and match it to the input requirements of the model, select code options to obtain the answers desired, choose appropriate parameter values, and finally interpret the output. These are subjective decisions, and how well the result relates to reality depends on the skill and experience of the user.
Davis described a five-year validation study called BIOMOVS II (BIOsphere MOdel Validation Study), funded and directed by five organisations from three countries, with participation by 160 organisations in 31 countries. The object was to use scenarios such as the Chernobyl event to compare predictions by different modellers given the same information and using the same codes: one from AECL, the Chalk River Environmental Research Pathways Analysis Code (CHERPAC); one from Poland's Central Laboratory for Radiological Protection (CLRP, irreverently called "slurp"); and one from the UK with the unfortunate acronym RUINS.
Davis restricted his discussion to a scenario relating to iodine-131 fallout from Chernobyl. He noted the type of information provided to the ten experienced modellers in this exercise and the predictions they were to obtain, namely the amount of activity deposited on the ground, the radionuclide concentration on grass, and the radionuclide concentration in milk.
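The deposition-to-grass-to-milk chain described above can be sketched as a simple pathway calculation. The structure (interception of deposited activity by pasture, then transfer to milk through a cow's grass intake) is the standard radioecology pathway; the parameter values below are illustrative assumptions, not those used in BIOMOVS II or CHERPAC.

```python
# Sketch of the deposition -> grass -> milk pathway for I-131.
# All parameter values are illustrative assumptions only.

def grass_concentration(deposition_bq_m2, interception_fraction, yield_kg_m2):
    """Activity concentration on grass (Bq/kg), from ground deposition."""
    return deposition_bq_m2 * interception_fraction / yield_kg_m2

def milk_concentration(grass_bq_kg, intake_kg_day, transfer_coeff_day_l):
    """Activity concentration in milk (Bq/L), via daily grass intake."""
    return grass_bq_kg * intake_kg_day * transfer_coeff_day_l

deposition = 1000.0  # Bq/m^2 deposited on the ground (assumed)
grass = grass_concentration(deposition, interception_fraction=0.25,
                            yield_kg_m2=0.7)
milk = milk_concentration(grass, intake_kg_day=55.0,
                          transfer_coeff_day_l=0.01)
print(f"grass: {grass:.1f} Bq/kg, milk: {milk:.1f} Bq/L")
```

Even in this toy version, the choices of interception fraction, pasture yield, and transfer coefficient are exactly the kind of subjective parameter selections Davis identified as the source of divergence between modellers.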
The end results of this exercise showed large variations between the ten modellers, and these variations were determined to arise from many sources, including outright mistakes, misinterpretation of the given information, and use of inappropriate parameter values. The problems arose not from the codes but from the subjective decisions made by the modellers.
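One way to characterise such inter-modeller variation is with a max/min spread factor and a geometric mean across the set of predictions. The numbers below are invented purely to illustrate the calculation; they are not actual BIOMOVS II results.

```python
import math

# Hypothetical milk-concentration predictions (Bq/L) from ten modellers;
# invented values, used only to show how spread can be quantified.
predictions = [45, 60, 75, 80, 110, 130, 150, 220, 400, 900]

spread = max(predictions) / min(predictions)  # ratio of extreme predictions
geo_mean = math.exp(sum(math.log(p) for p in predictions) / len(predictions))
print(f"spread factor: {spread:.0f}x, geometric mean: {geo_mean:.0f} Bq/L")
```

A spread of an order of magnitude or more between users of essentially the same models is the kind of result that motivated the study's recommendations.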
The study produced a list of recommendations to reduce the effects of user subjectivity, the most important of which was to take a consensus approach to all judgmental decisions made during the assessment and to involve the data suppliers, code developers, and other code users in those decisions. Analyses done in isolation were obviously prone to error.