Arne Oshaug1
1 Nordic School of Nutrition, University of Oslo, Norway.
Introduction
Background
The purpose of evaluation
Developing an evaluation system
Qualitative versus quantitative methodologies
Measuring efficiency
Skills needed in evaluation
Concluding remarks
Recommendations
References
In the literature discussing nutrition education projects, evaluation is generally mentioned. At the same time it is underlined that a nutrition education programme is usually only one component of a strategy designed to ultimately influence individual behaviour to solve nutritional problems (ICN, 1992). Evaluation should therefore be integrated in the whole process from start to finish, and must necessarily assess the effect of all types of interventions in a nutrition education strategy. This paper will discuss how nutrition education programmes can be evaluated, how an evaluation system can be developed, how different types of evaluation methods can be used in data collection, how to measure efficiency of programmes, and which skills are needed in evaluation of nutrition education programmes.
Programme managers and planners need to be accountable to funding agencies and policy makers. They must, therefore, distinguish useful current programmes from ineffective and inefficient ones, and plan, design, and implement new efforts that effectively and efficiently have the desired impact on the target group. To do so they must obtain answers to a range of questions, such as: is the strategy based on priorities from a broad analysis of the nutrition situation, needs assessment, and cultural and behavioural aspects? Are the interventions selected likely to significantly ameliorate the nutrition problems? Is the most appropriate target population selected? Will the various interventions reinforce or counteract each other? Is the intervention being implemented in the ways envisioned? Is it effective? How much does it cost? If the nutrition education programme is one of several interventions, how can its effect or impact be separated from the impact of other interventions?
Many more questions could be raised, but with those already mentioned, one sees that the scope of evaluation is wide. Evaluation activities range from simple counting of events to very complex and sophisticated qualitative and quantitative analysis. Evaluation theory and procedures are basically the same in various interventions such as health, education, welfare, and other human service policies and programmes. The distinction between the use of evaluation in the various approaches lies primarily in the focus of the evaluation (Rossi & Freeman, 1993).
In the last 25-30 years there have been many success stories and failures for interventions in education of the public (Klein et al., 1979; Weiss, 1987; Chapman & Boothroyd, 1988; ICN, 1992; Oshaug, 1994; Luepker et al., 1994). These experiences have brought about a more realistic perspective on the barriers to successful implementation of nutrition-related programmes, and on the impact that can be expected from them. In a world of limited resources and more realistic expectations of nutrition programmes, the need for evaluation increases as societies attempt to cope with food and nutrition problems as part of their human and social distress.
Much of the literature on evaluation presents conflicting viewpoints on choice of paradigms, definition, practical approach, choice of methodology for data collection and analysis, and use of the results. Many attempts have therefore been made to clarify the meaning of evaluation and unmask the distinction between evaluation and other related concepts such as assessment, measurement or research. Still the picture is not clear, and in 1981 Stake rightly pointed out that many arguments resembled persuasions (Stake, 1981).
For the benefit of those who have lost their way among the various evaluation models, approaches, and persuasions, several attempts have been made to put some order into the growing evaluation literature through classifications of evaluation approaches (House, 1980 and 1986; Stufflebeam & Webster, 1980; Guba & Lincoln, 1981 and 1989; Oshaug, 1992; Rossi & Freeman, 1993). Based on these critical reviews, several dimensions in a conceptualisation of evaluation have emerged.
Why evaluate?
Society, which ultimately pays the bill for nutrition education activities, has a right to know how resources have been used and what the final impact of educational programmes has been. Evaluation of educational programmes is undertaken for several reasons: to judge how the nutrition education programmes are planned and executed and how the programme personnel have performed; to increase the effectiveness of programme management and administration; to assess the utility of new programmes; and to satisfy programme sponsors (see Figure 1) (Oshaug, 1992; Rossi & Freeman, 1993; Oshaug et al., 1993). In all evaluation efforts it is very important that the purpose of the evaluation is clear from the beginning.
The functions of evaluation
Evaluation of nutrition education programmes includes not only collection of qualitative and quantitative data, but also their analysis and interpretation for the purpose of making judgements and decisions. In this context, evaluation is seen to have two main functions: formative and summative. Formative evaluation is used to improve and develop programme activities as they are carried out, and is therefore continuous. Summative evaluation measures the outcome of an activity or set of activities (Oshaug, 1992). It is also used to satisfy the accountability2 requirements of programme sponsors. Providing feedback to programme beneficiaries, or involving them in evaluation activities, can also convince them of the evaluation's usefulness. Furthermore, evaluation may have psychological or sociopolitical functions, as it is used to increase awareness of educational activities or to promote public relations. Another function is to facilitate supervision. In an organisation responsible for a nutrition education programme, it is the responsibility of a manager to evaluate the personnel and programme activities under her or his responsibility. This may be referred to as the administrative function of evaluation (see Figure 2) (Oshaug, Benbouzid & Guilbert, 1993).
2 Rossi and Freeman (1993) discuss six common types of accountability studies in established programmes, directed at providing information about various aspects of programmes to stakeholders: 1. Coverage; 2. Service; 3. Impact; 4. Efficiency; 5. Fiscal; and 6. Legal accountability.
Figure 1: Reasons for evaluating nutrition education programmes
To assess:
· impact or effect,
· how programmes are planned and executed,
· how programme personnel perform,
· how effectiveness can be improved,
· the utility of a programme, and
· to satisfy the programme sponsors.
Figure 2: Functions of evaluation
· Improve and develop activities of programmes as they are carried out
· Measure outcome
· Accountability
· Provide feedback to or involve beneficiaries in evaluation activities
· Create or increase the awareness of educational activities
· Promote public relations
· Evaluate programme personnel
· Facilitate supervision
Definition of evaluation
One dimension, and a recurrent question, is how to define evaluation. Guba and Lincoln (1989) agree that it is reasonable to begin with a definition of what we shall mean by the term evaluation. They proceed, however, by stating that definitions of evaluation are human mental constructions, whose correspondence to some "reality" is not and cannot be an issue. Therefore, in their opinion, "there is no 'right' way to define evaluation, a way that, if it could be found, would put an end to argumentation about how evaluation is to proceed and what its purposes are". Such statements are not very helpful to non-professional evaluators. They contribute to the confusion and protect the field as a playground for the "good guys". Luckily, several authors think a definition is important and can be provided.
In a recent and authoritative book, evaluation is defined in such a way that it comprises a whole programme cycle, from assessment of problems and needs to outcome or impact evaluation of social programmes (Rossi & Freeman, 1993). Here the definition of evaluation is:
Evaluation is the systematic application of social research procedures for assessing the conceptualisation, design, implementation, and utility of social intervention programmes.
This definition includes any type of information gathering from the very start of a situation analysis to the final outcome of social programmes. It is a very broad definition, and beyond many people's understanding of evaluation. The assessment part of programme conceptualisation is a part of the evaluation. The authors' specific reference to social research procedures is, however, limiting, and one may accuse them of being biased in their theoretical orientation, evaluation procedures, selection of methodology and analytical approach. Other writers also see evaluation as an integrated part of programme planning and management, whether it is a training/education programme, a specific nutrition intervention, development activities, or education of the public (McMahon, Barton & Piot, 1980; Romiszowsky, 1984; Oshaug, 1992; Oshaug et al., 1993). For community nutrition, evaluation has been defined as follows (Oshaug, 1992):
The evaluation of a programme is the systematic collection, delineation and use of information to judge the correctness of the situation analysis, to assess critically the resources and strategies selected, to provide feedback on the process of implementation, and to measure the effectiveness and the impact of an action programme.
This is also a broad definition, but it links the evaluation activities to a specific programme or activity. Here evaluation is seen as an essential management tool for all community nutrition activities, including nutrition education of the public. It includes a range of methodologies from medicine and social science to those specific to nutrition. All definitions stress the importance of planning the evaluation at the same time as the programme to be evaluated.
A common approach to evaluating an educational programme is what is often called a systematic approach (Oshaug et al., 1993; Rossi & Freeman, 1993). According to this approach, evaluation should be built into all phases of programme planning, implementation, and management.
Integrating evaluation into programme planning
Assessment of the situation can be considered as part of an evaluation system (Rossi & Freeman, 1993). This might, however, create confusion, since most planning activities would then be called evaluation activities. What is essential is that evaluation begins with a clear definition of a nutrition education programme's goals and objectives.
Goals and objectives - linking programmes and evaluation
Goals and objectives of a nutrition education programme are based on nutritional needs. These are identified through assessment of the nutrition situation, based on, for example, an overview of regional or national plans for food and nutrition (if such exist); a profile of diseases and problems related to food and nutrition; the problems which can be solved by nutrition education; the factors that contribute to nutrition-related problems of all kinds and the level at which they operate (national, regional, local, household and individual); a description of the various actors and target groups; and a list of the systems that can support nutrition education activities (Oshaug, 1992).
Having this information, the goals and measurable objectives (including outcomes) can be specified. Goals and objectives for nutrition education programmes are all based on the assumption that there is room for improvement and that nutrition education is the right strategy to be used. Although a nutritional deficiency may be easy to recognise, a precise assessment of the empirical situation is usually required before planners can formulate specific, realistic objectives and design a nutrition education programme to achieve them.
Specification of goals and objectives is very important, both for an education programme itself, and for the evaluation. For the programme they give direction, expected results and time frames, and for the evaluation, criteria for measurements. Many programmes have suffered from poorly developed objectives, which also made evaluation difficult (Wholey, 1981; Chapman & Boothroyd, 1988; Oshaug et al., 1993).
Goals are generally broad, abstract, idealised statements about desired long-term expectations. For evaluation purposes, goals must lead to operationalisation of the desired outcome, that is, the condition to be dealt with must be specified in detail. These operationalised statements are referred to as objectives (Rossi & Freeman, 1993). Objectives must be formulated precisely, specifying expected outcome(s) and how, where, and under what conditions results will be achieved. For educational programmes the following elements of an objective are suggested (Oshaug, 1992):
An objective should contain:
· the expected change - outcome (e.g. behavioural, nutritional status);
· the conditions under which the expected change is to take place, including, for example, the geographical area, time, target group and activities used; and
· the criterion, or the extent of the expected change that will satisfy the objective.
It is important that the various objectives of an educational programme have different time perspectives3. In management literature one refers to "milestones", meaning specific objectives to be achieved at certain stages in the programme implementation. These are important because they can be followed and reported on during implementation.
3 Short-lived interventions may produce measurable results, but new behaviours are fragile and can rapidly disappear. Education projects that have been evaluated over time strongly support the need for a long-term, intensive effort (ICN, 1992).
Developing an evaluation system
When planning the evaluation of nutrition education for the public, it is important to develop an evaluation system (see Figure 3).
Figure 3: Components in an evaluation system
· Context
· Input
· Process
· Outcome/impact
Context evaluation
Context evaluation ensures that past experience is brought into the process of planning. It focuses on the initial decisions in the nutrition education programme. Usually, most of the information needed has already been collected during the situation analysis, and/or a baseline study. If the available information is not sufficient, data from a sample or pilot programme, or anecdotal data may be collected to give better understanding of the problem. Context evaluation is normally carried out to refine objectives and activities, and ensure that they are realistic and relevant to the problems addressed in the nutrition education programme.
Context evaluation is also used to analyse contextual factors that may not have been directly addressed in the objectives but that have a bearing on implementation. These factors include the religion, race and ethnic background and sex of the target group in the community, and general socio-economic and political issues. Such an evaluation can focus on factors that may impede a programme, thereby enabling staff to plan how to cope with them (Oshaug, 1992).
In nutrition education programmes it is essential for programme planners and implementors to understand how different target populations perceive reality, how they use and perceive symbols and colours (which may be used by the education programme), and how a nutrition education message would be received, understood and possibly acted upon by the target population.
Input evaluation4
4 Rossi and Freeman (1993) discuss fine-tuning established programmes, which is similar to the input evaluation discussed here, but basically focuses on ongoing established projects.
Input evaluation of a nutrition education programme is an important part of the preparation for implementation of the programme. It takes a critical look at the adequacy and appropriateness of the resources available to carry out the programme. A programme can be said to have at least four types of input:
· the programme plan;
· the material resources;
· human resources such as programme staff; and
· time5, particularly that allocated for the initial phase, evaluation, feedback, and follow-up.
5 Many evaluations of otherwise well-designed programmes show that programme planners consistently under-estimate time and effort needed to adopt a new practice (ICN, 1992).
At this point, the main concern is the quality of the inputs, that is, the likelihood that they will help or hinder the implementation of the programme. This can be assessed in various ways, but one can start by looking at the programme plan. Some of the planned activities may conflict, owing to incompatible objectives, competition for scarce resources or other reasons. The following list gives examples of questions which may be useful (Oshaug, 1992):
· Are goals and objectives specified?
· Do they contain criteria?
· Are they based on a detailed situation analysis?
· Are they tested for relevance and feasibility?
· Are the activities tested for practicability and feasibility?
· Are the education materials tested for relevance?
· Have target groups been involved in any stage of programme conceptualisation and design?
· Does the programme staff have adequate skills and competence?
· Does the plan include feedback to the local community, the target group(s), authorities and others?
· Is cost per beneficiary estimated?
When considering the answers to such questions, it is important to assess the consequences of any negative answers. Will the gaps revealed prevent successful implementation? Should the programme be modified?
Process evaluation
Process evaluation is a tool for monitoring progress. It indicates, while the strategies and activities are implemented, whether they are likely to generate the expected results. Process evaluation should also indicate whether the work is done on time. If the activities do not meet expectations, they may be changed or even stopped. It is much better to change a programme during implementation than await a retrospective analysis to find out where it went wrong and who was responsible for the failure - when it is too late (Oshaug, 1992). Therefore, careful monitoring identifies programme constraints that have been overlooked or underestimated, provides insight into audience characteristics that were misunderstood, and suggests important factors that have changed during the course of the programme. Process evaluation provides programme planners with information to improve the design and management of the programme, and to strengthen future efforts (ICN, 1992). Process evaluation is important to understanding and interpreting outcome and impact findings.
The nature of the process evaluation depends on the problem and the programme involved. Some problems and programmes demand daily evaluation or immediate data collection, while others need only occasional checking. Several factors should be considered when planning a process evaluation, such as: objectives, target population, strategies and activities, scheduling, actors, and resources.
The objectives of the programme will spell out the outcome or short-term achievements (milestones) on the way to the goal. Well-formulated objectives are essential for process evaluation.
Because the completion of one activity may be a prerequisite for the start of another, it is essential to draw up a clear schedule for the programme. One programme can have several objectives with different schedules for achievement.
In addition, one should have a clear picture of all the programme staff and their responsibilities for initiating and implementing activities. Several questions about actors can be asked in process evaluation. For example, if an activity goes wrong, who or what is creating problems? Are the people involved in implementation acting as expected? What can be corrected and how can this be done?
Finally, the implementation of activities requires timely availability of resources. Their use must be co-ordinated to avoid extra cost and maximise the benefits. Process evaluation can facilitate this.
When planning process evaluation, one needs to decide what indicators to use. This choice depends heavily on factors such as the nature and complexity of the programme, the criteria of the objectives, the context in which it is implemented, the people involved in the implementation, and the duration and target group of the programme.
Data collection for process evaluation
Process evaluation may focus on gradual changes in the target group (related to the specified objectives), and/or on the performance of programme personnel. Here I will focus on the latter. The complexity of the process evaluation will depend on the resources available and the expertise of the evaluator. As a rule, one should aim at data collection activities that are as simple and economical as possible. "High technology" monitoring and sophisticated quantitative analytical procedures are not always necessary for process evaluation. There are many sources of data that should be considered in the design of process evaluation of nutrition education programmes: direct observation by an evaluator, data from programme personnel, programme records, information from programme participants or their associates, and data on food use (in households) and/or sale (at markets, in shops).
Monitoring simple changes (use and sales of foods, or recording of implemented activities) may be straightforward, while collecting observational data can be more sensitive and complex. If the responsibility is given to an evaluator (external or internal), he or she can fill out regular reports, linked, for example, to milestones. This may simply include reporting on how separate activities were implemented. If process evaluation includes assessing demonstrations, information meetings, traditional theatre, etc., participant observation can be used. In such cases, uniform recording will be important. It may be useful to provide an observer with a list of important types of activities, attitudes, and behaviours of programme personnel. In nutrition training, attitude rating scales have been used (Oshaug et al., 1993). Such normative judgement is used to measure the clarity of an instructor's presentation and/or to assess the nature of the encounter between participants and the nutrition education programme activities (demonstrations, traditional theatre, information meetings, etc.).
Caution here is necessary. Direct observation methods appear attractively simple, but are not easily taught and learned. They are time consuming and can produce data that are difficult to summarise and analyse (Rossi & Freeman, 1993). The less structured the observation method and the more complex the nutrition education programme strategy, the more difficult it will be. Direct observation may also change the behaviour of programme personnel when the evaluator is present. It is advisable to combine direct observation with other types of process evaluation activities and use it to complement and facilitate the analysis and interpretation of other types of evaluation results.
Use of information from process evaluation
Process evaluation results have a number of uses, depending on the purpose of the evaluation, the stage of development of the programme, and the funding agency. An important function of process evaluation here, when it is part of a comprehensive evaluation, is to provide information about the congruence between programme design and implementation. The results should therefore be fed back to project managers and staff on a continual basis. Discovery of fluctuations and changes over time may permit changes in the programme or fine-tuning. A plan for use and dissemination of process evaluation6 findings should be made when planning the evaluation system.
6 This should be part of a plan for dissemination of all types of evaluation findings from all the different evaluation activities. It is important to present the findings in ways which correspond to the needs and competencies of the relevant stakeholders.
Outcome or impact evaluation
When evaluating the outcome of an intervention, a distinction must be made between gross and net outcome (see Figure 4) (Rossi & Freeman, 1993). The gross outcome consists of all observed changes in the period in question. The gross outcome measure in a nutrition education programme might be defined as any change in the diet of the participants compared to the diet before the programme started (the difference between pre- and post-programme values on selected measures).
The net outcomes are more difficult to measure. In assessing the net outcomes of a nutrition education programme, we try to measure, for example, the dietary changes that are caused by the intervention itself. In impact assessment we are primarily concerned with the net outcome.
Figure 4: Measuring outcome of a nutrition education programme
· Gross outcome: all changes in the period in question
· Net outcome: changes attributed only to the nutrition education programme
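To make the distinction concrete, a minimal sketch is given below, using purely hypothetical figures and assuming a non-participating comparison group is available. It approximates the net outcome by subtracting the change observed in the comparison group from the gross outcome of the programme group - a simple difference-in-differences calculation.

```python
# Minimal sketch with hypothetical data: gross versus net outcome of a
# nutrition education programme, assuming a non-participating comparison
# group. Values are illustrative servings of green leafy vegetables per week.

programme_pre, programme_post = 2.0, 4.5     # participants, before and after
comparison_pre, comparison_post = 2.1, 3.0   # comparison group, same period

gross_outcome = programme_post - programme_pre       # all observed change: +2.5
secular_change = comparison_post - comparison_pre    # change expected without the programme: +0.9
net_outcome = gross_outcome - secular_change         # change attributable to the programme: +1.6

print(f"Gross outcome: {gross_outcome:+.1f} servings/week")
print(f"Net outcome (difference-in-differences): {net_outcome:+.1f} servings/week")
```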
The dietary and nutritional changes seen in a specific period may be attributed to at least three effects:
· the effect of the intervention (net outcome of the nutrition education programme);
· the effects of extraneous confounding factors; and
· design effects, which are artefacts of the evaluation process itself.
Extraneous confounding effects
Observed associations, or the lack of them, may be due to a mixing of effects between the exposure, the outcome, and an extraneous factor. This is referred to as confounding (Rothman, 1986; Hennekens & Buring, 1987). For an extraneous factor to be a confounder, it must have an effect. That is, the factor must be predictive of the outcome. The effect need not be causal. Frequently only a correlate of a causal factor is identified as a confounding factor. A common example is social class, which itself is presumably causally related to few if any diseases, but is a correlate of many causes of disease. Similarly, some would claim that age, which is related to nearly every disease, is itself only an artificial marker of more fundamental biologic changes. In this view, age is not a causal risk factor, but nevertheless it is a potential confounding factor in many situations (Rothman, 1986). In order to control for confounding, a number of methods can be used, partly in design - including restriction in the admissibility criteria for subjects - and partly through stratified and multivariate analysis.
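As a minimal illustration of this point, the sketch below uses hypothetical counts (not from any real programme) to show how a crude comparison between programme participants and non-participants can suggest an effect that disappears once the data are stratified by an extraneous factor such as social class.

```python
# Minimal sketch, hypothetical counts: crude versus stratified comparison of the
# proportion achieving adequate vitamin A intake, by programme participation.
# Social class is the extraneous (confounding) factor.

# {stratum: {"participants": (adequate, total), "non_participants": (adequate, total)}}
data = {
    "higher social class": {"participants": (80, 100), "non_participants": (40, 50)},
    "lower social class":  {"participants": (30, 100), "non_participants": (60, 200)},
}

def prop(adequate, total):
    return adequate / total

# Crude comparison, collapsing over social class: suggests the programme helps.
p_adequate = sum(d["participants"][0] for d in data.values())
p_total = sum(d["participants"][1] for d in data.values())
n_adequate = sum(d["non_participants"][0] for d in data.values())
n_total = sum(d["non_participants"][1] for d in data.values())
print(f"Crude ratio: {prop(p_adequate, p_total) / prop(n_adequate, n_total):.2f}")  # about 1.38

# Stratified comparison: within each social class there is no association at all.
for stratum, d in data.items():
    ratio = prop(*d["participants"]) / prop(*d["non_participants"])
    print(f"{stratum}: ratio = {ratio:.2f}")  # 1.00 in both strata
```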
· Secular trends:
Relatively long-term trends in a community - called secular trends - may produce changes in gross outcomes that enhance or mask the net effects of an intervention. Figure 5 shows a hypothetical example of an educational programme aimed at increasing the use of green leaves. A gradual economic improvement in a country may lead to increased consumption of the leaves, which were earlier consumed in small or insignificant amounts. Such low consumption may create nutritional deficiencies, in particular in the most disadvantaged groups. An economic improvement may therefore enhance the effect of a nutrition programme among poor people. A programme may therefore appear effective, measured by gross outcome, although it actually had much less net effect (situation A). Similarly, an effective programme to improve the nutritional situation of poor people may appear to have little or no effect assessed by gross outcome, because of a general downward nutritional trend, caused by economic recession, in the country (situation B). Such an economic recession may lead to a decrease in the consumption of foods, including green leaves, which were nutritionally important for the poorest segment of the community. In such a situation, a nutrition education programme may in reality have been effective and mitigated the nutritional problems by preventing a worse decrease in consumption of green leaves. The secular trends in situations A and B in Figure 5 were very different, but the net outcome was the same.
Figure 5: Theoretical examples of secular trends with confounding effects
· Interfering events:
Like long-term secular trends, short-term interfering events, such as exposure to other types of educational material, can enhance or mask changes.7 These interfering events are difficult to check. An earthquake that disrupts communications and hampers the delivery of food supplements may interfere with a nutrition programme. A threat of war may make it appear that a programme to enhance community co-operation, for example to establish food banks, has been effective, when in reality it is the potential crisis that has brought community members together (Rossi & Freeman, 1993).
7 A number of agencies, governmental and non-governmental organizations produce educational material for adults in nutrition, focusing on different aspects of nutrition and various target groups.
· Design effects:
Design effects result from the evaluation process itself and are thus always present and consistently threaten the validity of impact evaluation. It is important to remember that the act of evaluation itself is an intervention, and thus may have an impact.
· Stochastic effects:
Chance-produced fluctuations - stochastic effects - complicate the estimation of intervention effects from empirical data. Assessing the role of chance involves hypothesis testing, that is, performing tests of statistical significance to determine the likelihood that sampling variability can explain the observed results, and estimating confidence intervals to indicate the degree of precision of the measurement (Hennekens & Buring, 1987). The sample size should be large enough that stratified analysis remains possible without an unacceptable loss of statistical power. Stochastic effects are only important if conclusions depend on statistical analysis.
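A minimal sketch of such an analysis is given below, using simulated (hypothetical) intake data and assuming NumPy and SciPy are available; it computes a significance test and an approximate 95 percent confidence interval for the difference in mean intake between a programme group and a comparison group.

```python
# Minimal sketch, simulated data: assessing the role of chance in an observed
# difference in mean vegetable intake (servings/week) between a programme
# group and a comparison group. Assumes NumPy and SciPy are installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
programme = rng.normal(loc=4.5, scale=1.5, size=120)
comparison = rng.normal(loc=3.8, scale=1.5, size=120)

t_stat, p_value = stats.ttest_ind(programme, comparison)   # test of significance

diff = programme.mean() - comparison.mean()
se = np.sqrt(programme.var(ddof=1) / len(programme) + comparison.var(ddof=1) / len(comparison))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se        # approximate 95% interval

print(f"difference = {diff:.2f}, p = {p_value:.3f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```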
· Measurement reliability:
A measurement is reliable to the extent that, in a given situation, it produces the same results repeatedly. The smaller the within-person variability, the better the reliability (Klaver et al., 1988; Burema, van Staveren & van den Brandt, 1988). Although all measurements are subject to reliability problems, they are so to varying degrees. Measurements of dietary intake, for instance, even when recognised methods are used, are less consistent from one data collection to another than measurements of height and weight (Willett, 1990). The sources of unreliability lie both in the nature of the measurement and in the instruments used.
Nutrition education aims to change dietary intake or behaviour influencing nutrition. It is well recognised that the reliability of dietary intake measurement is weak (Beaton et al., 1979 and 1983, Cameron & van Staveren, 1988; Witschi, 1990; Barrett-Connor, 1991).
The reliability of intake measurement using food frequency questionnaires is higher than for the 24-hour diet recall. Part of this improvement is an artefact, because a food frequency questionnaire affords a limited number of options for types of food and portion size. This leads to less variability and higher reliability than a more open-ended method like the 24-hour diet recall. Because reliability is in part a function of the precision of the data, differences between repeated recalls increase as the questions become more specific (Barrett-Connor, 1991). For example, reported consumption of green vegetables is expected to vary less from day to day than is consumption of one specific leafy vegetable. The effect of unreliability in measures is to dilute and obscure real differences. This problem is often referred to as "attenuation due to unreliability" (Rossi & Freeman, 1993). As with most of the possible extraneous confounding effects, weak measurement reliability biases results towards a null effect.
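The classical attenuation formula makes the dilution explicit: the observed correlation between two variables is roughly the true correlation multiplied by the square root of the product of the two measures' reliabilities. The sketch below applies it with hypothetical reliability values.

```python
# Minimal sketch, hypothetical reliabilities: attenuation of an association
# due to unreliable measurement of dietary intake.
import math

true_correlation = 0.50       # assumed true association
reliability_intake = 0.40     # e.g. a single 24-hour recall (hypothetical value)
reliability_exposure = 0.90   # e.g. recorded programme attendance (hypothetical value)

observed = true_correlation * math.sqrt(reliability_intake * reliability_exposure)
print(f"Observed correlation attenuated from {true_correlation:.2f} to {observed:.2f}")  # 0.50 -> 0.30
```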
In evaluation, therefore, when assessing impact on diet, it is important to have a clear definition of the purpose of the assessment and thereby to select appropriate methods and variables. Often a simple food frequency questionnaire may be good enough when looking at changes in food use. However, if food intake must be assessed, one needs personnel with a high level of skill to undertake 24-hour recalls and food histories, to instruct respondents on keeping food records, and so on. This is particularly important for the analysis of the data, the conclusions drawn and the recommendations made.
Bias or lack of internal validity8
8 Validity is an expression of the degree to which a measurement measures what it purports to. Consequently, any systematic error of the measuring instrument affects the validity of the measurement (Klaver et al., 1988).
The validity of a study is usually separated into two components, namely internal validity (the inferences drawn as they pertain to the actual subjects in the study), and external validity or generalisability (the validity of the inferences as they pertain to people outside the study population) (see Figure 6) (Rothman, 1986). Internal validity implies an accurate measurement of study subjects, apart from random errors. The internal validity is influenced by various types of biases. Bias at any stage of inference tends to produce results or conclusions that differ systematically from the truth (Sackett, 1979). Rothman (1986) proposes three general types: selection bias, information bias and confounding.
Figure 6: Bias or lack of internal validity
· Selection bias
· Information bias
· The Hawthorne effect
· Underlying factors (e.g. smoking)
· Selection bias:
This type of bias results from the way subjects are selected for the study. The common element of such biases is that the relation between risk factors and outcome is different for those who participate and those who would theoretically be eligible for study, but do not participate (Greenland, 1977).
Uncontrolled selection is difficult to deal with. Even if some person or agency selects targets for participation, such selection is uncontrolled in the sense that the evaluator cannot materially influence who will or will not be a participant. If the participants in a programme are volunteers, a self-selection bias is inevitable. This type of bias presumably derives from an initiation process that allows communities with special interest and motivation to be selected. Often they are more progressive, more affluent, or otherwise different from other communities in ways that affect outcome measures. Programmes based on the voluntary co-operation of individuals, households or other units are the most likely to be subject to processes leading to self-selection bias (Rossi & Freeman, 1993; Rothman, 1986).
In programmes where people are invited to participate, the problem of self-referral bias appears. Self-referral is normally considered a threat to validity, since reasons for self-referral may be associated with the outcome measures (Criqui, Austin and Barrett-Connor, 1979). Another source of selection bias derives from refusal, non-response or drop-outs among the target group (Hennekens & Buring, 1987). Subjects who leave a programme may be different from those who remain. The consequence is often that those who stay with a programme to its end are those who may have needed the programme least and were most likely to have changed on their own (Wilhelmsen et al., 1976; Rossi & Freeman, 1993).
· Information bias:
Information (or observation) bias results from systematic differences in the way data used for classification of subjects are obtained. The consequences of the bias are different depending on whether the classification error is non-random (dependent on another variable, either exposure or outcome) or random (independent of another variable). The existence of classification errors that are not independent of another variable is referred to as differential misclassification, whereas the existence of classification errors for either exposure or outcome that are independent of the other, is considered nondifferential misclassification (Rothman, 1986).
The most serious problem is the differential non-random misclassification. The effect can be biased in the direction of producing either an over-estimation or under-estimation of the true association. When people are interviewed about dietary intake they tend to reply according to what they consider healthy, or to give the answer they think the interviewer wants. If, for example, a person in a control group is consistently under-reporting her/his food intake because he or she wants to be a beneficiary of a nutrition intervention programme, he or she may be wrongly classified into a group with low energy intake, or low access to food. This differential misclassification may overestimate the effect of a programme. A similar situation may occur in communities where birth certificates are not in use. Consistent under-reporting of children's ages will lead to non-random or differential misclassification, producing an under-estimation of the true level of child malnutrition in a community (Oshaug et al., 1994). Unfortunately, it is often very difficult to estimate the precise effect of differential misclassification (Hennekens & Buring, 1987).
Non-differential or random misclassification has generally been considered a lesser threat to validity than differential misclassification, since the bias introduced by non-differential misclassification always leads towards the null condition (Copeland, Checkoway & McMichael, 1977; Hennekens & Buring, 1987; Rothman, 1986). In impact evaluation using variables on dietary intake as determinants in the analysis, non-differential misclassification creates substantial problems. The deficiency states for essential nutrients are comparatively easy to classify. This is, however, different from most issues confronting nutrition evaluators dealing with non-communicable diseases, which are now rapidly increasing in poor countries all over the world (ICN, 1992a; ICN, 1992b; Marmot, 1992).
The problem of misclassification varies for different determinants, with diet possibly the most difficult. Firstly, a major problem leading to non-differential misclassification is the lack of valid, practical methods for measuring usual dietary intake. Thus, a 24-hour diet recall cannot be used to identify individuals whose intakes are consistently high or low (Dwyer, 1988), except perhaps in communities where dietary patterns are extremely monotonous. In a developing country setting, it is often even more difficult to obtain valid and reliable dietary intake data. In order to improve validity and reliability, combined methods are therefore often used (Bingham et al., 1988). Secondly, with few exceptions, all individuals are exposed to hypothesised causal dietary factors such as fat, vitamins - in particular vitamins A and C and other antioxidants - and non-nutrients, including those of a toxic nature. The exposure cannot be characterised as present or absent, but rather as a continuum from low to high. Over time, the exposure of the same individual might be high or low (for example, high intake of fruits and vegetables in seasons of high availability, but low intake of the same foods during the off-season). This makes it difficult to classify a person as having a consistently high or low nutrient intake, and non-differential misclassification can easily occur. When an effect exists, bias from non-differential misclassification of exposure is always in the direction of the null value (Rothman, 1986).
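To illustrate the direction of this bias, the minimal sketch below uses hypothetical counts and applies the same imperfect exposure classification probabilities to cases and non-cases, i.e. non-differentially; the observed risk ratio moves towards the null value of 1.0.

```python
# Minimal sketch, hypothetical counts: non-differential misclassification of a
# dietary exposure biases the observed risk ratio towards the null (1.0).

# True 2x2 table: low vegetable intake ("exposed") versus adequate intake.
exposed_cases, exposed_noncases = 60, 140        # true risk among exposed = 0.30
unexposed_cases, unexposed_noncases = 30, 170    # true risk among unexposed = 0.15 -> true RR = 2.0

sensitivity = 0.7   # probability a truly exposed subject is classified as exposed
specificity = 0.8   # probability a truly unexposed subject is classified as unexposed
# The same probabilities apply to cases and non-cases (non-differential).

def misclassify(true_exposed, true_unexposed):
    observed_exposed = sensitivity * true_exposed + (1 - specificity) * true_unexposed
    observed_unexposed = (1 - sensitivity) * true_exposed + specificity * true_unexposed
    return observed_exposed, observed_unexposed

obs_exp_cases, obs_unexp_cases = misclassify(exposed_cases, unexposed_cases)
obs_exp_non, obs_unexp_non = misclassify(exposed_noncases, unexposed_noncases)

observed_rr = (obs_exp_cases / (obs_exp_cases + obs_exp_non)) / (
    obs_unexp_cases / (obs_unexp_cases + obs_unexp_non))
print(f"True RR = 2.00, observed RR = {observed_rr:.2f}")   # about 1.40, closer to the null
```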
· The Hawthorne effect:
An intervention programme will often create an effect no matter what the programme is about. In experiments involving pharmacological treatments, this is known as the "placebo effect", and in social programmes such as nutrition education for the public, as the "Hawthorne effect" (Rossi & Freeman, 1993). The Hawthorne effect refers to a study conducted in the 1930s, showing that any change in the working environment brings about a rise in worker productivity. It is argued that the Hawthorne effect is not specific to any particular research or evaluation design. It may be present in any study involving human subjects. In other words, any nutrition education programme can bring about dietary changes, no matter what the message is. It is difficult to estimate the importance of the Hawthorne effect, and some argue that one may easily exaggerate its importance (Franke & Kaul, 1978).
· Underlying factors influencing diet:
Today, changes are taking place in poor and rich countries which have an impact on nutrition and are sometimes so serious that they can create social unrest (Strasser, Damrosch & Gains, 1991; Wiecha, Dwyer & Dunn-Strohecker, 1991; Stitt, Griffiths & Grant, 1992; Chen, Kleinman & Ware, 1994; Forman, 1994; Golden & Baranov, 1994). People migrate frequently within and between countries, leading to unstable social networks or the establishment of different ones (Pedersen, 1995). As unemployment increases, social relations change, leading to a decrease in lasting family relations and an increase in the re-establishment of relations with other adults (new friends, marriage or co-habitation). Such societal and contextual changes may have positive or negative nutrition implications (McConaghy, 1989; Oshaug, 1994; Pedersen, 1995).
One concrete example is the change in smoking habits throughout the world. Smoking is presently decreasing in rich countries and increasing in poor countries (Barry, 1991; Gray, 1992; Samet, 1993; Mackay, 1994). A number of studies have demonstrated that smokers eat differently from non-smokers. The pattern is similar among men and women of various ages, and in different countries (Kato, Tominaga & Suzuki, 1989; Pamuk et al., 1992; Suyama & Itoh, 1992; Zheng et al., 1993). It is suggested that smoking acts both as a causal factor and as a marker of unhealthy life style (Castelli, 1990; Morabia & Wynder, 1990; Whichelow, Erzinclioglu & Cox, 1991; Hulshof et al., 1992; Perkins, 1992; Strickland, Graves & Lando, 1992; Midgette, Baron & Rohan, 1993). It may be argued that smokers purchase different foods compared to non-smokers because cigarettes are relatively expensive and so compete with food expenditures. If food access were the same, one might assume that there would be no difference in dietary intake. It turns out, however, that in a situation where smokers and non-smokers have the same food access, the smokers have a more unhealthy diet than non-smokers (Oshaug et al., 1995). The changing pattern of smoking will therefore affect people's food habits, particularly in urban areas, and thus their nutrition situation (Harpham & Stephens, 1991). This shows that controlling for smoking is important in any evaluation which compares dietary intake in different groups exposed to nutrition education.
Who should evaluate?
In deciding who should perform the evaluation, the first distinction to be made is between internal and external evaluators. An internal evaluator is usually a part of the programme concerned and reports directly to its managers. The internal evaluator's objectivity and external credibility are (often rightly) said to be lower than those of an external evaluator. Because external evaluators are not directly involved or employed in the programmes they examine, they enjoy more independence (Oshaug, 1992), but they may be less discerning about context.
The second distinction is between what can be called professional and amateur evaluators. This distinction reflects differences in training and expertise, not a value judgement of the quality of an evaluation. Evaluation is the focus of the professional evaluator's training and work. The professional training of an amateur evaluator, however, usually focuses on other topics, and evaluation is only a part of her or his job.
The amateur is normally less skilled in evaluation than the professional. Nevertheless, the former might have a better understanding of a programme's evaluation needs, be able to develop better rapport with the staff, and be able to use the information and results of the evaluation faster (often directly), in particular if it is an internal evaluation (Oshaug, 1992).
Qualitative versus quantitative methodologies
The discussion so far has basically focused on quantitative evaluation. While impact or outcome evaluation is often quantitative, process evaluation and monitoring also use qualitative information. The relative advantages and disadvantages have been debated extensively in the evaluation literature (Rossi & Freeman, 1993). In the 1970s and early 1980s quantitative evaluation was heavily criticised (Cook & Reichardt, 1979; Patton, 1980; Lincoln & Guba, 1985). Patton (1978) writes:
"Evaluation research is dominated by the largely unquestioned, natural science paradigm of hypothetico-deductive methodology. This dominant paradigm assumes quantitative measurement, experimental design, and multivariate, parametric statistical analysis to be the epitome of "good" science... By way of contrast, the alternative to the dominant hypothetico-deductive paradigm is derived from the tradition of anthropological field studies. Using the techniques of in-depth, open-ended interviewing and personal observation, the alternative paradigm relies on qualitative data, holistic analysis, and detailed description derived from close contact with the targets of study. The hypothetico-deductive, natural science paradigm aims at prediction of social phenomena; the holistic-inductive, anthropological paradigm aims at understanding of social phenomena. "
Patton agrees, however, that from a utilisation-focused perspective on evaluation, neither of these paradigms is intrinsically better than the other. They represent alternatives from which the evaluator can choose (Patton, 1978). Today the hypothetico-deductive paradigm is not seen in such a negative light. Statistical analysis is not limited to parametric methods, and most evaluators will collect both qualitative and quantitative information. As pointed out by Rossi and Freeman (1993), qualitative evaluators often tend to be oriented toward making a programme work better by feeding information on the programme to its managers (formative evaluation). In contrast, quantitatively-oriented evaluators view the field as one primarily concerned with impact or outcome evaluation (summative evaluation). The polemics for or against purely qualitative or quantitative evaluation obscure the critical point - namely, that each approach is useful, and the choice of approaches depends on the evaluation question at hand. Rossi and Freeman (1993) point out that qualitative approaches can play critical roles in programme design and are important means of monitoring programmes (process evaluation). In contrast, quantitative approaches are much more appropriate for estimates of net impact, as well as for assessments of the efficiency of programme efforts.
However, qualitative procedures are difficult and expensive to use if the evaluation depends entirely on them. For example, it would be very difficult and expensive (Rossi and Freeman say that it would be virtually impossible) to base large-scale surveys on qualitative observation alone.
The critical issue is thus fitting the approach to the purpose of the evaluation. The use of both qualitative and quantitative, and multiple methods9, can strengthen the validity of findings, if results produced by different methods are congruent and/or complement each other10 (see Figure 7).
9 Often referred to as "triangulation" (Green & McClintock, 1985).
10 Congruence here means similarity, consistency, or convergence of results, whereas complementarity refers to one set of results enriching, expanding upon, clarifying, or illustrating the other. Thus, the essence of the triangulation logic is that the methods represent independent assessments of the same phenomenon and contain offsetting kinds of bias and measurement error (Green & McClintock, 1985).
Figure 7: The use of qualitative vs. quantitative methodologies in evaluation
· Both types of methodologies are important
· Qualitative methodologies are useful in monitoring and process evaluation
· Outcome/impact evaluation is often quantitative
· Use of both types of methodologies strengthens the validity of findings
Measuring efficiency
The procedures employed in efficiency assessment (cost-benefit and cost-effectiveness) are often highly technical, and the analysis is based on numerous assumptions (Sønbø Kristiansen, Eggen & Thelle, 1991; Rossi & Freeman, 1993). Nutrition education for the public aiming at changing behaviour has to compete with other programmes for resources. Policy makers and funding agencies (government agencies, United Nations agencies and NGOs) must decide how to allocate funding among these various programmes. In this competition a central question is: which programmes would show the biggest payoffs per money unit of expenditure?
For decision makers the reference programme is often the one that produces the most impact on the most targets for a given level of expenditure. This simple principle is the foundation for cost-benefit and cost-effectiveness analyses. These analyses provide systematic approaches to resource allocation. From a conceptual point of view, perhaps the most significant value of efficiency analysis is that it forces evaluators and programme personnel to think in a disciplined fashion about both costs and benefits (Rossi & Freeman, 1993).
In cost-benefit analyses the outcomes of nutrition education programmes are expressed in monetary terms:
For example a cost-benefit analysis would focus on the difference between money expended on the nutrition education programme and the money savings from reduced expenditure for treating dietary-related diseases (anaemia, goitre, vitamin A related blindness, etc.), loss of productive capacity, life years gained, quality of life years saved, etc.
In cost-effectiveness analyses the outcome for nutrition education programmes is expressed in substantive terms:
For example a cost-effectiveness analysis of the same nutrition education programme as above would focus on the estimation of money expended to change the diet of each target.
Efficiency analysis can be done in the planning or design phase of a programme; it is then called ex ante analysis. Ex ante analyses are not based on empirical information and therefore run the risk of either under- or over-estimating the benefits or effectiveness. Most commonly, efficiency analyses of programmes take place after their completion, often as part of their impact evaluation. This is called ex post analysis, where the purpose is to assess whether the costs of the intervention can be justified by the magnitude of the net outcomes (Rossi & Freeman, 1993). An important strategy in efficiency analysis is to undertake several different analyses of the same programme, varying the assumptions made, which are then open for review and checking. This is called sensitivity analysis.
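As a purely illustrative sketch (all figures hypothetical), the cost-effectiveness ratio of a nutrition education programme can be expressed as cost per person changing their diet, and a simple sensitivity analysis can vary the assumption most open to question - here, the proportion of participants who actually change:

```python
# Minimal sketch, hypothetical figures: cost-effectiveness of a nutrition
# education programme with a simple sensitivity analysis over the assumed
# proportion of participants who actually change their diet.

programme_cost = 50_000        # total programme cost, in some money unit
participants_reached = 5_000

for assumed_change_rate in (0.05, 0.10, 0.20):   # the assumption being varied
    people_changed = participants_reached * assumed_change_rate
    cost_per_change = programme_cost / people_changed
    print(f"assumed change rate {assumed_change_rate:.0%}: "
          f"{cost_per_change:,.0f} per person changing diet")
```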
Cost-benefit analysis is controversial because only a portion of programme inputs and outcomes may reasonably be assigned a monetary value. One must ultimately place a value on human life in order to fully monetise the programme benefits (Zeckhauser, 1975; Sønbø Kristiansen, Eggen & Thelle, 1992; Rossi & Freeman, 1993). Efficiency analysis may be impractical and unwise for several reasons (Rossi & Freeman, 1993):
· The required technical procedures may be beyond the resources of the evaluation programme.
· Political or moral controversies may result from placing economic values on particular input and outcome measures. This may obscure the relevance and minimise the potential utility of an evaluation.
· Efficiency assessment may require taking different costs and outcomes into account, depending on the perspectives and values of sponsors, stakeholders11, targets and evaluators themselves. This may be difficult for at least some of the stakeholders to understand, and may obscure the relevance and utility of evaluations.
11 Stakeholders are individuals or organizations directly or indirectly affected by the implementation and results of intervention programmes (Rossi & Freeman, 1993).
· In some cases, the data needed for undertaking cost-benefit calculations are not fully available. The analytic and conceptual models may be inadequate, and often untested underlying assumptions may lead to faulty, questionable and unreliable results.
There are therefore considerable controversies about converting outcomes into monetary values. Cost-effectiveness analysis is seen as a more appropriate technique than cost-benefit analysis (Rossi & Freeman, 1993). Cost-effectiveness analysis requires monetising only the programme's cost, and the benefits are expressed in outcome units.
Skills needed in evaluation
To identify one specific skill profile for nutrition evaluators would be impossible. Nutrition is a field which is cross- or inter-disciplinary in nature. Similarly, evaluation is a cross-disciplinary undertaking, in which methodologies have been borrowed extensively from many disciplines. Evaluation is not a "profession", at least in terms of the criteria that are often used to characterise nutritionists, physicians, sociologists, agronomists and other groups. Evaluators use a range of approaches, such as large-scale, randomised field experiments, time-series analysis, qualitative field methods, quantitative cross-sectional studies, rapid appraisal methods, focused group discussions, and participant observation. The role definition of an evaluator in general terms is therefore blurred and fuzzy (Rossi & Freeman, 1993).
Clearly it is impossible for every person involved in evaluating nutrition education of the public to be a scholar in all relevant sciences and disciplines, and to be an expert in every methodological procedure. In evaluation of nutrition education programmes, it is therefore important to be open to hiring consultants who are experts in methods the evaluators themselves cannot cover.
Instead of attempting to make an extensive list of the skills needed, we can consider some examples linked to the various types of evaluation discussed above.
An evaluator has an important role in assessing the correctness of problem identification (context evaluation). Skills are therefore needed in diagnostic procedures for defining the nature, size, and distribution of the nutrition problem. This may include analysis of existing data to assess or provide a baseline, rapid appraisals, qualitative needs assessment, forecasting needs, estimating nutrition parameters, estimating nutrition/disease-risk behaviours, and assessing the selection of targets (incidence/prevalence measurements, identification of population at risk, etc.). Several of these skills are also relevant in process and outcome evaluation. Furthermore, skills are also needed in using indicators to identify trends, measure programme coverage, identify effects and impact, assess biases and confounding factors, and disseminate evaluation results to various stakeholders.
Concluding remarks
Evaluation theory, research design typology and methodology are discussed in many books which can be recommended for further reading (Levin et al., 1981; Rossi, Wright & Anderson, 1983; Shadish, Cook & Leviton, 1991; Rossi & Freeman, 1993). Evaluation can be simple or complex. The methods chosen depend on the evaluator's competence and the aims of the evaluation. Experimental and quasi-experimental designs have often been discussed, but such rigorous designs have also been criticised. In evaluating nutrition education programmes, one should feel free to look at various options, aiming at the simplest system that works, and seeking the best method or set of methods for answering the questions that address the objectives of the evaluation. Having chosen a type of evaluation and the questions and indicators to use, one will be better able to decide between the use of, for example, quantitative or qualitative methods, questionnaires, guides, general interviews, focused groups, key informant interviews, and participant observation (Oshaug, 1992).
Recommendations
(i) Integrate evaluation in the programme from the planning phase.
(ii) Clarify the purpose of the evaluation.
(iii) Develop an evaluation system which takes account of all phases of the nutrition education project.
(iv) Decide if the evaluation should be internal or external, or both.
(v) When evaluating inputs, make sure that programme objectives are properly specified and that they contain criteria, and that the activities are relevant and feasible.
(vi) When evaluating impact of nutrition education on diet, use combined dietary assessment methods in order to improve validity.
(vii) Use multiple methods (triangulation) in data collection and analysis. Congruent results from different methods strengthen the validity of the findings (see the dietary-methods sketch following this list).
(viii) In analyses, be careful to control for extraneous confounding factors and bias.
(ix) In efficiency analyses, select a cost-effectiveness analysis rather than a cost-benefit analysis, as it is more appropriate for nutrition education programmes (see the cost-effectiveness sketch following this list).
(x) In internal evaluation, assess the competence of the evaluator(s) needed for the evaluation. Be open to hiring consultants who are experts in methods not available in the programme, or for training of programme personnel.
(xi) Evaluation should be part of further training for nutrition personnel, and training in evaluation methodology should be provided for programme personnel.
(xii) Resources for evaluation should be specified in the general budget for nutrition education programmes.
(xiii) Adequate time should be allocated to nutrition education programmes, with the timing of the evaluation clearly identified.
(xiv) Make a plan for dissemination of the evaluation results and ensure that they are presented in ways which correspond to the needs and competencies of the relevant stakeholders.
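To illustrate recommendations (vi) and (vii), the sketch below compares group-level changes estimated by two dietary assessment methods and checks whether the two sets of results point in the same direction. The intake figures, the two-percentage-point tolerance and the variable names are invented for the example, not taken from any study cited here.

```python
# Minimal sketch, with invented numbers: triangulating two dietary assessment
# methods (repeated 24-hour recalls and a food frequency questionnaire) applied
# to the same respondents before and after a nutrition education programme.
from statistics import mean

# Hypothetical fat intake (% of energy) estimated by each method.
recall_before = [38, 41, 36, 40, 39]
recall_after  = [34, 37, 33, 36, 35]
ffq_before    = [40, 42, 37, 41, 38]
ffq_after     = [36, 38, 34, 37, 34]

def change(before, after):
    """Mean change in the group estimate between the two time points."""
    return mean(after) - mean(before)

recall_change = change(recall_before, recall_after)
ffq_change = change(ffq_before, ffq_after)

print(f"Change estimated from recalls: {recall_change:+.1f} percentage points")
print(f"Change estimated from FFQ:     {ffq_change:+.1f} percentage points")

# The methods need not agree on the absolute level; a congruent direction and
# similar size of change are what strengthen confidence in the finding.
congruent = (recall_change < 0) == (ffq_change < 0) and abs(recall_change - ffq_change) < 2
print("Methods congruent:", congruent)
```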
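To illustrate recommendation (ix), the following sketch computes simple and incremental cost-effectiveness ratios for two hypothetical programme alternatives. All costs, effect sizes and alternative names are invented; the point is only that effects are kept in a nutrition-relevant unit rather than converted to money, as a cost-benefit analysis would require.

```python
# Minimal sketch, with invented figures: cost-effectiveness expresses cost per
# unit of effect, here a percentage-point reduction in the prevalence of the
# target dietary-risk behaviour.
alternatives = {
    "group sessions":      {"cost": 50_000, "effect": 4.0},
    "mass-media campaign": {"cost": 120_000, "effect": 7.5},
}

for name, a in alternatives.items():
    print(f"{name}: {a['cost'] / a['effect']:,.0f} per percentage point of effect")

# Incremental ratio: extra cost per extra unit of effect when moving from the
# cheaper to the more expensive alternative.
cheap, costly = alternatives["group sessions"], alternatives["mass-media campaign"]
icer = (costly["cost"] - cheap["cost"]) / (costly["effect"] - cheap["effect"])
print(f"Incremental cost-effectiveness: {icer:,.0f} per additional percentage point")
```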
References
Barry, M. 1991. The influence of the US tobacco industry on the health, economy, and environment of developing countries. New Engl J Med, 324: 917-920.
Barrett-Connor, E. 1991. Nutrition epidemiology: how do we know what they ate? Am J Clin Nutr, 54: 182S-7S.
Beaton, G.H., Milner, J., Corey, P., et al. 1979. Sources of variance in 24-hour dietary recall data: implications for nutrition study design and interpretation. Am J Clin Nutr, 32: 2546-59.
Beaton, G.H., Milner, J., McGuire, V., Feather, T.E. & Little, J.A. 1983. Source of variance in 24-hour recall data: implications for nutrition study design and interpretation. Carbohydrate sources, vitamins, and minerals. Am J Clin Nutr, 37: 986-95.
Bingham, S.A., Nelson, M., Paul, A.A., Haraldsdottir, J., Bjorge Loken, E. & van Staveren, W.A. 1988. Methods for data collection at an individual level. In Cameron, M.E. & van Staveren, W.A. (eds.) Manual on methodology for food consumption studies. Oxford, Oxford University Press.
Burema, J., van Staveren, W.A. & van den Brandt, P.A. 1988. Validity and reproducibility. In Cameron, M.E. & van Staveren, W.A. (eds.) Manual on methodology for food consumption studies. Oxford, Oxford University Press.
Cameron, M.E. & van Staveren, W.A. (eds.) 1988. Manual on methodology for food consumption studies. Oxford, Oxford University Press.
Castelli, W.P. 1990. Diet, smoking, and alcohol: influence on coronary heart disease risk. Am J Kidney Dis, 16(4 Suppl 1): 41-46.
Chapman, D.W. & Boothroyd, R.A. 1988. Evaluation dilemmas: conducting evaluation studies in developing countries. Evaluation and Programme Planning, 11: 37-42.
Chen, L.C., Kleinman, A. & Ware, N.C. 1994. Health and social change in international perspective. Harvard series on population and international health. Boston, Harvard University Press.
Cook, T.D. & Reichardt, C.S. 1979. Qualitative and quantitative methods in evaluation research. Beverly Hills, CA, Sage Publications.
Copeland, K.T., Checkoway, H., McMichael, A.J. et al. 1977. Bias due to misclassification in the estimation of relative risk. Am J Epidemiol, 105: 488-95.
Criqui, M.H., Austin, M. & Barrett-Connor, E. 1979. The effect of non-response on risk ratios in a cardiovascular disease study. J Chron Dis, 32: 633-38.
Dwyer, J.T. 1988. Assessment of dietary intake. In Shils, M.E. & Young, V.R. (eds.) Modern Nutrition in Health and Disease. Philadelphia, Lea & Febiger.
Forman, S. (ed.) 1994. Diagnosing America: Anthropology and public engagement. The University of Michigan Press, Ann Arbor.
Franke, R.H. & Kaul, J.D. 1978. The Hawthorne experiments: First statistical interpretation. Am Sociol Rev, 43: 623-43.
Golden, R.E. & Baranov, M.S. 1994. The impact of the April 1992 civil unrest on the Los Angeles REI WIC programme and its participants. Public Health Rep, 109: 606-14.
Gray, N. 1992. Evidence and overview of global tobacco problem. Monogr Natl Cancer Inst, 12: 15-16.
Green, J. & McClintock, C. 1985. Triangulation in evaluation. Design and analysis issues. Evaluation Rev, 9: 523-45.
Guba, E.G. & Lincoln, Y.S. 1981. Effective evaluation. San Francisco, Jossey-Bass.
Guba, E.G. & Lincoln, Y.S. 1989. Fourth generation evaluation. London, Sage Publications.
Harpham, T. & Stephens, C. 1991. Urbanization and health in developing countries. World Health Stat Q, 44: 62-69.
Hennekens, C.H. & Buring, J.E. 1987. Epidemiology in medicine. Boston, Little, Brown and Company.
House, E.R. 1980. Evaluating with validity. Beverly Hills, CA, Sage Publications.
House, E.R. 1986. New directions in educational evaluation. London, The Falmer Press.
Hulshof, K.F., Wedel, M., Lowik, M.R., Kok, F.J., Kistemaker, C., Hermus, R.J., ten Hoor, F. & Ockhuizen, T. 1992. Clustering of dietary variables and other lifestyle factors (Dutch Nutritional Surveillance System). J Epidemiol Community Health, 46: 417-24.
ICN. 1992. Communication to improve nutritional behaviour: the challenge of motivating the audience to act. ICN/92/INF/29. International Conference on Nutrition. Rome, FAO/WHO Joint secretariat for the Conference.
ICN. 1992a. Nutrition and development - a global assessment. PREPCOM/ICN/92/3. International Conference on Nutrition. Rome, FAO/WHO.
ICN. 1992b. Promoting appropriate diets and healthy lifestyles. PREPCOM/ICN/92/INF/10. Major issues for nutrition strategies. Theme paper No. 5. International Conference on Nutrition. Rome, FAO/WHO.
Kato, I., Tominaga, S. & Suzuki, T. 1989. Characteristics of past smokers. Int J Epidemiol, 18: 345-54.
Klaver, W.A., Burema, J., van Staveren, W.A. & Knuiman, J.T. 1988. Definitions of terms. In Cameron M.E. and van Staveren W.A. (eds.) Manual on methodology for food consumption studies. Oxford, Oxford University Press.
Klein, R.E., Read, M.S., Riecken, H.W., Brown, J.A., Pradilla, A. & Daza, C.H. (eds.) 1979. Evaluating the impact of nutrition and health programmes. New York, Plenum Press.
Kleinbaum, D.G., Kupper, L.L. & Morgenstern, H. 1982. Epidemiologic research. New York, Van Nostrand Reinhold Company.
Levin, R.A., Solomon, M.A., Hellstern, G.M. & Wollmann, H. (eds.) 1981. Evaluation research and practice. Comparative and international perspectives. Beverly Hills, CA, Sage Publications.
Lincoln, Y.S. & Guba, E.G. 1985. Naturalistic inquiry. Newbury Park, CA, Sage Publications.
Luepker, R.V., Murray, D.M., Jacobs, D.R. et al. 1994. Community education for cardiovascular disease prevention: Risk factor changes in the Minnesota heart health programme. Am J Public Health: 1383-93.
Mackay, J. 1994. The tobacco problem: commercial profit versus health - the conflict of interests in developing countries. Prev Med, 23: 535-38.
Marmot, M. 1992. Coronary heart disease: rise and fall of a modern epidemic. In Marmot, M. & Elliott, P. (eds.) Coronary heart disease epidemiology. From aetiology to public health. Oxford, Oxford University Press.
McConaghy, J. 1989. Adults' beliefs about the determinants of successful dietary change. Community Health Studies, 13: 492-502.
McMahon, R., Barton, E. & Piot, M. 1980. On being in charge. A guide for middle-level management in primary health care. Geneva, WHO.
Morabia, A. & Wynder, E.L. 1990. Dietary habits of smokers, people who never smoked, and exsmokers. Am J Clin Nutr, 52: 933-37.
Midgette, A.S., Baron, J.A. & Rohan, T.E. 1993. Do cigarette smokers have diets that increase their risk of coronary heart disease and cancer? Am J Epidemiol, 137: 521-29.
Neyzi, O., Gulecyuz, M., Dincer, Z., Olgun, P., Kutluay, Uzel, N. & Saner, G. 1991. An educational intervention on promotion of breastfeeding complemented by continuing support. Paediatric and Perinatal Epidemiol, 5: 286-98.
Oshaug, A. 1992. Planning and managing community nutrition work: Manual for personnel involved in community nutrition. International Nutrition Section, WHO Collaborating Centre, Nordic School of Nutrition, University of Oslo. (2nd ed. 1994).
Oshaug, A. 1994. Nutrition Security in Norway? A Situation Analysis. Scandinavian J Nutr, 38 (Suppl 28): 1-68.
Oshaug, A., Benbouzid, D. & Guilbert, J-J. 1993. Educational handbook for nutrition trainers: A handbook on how educators can increase their skills so as to facilitate learning for the students. World Health Organization, Geneva/WHO Collaborating Centre, Nordic School of Nutrition, University of Oslo.
Oshaug, A., Pedersen, J., Diarra, M., Ag Bendech, M. & Hatloy, A. 1994. Problems and pitfalls in the use of age in anthropometric measurements: A case from Mali. J Nutr, 124: 636-44.
Oshaug, A., Bjonnes, C.H., Bugge, K.H. & Trygg, K.U. 1995. Tobacco smoking, an independent determinant for unhealthy diet? A cross sectional study of Norwegian workers on platforms in the North Sea. Eur J Publ Health (in press).
Pamuk, E.R., Byers, T., Coates, R.J., Vann, J.W., Sowell, A.L., Gunter, E.W. & Glass, D. 1992. Effect of smoking on serum nutrient concentrations in African-American women. Am J Clin Nutr, 59: 891-5.
Patton, M.Q. 1978. Utilization-focused evaluation. Beverly Hills, CA, Sage Publications.
Patton, M.Q. 1980. Qualitative evaluation methods. Beverly Hills, CA, Sage Publications.
Pedersen, J. 1995. Drought, migration and population growth in the Sahel: The case of the Malian Gourma: 1990-1991. Pop Stud, 49: 111-126.
Perkins, K.A. 1992. Effects of tobacco smoking on caloric intake. Br J Addict, 87: 193-205.
Romiszowski, A.J. 1984. Producing instructional systems: Lesson planning for individualized and group learning activities. London, Kogan Page.
Rossi, P.H. & Freeman, H.E. 1993. Evaluation: A systematic approach. London, Sage Publications.
Rossi, P.H., Wright, J.D. & Anderson, A.B. (eds.) 1983. Handbook of survey research. New York, Academic Press.
Rothman, K.J. 1986. Modern epidemiology. Boston, Little, Brown and Company.
Sackett, D.L. 1979. Bias in analytic research. J Chronic Dis, 32: 51-63.
Samet, J.M. 1993. The epidemiology of lung cancer. Chest, 103: 20S-29S.
Shadish, W.R., Cook, T.D. & Leviton, L.C. (eds.) 1991. Foundations of programme evaluation. Theories and practice. London, Sage Publications.
Stake, R.E. 1981. Setting standards for educational evaluators. Evaluation News, 2: 148-52.
Stitt, S., Griffiths, G. & Grant, D. 1992. Homeless & hungry: the evidence from Liverpool. Nutr and Health, 9: 275-87.
Strasser, L.A., Damrosch, S. & Gains, J. 1991. Nutrition and the homeless person. J Community Health Nurs, 8: 65-73.
Strickland, D., Graves, K. & Lando, H. 1992. Smoking status and dietary fats. Prev Med, 21: 228-36.
Stufflebeam, D.L. & Webster, W.J. 1980. An analysis of alternative approaches to evaluation. Educational Evaluation and Policy Analysis, 1: 5-20.
Suyama, Y. & Itoh, R. 1992. Multivariate analysis of dietary habits in 931 elderly Japanese males: smoking, food frequency and food preferences. J Nutr Elder, 12: 1-12.
Sønbø Kristiansen, I., Eggen, A.E. & Thelle, D.S. 1991. Cost effectiveness of incremental programmes for lowering serum cholesterol concentration: is individual intervention worth while? Br Med J, 302: 1119-22.
Weiss, C.H. 1987. Evaluating social programmes: what have we learned? Society Nov/Dec: 40-45.
Whichelow, M.J., Erzinclioglu, S.W. & Cox, B.D. 1991. A comparison of the diets of non-smokers and smokers. Br J Addict, 86: 71-81.
Wholey, J.S. 1981. Using evaluation to improve programme performance. In Levin, R.A., Solomon, M.A., Hellstern, G.M. & Wollmann, H. (eds.) Evaluation research and practice. Comparative and international perspectives. London, Sage Publications.
Wiecha, J.L., Dwyer, J.T. & Dunn-Strohecker, M. 1991. Nutrition and health services needs among the homeless. Publ Health Rep, 106: 364-74.
Wilhelmsen, L., Ljungberg, S., Wedel, H. & Werko, L. 1976. A comparison between participants and non-participants in a primary preventive trial. J Chronic Dis, 29: 331-39.
Willett, W. 1990. Nutritional epidemiology. Oxford, Oxford University Press.
Witch, J.C. 1990. Short-term dietary recall and recording methods. In Willett, W. Nutritional epidemiology. Oxford, Oxford University Press.
Zeckhauser, R. 1975. Procedures for valuing lives. Public Policy, 23: 419-64.
Zheng, W., McLaughlin, J.K., Gridley, G., Bjelke, E., Schuman, L.M., Silverman, D.T., Wacholder, S., Co-Chien, H.T., Blot, W.J. & Fraumeni Jr., J.F. 1993. A cohort study of smoking, alcohol consumption, and dietary factors for pancreatic cancer. Cancer Causes Control, 4: 477-82.