Taxonomy of PR and Communication Evaluation
Introduction
A wide range of models of PR and communication evaluation exists, using an equally wide range of terms including inputs, outputs, outtakes, outflows, outgrowths, effects, results, and impact. An even wider range of metrics and methods for evaluation is proposed for each stage. As a result, the field is confusing for many practitioners.
This page presents a taxonomy of evaluation tailored to strategic public communication – a taxonomy being a mapping of a field to produce a categorisation of concepts and terms – in short, to show where things go and where they fit in relation to each other. This taxonomy identifies:
The major stages of communication (such as inputs, outputs, etc.);
The key steps involved in each stage (such as distribution of information, reception by audiences, etc.);
Examples of metrics and milestones that can be generated or identified as part of evaluation at each stage; and
The most commonly used methods for generating these metrics and milestones.
Taxonomy
A taxonomy is not the same as a model, as a taxonomy attempts to list ALL the main concepts, terms, metrics, methods, etc. in a field, while a model is an illustration of a specific program or activity to be applied in practice. However, models should be based on the concepts and methods identified as legitimate in the field and apply them appropriately.
An important benefit of a taxonomy is that it puts concepts, metrics, methods, etc. in their right place – e.g., it avoids output metrics being confused with outcome metrics. The authors of the widely used PR text Effective Public Relations, Cutlip, Center and Broom, have noted repeatedly in editions from 1985 to the late 2000s that “the common error in program evaluation is substituting measures from one level for those at another level” (1985, p. 295; 1994, p. 44; Broom, 2009, p. 358). Emeritus Professor of Public Relations Jim Grunig similarly says that many practitioners use “a metric gathered at one level of analysis to show an outcome at a higher level of analysis” (2008, p. 89). PR and strategic communication is not alone in this. The widely used University of Wisconsin Extension (UWEX) guide to program logic models for evaluation says, for example, “people often struggle with the difference between outputs and outcomes” (Taylor-Powell & Henert, 2008, p. 19).
No taxonomy is ever complete, but the taxonomy presented here draws on a wide range of research studies to be as comprehensive as possible (see ‘Introduction to the AMEC Integrated Framework for Evaluation’ for details of the origin and basis of this taxonomy and the framework itself).
Notes for using this taxonomy
The listed steps, metrics, milestones, and methods are not exhaustive, and not all are required in every program. They are indicative of common and typical approaches to evaluating public communication, such as advertising, public relations, marketing communication, etc. Practitioners should choose relevant metrics, milestones, and methods, ideally selecting at least one at each stage.
The arrangement of inputs, activities, outputs, etc. should not be interpreted as a simple linear process. Feedback from each stage should be applied to adjust, fine-tune, and change strategy and tactics if necessary. Evaluation is an iterative process.
Not all evaluations can show impact, particularly when evaluation is undertaken within a relatively short time period following communication. Impact often occurs several years ‘downstream’ of communication. Also, the objectives of some public communication are to create awareness (an outtake or short-term outcome) or build trust (an intermediate outcome). However, as a general rule, evaluation should report well beyond outputs and outtakes. Evaluation should identify and report outcomes at a minimum and, when possible, impact.
An important feature of this taxonomy is that impact includes organizational, stakeholder, and societal impact/outcomes. This aligns with program evaluation theory and program logic models (e.g., Kellogg Foundation, 1998/2004; Taylor-Powell & Henert, 2008; Wholey, 1979; Wholey, Hatry, & Newcomer, 2010) and with the Excellence Theory of PR, which calls for evaluation to be conducted at (1) program level; (2) functional level (e.g., department or unit); (3) organizational level; and (4) societal level (L. Grunig, J. Grunig & Dozier, 2002, pp. 91–92).
SUPPORTING FOOTNOTES FOR THE TAXONOMY OF EVALUATION
Trust is considered an intermediate outcome because it is sought in order to achieve a longer-term impact, such as being elected to government, customers continuing to do business with a company, etc. It is not an end goal in itself.
Some program logic models refer to this first stage as Inputs/Resources.
Advanced outtakes overlap with, or can be the same as, short-term outcomes. This is why the most commonly used program logic models do not use outtakes as a stage.
Outtakes and outcomes can be cognitive and/or affective (emotional) and/or conative (behavioural).
Long-term outcomes overlap with, and are sometimes considered to be the same as, impact.
Impact is often evaluated only in relation to the organization. However, as noted in the introduction, impact on stakeholders, publics, and society as a whole should be considered. This is essential for government, non-government, and non-profit organizations focussed on serving the public interest. Also, impact on stakeholders and society affects and shapes the environment in which businesses operate (i.e., this evaluation forms part of the environmental scanning, audience research, and market research that will inform future planning).
Causation is very difficult to establish in many cases, particularly when multiple influences contribute to impact (results), as is often the case. The three key rules of causation must be applied: (a) the alleged cause must precede the alleged effect/impact; (b) there must be a clear relationship between the alleged cause and effect (e.g., there must be evidence that the audience accessed and used information you provided); and (c) other possible causes must be ruled out as far as possible.
Some include planning in inputs. However, if this occurs, formative research (which should precede planning) also needs to be included in inputs. In any case, most program evaluation models identify formative research and planning as key activities to be undertaken as part of the communication program.
Inputs are generally pre-campaign/program.
Reception refers to what information or messages are received by target audiences and is slightly different to exposure. For example, an audience might be exposed to a story in media that they access, but skip over the story and not receive the information. Similarly, they may attend an event such as a trade show and be exposed to content, but not receive information or messages (e.g., through inattention or selection of content to focus on).
Learning (i.e., acquisition of knowledge) is not required in all cases, but in some public communication campaigns and projects it is. For example, health campaigns to promote calcium-rich food and supplements to reduce osteoporosis among women found that women first had to be ‘educated’ about osteoporosis (what it is, its causes, etc.). Similarly, combatting obesity requires dietary education. Whereas understanding refers to comprehension of messages communicated, learning refers to the acquisition of deeper or broader knowledge that is necessary to achieve the objectives.
Ethnography is a research method based on intensive first-hand observation over an extended period, often supplemented by interviews and other research methods.
Netnography is online ethnography in which online users are closely monitored to identify their patterns of behaviour, attitudes, etc. via their comments, click trails, and other digital metrics.
Net promoter score is a score out of 10 based on a single question: ‘How likely is it that you would recommend [brand] to a friend or colleague?’ Respondents scoring 0–6 are considered ‘detractors’/dissatisfied; those scoring 7–8 are satisfied but unenthusiastic; and those scoring 9–10 are considered loyal enthusiasts, supporters, and advocates.
Econometrics is the application of mathematics and statistical methods to test hypotheses and identify the economic relations between factors based on empirical data.
ABBREVIATIONS
CPM Cost per thousand (mille = Latin for thousand)
CRM Customer relationship management (data commonly held in CRM databases)
KPI Key Performance Indicator
OTS Opportunities to see (usually calculated the same as ‘impressions’)
PEST An evaluation framework that examines ‘political’, ‘economic’, ‘social’ and ‘technological’ factors
PESTLE An evaluation framework that examines ‘political’, ‘economic’, ‘social’, ‘technological’, ‘legal’ and ‘environmental’ factors (also used as PESTEL, with ‘environmental’ arranged before ‘legal’) (see http://pestleanalysis.com/how-to-do-a-swot-analysis)
ROI Return on investment
SMART Refers to objectives that are ‘specific’, ‘measurable’, ‘achievable’, ‘relevant’ (e.g., linked to organizational objectives), and ‘time-bound’ (i.e., within a specified period)
SWOT A strategic planning method that examines ‘strengths’, ‘weaknesses’, ‘opportunities’ and ‘threats’
TARPs Target audience ratings points based on the ratings system used in advertising
UWEX University of Wisconsin Extension program (see Taylor-Powell & Henert, 2008)
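To make the net promoter score metric described in the footnotes concrete, the following is a minimal sketch in Python. It applies the response bands given above (0–6 detractors, 7–8 satisfied but unenthusiastic, 9–10 promoters) and computes the score as conventionally calculated in market research: the percentage of promoters minus the percentage of detractors. The function and variable names are illustrative, not drawn from any particular tool.

```python
def nps_band(score: int) -> str:
    """Map a 0-10 survey response to the band described in the footnotes."""
    if not 0 <= score <= 10:
        raise ValueError("NPS responses must be between 0 and 10")
    if score <= 6:
        return "detractor"   # dissatisfied
    if score <= 8:
        return "passive"     # satisfied but unenthusiastic
    return "promoter"        # loyal enthusiast, supporter, advocate

def net_promoter_score(responses: list[int]) -> float:
    """Conventional NPS: percentage of promoters minus percentage of detractors."""
    bands = [nps_band(r) for r in responses]
    promoters = bands.count("promoter") / len(bands) * 100
    detractors = bands.count("detractor") / len(bands) * 100
    return promoters - detractors

# Example: 10 hypothetical survey responses
responses = [10, 9, 9, 8, 7, 6, 5, 9, 10, 3]
print(net_promoter_score(responses))  # 5 promoters, 3 detractors -> 20.0
```

Note that the score can range from −100 (all detractors) to +100 (all promoters), which is why reporting the raw distribution of responses alongside the single score is often advisable.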