Reporting the Results from a Purposeful Sample
Greater care needs to be taken when reporting the results from a purposeful sample than from a generalizable sample. Most people are familiar with generalizable sampling and how it is used to measure the level or magnitude of population characteristics. This familiarity may be the reason why the results from purposeful samples are incorrectly reported as if they were the results from a generalizable sample. The results from a purposeful sample cannot be used to extrapolate to the population as a whole. An explicit clarification of this limitation should always be included in the description of the selected purposeful sampling method.
What can be reported depends entirely on how the sample was selected. Each purposeful sampling method introduces bias into the sample with the intent of learning something specific about the population. The reporting of results should clearly state that intent and be limited specifically to it.
When reporting the results of a purposeful sample, it helps to focus on six key questions about the methodology and the evidence. These are:
- How was the sample selected?
- What type of information was this sample expected to yield?
- What did the auditors look for?
- What did the auditors find?
- What can be inferred from the results given how the sample was selected?
- What other information corroborates this inference?
How was the sample selected?
While many people understand generalizable sampling, the same cannot be said of purposeful sampling. One reason for this difference is that purposeful sampling encompasses a much wider variation of methods (see Appendix 4 for descriptions of various methods), and this variation has a large influence on how results should be interpreted. For these reasons, it is important to describe in simple language to the reader of an audit report how the purposeful sample was selected.
What type of information was this sample expected to yield?
The key to using purposeful sampling appropriately is to establish a logical link between the method of selection and the types of conclusions one can draw. Auditors should describe how their method of selection logically allows them to draw a specific conclusion. Examples are in Text Box 4.
Text Box 4 – Theoretical Examples of Descriptions of Sampling Methods and their Potential
Example 1 – Selecting a sample with the widest possible variation in cases: Generating a sample that includes the widest possible variation of cases will allow the audit to conclude that the potential for risk exists in practically any area of the population and is not restricted to a narrow segment as previously assumed.

Example 2 – Using a single index case to demonstrate the existence of a serious threat: The common practice of open USB ports on government computers has the potential to result in the widespread introduction of computer viruses throughout government computer systems. We identified one case that shows that the risk is real. The user in this case had a high level of computer security knowledge, and his computer configuration had a reasonable level of software protection. Despite these advantages, the use of open USB ports and the common use of flash drives resulted in a system-wide infection.
What did the auditors look for?
The report should describe the type and extent of examination performed. Examination of cases in purposeful sampling is usually broader and more extensive than in generalizable sampling.
What did the auditors find?
The report should describe the findings objectively. It is good practice for auditors to avoid reporting rates or statistical proportions derived from a purposeful sample, especially when small numbers are involved, because readers may assume the findings can be generalized to the population. It is better to report raw figures (not percentages) and to use examples to illustrate typical findings. If rates or proportions are reported, they should be clearly identified as relating to the sample, not the population.
What can be inferred from the results given how the sample was selected?
Inference from purposeful samples depends on what was found, how the sample was selected, and the uniformity of the results. While interpreting findings from a generalizable sample is a relatively simple exercise of extrapolation, interpreting findings from a purposeful sample requires the construction of a logical argument. Given the manner in which the sample was selected, what can the results tell the readers with certainty? It is also important to remind readers of what cannot be inferred from the findings, such as a prediction of population error rates.
What other information corroborates this inference?
Finally, remember that evidence from purposeful sampling requires more corroboration from other sources of evidence (sometimes called triangulation of evidence) than evidence from generalizable sampling. When designing the audit approach, auditors should plan for multiple methods of inquiry to provide information from different perspectives, which will help in forming a cohesive argument.
An example of a good description of a purposeful sampling approach is in Text Box 5.
Audit: Office of the Auditor General of Canada – Indian and Northern Affairs Canada – Meeting Treaty and Land Entitlement Obligations, published November 2005

Findings from the audit (excerpt):

7.48 We reviewed land selection files in both Saskatchewan and Manitoba to gain insight into particular problems and assess departmental performance against stated commitments. To accomplish this, we extracted two purposeful samples. In the first sample, we asked both Saskatchewan and Manitoba to identify their best cases and more problematic cases, for a total of 24 files. The function of this sample was to determine what the Department determines as successes and failures. In our second sample, we selected 44 files representing a cross-section of land selection situations with specific characteristics, such as the location (urban or rural, northern or southern), affiliation (part of multi-party framework agreement or an individual agreement), and status (complete or still in process). These cases were reviewed and compared with the best and problematic cases.

7.49 In both samples, we found that most files contained the key documents required to the point that the files had progressed in the process. The majority of completed files were from Saskatchewan, whereas Manitoba had a significant number of files still in the process. Overall, we found little evidence of communication with individual First Nations, such as notes-to-file and records of meetings—documentation necessary for properly managing selections.

7.50 Most significant was the considerable variation in file management methods by individual project officers. This ranged from detailed and comprehensive checklists, to periodic notes-to-file, to very little file management whatsoever. This finding is of particular concern when coupled with the level of staff turnover noted among project officers responsible for the selections in each sample. Some files had as many as four project officers in six years. If information is not systematically collected and recorded, then it quickly becomes an issue of high risk when coupled with staff turnover and the Department's unreliable data systems.

7.51 Within the sample of best and problematic cases, almost all of the files that the Department considered a success had proceeded or were proceeding through the process with few complications. However, we noted that the processing times were much faster in Saskatchewan than in Manitoba. Conversely, most of the files that the Department considered problematic were more complex, involving third-party interests, difficulties with municipalities, and procedural matters (for example, land selections deemed ineligible under the framework agreements).

7.52 In our second sample, we found that the majority of files had the same characteristics as the problematic cases in our first sample. Very few cases had gone through the process without complications, and most were delayed at some point in the process. Delays were most often caused by third-party interests, such as concerns of municipalities or issues related to natural resources (minerals or oil and gas). When the files remained stalled for several years, there were no plans to resolve the impasse. Conditions attached by the Department to regional approval-in-principle, a document indicating the region's support for selections, were another cause of delays. Given that most of the files in our second sample exhibited the same characteristics as the problematic files in the first sample, we believe there are critical issues that must be addressed in planning and file management.


