When, and by whom, should the data be collected and analyzed

As far as data collection goes, the “when” part of this question is relatively simple: data collection should begin no later than when you begin your work – and ideally before, in order to establish a baseline or starting point – and continue throughout. Collecting data for a period before the program or intervention starts lets you determine whether there were any trends in the data before the intervention began. Additionally, to gauge your program’s longer-term effects, you should collect follow-up data for a period of time after the program concludes.

The timing of analysis can be looked at in at least two ways: One is that it’s best to analyze your information when you’ve collected all of it, so you can look at it as a whole. The other is that if you analyze it as you go along, you’ll be able to adjust your thinking about what information you actually need, and to adjust your program to respond to the information you’re getting. Which of these approaches you take depends on your research purposes. If you’re more concerned with a summative evaluation – finding out whether your approach was effective – you might be more inclined toward the first. If you’re oriented toward improvement – a formative evaluation – we recommend gathering information along the way. Both approaches are legitimate, but ongoing data collection and review are particularly likely to lead to improvements in your work.

The “who” question can be more complex. If you’re reasonably familiar with statistics and statistical procedures, and you have the resources in time, money, and personnel, it’s likely that you’ll do a somewhat formal study, using standard statistical tests. (There’s a great deal of software – both for sale and free or open-source – available to help you.)

If that’s not the case, you have some choices:

*You can hire an outside evaluator – or find a volunteer, perhaps from a nearby college or university – to take care of data collection and/or analysis for you.
*You can conduct a less formal evaluation. Your results may not be as sophisticated as if you subjected them to rigorous statistical procedures, but they can still tell you a lot about your program. Just the numbers – the number of dropouts (and when most dropped out), for instance, or the characteristics of the people you serve – can give you important and usable information.
*You can try to learn enough about statistics and statistical software to conduct a formal evaluation yourself. (Take a course, for example.)
*You can collect the data and then send it off to someone – a university program, a friendly statistician or researcher, or someone you hire – to process it for you.
*You can collect and rely largely on qualitative data. Whether this is an option depends to a large extent on what your program is about. You wouldn’t want to conduct a formal evaluation of effectiveness of a new medication using only qualitative data, but you might be able to draw some reasonable conclusions about use or compliance patterns from qualitative information.
*If possible, use a randomized or closely matched control group for comparison. If your control is properly structured, you can draw some fairly reliable conclusions simply by comparing its results to those of your intervention group. Again, these results won’t be as reliable as if the comparison were made using statistical procedures, but they can point you in the right direction. It’s fairly easy to tell whether or not there’s a major difference between the numbers for the two or more groups. If 95% of the students in your class passed the test, and only 60% of those in a similar but uninstructed control group did, you can be pretty sure that your class made a difference in some way, although you may not be able to tell exactly what it was that mattered. By the same token, if 72% of your students passed and 70% of the control group did as well, it seems pretty clear that your instruction had essentially no effect, if the groups were starting from approximately the same place.
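To illustrate the kind of statistical procedure the informal comparison above stands in for, the pass-rate example can be checked with a two-proportion z-test, which asks whether the gap between two groups’ rates is larger than chance alone would explain. This is a minimal sketch: the group size of 40 students per group is a hypothetical assumption (the text gives only the percentages), and a real evaluation would also consider effect size and how the groups were formed.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Compare two pass rates: returns (z, two-sided p-value).

    A small p-value (e.g. below 0.05) suggests the difference in
    rates is unlikely to be due to chance alone.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pool the two groups to estimate the shared rate under the
    # "no real difference" assumption
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution,
    # using the error function from the standard library
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 95% vs 60%: 38/40 passed in the class, 24/40 in the control group
z_big, p_big = two_proportion_z_test(38, 40, 24, 40)

# ~72% vs 70%: 29/40 vs 28/40 - rates this close rarely reach significance
z_small, p_small = two_proportion_z_test(29, 40, 28, 40)
```

With these (assumed) group sizes, the first comparison yields a large z and a very small p-value, matching the intuition that a 35-point gap is a real difference, while the second yields a p-value far above any conventional threshold, matching the intuition that a 2-point gap means essentially no effect.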

Who should actually collect and analyze data also depends on the form of your evaluation. If you’re doing a participatory evaluation, much of the data collection – and analysis – will be done by community members or program participants themselves. If you’re conducting an evaluation in which the observation is specialized, the data collectors may be staff members, professionals, highly trained volunteers, or others with specific skills or training (graduate students, for example). Analysis can also be a participatory process. Even where complicated statistical procedures are necessary, participants and/or community members might be involved in sorting out what the results actually mean once the math is done. Alternatively, analysis can be carried out by professionals or other trained individuals, depending on the nature of the data, the methods of analysis, and the level of sophistication you’re aiming for in the conclusions.