Program Development & Evaluation


In short: Developing a new therapy manual, program, or intervention is fun, rewarding, and a great credential for your CV. To qualify as an IP, however, your project must also include a method of evaluating your intervention. This type of project pairs your new program with a study proposal designed to determine whether the program works. It's a lot like a research proposal, except that you are the creator of the intervention!

Key Components

  • Literature review supporting your program
  • Intervention
  • Program materials (e.g., a manual)
  • Implementation procedure
  • Proposed method of evaluation
  • Measures to be used
  • Data analysis plan 

The whole story: Your proposed program must address a clinical need by providing mental health services (e.g., prevention, treatment, psychoeducation) to a population dealing with a mental health issue or related condition (e.g., older adults coping with loneliness, teenagers dealing with disordered eating).

The need for a new program must be established through empirical literature identifying service gaps and the limitations of existing programs. The new program design should also be grounded in relevant psychological theory or empirically supported frameworks (e.g., attachment theory, trauma-informed care).

A program development and evaluation project therefore begins with a literature review, which should draw insights from both theory and existing practice and apply them to the design of your new clinical program, intervention, or service model.

Your section on the intervention itself should describe the program's goals, content, delivery method (e.g., telehealth or in-person), and intended outcomes. Any program materials (e.g., training manuals, marketing handouts) should be adequately described in the text and included as appendices (e.g., one-page handouts, flyers, logos).

Your report should provide a clear implementation procedure describing how the program could be delivered in a real-world setting (e.g., schools, clinics, community organizations, primary care offices). Implementation plans should include detailed workflows describing how patients move through the program: How is recruitment handled? What happens after each patient's first encounter with the program? How are new patients assigned to clinicians? These workflows may also include policies for referring patients to a higher level of care.

Implementation plans may also cover the hiring and training of staff (e.g., the length and content of training), the marketing of the program, and the expected workload of each staff member (e.g., caseload expectations for each mental health professional in the program).

The program evaluation portion is simply a study design that collects and analyzes quantitative data (qualitative evaluation is not an option for the IP). This involves choosing milestones at which your program will be evaluated, as well as a specific, testable set of questions or hypotheses about your program's impact on participants.

As with any study, identify research measures (e.g., scales, tests, experimental tasks) that are documented as having adequate reliability and validity and that are capable of measuring program effectiveness. Where necessary, one or two modified scales or novel instruments designed by the student may be included in the set of research instruments.
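If you do design or modify an instrument, you will typically need to report its internal consistency. As a rough illustration only (not an IP requirement), Cronbach's alpha can be computed from a respondents-by-items score matrix; the function and sample data below are hypothetical, assuming NumPy is available.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 Likert items (1-5)
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
])
print(round(cronbach_alpha(scores), 2))  # high alpha: items move together
```

Values of alpha around .70 or above are conventionally treated as acceptable internal consistency, though the appropriate threshold depends on the instrument's purpose.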

IRB approval is not required unless you're actually implementing your program!

Data collection is also not required, but you should include a proposed data analysis plan. Explain what kind of data you expect to obtain (e.g., does the sample include clinicians, patients, or other informants?). Discuss your expected sample size, including an analysis of the minimum viable sample size (e.g., using G*Power or a similar software tool), and the specific statistical tests you would conduct. You may also describe expectations about sample characteristics (e.g., will the data be normally distributed or skewed?) and the statistical choices you expect to make.
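For simple designs, the same a-priori calculation G*Power performs can be sketched in code. The example below is a minimal illustration, assuming the statsmodels library and a two-group pre/post comparison; the effect size, alpha, and power values are illustrative assumptions, not program requirements.

```python
# Minimal a-priori power analysis for an independent-samples t-test,
# analogous to a G*Power calculation. All parameter values are
# assumptions chosen for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # two-tailed significance level
    power=0.80,       # desired probability of detecting the effect
)
print(f"Required sample size per group: {n_per_group:.1f}")
```

A smaller assumed effect size raises the required sample size sharply, which is why the justification for your expected effect size (ideally drawn from the literature review) matters as much as the calculation itself.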

The Discussion should provide a nuanced account of how the program, and the expected results of its evaluation, can add value to the field of clinical psychology by addressing service gaps. Limitations of the program, as well as of its evaluation, should be acknowledged and described.

Your final integrative project should follow a clear structure (e.g., abstract, introduction, methods, results, discussion, references) and provide well-organized tables or figures as applicable. All references should be properly listed and all writing should be in APA style.

To learn more: 

Relevant classes at TC:

  • Implementation Science (CCPX)
  • Causal Inference for Program Evaluation (HUDM)