EdTech 505: Chapter Summaries

Kim Hefty

EdTech 505

Week 11 Assignment:

  1. Go back and review chapters 1-9 in the Boulmetis & Dutwin text. Then write down your thoughts about each chapter. Minimum 50 words for each chapter (1-9). It can be a summary and/or critique and/or your general thoughts. Be sure each chapter writing is specifically attuned to the particular chapter and not something generic. Maintain focus on the chapter you are writing about. Remember to designate your responses by chapter so it is obvious which writing goes with which chapter.

 

Summary: Chapter 1 – What Is Evaluation?

Evaluation consists of identifying program objectives, collecting data, and analyzing that data. Although there is no single agreed-upon definition, evaluation generally measures the degree to which objectives have been met or provides analysis that helps decision makers. Evaluation is driven by the reason it is actually being done. It looks specifically at efficiency, effectiveness, and impact, whether short or long term. Evaluation is central to decision making: it can allow for comparison of alternatives, or it can identify areas where improvement is necessary. Reasons to evaluate include:

  • fulfilling a mandate
  • making a case for a new program
  • anticipating a need to justify, improve, or change a program
  • seeking funding

Regardless of the motivation, monitoring, formative assessment, and summative assessment provide the information needed to evaluate programs. The critical parts of the evaluation design format provide the starting point for program evaluation.

 

Summary: Chapter 2 – Why Evaluate? 

There are multiple reasons to evaluate a program, and these reasons can have a significant impact on the evaluation. An evaluation may be needed to secure funding for a program, to satisfy a requirement from one or more stakeholders, or to inform decisions.

Reasons to evaluate include:

  • determining benefits
  • meeting planning needs
  • determining the effectiveness of approaches

Whatever the reason an evaluation is undertaken, its results should ultimately affect what was evaluated, whether that means changing the project, improving it, or eliminating it completely. However, it is important to realize that the findings can be shaped significantly by the skill of the evaluator and by the evaluation model chosen. This must be considered when deciding how to use the ultimate findings of an evaluation.

 

Summary: Chapter 3 – Decision Making: Whom to Involve, How, and Why?

Evaluation is part of a program cycle and may take place during multiple phases. Everyone involved in a program should participate in at least part of the cycle; these involved or affected parties are referred to as stakeholders. The parts of the cycle are:

  • Philosophy and goals
  • Needs assessment
  • Program planning
  • Implementation and formative evaluation
  • Summative evaluation

Ideally, evaluators are brought in at the philosophy-and-goals phase, but often they are not; more commonly they are brought in during implementation. The evaluator works with program staff to clarify goals, identify needs, and help develop the evaluation design. The evaluator examines specific activities, formulates evaluation questions, monitors and measures progress, and reports to program staff. The ultimate goal of the evaluator is to report the findings to the various stakeholders and to help determine the program's overall effectiveness and efficiency.

 

Summary: Chapter 4 – Starting Point: The Evaluator’s Program Description

The Evaluator’s Program Description (EPD) clarifies and reveals all the goals, objectives, and measurements in place for an evaluation. The components of an EPD are:

  • Goal statements
  • Activities
  • Evaluation procedures

When creating an EPD, the evaluator must work closely with all of the stakeholders; this creates a solid foundation on which the evaluation results can be trusted. The evaluator must ask pertinent, clarifying questions. The recipient(s) of the evaluation should be carefully considered, and multiple EPDs may be necessary. Standards must be clearly developed. The EPD is created to understand a program’s goals and objectives and to help in choosing a specific evaluation model.

 

Summary: Chapter 5 – Choosing an Evaluation Model

This was the most impactful chapter for me and provided a lot of clarity: “…evaluation differ(s) from research strategies in that evaluation results are provided to the appropriate stakeholders for the purpose of program or project improvement … the purpose of research, in contrast, is causal links …”. This is why we are not supposed to have a control group! Until this chapter, that distinction had been a huge conceptual struggle for me. Chapter 5 presents multiple evaluation models: adversary, art criticism, decision making, discrepancy, goal based, goal free, systems analysis, and transactional. The chapter also clearly defines the parts of the evaluation design format and their purposes.

 

Summary: Chapter 6 – Data Sources

Data is either qualitative (descriptive) or quantitative (numeric); that is, it is based either on detailed observation or on numerical facts. There are four levels of data:

  • Nominal
  • Ordinal
  • Interval
  • Ratio

How an evaluator collects and records information is a critical aspect of any evaluation. The data is often already there (pre-existing), and a good evaluator will know how to uncover it; otherwise the evaluator may need to generate new data. It is the evaluator’s responsibility to make sure the data collected is reliable, dependable, and relevant to the evaluation.
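
To make these four levels concrete, here is a minimal Python sketch using invented program data (the variable names and values are my own examples, not the text’s) that pairs each level with a summary statistic appropriate for it.

```python
# Hypothetical examples of the four levels of data an evaluator might collect,
# each paired with a summary that is appropriate at that level.
from statistics import mean, median
from collections import Counter

site = ["urban", "rural", "urban", "suburban"]   # nominal: categories only
satisfaction = [1, 3, 3, 4, 5]                   # ordinal: ranked, spacing not equal
pretest_year = [2019, 2020, 2020, 2021]          # interval: equal spacing, no true zero
hours_attended = [0, 4.5, 12, 7, 9]              # ratio: true zero, ratios meaningful

print(Counter(site).most_common(1))  # mode is the only sensible summary for nominal data
print(median(satisfaction))          # median suits ordinal data
print(mean(pretest_year))            # means are defined for interval data
print(mean(hours_attended))          # means and ratios are meaningful for ratio data
```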

 

Summary: Chapter 7 – Data Analysis

Chapter 7 was another one of my favorite chapters; I think that is because I am a math teacher and the topic was well within my comfort zone. Data analysis is either qualitative or quantitative. A statistic is a tool or technique, often an applied mathematical formula. Some of the most useful statistical tools in data analysis are:

Measures of Central Tendency

  • Mean
  • Median
  • Mode

Measures of Variability

  • Range
  • Quartiles
  • Percentiles
  • Outliers

Statistics can be complex and easily manipulated, but they can also be simple and straightforward. Evaluators must use statistics carefully and watch for errors and illogical associations between data and conclusions. Great care should be taken in deciding which statistical tools to use, and technology should be used to calculate the measures, as in the brief sketch below.
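
As a small illustration, the sketch below computes these measures for a set of invented participant scores; the 1.5 × IQR rule used to flag outliers is one common convention, not something the text prescribes.

```python
# Minimal sketch: measures of central tendency and variability for a
# hypothetical set of participant scores (values invented for illustration).
from statistics import mean, median, mode, quantiles

scores = [62, 71, 71, 75, 78, 80, 83, 85, 88, 97]

print("mean:", mean(scores))
print("median:", median(scores))
print("mode:", mode(scores))
print("range:", max(scores) - min(scores))

q1, q2, q3 = quantiles(scores, n=4)   # quartiles; q2 equals the median
iqr = q3 - q1
outliers = [s for s in scores if s < q1 - 1.5 * iqr or s > q3 + 1.5 * iqr]
print("quartiles:", q1, q2, q3)
print("outliers (1.5 * IQR rule):", outliers)
```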

 

Summary: Chapter 8 – Is It Evaluation or Is It Research?

I would group chapter 8 with chapter 5 in its level of importance; if I had written this text, I would have placed them earlier, since the distinction between the two is so critical. The components that determine whether a project is research or evaluation are intention and design, as well as the use of qualitative or quantitative data. Sampling and population are also important components of both research and evaluation. Sampling can be done with probability or non-probability methods, and surveys are often utilized.

It is critical to remember that evaluation has no comparative component! However, gathering information for research and for evaluation may share some characteristics, and employing appropriate rigor for an evaluation will streamline data collection. The difference in the ultimate goals of the two should also be considered. Keeping the end (program improvement) in mind will go a long way toward guiding program development, implementation, and evaluation.
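
As one hedged illustration of the probability methods mentioned above, the sketch below draws a simple random sample from a hypothetical participant roster (the roster and sample size are invented); a non-probability approach such as convenience sampling would instead take whoever is easiest to reach.

```python
# Minimal sketch of one probability sampling method (simple random sampling)
# drawn from a hypothetical participant roster; names and sample size are invented.
import random

roster = [f"participant_{i:03d}" for i in range(1, 201)]  # hypothetical population of 200

random.seed(42)                      # fixed seed so the draw can be reproduced
sample = random.sample(roster, 30)   # each participant has an equal chance of selection
print(sample[:5])
```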

 

Summary: Chapter 9 – Writing the Evaluation Report

An evaluation is only valuable if someone else sees it and can use it. An evaluation report is the presentation of all the components of an evaluation: it must answer or summarize all the questions posed by the stakeholders, without bias or influence, and it should be a report of the facts, communicating all the findings. There are multiple “styles” of reports, and the audience must be considered when choosing a format. A good report should include the following:

  • Summary
  • Statement of Purpose
  • Background Information
  • Description of the Evaluation Study
  • Results
  • Discussion of the Program and Its Results
  • Conclusion and/or Recommendations

A good evaluation report avoids any hint or suggestion of bias. It should be as objective and professional as possible and should convey all relevant information. The stakeholders must be able to read and understand the entire report easily and clearly.

 

Reference

Boulmetis, J., & Dutwin, P. (2011). The ABCs of evaluation (3rd ed.). San Francisco, CA: Jossey-Bass.

—————————————————————————————————————–

  1. Review all the previously assigned Internet readings. Did one or more stand out or make an impression on you? If so, why? Your written comments can be specific about one or more of the readings and/or general comments about all the readings. You do not necessarily have to comment about each reading. That is, discuss the readings as an aggregate. No need to discuss each one, unless you want to. It’s your choice as to which ones you include in your comments. Be sure to reference the readings so it’s apparent which ones you’re referring to in your comments. Also, please be specific enough in your comments (summary and/or critique and/or general thoughts about the specific readings you choose to discuss) so they do not come across as generic without connection to the readings. Minimum 150 total words.

 

Of all the internet readings, the one that stood out most to me was A Practical Guide to Sampling (http://www.nao.org.uk/publications/Samplingguide.pdf), which provides an overview of sample design and methods along with tips on interpreting and reporting results. I particularly enjoyed this reading because it was clear and easy to follow. The guide is a supplement to hands-on support by the NAO Statistical and Technical Team, and the information is presented in an attractive, colorful format with practical examples of sampling techniques.

The guide starts with “Why Sample,” which explains the rationale for sampling. The next section, “Sample Design,” explains how to structure and interpret a sample design: the design must deliver the required precision while avoiding unintended bias, and its goal is to “achieve a balance between the required precision and the available resources”. The guide presents ideas for defining the population but cautions about data privacy and the contracting of outside evaluators.

Sample size is another important aspect of sampling. The size depends on five key factors: margin of error, amount of variability, confidence level, population size, and population proportion. A data table in the guide illustrates the quantity needed in a sample to achieve various levels of precision at a 95% confidence level. The guide also provides multiple examples of weighting and post-weighting samples to show how data can be made more meaningful. It does a good job of explaining the definitions, uses, and limitations of a variety of sampling methods, and a flow chart illustrates how a designer can choose the best sampling method. Finally, the guide gives instructions for extracting a sample and analyzing the results using Excel and SPSS, and the Appendix provides relevant formulas and tips on reporting results.
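
To make the sample-size factors concrete, here is a minimal sketch using a standard textbook formula for estimating a proportion with a finite population correction; the function name, default values, and example population of 1,000 are my own assumptions, and the NAO guide’s table may be based on a slightly different computation.

```python
# Minimal sketch of the kind of sample-size calculation the guide's table
# illustrates: estimating a proportion at a 95% confidence level.
# This is a standard textbook formula with a finite population correction,
# not necessarily the exact computation used by the NAO guide.
import math

def sample_size(population, margin_of_error=0.05, z=1.96, proportion=0.5):
    """Sample size for estimating a proportion, with finite population correction."""
    n0 = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# A hypothetical population of 1,000 needs roughly 278 responses
# for a +/-5% margin of error at 95% confidence.
print(sample_size(1000))          # 278
print(sample_size(1000, 0.03))    # a tighter margin of error needs a larger sample
```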

I have worked with statistics for over 20 years as a math teacher. The concepts in this reading can often be overwhelming to a non-math person, especially confidence levels. The reading was so well presented and contained enough relevant examples that I have actually used it with my own statistics students.

Optional Bonus: Design/create/develop another assignment (You don’t have to actually do it.) that facilitates reviewing all the work in the course to date. That is, come up with an assignment that could be used at this point in the semester the next time EDTECH 505 is taught. This bonus is optional and is worth one extra point.

Go back and review chapters 1-9 in the Boulmetis & Dutwin text. Explain which chapter was your favorite and why. What impact did it have on you personally and/or professionally? What impact did it have on your project? How can you use what you have learned in the future? Minimum 150 words.

Optional Bonus 2: Design/create/develop another discussion that facilitates reviewing all the work in the course to date. That is, come up with a discussion that could be used at this point in the semester the next time EDTECH 505 is taught. This bonus is optional and is worth one extra point.

What do you wish you knew at the beginning of the semester that you know now? Be specific. Why do you wish you knew that?

[ I wish that I had more clearly understood the difference between evaluation and research … I wish I had read chapter 5 of the textbook earlier, because I feel my project is actually better suited to a research project than an evaluation project. ]

 
