Sunday 6 December 2009

Paper ECUR 809

Program Evaluation Final

ECUR 990

ECUR 990 Nov.


SURVEY QUESTIONNAIRE RESULTS 809

Survey Results: Data Analysis

Tuesday 17 November 2009

Assignment # 5 Final Survey Questionnaire

Survey Questionnaire


To take this survey please click here:
https://survey.usask.ca/survey.php?sid=17783


Please read the explanation below of previous work: the plan and preliminary versions of this survey questionnaire.

Friday 6 November 2009

Three Preliminary Versions of Survey Questionnaire

Assignment # 5- ECUR 809
The focus of this assignment is on clients’ level of satisfaction and their suggestions for the improvement of the Spanish Intermediate Program (SIP) offered by the Community Center in the City of Ottawa.
Overview:
This assignment aims at improving the SIP in light of the clients' suggestions and responses to the survey questionnaire at the Community Center in the City of Ottawa. "Clients" in this particular case refers to students or participants in the program "Spanish Intermediate and Conversation" (complemented with tennis teaching) offered at the Community Center in Ottawa. According to Jody L. Fitzpatrick, James R. Sanders and Blaine R. Worthen (2004, pp. 100-111), there are standards and criteria for judging programs that can markedly improve the efforts of teachers and the whole community: the purpose of the program(s), its rationale and objectives, its content, its process and implementation (instructional technology and activities or strategies), and its evaluation together support an overall judgment about effectiveness. Each is briefly explained below:

1. Purpose and objectives: Community Center programs are offered by season to all members of the community. The rationale, shared by the City of Ottawa and the Community Center, states that the programs enhance the clients' growth, quality of life and academic achievement, that teachers have the primary responsibility for that growth, and that the teachers' performance enhances instruction, clients' satisfaction and achievement.
Objectives, or specific statements of what the project sets out to accomplish:
each teacher shall develop a program for the students in his or her area of expertise, under general guidelines provided by the City of Ottawa.
Specific objectives are the outputs to be achieved: immediate, concrete results that are direct products of project activities. Each program is to be reviewed by the teacher and students and changed or improved according to their immediate needs, for example: enhancing student achievement, performance and satisfaction; making more productive use of time; increasing professional and personal interactions and discussions; sharing responsibility and leadership more widely; and increasing knowledge, involvement, and continuous learning.
2. Content: the specific tasks that make up the course content in the classroom; content included in the program kit or educational package created and published by the City of Ottawa and the Community Center.
3. Process and implementation: strategies for goal achievement and the activities or actions taken to deliver the program: registering for the course; attending lectures, seminars and workshops; reading and doing research; peer coaching and mentorship; creating personal portfolios, videos and recordings; lesson planning, workshops and meetings; discussing content, organizing groups and setting up drills, examples, exercises and models; using instructional media, training opportunities, equipment, and educational materials and supplies; keeping a reflective journal; developing collaborative learning groups; speaking the language (Spanish) in a real setting (playing tennis); and other initiatives that enhance instruction and student achievement.
4. Satisfaction relates to effectiveness:
The impact of the SIP on students' satisfaction indicates, in some measure, the effectiveness of the program. There should also be long-term outcomes, such as increasing knowledge, involvement and ownership, and continuous, permanent learning.

Plan of Evaluation:

Who should be involved?
Engage stakeholders: teachers, administrators, supervisors, coordinators, volunteers and students; however, the focus of this work will be on the students or participants of the Community Center, particularly those in Spanish Intermediate and Conversation (SIP).
How might they be engaged?
Students will be invited through meetings, email, and survey questionnaires.
Focus of the Evaluation:
What are you going to evaluate?
Description of Community Center programming (see logic model 1 below)
Clients' satisfaction – students' reactions (see logic model 2 below)
What is the purpose of the evaluation?
The purposes of this evaluation are to assess (a) the level of satisfaction with the organization, design and implementation (teaching) of the program, and (b) the clients' suggestions and recommendations for better achieving the goals of the program, namely the enhancement of student learning and achievement.
Who will use the evaluation? How will they use the information?
- Teachers, administrators, supervisors, coordinators, volunteers and students.
- To assess the level of satisfaction with the organization, design and teaching of the program and to propose improvements or changes that can help teachers and students to meet the goals.
What questions will the evaluation seek to answer?
General questions:
Does the Community Program (CP) help participants or clients in their personal and professional growth and satisfaction? Is the program meeting the goals set out by the Center? What are the reactions of students regarding the program? Are they satisfied with their achievement of goals and performance?
Specific questions:
Do the objectives, content and activities match properly? How does the teacher implement them?
Do Programs encourage students to develop personally and professionally?
Is the Program being used in the way that it was intended?
How is the Program perceived by students? Are they satisfied with their performance in the programs?
What information do you need to answer the questions?
Indicators – how will I know? The level of satisfaction of participants.
When is the evaluation needed? At the beginning, in the middle and at the end of the program(s).
What evaluation design will you use? The design takes into consideration the Consumer-Oriented Evaluation Approach, but the focus is on the Goal-Oriented Approach.
Assessment and evaluation are best addressed from the viewpoint of the students' reactions to 1) teachers & teaching, 2) class assignments and 3) assignment materials.
Collect the information
What sources of information will you use?
Existing information:
Web site – programs – written materials provided by the Community Center – teachers' materials – samples of students' work and/or experiences (videos, photos, etc.).
People:
Teachers and administrators, though the focus will be on the students' satisfaction.
What data collection method(s) will you use?
An e-mail survey questionnaire to students - and teachers - (a larger sample).
A questionnaire-based interview (a small sample of four students).
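As an aside on method, the four interviewees can be drawn at random from the class list rather than hand-picked, which reduces selection bias in such a small sample. A minimal sketch in Python (the names and list are invented for illustration; the Community Center's registration records would be the real source):

```python
import random

# Invented class list; in practice this would come from the
# Community Center registration records.
students = ["Ana", "Ben", "Carla", "Dmitri", "Eva", "Farid", "Gina", "Hugo"]

random.seed(2009)  # fixed seed so the draw can be reproduced later

# Draw four distinct students for the questionnaire-interview.
interviewees = random.sample(students, k=4)
print("Interview sample:", interviewees)
```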
________
Bibliography:
Jody L. Fitzpatrick, James R. Sanders and Blaine R. Worthen, Program Evaluation: Alternative Approaches and Practical Guidelines (Boston: Allyn and Bacon, 2004), pp. 100-111.
City of Ottawa: www.ottawa.ca - The Glebe Community Center
Plan of evaluation: http://learningstore.uwex.edu/pdf/G3658-02.pdf

Versions of the survey questionnaire: Different versions were developed during the process of designing and testing the survey. Each one included a sample checklist and a variety of question types such as scale rating, short answer, and open-ended; the final version is now on the web site. Two preliminary versions were designed and evaluated together with four students, who provided suggestions regarding clarity of questions, wording, style, and importance. A third version was developed and posted on the web site of the University of Saskatchewan. The students answered the questionnaire, but their concern was with correcting some discrepancies between items and the characteristics of the actual program; they also made suggestions to improve clarity, wording and style. The fourth version is the final one, which is now on the web site of the University of Saskatchewan. Below are the preliminary versions as they were presented to the students for their evaluation (a brief sketch of how the scale ratings might be tallied follows Version One):
A. Version One (preliminary version)
Short answer: yes or no
1. Does the content cover a significant portion of the program competencies?
2. Is the content up-to-date?
3. Is the course level appropriate for most students?
4. Are objectives, competencies, or tasks stated in the student materials?
5. Are tests included in the materials?
6. Are performance checklists included?
7. Is a student's guide included that explains how to manage and work through the course theory and practice?
8. Is the material presented in a logical sequence?
Scale rating: quality and satisfaction judgments. Use +, 0, or - to rate the quality of, or your satisfaction with, specific aspects of the course:
Quality of and satisfaction with objectives, competencies, and/or tasks_____
Degree of match between learning activities and objectives______
Quality of tests and degree of match with objectives________
Quality of and satisfaction with performance checklists and degree of match with objectives________
Quality of and satisfaction with directions for how students are to proceed through the materials_______
Quality of visuals, videos, games, experiences, practices_______
Overall design of the learning activities for individualized instruction_____
Quality of and satisfaction with safety practices_____
Satisfaction with the degree of freedom from bias with respect to sex, race, origin, age, religion, etc.________
Quality of and satisfaction with the content list, or the course content map and competencies covered by the course_________
Short answer: brief comment
Does the course have basic elements, such as those listed below? Please mark with an “x” and make a comment if necessary:
a) Clearly stated outcomes objectives____
b) Sufficient directions_____
c) Other materials required____
d) Prerequisite knowledge base___
e) Fit with knowledge base and existing programs___
Process information: what is the nature and frequency of interactions among students or clients and relevant others? ________________________________
Have these interactions been evaluated?____________________________
Open-ended questions: please explain or illustrate
- Is evaluation an integral part of (a) the development and (b) the implementation of the program?
- Is there evidence of effectiveness available regarding the course?
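As a note on analysis (not part of the instrument itself): once responses come back, the +, 0, - ratings above can be tallied into a simple per-item satisfaction index. A minimal sketch in Python, with invented item names and responses:

```python
from collections import Counter

# Invented responses to two scale-rating items, coded as the survey asks:
# "+" = positive, "0" = neutral, "-" = negative.
responses = {
    "match_activities_objectives": ["+", "+", "0", "-", "+"],
    "quality_of_tests":            ["0", "+", "+", "+", "-"],
}

SCORES = {"+": 1, "0": 0, "-": -1}

for item, answers in responses.items():
    counts = Counter(answers)
    # Mean score in [-1, 1]: above 0 leans satisfied, below 0 dissatisfied.
    mean = sum(SCORES[a] for a in answers) / len(answers)
    print(f"{item}: {dict(counts)} mean = {mean:+.2f}")
```

An item whose mean sits well below zero flags an aspect of the course that the open-ended comments can then explain.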

B. Second Version: the modified version, based on testing the survey with four individuals:
Short answer: yes or no
1. Is the program content of Intermediate Spanish up-to-date?
2. Is the program level appropriate for most students?
3. Are objectives, competencies, or tasks satisfactorily stated?
4. Is the program presented in a logical sequence?
5. Are you satisfied that the program has the basic elements listed below?
Scale rating: choice decision-making. Please write in the spaces below Very Good (VG), Good (G) or Bad (B), and make a comment if necessary:
a) Outcomes, objectives, competencies or tasks____
b) Directions or instructions for how students are to proceed through the materials___
c) Materials ____
d) Prerequisite knowledge base___
e) Performance checklists____
f) Student’s guide_____
g) Fit with knowledge base and program___
h) Tests___
Scale rating: judgments. Use +, 0, or - to rate the quality of, or your satisfaction with, specific aspects of the course:
Degree of match between learning activities and objectives______
Quality of tests and degree of match with objectives________
Quality of and satisfaction with performance checklists and degree of match with objectives________
Quality of visuals, videos, games, experiences, practices_______
Overall design of the learning activities for individualized instruction_____
Quality of and satisfaction with safety practices_____
Satisfaction with the degree of freedom from bias with respect to sex, race, origin, age, religion, etc.________
Quality of and satisfaction with the content list, or the course content map and competencies covered by the program_________
Open-ended question:
Please feel free to make any suggestions or comments that can help us improve our Spanish Intermediate program (complemented with tennis instruction) for the next Spring/Summer:
________________________________________________________________

Third version of the survey questionnaire: It was posted on the web site and the students evaluated it, making various corrections and suggestions for improvement. Based on their comments, most items were rewritten and the questionnaire was reorganized and redesigned.


Please see below the program evaluation planning steps that served as the basis for the whole work.

Friday 30 October 2009

Evaluation Steps

Wednesday 14 October 2009

Logic Models ECUR 809 Assignment # 4

(A) Logic Model 1 - Flow Chart: General (Worksheet Flowchart 1)

Logic Model 2 - Flow Chart: More Specific


B. Description of the Logic Models - Assignment # 4:
- scope of the logic models (how much they cover);
- the number of levels included;
- the description of the levels included;
- the direction of information flow;
- the amount of text;
- the visual layout.
Each of these variables is described in turn, below.
Scope of the Logic Models: The flow chart is a logic model designed for evaluating the whole set of programs at the Community Center. It begins with vision, mission, values, motivations, expectations, etc.; ultimately, the purpose of evaluation is to find out whether the programs are making a difference. The community center offers complex, multi-component programs that may require separate logic models for each program component or activity. Thus, I designed one general model and a more specific one that can help me make the case for evaluating clients' satisfaction in the community programs (which include Spanish & tennis teaching). Assessment and evaluation are best addressed from the viewpoint of the students' reactions to 1) teachers & teaching, 2) class assignments and 3) assignment materials.
Number of Levels: The first flow-chart logic model includes several 'levels' (goals, population of interest, long- and short-term objectives, and indicators). The second one includes strategies, activities, and process indicators.
Description of Levels: There is no standard set of terminology for logic models, so the first one uses general terms and the second applies more specific ones. The discussion begins with stakeholders: who should be involved or engaged? Teachers, administrators, supervisors, coordinators, volunteers and students; the focus of my work will be on the students or participants of the Community Center programs. How might they be engaged? Students will be invited through staff meetings, email, and survey questionnaires.
Direction of Information Flow: Both flow-chart logic models flow from left to right, starting with the objectives and focus of the evaluation: what am I going to evaluate?
The Community Center Programming (logic model 1) and the clients’ satisfaction – students’ reactions and satisfaction (logic model 2).
What is the purpose of the evaluation?
The purposes of this evaluation are to assess the extent to which (a) the organization and programs help the members of the community in their personal and professional growth, and (b) the participants or students are meeting the goals of the programs, namely the enhancement of student satisfaction and achievement.
Amount of Text: The amount of text included in a logic model can vary greatly; it can be sparse and in point form, or highly detailed. As a matter of preference and function, my logic models include only the information needed to present the most important issues:
Who will use the evaluation? How will they use the information?
- Teachers, administrators, supervisors, coordinators, volunteers and students.
- To assess the effectiveness of the programs and make changes and improvements to help teachers and students to meet the goals.
- To improve students' achievement and satisfaction.
What questions will the evaluation seek to answer?
General questions:
Do Community Programs (CP) help participants or clients in their personal and professional growth and satisfaction? Are Programs meeting the goals set out by the Center? What are the reactions of students regarding those programs? Are they satisfied with their achievement of goals and performance?
Specific questions:
Does the community offer varied programs? Do teachers have adequate resources to implement them? Do they see growth in their students as a result of their CP?
Do Programs encourage students to develop personally and professionally?
Are Programs being used in the way that they are intended?
How are Programs perceived by students? Are they satisfied with their performance in those programs? What are the benefits to students?
Visual Layout: As we know, there are many ways to approach visuals and overall layout. This is a highly subjective issue, but an important one, as good visual design can greatly enhance the understandability of a logic model. In these cases I tried to avoid confusion and focus on the following questions:
What information do I need to perform the evaluation and answer the questions?
Indicators – how will I know? The level of satisfaction of participants.
When is the evaluation needed? At the beginning, in the middle and at the end of the program(s).
What evaluation design will you use? Consumer-Oriented Evaluation Approach.
Assessment and evaluation are best addressed from the viewpoint of the students' reactions to 1) teachers & teaching, 2) class assignments and 3) assignment materials.
Collect the information:
What sources of information will you use?
Existing information:
Web site – programs – written materials provided by the Community Center – teachers' materials – samples of students' work and/or experiences (videos, photos, etc.).
People:
Teachers and administrators, though the focus will be on students' satisfaction.
What data collection method(s) will you use?
An e-mail survey questionnaire to students - and teachers - (a larger sample).
A questionnaire-based interview (a small sample of four students).

About Assignment # 5 - ECUR 809: The focus of my assignment will be on students' testimonials of their experience with the CP (a sample of four students). My focus will be on the programs "Spanish Intermediate and Tennis Instruction".

Source: "Logic Models" Online:
http://www.thcu.ca/infoandresources/publications/logicmodel.wkbk.v6.1.full.aug27.pdf

Tuesday 13 October 2009

ECUR 809 Assignment # 3

ECUR 809 Assignment # 3: Evaluation of Organization - Performing an Evaluation Assessment.
Determining the feasibility and direction of my evaluation:
I have selected a community center and its programs (Adult General Interest programs such as Intermediate Spanish) as the organization to use as a model for the rest of the course: http://www.gnag.ca/index.php
http://www.gnag.ca/index.php?page=154 I live close by, so I can access individuals for input into my work. I chose the City of Ottawa (http://www.city.ottawa.on.ca/), specifically a neighborhood organization, because during the Spring/Summer I taught Spanish there, complemented with tennis lessons (www.moretennis.blogspot.com), as part of the "Ultra Play" program: http://www.ottawatennis.com/detail.php?news_id=294
Please see below an overview of my chosen organization:
Organization: A Community Center in the City of Ottawa, ON Canada
Program: "Adult General Interest - Spanish: Intermediate/conversational"
Model of evaluation assessment: student-centered evaluation assessment.
According to the Student Evaluation: A Teacher Handbook (Saskatchewan Education, 1991), student evaluation should focus on the collection and interpretation of data that indicate student progress. This, in combination with teacher self-evaluation and program evaluation, provides a full evaluation. Chapter one states that "assessment and evaluation are best addressed from the viewpoint of selecting what appears most valid in allowing students to show what they have learned." In general, the main phases are the following: preparation, assessment, evaluation (formative, diagnostic, and summative) and reflection. Each is briefly described below:
Preparation: deciding what is to be evaluated, the type of evaluation (formative, summative, or diagnostic) to be used, the criteria against which student learning outcomes will be judged, and the most appropriate assessment strategies with which to gather information on student progress. Decisions made during this phase form the basis for planning during the remaining phases. In the Spanish Intermediate and Conversation Program, the criteria and strategies are guided by an instructor (a graduate student) from the University of Ottawa.
Assessment: identifying information-gathering strategies, constructing or selecting instruments, administering them to the students, and collecting the information on student learning progress. The identification and elimination of bias (such as gender and culture bias) from the assessment strategies and instruments, and the determination of where, when, and how assessments will be conducted, are important considerations. Performing an evaluation assessment of the "Adult General Interest" Spanish Intermediate and Conversation Program in the Community Center, City of Ottawa, requires an appropriate approach. Stake's "responsive" approach seems to be an adequate way of reporting the "success and failure" of that program. Stake (1975, p. 19) recommended the "clock" model to reflect the prominent recurring events in a responsive evaluation: talk with clients, program staff, audiences; identify program scope; overview program activities; discover purposes, concerns; conceptualize issues, problems; identify data needs re issues; select observers, judges, instruments, if any; observe designated antecedents, transactions and outcomes; thematize: prepare portrayals, case studies; validate, confirm, attempt to disconfirm; winnow, for audience use; and assemble formal reports, if any. In this sense, Stake's model helps in reporting the evaluation assessment of the Intermediate & Conversational Spanish Program, in which not only questionnaires but also specific tests and sample work portfolios were assessed. See the example of past questionnaires.
Evaluation: the information gathered during the assessment phase is used to make judgments about student progress. Based on the judgments (evaluations), decisions about student learning programs are made and reported to students, parents, and appropriate school personnel.
Reflection: pondering the successes and shortfalls of the previous phases; specifically, evaluating the utility and appropriateness of the assessment strategies used, and making decisions concerning improvements or modifications to subsequent teaching and assessment. Instruments contain questions that encourage reflection on student assessment, teachers' planning, and the structure of the curriculum. Among the successes of the Intermediate & Conversational Spanish program, we can mention the following: excellent audio CDs, and an exciting vacation with a great learning opportunity, offered in combination with similar programs to study Spanish complemented with other programs, such as tennis, golf, games and Latin dance.
Until now no failures have been reported. On the contrary, students are looking for more "living Spanish" programs.

Sources: Program Evaluation, Particularly Responsive Evaluation (Occasional Paper No. 5, p. 19) by R. E. Stake, 1975b, Kalamazoo, MI: Western Michigan University Evaluation Center. Adapted by permission. Cited on p. 138 of Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.
Online: www.wmich.edu/evalctr/pubs/ops/ops05.pdf
"Student Evaluation: A Teacher Handbook" Retrieved September 24th, 2009 Online:
http://www.saskschools.ca/~ischool/Drafting10/curr/part25.htm
Flowcharts:
http://www.scribd.com/doc/21010132
http://www.scribd.com/doc/21010006

Saturday 26 September 2009

ECUR 809 Assignment # 2 - Model

Assignment # 2 Model or approach to evaluate the ECS Programming for Children with Severe Disabilities

Summary: Children with severe/profound disabilities are eligible for Program Unit Funding from Alberta Education. According to the Medicine Hat Catholic organization, the “ECS Programming for Children with Severe Disabilities” evaluates and selects eligible children and then it offers educational programs that must meet the individual child’s needs. The educational programming combines center-based programs and in-home programs. The teacher develops an Individual Program Plan with goals and objectives reflective of the child’s needs. The center-based programming takes place in settings such as preschools, kindergartens and day cares.

What approach is appropriate to evaluate this program? In order to effectively evaluate ECS programming, I suggest using qualitative methods to conduct a “naturalistic” evaluation model in light of Emil J. Posavac and Raymond G. Carey’s theory (2003), combined with the “participant-oriented evaluation” approach, wisely described by Jody L. Fitzpatrick, James R. Sanders and Blaine R. Worthen (2004).

In the naturalistic evaluation model, "the evaluator becomes the data gathering instrument, not surveys or records. By personally observing all phases of the program and holding detailed conversation with stakeholders, the evaluator seeks to gain a rich understanding of the program, its clients, and the social environment" (Posavac and Carey, 2003, p. 28). In other words, personal observations and detailed reports are necessary to explain information about the home visits, which should be carefully planned and documented. This model is also useful in explaining the child's instruction in a classroom setting at a center or school. The steps in preparing to conduct an evaluation comprise: identifying the program and its stakeholders, becoming familiar with information needs, planning the evaluation, and evaluating the evaluation itself.

In the participant-oriented evaluation approach, evaluators should not be distracted from what is really happening to the participant in the program by focusing only on "stating and classifying objectives, designing an elaborate evaluation system, developing technically defensible objective instrumentation, and preparing long detailed technical reports" (Fitzpatrick, Sanders and Worthen, 2004, p. 130). Participant-oriented evaluation stresses firsthand experience with program activities and settings and the involvement of program participants in evaluation. This approach is "aimed at observing and identifying all (or as many as possible) of the concerns, issues, and consequences integral to the human services enterprise" (p. 131). Evaluators need to avoid focusing only on the results or on isolated comments, numbers, charts, figures, and tables, thereby missing important individual facts.

In short, a “naturalist & participant-oriented evaluation” combined approach will provide the plurality of judgments and criteria or methods of inquiry that will help the evaluators portray the different values and needs of individuals and groups served by the educational programming. This model requires active involvement of participants. By involving participants in determining the criteria and boundaries of the evaluation, evaluators serve an important educative function by creating “better-informed” program participants.

Sources:

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.

Posavac, E., & Carey, R. (2003). Program Evaluation: Methods and Case Studies (6th edition). New Jersey: Prentice Hall.

Medicine Hat Catholic, Separate Regional Division # 20.
Retrieved Sept 13th, 2009 from
http://www.mhcbe.ab.ca/cec/

Wednesday 23 September 2009

A Revised Version of Assignment # 1

Assignment #1: A completed evaluation case: Client Satisfaction Survey Analysis, Quebec Regional Office, Human Resources Development Canada (HRDC) in Quebec, Canada, 2001.

The study is an analysis of requirements and proposal for a client satisfaction measurement program. Circum Network Inc. is listed as the author of this study. It was prepared for Evaluation Services Information and Strategic Planning Directorate Quebec Regional Office Human Resources Development.

Model or process used in the evaluation: an improvement-focused model (Posavac, 2003, p. 29). While providing the essential methodological foundations, the "self-directed" training document takes a pragmatic and "integrated" approach to conducting client satisfaction surveys. It includes devices such as decision trees and checklists. The project was carried out in three phases. First, the team developed the standardized questionnaires. Then the researchers developed an operational framework for the client satisfaction measurement program. Finally, they developed the analytical support tools and mechanisms. According to the report, there is a consensus within the HRDC-Quebec Region that systematic, rigorous measurement of client satisfaction with the products and services offered by the Region is essential to building the fifth pillar of the region's vision: delivering services of the highest quality. There is also a consensus that the primary responsibility for improving the region's services lies with the HRCCs and other operational centres, because it is they that control the daily delivery of services.

Strengths: (1) The goals were to plan a client satisfaction measurement program and to produce an analysis of requirements and a proposal for that program; the team produced a self-directed training document on the implementation of client surveys. (2) The report clearly presents the development of the standardized questionnaires, the operational and implementation framework, and the pre-testing. (3) The report is a guide for human resources development (for employees who do not necessarily have the knowledge required to conduct formal, systematic surveys). The tools offered in the guide are to be used for measuring, sampling, collecting data, analyzing data, interpreting results, and implementing recommendations. The results included standardized questionnaires for various types of clients and various service conditions. (4) Researchers/evaluators helped program staff learn how to discover discrepancies between program objectives and the needs of the target population, between program implementation and program plans, between expectations of the target population and the services actually delivered, or between outcomes achieved and outcomes projected (Posavac, 2003, p. 29).
In this sense, I think an approach similar to Stake's countenance model, as explained by Jay Wilson (2009), was used in this case, because there was a need for formalized evaluation: descriptive data, not just anecdotes, was necessary. It included description and judgment, intents, and observations, which were compared to standards before a judgment was made. In short, there was a "mix" of quantitative and qualitative elements and, as a result, an "artistic" evaluation (creative thinking in the minds of the evaluators).
Regarding possible weaknesses, I see a couple of things that could be considered in future program evaluations: (1) the report does not compare or discuss all sides of the program evaluation, both positive and negative; deliberation of pros and cons is not evident in the survey evaluation of the program, and it would be useful if this deliberative process could take place. (2) It does not describe participants' "reactions" to and "learning" from the innovative program evaluation, nor "behavior" changes in real job performance and other "results." A round table or discussion of these issues could help enlighten program evaluation.

Sources
:
Circum Network Inc. (2001). An integrated approach to conducting client satisfaction surveys: Analysis of requirements and proposal for a client satisfaction measurement program. Prepared for the Evaluation Services' Information and Strategic Planning Directorate, Quebec Regional Office, Human Resources Development Canada.
Retrieved September 7, 2009 from:
http://www.circum.com/textes/program_hrdc_quebec_2001.pdf

David Crawford (2009) Evaluation exploration. Retrieved September 4, 2009, from http://www.ag.ohio-state.edu/~brick/evalexpl.htm

Miller, R., & Butler, J. (2008) Using an adversary hearing to evaluate the effectiveness of a military program. The Qualitative Report, 13(1), 12-25. Retrieved September 5, 2009 from http://www.nova.edu/ssss/QR/QR13-1/miller.pdf

Posavac, E., & Carey, R. (2003). Program Evaluation: Methods and Case Studies (6th edition). New Jersey: Prentice Hall.

Wednesday 9 September 2009

ECUR 809 Assignment #1 (First Version)

ECUR 809.3-83551 Assignment #1 Case: Client Satisfaction Survey Analysis, Quebec Regional Office, Human Resources Development Canada (HRDC) in Quebec, Canada, 2001.

Summary: Circum Network Inc. is listed as the author of this study. The program evaluation was prepared and developed for the Evaluation Services Information and Strategic Planning Directorate, Quebec Regional Office, Human Resources Development. It is an analysis of requirements and a proposal for a client satisfaction measurement program. It is a guide for human resources development (employees who do not necessarily have the knowledge required to conduct formal, systematic surveys), that is, a self-directed training document on the implementation of client surveys. The tools offered in the guide are to be used for measuring, sampling, collecting data, analyzing data, interpreting results, and implementing recommendations. The document clearly presents the development of standardized questionnaires for various types of clients and various service conditions, the operational and implementation framework, and pre-testing.

The model or process used in the evaluation is the improvement-focused model.
The purpose is to help program staff learn how to discover discrepancies between program objectives and the needs of the target population, between program implementation and program plans, between expectations of the target population and the services actually delivered, and between outcomes achieved and outcomes projected, among others (Posavac, 2003, p. 29). While providing the essential methodological foundations, the "self-directed" training document takes a pragmatic and "integrated" approach to conducting client satisfaction surveys. It includes devices such as decision trees and checklists. The project was carried out in three phases comprising the development of (1) the standardized questionnaires, (2) the operational framework for the client satisfaction measurement program, and (3) the analytical support tools and mechanisms. According to the report, there is a consensus within the HRDC-Quebec Region that systematic, rigorous measurement of client satisfaction with the products and services offered by the Region is essential to building the fifth pillar of the region's vision: delivering services of the highest quality. There is also a consensus that the primary responsibility for improving the region's services lies with the HRCCs and other operational centers, because it is they that control the daily delivery of services.

I see a couple of things that could be considered in future program evaluations: (1) the report does not compare or discuss all sides of the program evaluation, both positive and negative; deliberation of pros and cons is not evident in the survey evaluation of the program, and it would be useful if this deliberative process could take place. (2) It does not describe participants' "reactions" to and "learning" from the innovative program evaluation, nor "behavior" changes in real job performance and other "results." A round table or discussion of these issues could help improve the process of program evaluation.

Nelson

Sources:
Circum Network Inc. (2001). An integrated approach to conducting client satisfaction surveys: Analysis of requirements and proposal for a client satisfaction measurement program. Prepared for the Evaluation Services' Information and Strategic Planning Directorate, Quebec Regional Office, Human Resources Development Canada. Retrieved September 7, 2009 from:
http://www.circum.com/textes/program_hrdc_quebec_2001.pdf

David Crawford (2009) Evaluation exploration. Retrieved September 4, 2009, from http://www.ag.ohio-state.edu/~brick/evalexpl.htm

Miller, R., & Butler, J. (2008) Using an adversary hearing to evaluate the effectiveness of a military program. The Qualitative Report, 13(1), 12-25. Retrieved September 5, 2009 from http://www.nova.edu/ssss/QR/QR13-1/miller.pdf

Posavac, E., & Carey, R. (2003). Program Evaluation: Methods and Case Studies (6th edition). New Jersey: Prentice Hall.

Please see above a more complete version of my Assignment # 1 (after Dr. Wilson's comments, Sept 25th, 2009)

Wednesday 2 September 2009

What is Program Evaluation?

First Session:
To define and understand “What is program evaluation?”
To understand the historical foundations of program evaluation.
To identify and develop appropriate evaluation assessment techniques used in educational and other program settings.

"Program evaluation is the systematic collection of information for use to improve effectiveness and make decisions with regard to what those programs are doing and affecting." University of Minnesota http://www.evaluation.umn.edu/

"Essentially you are trying to answer the question, "Does the program do what it says it does?". Because evaluation is on-going your evaluation may steer your client in a particular direction and it will also be used to inform the next evaluation" (Jay Wilson, 2009)

I found the following useful links:
http://www.epa.gov/evaluate/whatis.htm
In its broadest definition, Program Evaluation is a systematic way to learn from past experience by assessing how well a program is working.
- Program evaluation is almost always retrospective, i.e., examining and learning from experience, though it may include prospective elements. For example, an analytical study that makes use of data on past performance to estimate future results would be an evaluation, but one done prospectively to estimate the effectiveness of a new environmental program based on assumptions about its design and/or operation would not be.
- An evaluation can be systematic without being elaborate or expensive. It's possible to keep it simple and affordable.

http://www.epa.gov/evaluate/whatis.pdf

http://www.ocde.k12.ca.us/downloads/assessment/WHAT_IS_Program_Evaluation.pdf

http://www.en.wikipedia.org/wiki/Program_evaluation
Program evaluation is a systematic method for collecting, analyzing, and using information to answer basic questions about projects, policies and programs. Program evaluation is used in the public and private sectors and is taught in numerous universities. Program evaluations can involve quantitative methods of social research or qualitative methods or both. People who do program evaluation come from many different backgrounds: sociology, psychology, economics, social work.

http://gsociology.icaap.org/methods/evaluationbeginnersguide.pdf

http://non-profit-governance.suite101.com/article.cfm/board_member_selfassessment
umass.edu/oapa/oapa/publications/online_handbooks/program_based.pdf
http://www.macleans.ca/education/universities/article.jsp?content=20070323_155000_816
http://www.medicine.usask.ca/pt/general-information/school-of-physical-therapy-operations-manual-1/Program%20Evaluation.pdf/view
http://www.evaluationcanada.ca/site.cgi?s=4&ss=2&_lang=an
http://www.uwex.edu/ces/pdande/evaluation/index.html

Requested Readings:
http://www.managementhelp.org/evaluatn/fnl_eval.htm
http://pathwayscourses.samhsa.gov/eval101/eval101_toc.htm
http://jan.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._formative.htm
http://delicious.com/wi11y0/809
http://www.evaluationcanada.ca/site.cgi?s=1
http://www.eval.org/

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R.(2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.
Owen, J. M., & Rogers, P. J. (1999). Program evaluation: Forms and approaches. Thousand Oaks, CA: Sage.
Posavac, E., & Carey, R. (2003). Program Evaluation: Methods and Case Studies (6th edition). New Jersey: Prentice Hall.

First assignment: a description of how to do program evaluation in Canada
http://www.spcottawa.on.ca/CBRNO_website/How2program_evaluation.htm#Client
http://www.mhcbe.ab.ca/cec/specialeducation-studentservices/ECSPROGRAMMINGFORCHILDRENWITHSEVEREDISABILITIES.pdf

PENDING ASSIGNMENTS
Assignment # 5: Design and test a short survey. Include a variety of question types such as scale rating, short answer, and open-ended. Submit the original version and the modified version based on testing the survey with four individuals. Deadline: November 20th.

Major Assignment: Project
Evaluation Plan (Proposal) Dec. 11 - 50 marks
A Proposed Evaluation of the “Spanish Intermediate & Conversational and Ultra Play - Tennis After School” Programs in a Community Center: A Case Study of the City of Ottawa: Parks & Recreation Master Plan Experience
By Nelson Dordelly-Rosales
The purpose of this paper will be to design an evaluation plan after completion of the "Ultra Play - After School" Program in the Sandy Hill Community Center: a case study of the City of Ottawa - Parks & Recreation experience. The plan will be a theoretical paper that outlines the program and the goals or objectives to be evaluated. It will demonstrate my ability to analyze a program, integrate the different tools and theories addressed recently into an evaluation plan, determine a suitable evaluation plan, and create the instruments I would use to conduct the analysis. Essentially, the purpose of this evaluation plan is to convince Mr. Martin Travis, Coordinator of Parks and Recreation, that I should be "the evaluator for the evaluation." Hence, I want to convince him that I am the "best" person to perform the evaluation of the above-mentioned program, and that I have the best team to help me in this matter.
Through a case study, this paper will lend insight into the ways a program-based evaluation, or a logic "improvement-focused" model (Posavac et al., 2003), can facilitate a "holistic" approach to the evaluation of the 'Ultra Play - After School' program. So, an important piece of this evaluation plan is to describe, or elaborate upon, the reasons for selecting this particular model and approach. While the term "program" is used, I find a logic model equally useful for describing group work, team work, community-based collaboration and other complex organizational processes as I seek to promote results-based performance. To present the paper, I will use a case study format that includes the following components:
• Abstract – a brief summary of the major points of the study as well as a short list of key words.
• Introduction
Prior evaluations or policies in a Community in Ottawa:
http://www.gnag.ca/index.php
http://www.gnag.ca/index.php?page=67&id=20

In comparison to:
http://www.topeka.org/parksrec/garfield.shtml

http://www.uml.edu/centers/CFWC/Community_Tips/Program_Evaluation/Program_Evaluation.html

http://www.skaneatelescommunitycenter.com/index.php?option=com_facileforms&Itemid=124


A case in Miami Beach, Florida:
http://www.shakealegmiami.org/site/c.kkLUJbMQKpH/b.2521629/k.BF03/Home.htm

Program or outcome evaluation assesses the extent to which planned activities produce the desired outcome among a target population. Evaluation is considered and set up when the project is designed. The PE model is led by the Planning and Evaluation Committee, consisting of key representatives from the collaborating organizations. Planning, Development and Communications is the staff support unit charged with assisting the organization in its efforts to become a self-correcting organization through planning, monitoring and evaluation.
http://www.ccpfc.org/rd/eval_center.cfm

Importance: http://www.cdc.gov/eval/framework.htm

Examples:
www.phoenix.gov/ARTS/eval0405.pdf

http://www.afterschoolflorida.hhp.ufl.edu/evaluation_links.html

Models of Evaluation:
http://www.edtech.vt.edu/edtech/id/eval/eval_models.html

o Defining and addressing the need to move program evaluation methods that rely heavily on data gathering by postal mail to online instruments.
o What do you propose to do?
o What is my plan?
o What are my objectives: INPUT
o Involving stakeholders
• Model & Method: OUTPUT
o Description - How to do it?
o Evaluation Matrix
o Reasons for selecting particular foci and approaches
o What are the challenges and or roadblocks?
o What are the assessment instruments to be used?
o Measurement considerations & Data collection
• Evaluation: OUTCOME
o How well can we meet the objectives?
o Why I and my team are the best to perform the evaluation
o Emerging trends
• IMPACT: Conclusion, Summary and Recommendations
o Summarize what I learned from this experience.
o What would I recommend to others who would like to replicate my efforts?
o What would they need to be prepared for?
o What needs to be improved?
o Strengths and weaknesses
o Other issues

Saturday 29 August 2009

Schedule Fall-Winter 2009-2010

Fall Dr. Jay Wilson
ECUR 809.3-83551 Models & Methods for Evaluation of Educational Programs.
Calendar: 9 am - 2:50 pm

September 5 - 26
October 17
November 21

ECUR 990-82712 Curriculum Research, Dr. Janet McVittie, Education Building 10, Sep 03, 2009 - Dec 04, 2009, Seminar
http://www.usask.ca/education/people/mcvittiej.htm
http://www.usask.ca/education/coursework/mcvittiej/edcur322.html
http://www.usask.ca/education/coursework/mcvittiej/resources/index.html

Elluminate sessions: September 19 - October 3
November 7 and 21

January 16, February 27
March 13 - March 27

GRS 960-86150 Ethics & Integrity Dr. Diane J. Martz
http://www.spheru.ca/spheru-1/research-tem/dr.-diane-martz/dr.-diane-martz/?searchterm=Martz

Blackboard, 2 hrs; take a test until you get CR. Sep 03, 2009 - Dec 04, 2009, Supervised Self-Instruction
Research by Dr Martz: http://www.spheru.ca/research-projects/rural-youth-risk-behaviours-and-healthy...
Winter
ECUR 991-27193 Scholarship in Teaching - Portfolios, Dr. Timothy Molnar http://www.usask.ca/education/people/molnart.htm
Class 9:00 - 11:50 am, Saturdays, Education Building 3133
Jan 04, 2010 - Apr 08, 2010 Seminar

Wednesday 26 August 2009

Articles Reviews by Nelson Dordelly-Rosales

Article Review # 1
Bringing the background to the foreground: what do classroom environments that support authentic discussions look like?

References:

Hadjioannou, X. (2007). Bringing the Background to the Foreground: What do Classroom Environments that Support Authentic Discussions Look Like? American Educational Research Journal, 44(2), 370-399.

Mappiasse, S. (2007). Influence of the democratic climate of classrooms on student civic learning in North Sulawesi, Indonesia [Electronic version]. International Education Journal, 8(2), 393-407.
Overview
Hadjioannou (2007) focused on authentic or dialogic discussions in the classroom. Authentic discussions are a classroom-based speech genre in which participants commonly explore issues of interest by articulating ideas and opinions. A case study was conducted to shed light on "authentic discussions" using different qualitative approaches such as recorded class sessions, interviews, and field notes. The researcher identified seven elements that appeared to be related to students' involvement in classroom activities and to the social relationships among community members: physical environment, curricular demands and enacted curriculum, teacher beliefs, student beliefs about discussions, relationships among members, classroom procedures, and norms of classroom participation.
Problem/Issue and the Importance/Significance
Hadjioannou (2007) aimed to answer the question: what do classroom environments that support authentic discussions look like? The study examines the features of the environment of a fifth-grade classroom community. The author reported part of a wider qualitative research project that sought (a) to examine the issue of interpersonal relationships within the classroom, specifically to analyze the texture of talk in the authentic discussions of the community under study, (b) to explore participant perspectives, and (c) to evaluate the classroom environment.
Research Question
What are the features of the classroom environment of this discourse community that frequently used authentic discussions?
Sample and sample selection process
The community under study was a fifth-grade class of 24 students and their teacher. The classroom was part of Grassroots Elementary School (a pseudonym), a quintessentially middle-class school in a midsize town in Florida.
Data Collection Method/ Data Analysis Method
Data collection included observation, participant interviews, and audio and video recordings of class sessions on an almost daily basis during a five-month period. The process of identifying the major elements of the classroom environment was generative, beginning with the initial coding of the field notes and the interview transcripts. Each individual was interviewed four times using a flexible interview protocol; the author audio- and video-recorded four book-talk sessions, which were transcribed verbatim and analyzed through discourse analysis. The goal was to use a database with highly contextualized descriptors to systematically illustrate the content of the data. The findings of the discourse analysis were used primarily for describing the texture of talk in authentic discussions, but also for capturing the elements of the classroom environment in action.
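As an illustration of the first coding pass only (this is not Hadjioannou's actual procedure or data), a frequency count of coded field-note segments can be produced in a few lines of Python; the code labels below are invented from the seven elements named above:

```python
from collections import Counter

# Invented coded segments from field notes: one code label per segment.
coded_segments = [
    "teacher_beliefs", "physical_environment", "norms_of_participation",
    "teacher_beliefs", "relationships", "classroom_procedures",
    "teacher_beliefs", "relationships",
]

# Tally how often each code appears: a common first pass before
# grouping related codes into broader themes.
for code, n in Counter(coded_segments).most_common():
    print(f"{code}: {n}")
```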
Trustworthiness/Validity Considerations
In addressing credibility, Hadjioannou (2007) presented a detailed picture of the phenomenon under scrutiny. The investigator provided sufficient detail of the context of the fieldwork, identified the elements that seemed to shape the environment of the classroom community under study, and described "how those elements functioned as repeatable threads woven to create the fabric of the classroom's social life" (p. 374). The researcher suggested that reproducing the environment described in this study in another classroom would be impossible because "the environments in communities are in constant flux, and they are shaped by the personalities and the agendas of community members as well as by the unique circumstances of each community" (p. 396). However, through dialogic discussions in the classroom and by cultivating amiable relations between the students, teachers can provide opportunities for student self-expression, lively interactions, and substantive collaboration in any classroom.
Ethical Issues
Confidentiality and anonymity were guaranteed to participants of the study. In the ethics literature, confidentiality is commonly viewed as akin to the principle of privacy. In this study the researcher used a pseudonym to identify the institution under study, the elementary school.
Reflection - Questions
Were the studies of value? Why or why not? Mappiasse (2007) examined the influence of the democratic climate of classrooms on student civic learning in North Sulawesi, Indonesia, and analyzed seven dimensions that support a democratic climate in the classroom: active participation, avoidance of textbook-dominated instruction, reflective thinking, student decision-making and problem-solving choices, controversial issues, recognition of human dignity, and relevance. Hadjioannou (2007) explored the environment of a fifth-grade classroom community in Florida and analyzed the elements that support authentic discussions: physical environment, curricular demands and enacted curriculum, teacher beliefs, student beliefs about discussions, relationships among members, classroom procedures, and norms of classroom participation. Both studies identified the elements that seem to shape the environment of the classroom and described how those elements functioned as repeatable threads woven to create the fabric of authentic discussions and a democratic climate in the classroom's social life. The two studies provide insights of great value for teaching and learning.
What were the strengths of the two studies? The results indicated that a democratic climate and authentic discussions have significant effects on student engagement, student knowledge and interpretation skill. Mappiasse (2007) centered on the advantages of a democratic environment in the classroom; Hadjioannou (2007) emphasized the importance of interpersonal and social interaction among students and teachers. The classroom environment is extremely important to effective teaching and learning. These studies described in great detail different indicators of a good classroom social environment; they are also good examples of qualitative research.
What were the limitations of the two studies? Subjects in each study were students of only one institution. Therefore, the results were limited in their applicability to other institutions. Similar research studies should be repeated in other institutions and in different subjects to determine whether those aspects of the classroom environment that appeared essential to effective teaching are similar to those obtained in these studies.
How would you have changed the two studies to improve the quality of the research? For the first study, I would enlarge the sample size and I would add a questionnaire for data collection. For the second study, I would add participant interviews and audio and video recording of class sessions. In addition, for the first study, it would also be necessary to explain instrument validation and the reliability of items as the second study did.
How would you incorporate the findings of the two studies into your classroom? I would like to develop a similar qualitative research study selecting a convenience sample of schools in Venezuela. In research and teaching, I would incorporate the democratic environment using meaningful classroom activities. I would work toward knowing my students and use this knowledge to create positive, trusting, and respectful relationships with them.
It is important to engage students in authentic dialogue or discussion and learning activities, especially in civic education classrooms that involve law and education. We should provide students with opportunities to obtain a deeper understanding of civic values and enable them to implement democratic values critically and responsibly in their social interactions; that is, to engage individuals and groups in developing a clear statement of belief about what a strong democracy would look like.

Article Review # 2 by Nelson Dordelly-Rosales
Investigating Self-Regulation and Motivation:
Historical Background, Methodological Developments, and Future Prospects


References:

Zimmerman, B. (2008). Investigating Self-regulation and Motivation: Historical Background, Methodological Developments, and Future Prospects. American Educational Research Journal, 45(1), 166-183.

Zimmerman, B. (2002). Becoming a Self-regulated Learner: an Overview.
Retrieved June 2, 2009 from
http://findarticles.com/p/articles/mi_m0NQM/is_2_41/ai_90190493/

Overview
Zimmerman (2008) assessed students' self-regulated learning (SRL) online. The focus was on processes and motivational feelings or beliefs regarding learning in authentic contexts, using computer 'traces' (the gStudy software), think-aloud protocols, diaries of studying, direct observation, and microanalytic measures. The results revealed that students in high-SRL online classes were more engaged in their writing than students in low-SRL classes, and that students in the training group reported significantly greater increases in time management skill and self-reflection on their learning than those in the control group. Students in the self-regulation training condition also displayed increases in several measures of motivation: their willingness to exert effort, their task interest, their learning-goal orientation, and their perceptions of self-efficacy all increased after training, and their feelings of helplessness declined significantly. Students in "the self-regulation training group displayed significantly greater gains in math achievement than students in the control group" (p. 175).
Problem/Issue and the Importance/Significance
The study addressed the issue of an innovative learning environment and how it affects students’ use of self-regulatory processes during the course of learning. The study is significant because it illuminates the processes of motivation and self-regulation. One of the lessons for instructors and learners was that a self-regulation strategy measure can predict students’ academic grades and their teachers’ ratings of their proactive efforts to learn in class.
Research Questions
The first question concerned the innovative software program (called “gStudy environment”), that is, how traces measure SRL as compared to self-reported measures. The researcher assessed changes in self-regulation during learning. The second question dealt with students’ levels of SRL in personally managed contexts, such as at home or in the library. The idea was to find out if students’ levels of SRL were linked to improvements in the students’ overall academic achievement. The third question involved whether teachers can modify their classrooms to foster increases in self-regulated learning. The fourth question concerned the role of students’ motivational feelings and beliefs in initiating and sustaining changes in their SRL.
Sample and sample selection process
Teachers were randomly assigned to either an experimental or a control group. Nine teachers were trained to convey the underlying cyclical model and to develop homework exercises, quizzes, and a final examination in arithmetic skill. The control group of eight teachers gave the same homework assignments and tests but received no self-regulation training. The students in both experimental conditions kept diary accounts of SRL events.
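As a minimal sketch of this kind of assignment procedure (assuming nothing about how the original study actually implemented it), the seventeen teachers could be split at random into a nine-teacher experimental group and an eight-teacher control group:

    # Randomly split 17 hypothetical teachers into experimental (9)
    # and control (8) groups.
    import random

    teachers = [f"teacher_{i:02d}" for i in range(1, 18)]
    random.seed(0)  # fixed seed so the sketch is reproducible
    random.shuffle(teachers)
    experimental, control = teachers[:9], teachers[9:]
    print(len(experimental), len(control))  # 9 8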
Data Collection/Analysis Methods
The author used innovative qualitative as well as quantitative methods that included teacher and student data collection and different analysis methods (observation forms, portfolio assessments, interviews, and questionnaires) to measure SRL. Teachers in the SRL training condition gave students a copy of the cyclical model of self-regulation along with a picture of a ‘learning expert’, who recommended self-regulatory practices that the teacher modeled for them. Students were given daily feedback and were encouraged to set challenging goals and choose a specific strategy for themselves. Students in the experimental group were given points on the basis of their homework answers. The students were assessed on their interests, attitudes, and self-related cognition before and after a five-week training program. The students’ calibration of the accuracy of their achievement was significantly correlated with their actual posttest score.
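The calibration measure lends itself to a simple illustration. The following Python sketch, with invented numbers rather than the study’s data, correlates students’ self-predicted scores with their actual posttest scores; a high positive correlation indicates well-calibrated self-judgments.

    # Correlate self-predicted scores with actual posttest scores.
    # All numbers are invented for illustration.
    from scipy.stats import pearsonr

    predicted = [70, 85, 60, 90, 75]  # students' self-predictions
    actual = [68, 80, 55, 88, 72]     # actual posttest scores

    r, p = pearsonr(predicted, actual)
    print(f"calibration r = {r:.2f} (p = {p:.3f})")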
Instructional and ethical issues
Technology is a tool that can change the nature of SRL. However, the role of the teacher and instructor is critical in providing guidance and support for self-regulated academic learning.
Reflection - Questions
Were the studies of value? Why or why not? In the previous study, Becoming a Self-Regulated Learner, Zimmerman (2002) showed that self-regulated learning (SRL) is not a mental ability or an academic performance skill; rather, it is the self-directive process by which learners transform their mental abilities into academic skills. The author identified how a student’s use of specific learning processes, level of self-awareness, and motivational beliefs combine to produce self-regulated learners. In the more recent study, Zimmerman (2008) showed that, compared to control students, SRL-trained students displayed significant increases in homework effectiveness, time management skills, a broad array of self-reflection measures, and math performance skill (in fact, the self-regulation training group passed an entrance exam for admittance to a higher-level school, an increase of 50% compared to a past cohort). Both studies were of value.
What were the strengths of the two studies? Zimmerman (2002) showed that self-regulated, independent learners take responsibility for what they learn, and he analyzed how far they can go with this knowledge. In the second study, Zimmerman (2008) showed that (1) the “gStudy environment” can provide students with many more ways to self-regulate their learning than traditional instructional software, (2) the “think-aloud methodology” is an effective way to assess students’ self-regulatory processes online, (3) training in self-regulated learning and time-management skills can be implemented by teachers as part of their classroom assignments and strategic planning, and (4) the “micro-analytic methodology” (originally used to improve athletic skills) for assessing SRL processes and sources of motivation (goal setting and strategic planning, self-reflection, predictive sources of motivation) improves self-regulation. The results showed that the experimental group reported significantly greater increases than the control group in time management skill, self-reflection on their learning, homework effectiveness, and math performance skill.
What were the limitations of the two studies? In general, the studies still raise new questions for future research: more research is needed regarding the accuracy of students’ reports of using self-regulatory processes. In trying to answer the global question of how students become masters of their own learning processes, Zimmerman (2008) notes that “there was not a standardized measure of students’ writing achievement, and this limitation precluded determination of the effects of students’ SRL on their writing competence” (p. 176). Students in the high- and low-SRL classes did not display significant differences in measures of motivation (beliefs, values, etc.), which is attributed to the ineffectiveness of the measures.
How would you have changed the two studies to improve the quality and usefulness of the research? I would follow Zimmerman’s (2008) research approach and take his suggestions. There is a need to (a) extend the use of the four ways to assess the effectiveness of academic interventions designed to motivate recalcitrant students to engage in SRL, (b) extend the micro-analytic methodology to learning academic tasks over longer periods of time, when students’ motivation is expected to wane, (c) apply additional measures of motivation and feelings, such as anxiety and goal orientation, and (d) extend the “think-aloud methodology” to see if planning and motivation emerge as significant predictors of students’ mental models.
How would you incorporate the findings of the two studies into your classroom? I would provide an innovative environment (gStudy software, think-aloud protocols, diaries of studying, direct observation, microanalytic measures) so that students become masters of their own learning process: SRL comprises the “proactive processes that students use to acquire academic skill, such as setting goals, selecting and deploying strategies, and self-monitoring one’s effectiveness” (p. 166).


Article Review # 3 by Nelson Dordelly-Rosales

Students’ Perceptions of Characteristics of Effective College Teachers: A Validity Study of a Teaching Evaluation Form Using a Mixed-Methods Analysis

By Anthony J. Onwuegbuzie, Ann E. Witcher, Kathleen M. T. Collins, Janet D. Filer, et al.

Reference(s):

Onwuegbuzie, A. J., Witcher, A. E., Collins, K. M. T., Filer, J. D., et al. (2007). Students’ Perceptions of Characteristics of Effective College Teachers: A Validity Study of a Teaching Evaluation Form Using a Mixed-Methods Analysis. American Educational Research Journal, 44(1), 113-160.

Suwandee, A. (1995). Students’ perceptions of university instructors’ effective teaching characteristics [Electronic version]. Studies in Language and Language Teaching Journal, 5, 6-22.

Overview
Onwuegbuzie et al. (2007) assessed the content-related validity and construct-related validity of the Teaching Evaluation Form (TEF). A sequential mixed-methods analysis led the researchers to develop a more complete form, the CARE-RESPECTED Model of Teaching Evaluation (CRMTE), which includes three of the themes least represented in the TEF: student-centered, enthusiast, and ethical. The words consistency, fair evaluator, and respectful describe the theme ethical. The CRMTE is a useful data-driven instrument that will benefit all stakeholders: college administrators, teachers, and, above all, students.
Problem/Issue and the Importance/Significance
The problem concerned students’ perceptions of the characteristics of effective college teachers and the validity of the teaching evaluation form (TEF) used to measure them. To Onwuegbuzie et al. (2007), “the TEFs (a) are developed atheoretically and (b) omit what students deem to be the most important characteristics of effective college teachers” (p. 151). In an era in which information gleaned from TEFs is used to make decisions about faculty, this potential threat to validity is disturbing and warrants further research.
Research Questions
What themes reflect effective college teachers as identified by students? Which student attributes affect perceptions of effective college teachers? What are the content-related validity and construct-related validity pertaining to a TEF?
Sample and sample selection process
Participants were 912 undergraduate and graduate students (out of 8,555 students enrolled) from various academic majors enrolled at a public university in a mid-southern state of the United States. The sample represented 10.7% of the total population and reflected 68 degree programs offered by the university.
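As a quick check of the reported sampling fraction: 912 / 8,555 ≈ 0.1066, which rounds to the 10.7% stated above.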
Data Collection and Analysis Methods
This study used a multistage mixed-methods analysis to collect data and to assess the content-related validity and construct-related validity of the TEF. The researchers approached instructors/professors before the study began to solicit the participation of their students and thus maximize the participation rate. They collected qualitative data (e.g., respondents’ perceptions of the questionnaire) and quantitative data (e.g., response rate information, missing data information) before the study began (pilot phase), and used member-checking techniques after the major data collection phases to assess the appropriateness of the questionnaire and the adequacy of the time allotted to complete it. A sequential mixed-methods analysis (SMMA) was undertaken to analyze students’ responses. The process included data reduction, data display, data transformation, data correlation, data consolidation, data comparison, and data integration. This analysis, incorporating both inductive and deductive reasoning, employed qualitative and quantitative data-analytic techniques.
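To illustrate what the data transformation and data correlation steps can look like in practice, here is a minimal Python sketch, with invented respondents and themes rather than the study’s actual data: qualitative theme endorsements are ‘quantitized’ into a binary respondent-by-theme matrix, whose endorsement rates and inter-theme correlations can then be examined quantitatively.

    # Data reduction has already coded each respondent's open-ended
    # answer into themes; the data below are invented for illustration.
    import pandas as pd

    coded_responses = {
        "r001": ["communicator", "ethical"],
        "r002": ["student-centered", "enthusiast", "communicator"],
        "r003": ["ethical"],
    }
    themes = ["communicator", "student-centered", "enthusiast", "ethical"]

    # Data transformation: quantitize the codes into a 0/1 matrix.
    matrix = pd.DataFrame(
        [[int(t in endorsed) for t in themes]
         for endorsed in coded_responses.values()],
        index=list(coded_responses), columns=themes,
    )

    print(matrix.mean())  # endorsement rate per theme
    print(matrix.corr())  # data correlation: pairwise phi coefficients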
Limitations/Delimitations/Assumptions
Because the sample represented students at a single university whose perspectives about effective teachers were gathered at a single point in time, the extent to which the present findings are generalizable to students from other institutions is not clear.
Trustworthiness/Validity Considerations
The focus of the study was on population validity, ecological validity, temporal validity, and adequate external validity. The findings cast some serious doubt on the content-related validity and construct-related validity of TEF scores (e.g., endorsement of most themes varied by student attributes such as gender and age). The validity of responses might have been affected by the fact that “the students’ perceptions were assessed via a relatively brief self-report instrument” (p. 144).
Reflection - Questions
Were the studies of value? Why or why not? Both studies were of great value. In the study by Suwandee (1995), data were obtained from 505 university students in the Faculty of Science. The results indicated that students considered an effective teacher to be one who has a good knowledge of his/her subject, applies pedagogical skills that make difficult topics easy to understand, and explains clearly; whose personality is generous and willing to help students in and out of the classroom; and whose research and teaching background shows a well-prepared instructor. Onwuegbuzie and others (2007) identified characteristics that students considered to reflect effective college teaching, comprising four meta-themes (communicator, advocate, responsible, and empowering) and nine themes with the following descriptors: responsive, professional, expert, connector, transmitter, director, enthusiast, student-centered, and ethical. The researchers developed the CARE-RESPECTED Model of Teaching Evaluation (CRMTE), which emerged from the study and included the last three descriptors, which were not represented in the TEF. These two studies have added to the current yet scant body of literature regarding the score validity of TEFs.
What were the strengths of the two studies? The studies examined students’ perceptions of the characteristics of effective college teachers and the factors associated with those perceptions. The researchers used mixed methods with the rationale of optimizing participant enrichment, instrument fidelity, and significance enhancement. Findings included a more complete instrument and the identification of prevalent characteristics, themes, and meta-themes for faculty training.
What were the limitations of the two studies? Subjects in both studies were students at a single university, so the results are limited in their applicability to other institutions. For this reason, similar research studies should be repeated with students at other universities to determine whether their perceptions of effective teaching are similar to those obtained in these two studies. Neither study found any relationship between GPA and students’ perceptions of teaching characteristics; further research should be carried out to determine whether there is any relationship between these variables.
How would you have changed the two studies to improve the quality and usefulness of the research? The two studies illustrated how to use a multistage mixed-methods analysis to assess the validity of the teaching evaluation forms. Future research studies should be carried out using a multistage mixed-methods analysis and involve instructors as subjects to determine their perceptions of valued teaching characteristics. Conducting a study using both students and instructors in an educational institution as subjects would improve validity of results. The results obtained for each group can then be compared to determine whether any congruency or discrepancy is observable between students' and instructors' perceptions of effective teaching.
How would you incorporate the findings of the two studies into your institution? We should promote the highest academic standards in our teaching, our scholarship, and the connections between them. Specifically, I should be able to apply the characteristics of effective teaching that emerged from those studies, and I would attempt to do similar research in my home institution. In Venezuela, the current TEF forms do not represent all the characteristics that students consider to reflect effective college teaching. Findings regarding the characteristics of effective teaching can serve as inputs for faculty training; we should provide teaching support and conduct training for faculty, teaching assistants, and librarians.

Article Review # 4 by Nelson Dordelly-Rosales
Can Teacher Education Make a Difference?
By Niels Brouwer and Fred Korthagen

References:

Brouwer, N., & Korthagen, F. (2005). Can Teacher Education Make a Difference? American Educational Research Journal, 42(1), 153-224.

Crocker, R., & Dibbon, D. (2008). Teacher Education in Canada. Retrieved May 24, 2009
from www.saee.ca/pdfs/Teacher_Education_in_Canada.pdf

Problem/Issue and the Importance/Significance
Brouwer and Korthagen (2005) examined how graduates’ teaching competence originated in their pre-service programs, as observed in one university teacher education institution that aimed deliberately at integrating practice and theory. This longitudinal study, conducted over a period of 4.5 years, examined the impact of specific characteristics of teacher education programs in the Netherlands involving the integration of practical experience and theoretical study. The research model included the following variables: curricular program conditions, non-curricular program conditions, organization and content of activities during student teaching, organization and content of activities during college-based seminars, learning effects during pre-service programs, school context factors during beginning teachers’ entry into the profession, beginning teachers’ experiences and options, learning effects during the first in-service years, and personal background variables. The researchers demonstrated that occupational socialization in schools has a considerable influence on the development of graduates’ in-service competence (educating “innovative teachers”). They discussed specific ways in which pre-service teacher education can influence beginning teachers’ professional performance and competence development.
Research Questions
How does teaching competence develop over time? What are the relative influences of teacher education programs and occupational socialization in schools on the development of teaching competence? Which program characteristics are related to competence development? Does the program require beginning teachers to display, in real life situations, the competence that their pre-service programs aimed to foster?
Sample and sample selection process
The whole sample included “357 students, 128 cooperating teachers and 34 university supervisors from 24 graduate teacher education programs. On average, the beginning teachers in the sub sample had more teaching experience, ranging between 12 and 30 months after graduation, than those in the whole sub-sample, which ranged between 11 and 22 months after graduation” (Brouwer & Korthagen, 2005, p. 155). The reason is that the observations of and interviews with the beginning teachers in the sub-sample were based in part on their questionnaire responses. To ensure that the sub-samples were as representative as possible, the researchers applied several criteria, for example, covering the largest possible number of school subjects. From the total number of 31 university supervisors, those with the most professional experience were selected.
Data Collection Method/ Data Analysis Method
Quantitative survey data as well as in-depth qualitative data were collected: a longitudinal survey, interviews, observations, a written questionnaire (closed items), and classroom artifacts (program documents). The first step was to determine which activities were carried out in each program, in which order, and at which moments; all of this information was then schematized. In the questionnaire, repeated measures were used to describe how the programs were implemented, to trace how the students experienced them, and to record their self-evaluations of their progress on the criterion variable. After graduating and finding work, the beginning teachers answered specific questions. After the programs had ended, the graduates completed one additional questionnaire (a few factual questions for those graduates who had not found work as beginning teachers). The university supervisors completed a questionnaire after completion of the entire program. Findings were reported from three epistemological perspectives: the ecological (collaboration and contextual conditions), the genetic (beginning teachers’ experiences), and the activity perspective (respondents’ actions in classrooms and schools).
Reflection - Questions
Were the studies of value? Why? The study on teacher education in Canada by Crocker and Dibbon (2008) examined program structures, content emphasis and usefulness, perceptions of teaching knowledge and skill, the practicum experience, and the transition into the teaching profession. Among the important findings, the researchers found that (1) teacher education programs across Canada differ markedly in structure and duration, and (2) there were significant variations among the respondent groups’ perceptions of program content, emphasis, and quality. Relatively few graduates (about 13%) gave overall “excellent” ratings to their teacher education programs, while about half gave “good” ratings. To the researchers, those areas of content, knowledge, and skill are highly valued in the field but are not being emphasized as strongly in teacher education programs as they might be. Brouwer and Korthagen (2005), working in the Netherlands, analyzed the structure of teacher education programs. They found that such programs may be counterproductive to student teacher learning and that, consequently, teacher educators may not display the best examples of good teaching. They also found that during and immediately after their pre-service programs, teachers experience a distinct attitude shift that entails an adjustment to the teaching practices existing in schools. The authors showed that “integrative” theory-practice approaches in teacher education, in which student teachers’ practical experiences are closely linked to theoretical input, strengthen graduates’ innovative teaching competence.
What were the strengths of the two studies? Both are strong longitudinal studies that highlight the importance of integrating theory and practice in pre-service teacher education programs and support the need for educating innovative teachers. Important suggestions for the design of teacher education programs and the conduct of teacher education research can be drawn from both studies, for example: finding better ways to support and mentor novice teachers, developing stronger models of collaboration between teachers and the institutions they serve, and developing a common vision for teacher education that articulates core content and competencies. Teacher education research should take a more longitudinal, comparative approach.
What were the limitations of the two studies? Though large-scale longitudinal surveys may offer some advantages in terms of reducing validity threats, the literature suggests that researchers should be prepared to deal with problems related to the longevity of longitudinal surveys. Some of the limitations were resource restrictions, sample size, and the absence of comparative information from other similar studies.
How would you have changed the two studies to improve the quality and usefulness of the research? I would take into account suggestions provided by the researchers: (1) refining the selection of respondents and the measurement of criterion variables, (2) intensifying qualitative data collection during pre-service programs and carrying out repeated measurements and observations at increasing numbers of standardized moments after graduation, and (3) developing a drop-out study that can produce clues about differences, associated with variables other than gender, number of applications, or progress during the pre-service program, between graduates who did and did not seek and find work as teachers.
How would you incorporate the findings of the two studies into your classroom? Repeated cross-sectional studies or more longitudinal studies would be of great value in examining trends in teacher education. I would like to be engaged in longitudinal research, particularly the cohort study. We should focus on the ways in which prospective teachers learn from practice and develop competence and positive attitudes. The goal should be to equip teachers for entry into the teaching profession through problem-based learning, authentic contexts, and materials.
Reflective Summary of the Four Articles
In the first article, Hadjionannou (2007) argued that the best way to understand an educational phenomenon is to view it in its context. To that end, she used different qualitative approaches such as recorded class sessions, interviews, and field notes. As a result, she found important elements that appeared to be related to the students’ involvement in dialogic discussions and the social relationships in the classroom, and she provided descriptions of a classroom environment that supports authentic discussions. In the second article, Zimmerman (2008) used qualitative approaches such as portfolio assessments, direct participant observation, and survey questionnaires. The focus was on the development of online measures of self-regulated learning (SRL) processes and motivational feelings using innovative methods such as computer traces (the gStudy software), think-aloud protocols, diaries of studying, direct observation, and microanalytic measures. The study adopted an inductive approach to its reasoning: observations were made from data collected through survey questionnaires, and the researcher then worked toward a theoretical integration of what was found. The study thus moved from the data to a theory and vice versa. The focus was on the uniqueness of the students in the self-regulation training group. The results revealed that students in the training group reported greater increases in time management skill and self-reflection on their learning than those in the control group.
In sum, both qualitative studies tended to be oriented toward individuals and case studies. They allowed for a richer analysis of subjects and for information to be gathered that would otherwise be entirely missed by a quantitative approach. The qualitative research focused on collecting, analyzing, and interpreting data by observing what people did and said, involving a continual interplay between theory and analysis. In analyzing qualitative data, the researchers discovered patterns such as changes over time or possible causal links between variables. The findings were a personal construction of how researchers viewed events and their job was to persuade us that their interpretation was valid. From the phenomenological point of view, the authors held that the subjects’ perceptions define reality.
In the last two articles, the authors applied quantitative research methods. Onwuegbuzie et al. (2007) developed a validity study of a teaching evaluation form (TEF). The researchers assessed 912 college students’ perceptions through a survey questionnaire. As a result, they identified a list of characteristics that students considered descriptors of effective college teaching, three of which were not represented in the TEF, and they were able to develop a new and more complete form called the CARE-RESPECTED Model of Teaching Evaluation (CRMTE). Brouwer and Korthagen (2005) developed a longitudinal study over a period of 4.5 years using questionnaires, interviews, observations, and analysis of classroom artifacts to find out whether occupational socialization in schools has a considerable influence on the development of graduate teachers’ in-service competence. The researchers quantified the variables of interest and examined the relationships between the variables mathematically through statistical analysis, showing that “integrative” theory-practice approaches in teacher education strengthen graduates’ innovative teaching competence. Quantitative research methods in both studies, simply put, were about numbers: objective, hard data and statistically valid results. Tools were used to minimize bias in collecting information, and the studies involved gathering absolute, numerical data and testing hypotheses, with an emphasis on the methods’ supposed neutrality.
In conclusion, researchers showed how to work collaboratively across qualitative and quantitative research paradigms. Mixed research rests on rich and varied approaches, which come from multiple disciplines to address different research topics.