Monday, 15 June 2009

Comparison between Historical Research and Evaluation Research

Borg, W. R., & Gall, M. D. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Chapters 16 and 17

Summary by Nelson Dordelly-Rosales, June 20th, 2009

Historical Research: What is it?

• Historical research is the systematic search for facts relating to questions about the past, and the interpretation of these facts. By studying the past, the historian hopes to achieve a better understanding of present institutions, practices and issues in education.
• There is no single, definable method of historical inquiry (Edson, 1988).

What does historical research mean from the qualitative and quantitative perspectives?

• From the qualitative perspective, historical research means historical inquiry. It proposes to learn from past discoveries and mistakes, and provides a moral framework for understanding the present and predicting future trends.
• From the quantitative perspective, historical research is the systematic collection and objective evaluation of data related to past occurrences in order to test hypotheses concerning causes, effects, or trends of these events that may help to explain present events and anticipate future events.

How to conduct historical research?

– Definition of a problem: topic(s) or questions to be investigated
– Formulation of questions to be answered, hypotheses to be tested, or topics to be investigated
– Systematic collection and analysis of historical data
– Summary and evaluation of data and the historical sources
– Interpretation: presenting the pertinent facts within an interpretive framework
– Production of a synthesis of findings or confirmation/disconfirmation of hypotheses or questions (Borg & Gall, 1999, p. 811-831)

What are the types of historical sources?

• Preliminary sources: published aids for identifying the secondary source literature in history. An important requirement is to list key descriptors for one’s problem or topic, e.g., bibliographies and reference works.
• Primary sources: documents in which the individual describing the event was present when it occurred, e.g., diaries, manuscripts.
• Secondary sources: documents in which the individual describing the event was not present but obtained a description from someone else, who may or may not have directly observed the event, e.g., historians’ interpretations (Borg & Gall, 1999, p. 815-817).

How to record information from historical sources?

• Examining availability and deciding what information to record from:
- Documents: diaries, memoirs, legal records, court testimony, newspapers, periodicals, business records, notebooks, yearbooks, diplomas, committee reports, memos, institutional files, textbooks, tests.
- Quantitative records: census records, school budgets, school attendance records, test scores.
- Oral history: e.g., oral records and interviews.
- Relics: objects whose physical or visual properties provide information about the past.
• Summarizing quantitative data (Borg & Gall, 1999, p. 818-819)

How to evaluate the worth and meaning of historical sources?

• External criticism: evaluation of the nature of the source, e.g., Is it genuine? Is it the original copy? Who wrote it? Under what conditions?
• Internal criticism: evaluation of the information contained in the source, e.g., Is it probable that people would act in the way described by the author? Do the budget figures mentioned by the writer seem reasonable? (Borg & Gall, 1999, p. 821-823).

How to interpret historical research?

• Use of concepts to interpret historical information:
- Concepts are indispensable for organizing the phenomena that occurred in the past.
- Group together those persons, events, or objects that share a common set of attributes.
- Place limits on the interpretation of the past.
• Being aware of bias, values, and personal interests allows researchers to interpret or “reconstruct” certain aspects of past events, but not others. It also allows them to interpret past events using concepts and perspectives that originated in more recent times.

What is the role of the historical researcher?

• Historians cannot ‘prove’ that one event in the past caused another, but they can be aware of, and make explicit, the assumptions that underlie the act of ascribing causality to sequences of historical events (Borg & Gall, 1999, p. 831).
• Generalizing from historical evidence means looking for consistency across subjects or an individual in different circumstances (Borg & Gall, 1999, p. 834).
• Causal inference in historical research is the process of reaching the conclusion that one set of events brought about, directly or indirectly, a subsequent set of events (Borg & Gall, 1999, p. 836).

What is Evaluation Research?

• Educational evaluation: the process of making judgments about the merit, value, or worth of educational programs (Borg & Gall, 1999, p. 781).
• Evaluation Research: usually initiated by someone’s need for a decision to be made concerning policy, management, or political strategy. The purpose is to collect data that will facilitate decision making (Borg & Gall, 1999, p. 782).
• Educational Research: usually initiated by a hypothesis about the relationship between two or more variables. The research is conducted in order to reach a conclusion about the hypothesis, that is, to accept or reject it (Borg & Gall, 1999, p. 783).

How to conduct an ‘Evaluation Study’?

• Clarifying reasons for doing the evaluation
• Identifying the stakeholders
• Deciding what is to be evaluated:
- Program goals
- Resources and procedures
- Program management
• Identifying evaluation questions
• Developing an evaluation design and timeline
• Collecting and analyzing evaluation data
• Reporting the evaluation results (Borg & Gall, 1999, p. 744-753).

What are the criteria of a good evaluation study?

• Utility: an evaluation has utility if it is informative, timely, and useful to the affected persons.
• Feasibility: the evaluation design is appropriate to the setting in which the study is to be conducted, and the design is cost-effective.
• Propriety: the rights of persons affected by the evaluation are protected.
• Accuracy: extent to which an evaluation study has produced valid, reliable, and comprehensible information about the entity being evaluated (Borg & Gall, 1999, p.755).

What is involved in ‘quantitatively oriented evaluation’ models?

• Evaluation of the individual.
• Objectives-based evaluation for determining the merits of a curriculum or an educational program.
• Needs assessment.
• Formative and summative evaluation.

(Borg & Gall, 1999, p. 758-767).

Evaluation of the individual

• This type of research involves the assessment of students’ individual differences in intelligence and school achievement.
• It also involves evaluation of teachers, administrators, and other school personnel.
• Like assessment of students, personnel evaluation focuses on measurement of individual differences, and judgments are made by comparing the individual with a set of norms or criteria (Borg & Gall, 1999, p. 759).

Objectives-based evaluation: Four Models

• Discrepancy evaluation between the objectives of a program and students’ actual achievement of the objectives (Provus, 1971).
• Cost-benefit evaluation to determine the relationship between the costs of a program and the objectives that it has achieved. Comparisons are made to determine which promotes the greatest benefits for each unit of resource expenditure (Levin, 1983).
• Behavioral objectives evaluation to measure the learner’s achievement (Tyler, 1960).
• Goal-free evaluation to discover the actual effects of the program in operation that may differ from the program developers’ stated goals (Scriven, 1973).
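The cost-benefit logic described above (Levin, 1983) can be sketched numerically. The program names and figures below are invented for illustration and are not from Borg & Gall; the point is only the “benefit per unit of resource expenditure” comparison:

```python
# Hypothetical cost-benefit comparison of two programs (illustrative figures).
# "Benefit" here is the mean achievement-score gain per student; "cost" is
# the expenditure per student.
programs = {
    "Program A": {"cost": 120.0, "gain": 6.0},
    "Program B": {"cost": 200.0, "gain": 8.0},
}

# Benefit per unit of resource expenditure: the program with the higher
# ratio delivers more benefit per dollar spent.
ratios = {name: p["gain"] / p["cost"] for name, p in programs.items()}
best = max(ratios, key=ratios.get)

for name, r in ratios.items():
    print(f"{name}: {r:.3f} score points per dollar")
print(f"Most cost-effective: {best}")
```

Here Program B achieves the larger absolute gain, but Program A produces more gain per dollar, which is exactly the kind of comparison the cost-benefit model asks evaluators to make.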

Needs assessment

• This type of research aims to determine a discrepancy between an existing set of conditions and a desired set of conditions.
• Educational needs can be assessed systematically using quantitative research methods.
• Personal values and standards are important determinants of needs, and they should be assessed to round out one’s understanding of needs among the groups being studied.
• Needs assessment data are usually reported as group trends (Borg & Gall, 1999, p. 763)

Formative and summative evaluation

• The function of formative evaluation is to collect data about educational products while they are still being developed. The evaluative data can be used by developers to design and modify the product (Borg & Gall, 1999).

• The summative function of evaluation occurs after the product has been fully developed. It is conducted to determine how worthwhile the final product is, especially in comparison with other competing products. Summative data are useful to educators who must make purchase or adoption decisions (Borg & Gall, 1999).

Evaluation to guide program management

• It includes context evaluation, input evaluation, process evaluation, and product evaluation (CIPP). The CIPP model shows how evaluation could contribute to the decision-making process in program management (Stufflebeam et al., 1971).
• Context evaluation involves identification of problems and needs in a specific setting.
• Input evaluation concerns judgments about the resources and strategies needed to accomplish program goals and objectives.
• Process evaluation involves the collection of evaluative data once the program has been designed and put into operation.
• Product evaluation aims to determine the extent to which the goals of the program have been achieved.

What does a ‘qualitatively oriented evaluation’ model mean?

• The worth of an educational program or product depends heavily on the values and perspectives of those doing the judging.
• For example, the following three models:
- Responsive evaluation (Stake, 1967)
- Adversary evaluation (positive and negative judgments about the program) (Wolf, 1975)
- Expertise-based evaluation (Eisner, 1979)

Responsive evaluation

• Focuses on the concerns, issues and values affecting the stakeholders or persons involved in the program (Stake, 1967)
• Guba and Lincoln (1989) identified four major phases that occur in evaluation:
- Initiation and organization: negotiation between the evaluator and the client.
- Identifying the concerns, issues, and values of the stakeholders using questionnaires and interviews.
- Collecting descriptive evaluation data using observations, tests, interviews, etc.
- Preparing reports of results and recommendations.

Adversary evaluation

• Adversary evaluation relates in certain respects to responsive evaluation; it presents both positive and negative judgments about the program (Wolf, 1975) and uses a wide array of data.
• Four major stages:
- Generating a broad range of issues: the evaluation team surveys the various groups involved in the program (users, managers, funding agencies, etc.).
- Reducing the list of issues to a manageable number.
- Forming two opposing evaluation teams (the adversaries) and providing them an opportunity to prepare arguments in favor of or in opposition to the program on each issue.
- Conducting pre-hearing sessions and a formal hearing in which the adversarial teams present their arguments and evidence before the program’s decision makers (p. 774).

Expertise-based evaluation

• Expertise-based evaluation, also called educational connoisseurship and criticism, relies on judgments about the worth of a program made by experts (Eisner, 1979).
• One aspect of connoisseurship is the process of appreciating (in the sense of becoming aware of) the qualities of an educational program and their meaning. This expertise is similar to that of an art critic who has special appreciation of an art work because of intensive study of related art works and of art theory.
• The other aspect of the method is criticism, which is the process of describing and evaluating that which has been appreciated. The validity of educational criticism depends heavily on the expertise of the evaluator.

Differences between Historical and Evaluation Research

• Historical research aims to assess the worth and meaning of historical sources, documents, records, relics, oral history, etc. The search is for facts relating to questions about the past, the interpretation of these facts, and their significance for the present.
• Evaluation research aims to assess the merit, value, or worth of educational programs and materials of any level of schooling. It facilitates decision-making concerning policy, management, or political strategy to improve educational matters.

• Each type of research addresses different types of questions, and each one is necessary for advancing the field of education. The decision to undertake one of these types of research will depend primarily on the interests of the study. However, both historical and evaluation research draw to varying degrees on the qualitative and quantitative traditions of research.
• In quantitative evaluation research, objectives provide the criteria for judging the merits of the product, e.g., publication and cost, physical properties, content, instructional properties, etc. In qualitative research, the worth of an educational program or product depends heavily on the values and perspectives of researchers.
• In historical research, the historian discovers objective data but can also interpret and critique, making personal observations on the worth and value of findings.

Thursday, 4 June 2009

Summary #1: Statistical Techniques (for processing and analysis of data) by Nelson Dordelly-Rosales
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Taylor, J. K., and Cihon, C. (2007). Statistical techniques for data analysis. (2nd ed.). New York: Chapman & Hall/CRC.
Key terms and definitions
Research Design: refers to all procedures selected by a researcher for studying a particular set of questions or hypotheses. In this process the researcher creates an empirical test to support or refute a hypothesis. The process of designing a research study has several steps: conclusions from previous studies, rationale or theory, questions and hypotheses (or suggested explanation or a reasoned proposal predicting a cause), design, gathering the data, summarizing the data and determining the statistical significance of the results, conclusions and beginning of next study.
Quantitative research: using statistical methods, typically begins with the collection of data based on a theory or hypothesis, followed by the application of descriptive or inferential statistical methods.
Descriptive statistics: also called summary statistics, are used to “describe” the data we have collected on a research sample. The main descriptive statistics are the mean, median, and standard deviation; they are used to indicate the average score and the variability of scores for the sample.
Inferential statistics: are used to make inferences from sample statistics to the population parameters. They include sampling distributions and confidence intervals, one- and two-sample topics (comparison of means, ratio of variances), propagation of error in a derived or calculated value, regression analysis, testing hypotheses, and drawing inferences.
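The descriptive/inferential distinction can be made concrete with a short sketch. The scores below are invented for illustration: the first block only summarizes the sample itself, while the confidence interval is an inferential step that generalizes from the sample to the population mean.

```python
import math
import statistics

# Hypothetical test scores for a sample of 10 students (illustrative only)
scores = [72, 85, 90, 66, 78, 88, 95, 70, 82, 74]

# Descriptive statistics: summarize the sample we actually have
mean = statistics.mean(scores)
median = statistics.median(scores)
sd = statistics.stdev(scores)          # sample standard deviation

# Inferential statistics: a rough 95% confidence interval for the
# population mean, using 1.96 (the large-sample normal approximation)
se = sd / math.sqrt(len(scores))
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean={mean:.1f} median={median:.1f} sd={sd:.2f}")
print(f"95% CI for the population mean: ({ci[0]:.1f}, {ci[1]:.1f})")
```

With a sample this small a t-based interval would be more defensible; the normal approximation is used here only to keep the sketch to the standard library.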
Types of Quantitative Research Designs: descriptive studies and studies aimed at discovering causal relationships (causal-comparative, correlational, or experimental).
Causal-comparative research: refers to causal or functional relationships between variables (the way in which variables influence or affect each other). The causal-comparative method is aimed at the discovery of possible causes for the phenomenon being studied by comparing subjects in whom a characteristic is present with similar subjects in whom it is absent or present to a lesser degree.
Experimental research design: is ideally suited to establishing causal relationships if proper controls are used. The key feature of experimental research is that a treatment variable is manipulated.
Correlational studies: include all research projects in which an attempt is made to discover or clarify relationships through the use of correlation coefficients. A correlation coefficient tells the researcher the magnitude of the relationship between two variables.
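The correlation coefficient mentioned for correlational studies can be sketched directly. The paired data below are invented for illustration; the function computes the standard Pearson product-moment coefficient, whose magnitude (between -1 and +1) indicates the strength of the relationship.

```python
import math

# Hypothetical paired data: hours studied vs. exam score (illustrative only)
hours = [1, 2, 3, 4, 5, 6]
scores = [55, 60, 66, 70, 78, 85]

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two paired lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(hours, scores)
print(f"r = {r:.3f}")   # close to +1: a strong positive relationship
```

As the chapter cautions, a large r only measures the magnitude of an association; by itself it does not establish that one variable causes the other.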
Response to questions
The chapter aims to describe and explain the main statistical techniques for processing and analysis of data. It helps the reader become familiar not only with the language, principles, reasoning, and methodologies of the statistical techniques of quantitative research (rooted in the positivistic approach to scientific inquiry) but also of qualitative research methods (observation, ethnographic interview, survey).
The specific topic of the chapter is the description of three main types of statistical techniques: descriptive statistics, inferential statistics, and test statistics. The chapter also deals with measurement in educational research, usually expressed in different types of scores.
The overall purpose is to help researchers understand four kinds of information about statistical tools: (1) what they should know about statistics and what statistical tools are available, (2) under what conditions each tool is used, (3) what the statistical results mean, and (4) how the statistical calculations are made.
In general, the authors are saying that we need to analyze research results effectively. They suggest that we make maximum use of the data collected and apply appropriate statistical techniques when analyzing our research data.
This information is interesting because statistical techniques are used to a) describe educational phenomena; b) make inferences from samples to populations; c) identify psychometric properties of tests; and d) apply the mathematical procedures involved in the use of statistical formulas: measures of central tendency, measures of variability, correlation, tests, etc. A sound research plan is one that specifies the statistical tools to be used in the data analysis. Statistical tools should be decided upon before data have been collected, because different tools may require that the data be collected in different forms.
Closing summary
As researchers we should know that a statistical research project aims to investigate causality, and in particular to draw a conclusion about the effect of independent variables (predictors) on dependent variables (responses); and that there are two types of causal statistical studies: experimental and observational. An experimental study involves taking measurements, manipulating the system, and then taking additional measurements using the same procedure to determine whether the manipulation has modified the values of the measurements. In an observational study we simply gather data and investigate correlations between predictors and responses. The basic steps of an experiment are: planning, design, summary (descriptive statistics), reaching conclusions (inferential statistics), and documenting and presenting results. We should be careful in choosing the right statistical tools for the data analysis, because occasionally a statistical tool is used when the data to be analyzed do not meet the conditions required for the tool in question. After appropriate statistical tools have been selected and applied to the research data, the next step is to interpret the results. Interpretation must be done with care. Fortunately, as researchers today we have access to the techniques and technology we need to analyze statistical data. Computers can help with data analysis techniques that were once beyond the calculation reach of even professional statisticians. All we need is practical guidance on how to use them. For example, measurement analysis can be performed with the MINITAB statistical software, which improves the presentation of results.
The purpose, structure, and general principles of educational research methodology, quantitative and qualitative measurement analysis, and, most importantly, the statistical techniques are valuable to everyone who produces, uses, or evaluates data. Descriptive statistics help us summarize the data we have collected on a research sample, and inferential statistical techniques are important in educational research because they allow us to generalize from a sample, or samples, to reach conclusions about large populations. We must be aware of misuses and abuses of statistics in research. When variables are the values of experimental measurements, they have uncertainties due to measurement limitations (e.g., instrument precision) that propagate to any combination of the variables in a function. Statistical techniques can help us examine how such errors propagate. Statistical techniques are useful tools for collecting, classifying, and using systematically collected numerical facts in research. They are tools for designing research, processing and analyzing data, and drawing inferences or conclusions.
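The propagation of measurement error mentioned above can be illustrated with a minimal sketch. The quantities and uncertainties below are invented, and the example assumes independent errors combined in quadrature, the standard first-order rule for a product of two measurements:

```python
import math

# Hypothetical derived quantity q = a * b, where a and b are independent
# measurements with known uncertainties (illustrative values only).
a, da = 10.0, 0.2   # measurement a, with uncertainty da
b, db = 5.0, 0.1    # measurement b, with uncertainty db

q = a * b

# For independent errors in a product, relative uncertainties add in
# quadrature: (dq/q)^2 = (da/a)^2 + (db/b)^2
rel = math.sqrt((da / a) ** 2 + (db / b) ** 2)
dq = q * rel

print(f"q = {q:.1f} +/- {dq:.2f}")
```

The point for research practice is that the uncertainty of a derived value is never smaller than what its ingredients imply, which is one reason raw measurement precision should be reported alongside calculated results.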
A few years ago, I had an excellent research experience using statistics in a survey. I worked on a survey of the teaching methods of History teachers at the Ministry of Education in Venezuela. The survey system included the most commonly used survey descriptive statistics, including percentages, medians, means, and standard deviations. The results were presented in tables, and we interpreted them and drew some conclusions. We established significant differences between data points. Statistical software was used to improve the quality of the presentation of the final results. We interpreted some similarities and some differences between the teachers of public and private schools. We found that about 60% of responses involved use of the traditional “lecture” method. As a result, the Ministry of Education developed training workshops on a variety of teaching methods.

Summary # 2: Collecting Research Data
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Wallen, E., and Fraenkel, J.C. (2007). Educational Research: A Guide to the Process. (2nd ed.). London: Lawrence Erlbaum Associates.

Key terms and definitions

Data-Collection Tools: questionnaires, interviews, and observations aimed at gathering similar kinds of data are the most common instruments for data collection in survey research. Other techniques for collecting survey information are tests, self-report measures, and examination of records.
Survey research: a distinctive research methodology for systematic data collection. Surveys are often used simply to collect information, such as the percentage of respondents who hold or do not hold a certain opinion. Surveys can also be used to explore relationships between different variables.
The cross-sectional survey: standard information is collected at one point in time from a sample drawn from a predetermined population. When information is collected from the entire population, the survey is called a census.
The longitudinal survey: data are collected from respondents at different points in time in order to study changes or explore time-ordered associations. Three longitudinal designs are commonly employed in survey research: trend studies, cohort studies, and panel studies.
Trend studies: in this design a given general population is sampled at each data-collection point. The same individuals are not surveyed, but each sample represents the same population. For example, we might survey teachers of History each year and compare the results from year to year.
Cohort studies: in this design a specific population is followed over a period of time.
Panel studies: in this design the researcher selects a sample at the outset of the study, and at each subsequent data-collection point the same individuals are surveyed.
Survey Interview: involves the collection of data through direct verbal interaction between individuals. It permits direct follow-up (in person, by telephone, by computer, or by recording), can obtain more data (e.g., using a self-check test), and offers greater clarity than questionnaires.
Collecting observational data: three types of observational variables may be distinguished: descriptive, inferential and evaluative.

Content Synthesis

The aim of this chapter is to explain different techniques for collecting research data or information. Some of these methods depend on the methodology and the theoretical assumptions used in the research. There is a tendency for researchers in the functionalist, positivist or ‘scientific’ paradigm to collect hard objective numbers by observation, experimentation, and extraction from published sources, questionnaires and structured interviews. They emphasise quantitative techniques over qualitative methods. Law and humanistic researchers in the interpretative and radical humanist paradigms use qualitative methods. However, matching methodologies and methods is the current tendency in educational research. The mixed-methods research paradigm and triangulation studies are ways to make research studies more robust and rigorous by verifying results through different methods, thus ensuring that the results are not a function of the research method.
The specific topic of the chapter is data-collection tools in surveys to obtain standardized information from all subjects in the sample. The focus is on survey research, a distinctive research methodology for systematic data collection. Information to be collected is assumed to be quantifiable. The chapter helps graduate students in education learn steps needed to carry out the collection of data process. The overall purpose is the improvement of educational research through an appropriate collection of data.
What are the authors saying? The chapter provides an explanation of the techniques for preparing and using tools of survey research, considering the various types of knowledge that can be generated by analysis of survey data. Collecting research data properly is worth doing. Survey research leads to new knowledge and this knowledge contributes to improve education in different ways.
By selecting and using data-gathering techniques and survey research in an appropriate way, we can avoid mistakes sometimes made by researchers. Cautions include several threats to the validity of the instrumentation process; for example, an extraneous event may cause the respondents to answer differently. The chapter also warns that for our theses we need to obtain university IRB approval for the collection of data from human subjects.

Closing summary

In general, research data may be categorised as primary and secondary data. Primary data are data generated by the researcher using data-gathering techniques (questionnaires, interviews, etc.). Secondary data are those that have been generated by others and are included in data-sets, case materials, computer or manual databases, or published by various private bodies (e.g., annual reports of companies), public organisations or government departments (official statistics by the Statistical Office), and international organisations such as the International Monetary Fund, the World Bank, and the United Nations, among others. The chapter mainly focuses on what survey research is, what the data-collection tools are, and what the types of survey research are. It describes the cross-sectional survey and the longitudinal survey. It explains three designs for collecting research data through the longitudinal survey, specifically trend studies, cohort studies, and panel studies. It also provides excellent examples to illustrate the major characteristics of each type of research survey and describes the advantages and disadvantages of each one.
The major purpose of surveys is to describe certain characteristics or variables of a population. Some characteristics are: 1) information is collected from a group of people (rather than from every member of the population) in order to describe some aspects (such as abilities, opinions, attitudes, beliefs, and/or knowledge) of the population of which that group is a part; 2) the main way in which the information is collected is through asking questions via questionnaires and/or interviews, and the answers given by respondents constitute the data of the study. Among the major advantages of survey research are reduced cost and the fact that the information collected can be of various types. Among the major disadvantages are biases inherent in the data-collection process and possible security or confidentiality concerns.
In 2001 I was able to participate in a longitudinal survey (a cohort study) at the Catholic University in Venezuela. In this design, the College sampled the graduating class over a couple of years using questionnaires. I realized that there are unique problems and pressures that affect longitudinal studies because of the extended period of time over which data are collected, in comparison to cross-sectional studies. One danger is that the issues studied, and the measures and theories used, may become obsolete over the course of the study. Also, the survey was too long, and a number of participants left the last questions without a response. Reading Borg & Gall (1999) made me reflect on the importance of carefully planning research surveys (and short-term uses for the data should be planned ahead). Indeed, success depends on clearly defining long-term goals, specific variables, and limitations and delimitations in the generalizability of findings. In order to guard against obsolescence, longitudinal research should be theoretically broad-minded and mixed. It is important to select the sample of respondents carefully and to use a large enough sample size. In order to draw legitimate conclusions about the specified population, sampling must be representative and valid statistical assumptions must be met.

Summary # 3: Collecting Research Data with Questionnaires
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Fleming, C. M., and Bowden, M. (2009) “Web-based surveys as an alternative to traditional mail methods” Journal of Environmental Management, 90, 1, pp. 284-292

Key terms and definitions

Questionnaire: can be defined as a set of questions to which participants record their answers, usually within closely defined alternatives (Fleming and Bowden, 2009). There are mainly three types: postal or mail questionnaires, online questionnaires, and personally administered questionnaires.
Mail Questionnaires: the questionnaires are sent (using the post office) to the sample participants, usually with a pre-paid self-addressed envelope to encourage response. Some advantages are low cost and anonymity; the respondents can give more thought to the questions; and researcher bias is lower compared to administered questionnaires. Some disadvantages are possible misinterpretation of questions, possible problems with language, and a lower response rate (usually small enough to require a second or even a third mailing).
Hand-delivered or personally administered questionnaires: the researchers personally administer the questionnaire to the participants, usually at the participants’ workplace, residence, or any other adequate location. Some advantages are faster response compared to mail questionnaires, the researcher can clarify questions for the participants, the researcher can motivate honest answers by emphasising the participants’ contribution, and personal persuasion increases the response rate. A possible disadvantage is that the researcher may introduce his or her personal bias, and the responses may vary compared to mail questionnaires.
Online questionnaires: using online questionnaires enables the researcher to collect large volumes of data quickly and at low cost; they provide direct access to research populations; they can be made friendly and attractive, thus encouraging higher response rates; and data-entry errors are often low. Some disadvantages are sample bias, dependence on respondents’ technology skills, and concerns about anonymity, privacy, and confidentiality (Fleming and Bowden, 2009).
Content synthesis
The chapter aims to help the reader to understand the main steps in conducting questionnaire surveys, the rules that researchers should follow to guarantee high-quality surveys and the importance of careful planning and sound methodology.
The steps in conducting a questionnaire survey are the following: (1) defining objectives, (2) selecting a sample, (3) writing items, (4) constructing the questionnaire, (5) pretesting, (6) preparing a letter of transmittal, (7) sending out the questionnaire and follow-ups, and (8) analyzing the results and preparing the research report. Surveyors should try to make questionnaires attractive and easy to complete. They should also number the questionnaire items and pages; include the name and address of the person to whom the form should be returned; include brief, clear instructions; and provide examples before any unusual items.
The overall purpose of the chapter is to guide readers, especially graduate students of education and teachers, in applying the tools, procedures and techniques needed to design and conduct a questionnaire survey effectively in educational research.
The authors argue that, given the objectives of a survey, graduate students should know the rules of questionnaire format and how to write both closed-form and open-ended items to measure those objectives.
The information provided is interesting because questionnaires are useful instruments for obtaining access to organisations and, more specifically, for obtaining evidence of consensus among respondents on different issues. With careful planning and sound methodology, the mail questionnaire can be a very valuable research tool in education.
Closing summary
Chapter 8 provides a clear explanation of the steps in conducting a questionnaire survey and the set of rules that researchers should apply when conducting one. Among these rules are: define the problem clearly, list the objectives, construct neat items, make the questionnaire attractive, and include the name and address of the person to whom the form should be returned. Regarding form, the questionnaire should include brief, clear instructions, use examples before any unusual items, organize the questions in a logical sequence, and be easy to complete. In relation to the organization of content, the questionnaire should include a transitional sentence when moving to a new topic, to help respondents switch their trains of thought; it should begin with a few interesting and non-threatening items; important items should not be placed at the end, while threatening or difficult questions should go near the end; and items should be meaningful to the respondents. Finally, if attitudes are being measured, the researcher should investigate respondents’ familiarity with the topic (by trying out the questionnaire beforehand with a small sample) and be careful with anonymity, because anonymous non-respondents cannot be identified for follow-up (whether anonymity is appropriate depends on the goals of the study).

The authors recommend pre-testing the questionnaire, which involves the following: select a sample of individuals from a population similar to the study subjects and ask them to restate their understanding of each question in their own words, to make sure the items are clearly stated; administer the questionnaire to this sample to check the percentage of replies; read the subjects’ comments and make the changes needed to improve the instrument; and make a brief analysis of the pre-test results, then make the necessary changes (adding questions, correcting wording, etc.). The next step is to prepare a letter of transmittal; the authors also say it is important to pre-contact the sample to secure cooperation.
The letter must be brief and precise, explain good reasons for responding, assure privacy and confidentiality where possible, and be associated with a professional institution or organization (an authority symbol).
The questionnaire can be a very valuable research tool in education. It is a data-collection tool in which written questions are presented to be answered by a selected sample of respondents. Collecting research data with questionnaires requires careful planning and sound methodology. The authors describe in detail all the steps that must be taken to carry out a successful questionnaire survey.
The key to carrying out a satisfactory questionnaire study is to begin by clearly defining the research problem and listing specific objectives or hypotheses. That is, the researcher needs a clear understanding of what s/he hopes to obtain from the results; otherwise it will be very difficult to make sound decisions about selecting a sample, constructing the questionnaire, and choosing methods for analyzing the data. Identifying the target population and selecting a sample are also key to a successful questionnaire survey.
The researcher also must be very careful in designing and constructing items. The qualities of a good questionnaire are the following: clarity; short items; no items that combine two separate ideas; no technical terms, jargon or confusing words; general questions asked before specific ones; no biased or leading questions (respondents are eager to please); and no questions that may be psychologically threatening (these lower morale). The authors suggest sending the questionnaires and follow-ups by special delivery mail. The questionnaires must be neat and carefully planned.
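Some of the item-writing rules above can be checked mechanically as a first pass. The following is a toy heuristic screen of my own (not the authors' method): it flags overly long items, items containing "and" as possibly double-barreled, and items using words from a small, hypothetical jargon list. Such heuristics only raise warnings; human judgment still decides.

```python
# Toy heuristic screen (an illustration, not Borg & Gall's procedure) for
# a few item-writing rules: short items, no double-barreled items, no jargon.

JARGON = {"paradigm", "operationalize", "psychometric"}  # hypothetical list

def screen_item(item, max_words=25):
    """Return a list of warnings for a questionnaire item (may be empty)."""
    warnings = []
    words = item.lower().replace("?", "").split()
    if len(words) > max_words:
        warnings.append("item may be too long")
    if " and " in item.lower():
        warnings.append("possible double-barreled item")
    if JARGON & set(words):
        warnings.append("contains jargon")
    return warnings

print(screen_item("Do you find the textbook and the lectures useful?"))
# ['possible double-barreled item']
```

The double-barreled example shows why the rule matters: a respondent who likes the textbook but not the lectures has no accurate way to answer, so the item should be split in two.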
Below are two diagrams that synthesize statistical techniques, and a glossary that can help in understanding some of the terms useful in conducting research.
Vern Lindberg (2000) “Uncertainties and Error Propagation”