Monday, October 5, 2009

How do you know if a piece of research work is good or not? How is it evaluated?

Research is the systematic process of collecting and analyzing information to increase our understanding of the phenomenon under study. It is the function of the researcher to contribute to the understanding of the phenomenon and to communicate that understanding to others.
Evaluation is a methodological area that is closely related to, but distinguishable from, more traditional social research. Evaluation uses many of the same methodologies as traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders, and other skills that social research in general relies on far less. Here we introduce the idea of evaluation and some of the major terms and issues in the field.

Probably the most frequently given definition is that evaluation is the systematic assessment of the worth or merit of some object. That definition is imperfect, though: there are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few. Better perhaps is a definition that emphasizes the information-processing and feedback functions of evaluation: the systematic acquisition and assessment of information to provide useful feedback about some object.
Both definitions agree that evaluation is a systematic endeavor, and both use the deliberately ambiguous term 'object', which could refer to a program, policy, technology, person, need, activity, and so on. The latter definition emphasizes acquiring and assessing information rather than assessing worth or merit, because all evaluation work involves collecting and sifting through data and making judgements about the validity of the information and of the inferences we derive from it, whether or not an assessment of worth or merit results.

The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically driven feedback.

There are many different types of evaluations depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated -- they help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object -- they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and, estimating the relative costs associated with the object.
Formative evaluation includes several evaluation types:
• needs assessment determines who needs the program, how great the need is, and what might work to meet the need
• evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
• structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes
• implementation evaluation monitors the fidelity of the program or technology delivery
• process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures
Summative evaluation can also be subdivided:
• outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
• impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole
• cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
• secondary analysis reexamines existing data to address new questions or to use methods not previously employed
• meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question (the last two ideas are sketched in the example after this list)
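The last two summative types reduce to simple arithmetic, and a rough sketch can make them concrete. The Python snippet below is illustrative only: the function names and every number are invented for this post, not taken from any real evaluation. It treats cost-benefit analysis as a comparison of dollar-valued benefits and costs, and fixed-effect meta-analysis as an inverse-variance-weighted average of per-study outcome estimates.

```python
import math

def cost_benefit(benefits_usd, costs_usd):
    """Cost-benefit analysis in miniature: once outcomes and inputs are
    both expressed in dollars, return (net benefit, benefit-cost ratio)."""
    return benefits_usd - costs_usd, benefits_usd / costs_usd

def fixed_effect_meta(effects, std_errors):
    """Fixed-effect meta-analysis: pool per-study outcome estimates with
    inverse-variance weights, so more precise studies count for more."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Invented figures for illustration only.
net, ratio = cost_benefit(benefits_usd=120_000.0, costs_usd=80_000.0)
print(f"net benefit = ${net:,.0f}, benefit-cost ratio = {ratio:.2f}")

effects = [0.30, 0.10, 0.25]      # hypothetical per-study effect estimates
std_errors = [0.10, 0.15, 0.08]   # their standard errors
pooled, se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect = {pooled:.2f}, 95% CI +/- {1.96 * se:.2f}")
```

A real cost-benefit analysis would also discount future benefits and costs, and a real meta-analysis would check for between-study heterogeneity before trusting a fixed-effect pooling; the sketch omits both on purpose.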
15 Steps to Good Research
1. Define and articulate a research question (formulate a research hypothesis).
2. Identify possible sources of information in many types and formats.
3. Judge the scope of the project.
4. Reevaluate the research question based on the nature and extent of information available and the parameters of the research project.
5. Select the most appropriate investigative methods (surveys, interviews, experiments) and research tools (periodical indexes, databases, websites).
6. Plan the research project.
7. Retrieve information using a variety of methods (draw on a repertoire of skills).
8. Refine the search strategy as necessary.
9. Write and organize useful notes and keep track of sources.
10. Evaluate sources using appropriate criteria.
11. Synthesize, analyze and integrate information sources and prior knowledge.
12. Revise hypothesis as necessary.
13. Use information effectively for a specific purpose.
14. Understand such issues as plagiarism, ownership of information (including the implications of copyright), and the costs of information.
15. Cite properly and give credit for sources of ideas.


According to Simon Peyton Jones, "How to write a good research paper":
Structure

• Abstract (4 sentences)
• Introduction (1 page)
• The problem (1 page)
• My idea (2 pages)
• The details (5 pages)
• Related work (1-2 pages)
• Conclusions and further work (0.5 pages)
Criteria for a good piece of research
by Simon Peyton Jones and Alan Bundy
Major criteria

Here are the major criteria against which your proposal will be judged. Read through your case for support repeatedly, and ask whether the answers to the questions below are clear, even to a non-expert.
• Does the proposal address a well-formulated problem?
• Is it a research problem, or is it just a routine application of known techniques?
• Is it an important problem, whose solution will have useful effects?
• Is special funding necessary to solve the problem, or to solve it quickly enough, or could it be solved using the normal resources of a well-found laboratory?
• Do the proposers have a good idea on which to base their work? The proposal must explain the idea in sufficient detail to convince the reader that the idea has some substance, and should explain why there is reason to believe that it is indeed a good idea. It is absolutely not enough merely to identify a wish-list of desirable goals (a very common fault). There must be significant technical substance to the proposal.
• Does the proposal explain clearly what work will be done? Does it explain what results are expected and how they will be evaluated? How would it be possible to judge whether the work was successful?
• Is there evidence that the proposers know about the work that others have done on the problem? This evidence may take the form of a short review as well as representative references.
• Do the proposers have a good track record, both of doing good research and of publishing it? A representative selection of relevant publications by the proposers should be cited. Absence of a track record is clearly not a disqualifying characteristic, especially in the case of young researchers, but a consistent failure to publish raises question marks.
Secondary criteria
Some secondary criteria may be applied to separate closely-matched proposals. It is often essentially impossible to distinguish in a truly objective manner among such proposals and it is sad that it is necessary to do so. The criteria are ambiguous and conflict with each other, so the committee simply has to use its best judgement in making its recommendations.
• An applicant with little existing funding may deserve to be placed ahead of a well-funded one. On the other hand, existing funding provides evidence of a good track record.
• There is merit in funding a proposal to keep a strong research team together; but it is also important to give priority to new researchers in the field.
• An attempt is made to maintain a reasonable balance between different research areas, where this is possible.
• Evidence of industrial interest in a proposal, and of its potential for future exploitation, will usually count in its favour. The closer the research is to producing a product, the more industrial involvement is required, and this should usually include some industrial contribution to the project. The case for support should include some 'route to market' plan, i.e. you should have thought about how the research will eventually become a product -- identifying an industrial partner is usually part of such a plan.
• A proposal will benefit if it is seen to address recommendations of Technology Foresight. It is worth looking at the relevant Foresight Panel reports and including quotes in your case for support that relate to your proposal.


RESOURCES:
http://www.socialresearchmethods.net/kb/intreval.htm
http://www.library.georgetown.edu/tutorials/research-guides/15-steps
http://research.microsoft.com/en-us/um/people/simonpj/papers/giving-a-talk/writing-a-paper-slides.pdf
http://research.microsoft.com/en-us/um/people/simonpj/papers/Proposal.html
