Sunday, April 5, 2009

Research evaluation for computer science (ten rules)

1. Computer science is an original discipline combining science and engineering. Researcher evaluation must be adapted to its specificity.
2. A distinctive feature of CS publication is the importance of selective conferences and books. Journals do not necessarily carry more prestige.
3. To assess impact, artifacts such as software can be as important as publications.
4. The order in which a CS publication lists authors is generally not significant. In the absence of specific indications, it should not serve as a factor in researcher evaluation.
5. Numerical measurements such as publication-related counts must never be used as the sole evaluation instrument. They must be filtered through human interpretation, particularly to avoid errors, and complemented by peer review and assessment of outputs other than publications.
6. Publication counts are not adequate indicators of research value. They measure productivity, but neither impact nor quality.
7. Any evaluation, especially a quantitative one, must rest on clear, published criteria.
8. Numerical indicators must not serve for comparisons across disciplines.
9. In assessing publications and citations, ISI Web of Science is inadequate for most of CS and must not be used. Alternatives include Google Scholar, CiteSeer, and (potentially) ACM's Digital Library.
10. Assessment criteria must themselves undergo assessment and revision.

Bertrand Meyer (ETH Zurich)
Christine Choppy (LIPN, UMR CNRS7030, Université Paris 13)
Jørgen Staunstrup (IT University of Copenhagen)
Jan van Leeuwen (Utrecht University)

