
Collection of Paper Criteria from various sources

From CHI 1997 call, Types of Papers (short and long descriptions)

The CHI community consists of researchers and practitioners from many different disciplines and intellectual traditions. The papers review process tries to rigorously review all submissions in a manner that takes into account the different criteria from different parts of the community. For reviewers to do this effectively, they need an accurate assessment of the type of each paper they read. Please select a type of paper from the following list that best describes your submission and write it in the appropriate place on Cover Page Two. If you feel that your submission does not fit any of these types, or if it seems to match more than one type description, please contact a Papers Co-Chair for help in best classifying your submission.

Empirical Papers describe the collection and interpretation of data concerning the design or use of an HCI artifact. Data might include interviews, observations, surveys, or experimental manipulations. Both qualitative and quantitative approaches to data collection and analysis are welcome. Quantitative analyses should include appropriate statistical tests. Review criteria include the appropriateness and rationale for the methods of data collection and analysis, and the significance of the conclusions for practice or research in HCI.

  • Empirical papers focus on data collected on the use or design of an HCI artifact. Empirical Papers need to use a sound methodology, so that their conclusions are believable and readers can understand the situations to which those conclusions apply. CHI has a diverse empirical research tradition ranging from the very qualitative to the very quantitative. Some reviewers will NOT share your assumptions about what makes good research, and you will need to justify the appropriateness of your techniques and the significance of the results (not just statistical significance, but the significance to the CHI community). On the other hand, one or more reviewers WILL be steeped in your research tradition and will expect you to apply the appropriate methodological techniques from that tradition. Try to anticipate where expert reviewers might have specific concerns about your approach and provide enough information to reassure them of your methodological rigor.

Experience Papers describe the application of HCI methods, theory or tools to the design or development of an HCI artifact. Review criteria include the value of the reflections abstracted from the experience and their relevance to other designers or to researchers working on related methods, theory or tools.

  • Experience papers describe how HCI methods, theory, or tools were applied to the design or development of an HCI artifact. Experience Papers are judged on the value of the experience to practitioners -- will someone doing HCI work in a development organization learn something from this paper that they can apply to their practice? If you are describing something that has been done before, you need to have significant added value -- e.g., a combination of methods that is worth more than the individual ones, or ironing out some practical problems in an academic method. It is important to focus on and draw out the information useful to practitioners, rather than just describing your experience.

Systems Papers describe the software and technology associated with a novel interactive application, user interface feature, user interface design or development tool. Review criteria include the originality and relevance to other user interface developers of the system's architecture and behavior. Authors should be clear about the extent to which the system has been implemented. Authors are encouraged to develop a coordinated demonstration or video submission of the system for CHI 97.

  • Systems papers describe the design and implementation of a tool or application that has a novel UI component. The new idea must be original, of interest to a reasonably broad subset of the CHI community, and be based on sound reasoning. Systems papers should be extremely clear about what has and has not been implemented. They should also provide enough information that an experienced researcher could implement a similar system. It is also important to show at least informal evidence that the system is usable and useful to its intended users. This is particularly important if a reviewer's intuitions about usability and usefulness are not consistent with yours -- it's hard to argue against solid data that users liked the system or were more productive with it.

Theory Papers describe principles, concepts, or models on which work in HCI (empirical, systems, experience, methodology) might be based; authors of theoretical papers are expected to position their ideas within a broad context of HCI frameworks and theories. Review criteria include the originality or soundness of the analysis provided as well as the relevance of the theoretical content to HCI practice and/or research.

  • Theory papers describe principles, concepts, or models on which HCI work can be based. Theories range from constructs about physical and perceptual behavior to more cognitive and organizational issues. Again, CHI draws from many theoretical traditions. Your paper has to make sense both to those who are highly knowledgeable about your traditions and to those who want to begin using your theoretical constructs and predictions in their own work. You will improve your chances of a fair review if you are very clear about how your paper builds on or contrasts with related work (particularly work in this area that leads to different conclusions than your work). You will also need to make clear the relevance of your results to research and practice in HCI; don't assume that the connections will be obvious to the reviewers.

Methodology Papers describe a novel method for the design or evaluation of an HCI artifact; the method may be intended for use in research or development settings (or both), but the paper should be clear about the intended audience. Review criteria include the originality and soundness of the method and its usefulness for the intended audience.

  • Methodology papers describe a novel method for the design or evaluation of an HCI artifact. Methodology Papers are judged very similarly to Theory Papers. The additional criterion is that the methodology must be described so that a reader can judge its usefulness. One way to help readers make that judgement is to describe the methodology in detail, which may be difficult to do in eight pages. If you cannot do so, you are obliged to tell the reader how to find the appropriate supplementary materials to use this method. Another approach is to include a study of people using the methodology that demonstrates how it improved the process it was intended to address.

Opinion Papers present the author's well-supported opinion about some aspect of HCI. Review criteria include the impact and quality of the argumentation, including the experience (research or practice) used to support the opinion. Authors of opinion papers are urged to contact one of the Papers Co-Chairs in advance of submitting a paper, to get feedback on their idea, since CHI rarely accepts opinion papers.

  • New this year are Opinion papers. Sometimes papers are submitted to CHI that do not meet any of the submission types above, but are provocative essays on some area of interest to the CHI community. Such papers are a challenge to write well, but when a good one comes along, it is often the most discussed paper at the conference. For an Opinion Paper to be accepted, it must cover a topic of interest to a relatively broad segment of the CHI audience, have well-supported arguments (including data from research or practice), and be expected to have a stimulating effect on the CHI community. Particularly high standards will be expected of an Opinion Paper -- it may be harder to get an Opinion Paper accepted than any other type of submission.

From Scott Hudson

These are notes around the above: I would really like to put some contribution type-specific questions (probably 3 with numerical scores, but no text) at the top of the review form to try to focus the reviewers' attention on the fact that these things should be evaluated differently. Basically if the authors select "... quantitative..." then the reviewers would provide scores for questions such as:

and for "... qualitative ..." the reviewers would provide scores for questions such as:

(I just made those up without that much thought, so don't take them too seriously.)

I would propose that we compute a "contribution-type specific" score as the average of the 3 questions and that we do something to raise the salience of that score at the meeting. The most extreme version of that would be to sort the papers at the meeting by a weighted combination (50/50, 75/25?) of "overall rating" and "contribution-type specific" score. A less radical version of that would be to do the usual sort by "overall rating" but put a column next to it which indicates how much up or down in the sort the paper would have been placed if it had been sorted by contribution-type specific score.
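The scoring arithmetic proposed above can be sketched in a few lines of Python. This is only a hypothetical illustration of the proposal, not an actual review-system implementation; the function names, the tuple layout, and the default 50/50 weight are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the proposal above: average the three
# contribution-type-specific question scores, combine them with the
# overall rating, and report how far each paper would shift in the
# sort. Names and the 50/50 default weight are illustrative only.

def specific_score(question_scores):
    """Average of the contribution-type-specific question scores."""
    return sum(question_scores) / len(question_scores)

def combined_score(overall, specific, weight=0.5):
    """Weighted combination of overall rating and type-specific score.

    weight=0.5 gives the 50/50 split; weight=0.75 gives 75/25."""
    return weight * overall + (1 - weight) * specific

def rank_deltas(papers):
    """For each (paper_id, overall, specific) tuple, return how many
    positions the paper would move DOWN if the meeting sort used the
    type-specific score instead of the overall rating (negative means
    the paper would move up)."""
    by_overall = sorted(papers, key=lambda p: p[1], reverse=True)
    by_specific = sorted(papers, key=lambda p: p[2], reverse=True)
    pos = {p[0]: i for i, p in enumerate(by_specific)}
    return {p[0]: pos[p[0]] - i for i, p in enumerate(by_overall)}
```

For example, question scores of 4, 5, and 3 average to 4.0, and combining that with an overall rating of 4.5 under the 50/50 weighting gives 4.25; `rank_deltas` produces the "how far up or down" column described in the less radical variant.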

From our CHI 2009 Planning Meeting - list of contribution types

Dan Olson's original list

  1. Tools and Infrastructure
    1. Has it been implemented?
    2. Can the essentials be implemented reasonably from the paper and prior work?
    3. In what way does this simplify or improve the process of creating new user interfaces?
    4. User study not required
  2. Creativity and Vision
    1. Does this paper present a new innovation in how a user interface might appear or behave?
    2. Will this stimulate thought on new ways to interact?
  3. Usable techniques
    1. Does this present a new way to interact?
    2. Is this technique a measurable improvement over previous techniques for similar tasks?
    3. Is the experiment valid and correctly analyzed?
  4. Understanding users (ethnographic studies)
    1. Has an interesting user population been studied?
    2. Is the study appropriately valid?
    3. Do we learn something new and important about this set of users?
  5. Usability Science
    1. Is this a new technique for understanding or predicting how usable systems can be developed?
    2. What are the advantages of this evaluation technique over previous techniques?