SUBMISSIONS

Reviewer Guidelines

WACV 2024 Reviewer Guidelines (adapted from CVPR 2022)

Thank you for volunteering your time to review for WACV24! We rely heavily on the time and expertise of our reviewers to maintain a high-quality technical program. This document explains what is expected of all members of the WACV24 reviewing committee.

The WACV 2024 Reviewing Timeline

Application vs Algorithms 

Blind Reviews 

Check your Papers 

What to Look For 

Check for Reproducibility 

Check for Attribution of Data Assets 

Check for Use of Personal Data and Human Subjects 

Be Specific 

Ethics for Reviewing Papers 

CMT Instructions 

Reviewer FAQ 

The WACV 2024 Reviewing Timeline

Round 1

Paper Submission Deadline: June 28, 2023

Supplementary Submission Deadline: June 30, 2023

Papers Assigned to Reviewers: July 7, 2023

Reviews Due: July 28, 2023

Decisions Released to Authors: August 11, 2023

Round 2

Paper Submission Deadline: August 30, 2023

Supplementary Submission Deadline: September 1, 2023

Papers Assigned to Reviewers*: September 8, 2023

Reviews Due: September 29, 2023

Decisions Released to Authors: October 20, 2023

*Includes revised papers from Round 1, which will be assigned to the same reviewers.

Application vs Algorithms

WACV submissions are designated as either “application” or “algorithms” papers, and the two paper types have different reviewing criteria. Please read these instructions carefully and apply the appropriate criteria based on the paper type indicated in the submission.

Application papers must be evaluated on systems-level innovation, novelty of the domain, and comparative assessment. They should not be evaluated solely on the basis of algorithmic novelty (i.e., it is okay to have algorithmic novelty, but it is also okay not to have it). Examples of systems-level innovation include:

  • A new task, i.e., an interesting application of computer vision that the community has never seen before.
  • A new formulation of an existing application.
  • A new way of benchmarking the performance of existing methods.

Algorithms papers must be evaluated according to the standard conference criteria, including algorithmic novelty and quantitative evaluation against current alternative approaches. They should not be evaluated solely on the basis of systems-level innovation (i.e., it is okay to have systems-level novelty, but it is also okay not to have it). Algorithms papers will be similar in style to those at other major computer vision conferences (e.g., CVPR, ICCV, ECCV).

Blind Reviews

Reviewers should make every effort to keep their identities hidden from the authors; in particular, do not include your name or affiliation anywhere in your reviews.

With the increase in popularity of arXiv preprints, sometimes the authors of a paper may be known to the reviewer. Posting to arXiv is NOT considered a violation of anonymity on the part of the authors, and in most cases, reviewers who happen to know (or suspect) the authors’ identity can still review the paper as long as they feel that they can do an impartial job. An important general principle is to make every effort to treat papers fairly whether or not you know (or suspect) who wrote them. If you do not know the identity of the authors at the start of the process, DO NOT attempt to discover them by searching the Web for preprints.

Please read the FAQ at the end of this document for further guidelines on how arXiv prior work should be handled.

Check your Papers

As soon as you get your reviewing assignment, please go through all the papers to make sure that (a) there is no obvious conflict with you (e.g., a paper authored by someone you have collaborated with in the past three years, even if at a different institution; see more examples under “Avoid Conflict of Interest” below) and (b) you feel comfortable reviewing the papers assigned to you. If an issue arises on either point, please let us know right away by emailing the Program Chairs (wacv2024-pcs@googlegroups.com).

Please read the Author Guidelines carefully to familiarize yourself with all official policies (such as those on dual submission and plagiarism). If you think a paper may be in violation of one of these policies, please contact the Area Chair and the Program Chairs. In the meantime, proceed to review the paper assuming no violation has taken place.

What to Look For

Each accepted paper should be technically sound and make an application or algorithmic contribution to the field. Look for what is good or stimulating in the paper; in particular, look for the new application or knowledge advancement the paper offers. We recommend that you embrace novel, brave applications and concepts, even if they have not been tested on many datasets. For example, the fact that a proposed method does not exceed state-of-the-art accuracy on an existing benchmark dataset is not grounds for rejection by itself. Rather, it is important to weigh the novelty and potential impact of the work alongside the reported performance. Minor flaws that can be easily corrected should not be a reason to reject a paper.

Check for Reproducibility

To improve reproducibility in AI research, we highly encourage authors to voluntarily submit their code as part of the supplementary material, especially if they plan to release it upon acceptance. Reviewers may optionally check this code to assess whether the paper’s results are reproducible and trustworthy, but they are not required to do so. Reviewers are also encouraged to use the Reproducibility Checklist as a guide for assessing whether a paper is reproducible. All code and data must be reviewed confidentially, kept private, and deleted after the review process is complete. We expect (but do not require) that accompanying code will be submitted with accepted papers.

Check for Attribution of Data Assets

Authors need to cite the data assets they use (e.g., datasets or code) much as they cite papers. As a reviewer, please carefully check whether a paper has adequately cited the data assets it uses, and comment in the corresponding field of the review form.

Check for Use of Personal Data and Human Subjects

If a paper uses personal data or data from human subjects, the authors must have ethics clearance from an institutional review board (IRB, or equivalent) or clearly describe how ethical principles have been followed. If there is no description of how ethical principles were ensured, or if there are GLARING violations of ethics (whether discussed or not), please inform the Area Chairs and the Program Chairs, who will follow up on each specific case. Reviewers should not attempt to deal with such issues themselves.

IRB review (in the US) or the appropriate local ethics approval is typically required for new datasets in most countries, and it is the dataset creators’ responsibility to obtain it. If the authors use an existing, published dataset, we encourage, but do not require, them to check how the data was collected and whether consent was obtained. Our goal is to raise awareness of possible issues that might be ingrained in our community, so we would like to encourage dataset creators to provide this information to the public.

In this regard, if a paper uses an existing public dataset released by other researchers or research organizations, we encourage, but do not require, the authors to include a discussion of IRB-related issues in the paper. Reviewers should therefore not penalize a paper if such a discussion is NOT included.

Be Specific

Please be specific and detailed in your reviews. Your main critique of the paper should be written as a list of strengths and weaknesses; you can use bullet points here, but also explain your arguments. A single short sentence or a few words do NOT suffice. Your discussion, more than your score, will help the authors, fellow reviewers, and Area Chairs understand the basis for your recommendation, so please be thorough. You should include specific feedback on ways the authors can improve their papers.

In the discussion of related work and references, simply saying “this is well known” or “this has been common practice in the industry for years” is not sufficient: You MUST cite specific publications, including books or public disclosures of techniques. If you do not provide references to support your claim, the Area Chairs are forced to discount it.

Ethics for Reviewing Papers

1. Protect Ideas

As a reviewer for WACV, you have the responsibility to protect the confidentiality of the ideas represented in the papers you review. WACV submissions are not published documents. The work is considered new or proprietary by the authors; otherwise they would not have submitted it. Of course, their intent is to ultimately publish to the world, but most of the submitted papers will not appear in the WACV proceedings. Thus, it is likely that the paper you have in your hands will be refined further and submitted to some other journal or conference. Sometimes the work is still considered confidential by the authors’ employers. These organizations do not consider sending a paper to WACV for review to constitute a public disclosure. Protection of the ideas in the papers you receive means:

  • If you have asked a colleague or student to help in writing a review, you must acknowledge them. There will be a reviewer question allocated for this in CMT.
  • You should not show any results, videos/images, code or any of the supplementary material to non-reviewers.
  • You should not use ideas/code from papers you review to develop your own ideas/code.
  • After the review process, you should destroy all copies of papers and supplementary material and erase any code that the authors submitted as part of the supplementary.

2. Avoid Conflict of Interest

As a reviewer of a WACV paper, it is important for you to avoid any conflict of interest. There should be absolutely no question about the impartiality of any review. Thus, if you are assigned a paper where your review would create a possible conflict of interest, you should return the paper and not submit a review. Conflicts of interest include (but are not limited to) situations in which:

  • You work at the same institution as one of the authors.
  • You have been directly involved in the work and will be receiving credit in some way. If you are a member of an author’s thesis committee and the paper is about their thesis work, then you were involved.
  • You suspect that others might perceive a conflict of interest in your involvement (e.g., the author is your friend).
  • You have collaborated with one of the authors in the past three years (more or less). Collaboration is usually defined as having written a paper or funded grant proposal together, although you should use your judgment.
  • You were the MS/PhD advisor or advisee of one of the authors. Like most funding agencies and publications, we consider advisees to represent a lifetime conflict of interest. 

While the organizers make every effort to avoid such conflicts in the review assignments, they may nonetheless occasionally arise. If you recognize the work or the author and feel it could present a conflict of interest, email the Program Chairs (wacv2024-pcs@googlegroups.com) as soon as possible so they can find someone else to review it.

3. Be Professional

Belittling or sarcastic comments have no place in the reviewing process. The most valuable comments in a review are those that help the authors understand the shortcomings of their work and how they might improve it. Write a courteous, informative, incisive, and helpful review that you would be proud to sign with your name (were it not anonymous).

4. Large Language Model (LLM) Ethics

Following the ICCV 2023 policy, WACV 2024 does not allow the use of Large Language Models or online chatbots such as ChatGPT in any part of the reviewing process. There are two main reasons: (a) Reviewers must provide comments that faithfully represent their original opinions on the papers being reviewed, and it is unethical to resort to Large Language Models (even an offline system) to automatically generate reviewing comments that do not originate from the reviewer’s own opinions; (b) Online chatbots such as ChatGPT collect conversation history to improve their models, so their use in any part of the reviewing process would violate the confidentiality policy.

Additional Tips for Writing Good Reviews

  • Take the time to write good reviews. Short reviews are unhelpful to authors, other reviewers, and Area Chairs; if you have agreed to review a paper, you should take enough time to write a thoughtful and detailed review.
  • Be specific when you suggest that the writing needs to be improved. If there is a particular section that is unclear, point it out and give suggestions for how it can be clarified.
  • Be specific about novelty. Claims in a review that the submitted work “has been done before” MUST be backed up with specific references and an explanation of how closely they are related. At the same time, for a positive review, be sure to summarize what novel aspects are most interesting in the Strengths section.
  • Do not reject papers solely because they are missing citations or comparisons to prior work that has only been published without review (e.g., arXiv or technical reports). Refer to the FAQ below for more details on handling arXiv prior art.
  • Do not give away your identity by asking the authors to cite several of your own papers.
  • If you think the paper is out of scope for WACV’s subject areas, clearly explain why in the review. Then suggest other publication possibilities (journals, conferences, workshops) that would be a better match for the paper. However, unless the area mismatch is extreme, you should keep an open mind, because we want a diverse set of good papers at the conference.
  • The tone of your review is important. A harshly written review will be resented by the authors, regardless of whether your criticisms are true. If you take care, it is always possible to word your review constructively while staying true to your thoughts about the paper.
  • Avoid referring to the authors in the second person (“you”). It is best to avoid the term “the authors” as well, because you are reviewing their work and not the person. Instead, use the third person (“the paper”). Referring to the authors as “you” can be perceived as being confrontational, even though you may not mean it this way.
  • Use Neutral Pronouns: If it is necessary to refer to authors or reviewers directly, use neutral pronouns or names, for example, you could say “the authors” or “they” and “R1” or “the reviewer” rather than “he” or “she”.
  • Be generous about giving the authors new ideas for how they can improve their work. You might suggest a new technical tool that could help, a dataset that could be tried, an application area that might benefit from their work, or a way to generalize their idea to increase its impact.

Finally, keep in mind that a thoughtful review not only benefits the authors, but also yourself. Your reviews are read by other reviewers and especially the Area Chairs, in addition to the authors. Unlike the authors, the Area Chairs know your identity. Being a helpful reviewer will generate good will towards you in the research community.

CMT Instructions

Once you’ve been notified by email that papers have been assigned to you, please log into the CMT site (https://cmt3.research.microsoft.com/WACV2024), choose the “Reviewer” role on top, and follow the steps below.

1. Download your papers.

To download individual papers, you can click the links underneath individual paper titles. Or, you can click the “Actions” button in the top right corner and then choose “Download Files”. This allows you to download a ZIP file containing all the papers plus supplementary files (if available).

2. Check for possible conflict or submission rule violations.

Contact the Program Chairs (wacv2024-pcs@googlegroups.com) immediately if:

    1. You think you are conflicted with the paper (see the section entitled “Avoid Conflict of Interest” above).
    2. You think the paper violates submission rules regarding anonymity, double submission, or plagiarism (please refer to the Author Guidelines for precise definitions of what is and isn’t considered acceptable). In the meantime, go ahead and review the paper as if there is no violation. The Program Chairs will follow up, but this may take a bit of time.

3. Review papers and assign them a preliminary (pre-rebuttal) rating.

For each paper, click “Edit Review” in the Review column to open the review form. You can hover the mouse over the “?” symbol next to each question for a more detailed explanation. Before you start writing your reviews, make sure you have read the Reviewer Guidelines above.

4. (Optional) Review papers offline.

To enable offline reviewing, go to “Actions -> Import Reviews”. You can select papers and click “Download” to obtain XML review stubs, then update the files as needed. Once you are done updating, upload the files from the same page. We suggest using an XML editor to edit the files. Always verify a review after uploading by inspecting it online.
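
If you prefer to fill in the stubs with a script rather than an XML editor, a short helper like the sketch below can reduce the risk of producing malformed files. This is a minimal sketch only: the file names and the tag/attribute names used here (“Question”, “Id”, “Answer”) are hypothetical placeholders rather than CMT’s actual stub schema, so inspect a downloaded stub to confirm the real structure before adapting it.

    # Minimal sketch for editing a CMT review stub offline (Python).
    # NOTE: the tag and attribute names below are hypothetical placeholders;
    # check an actual downloaded stub for the real schema before using this.
    import xml.etree.ElementTree as ET

    tree = ET.parse("review_stub.xml")  # stub downloaded via "Import Reviews"
    root = tree.getroot()

    # Fill in the answer for a (hypothetical) question element.
    for question in root.iter("Question"):
        if question.get("Id") == "strengths_and_weaknesses":
            answer = question.find("Answer")
            if answer is not None:
                answer.text = "Strengths: ... Weaknesses: ..."

    # Write the updated stub back out for upload on the same CMT page.
    tree.write("review_filled.xml", encoding="utf-8", xml_declaration=True)

Whatever tool you use, the uploaded review should still be verified online, as noted above.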

Reviewer FAQ

Q. How should code submission be handled?
A. Please read the Author FAQ regarding code submission.
Code submissions should be anonymized (e.g., via anonymous_github). If there is any information that may reveal the identity of the authors, please notify the Program Chairs (wacv2024-pcs@googlegroups.com), but continue to review the paper as if the code were properly anonymized.

Q. Is there a minimum number of papers I should accept or reject?
A. No. Each paper should be evaluated in its own right. If you feel that most of the papers assigned to you have value, you should accept them. It is unlikely that most of them are bad enough to justify rejecting them all; however, if that is the case, provide clear and very specific comments in each review. Do NOT assume that your stack of papers should necessarily have the same acceptance rate as the conference as a whole.

Q. Can I review a paper I already saw on arXiv and hence know who the authors are?
A. In general, yes, unless you are conflicted with one of the authors. See next question below for guidelines.

Q. How should I treat papers for which I know the authors?
A. Reviewers should make every effort to treat each paper impartially, whether or not they know who wrote the paper. For example: It is not OK for a reviewer to read a paper, think “I know who wrote this; it’s on arXiv; they are usually quite good” and accept the paper based on that reasoning. Conversely, it is also not OK for a reviewer to read a paper, think “I know who wrote this; it’s on arXiv; they are no good” and reject the paper based on that reasoning.

Q. Should authors be expected to cite related arXiv papers or compare to their results?
A. Consistent with good academic practice, the authors should cite all sources that inspired and informed their work. This said, asking authors to thoroughly compare their work with arXiv reports that appeared shortly before the submission deadline imposes an unreasonable burden. We also do not wish to discourage the publication of similar ideas that have been developed independently and concurrently. Reviewers should keep the following guidelines in mind:
  • Authors are not required to discuss and compare their work with recent arXiv reports, although they should properly acknowledge those that directly and obviously inspired them.
  • Failing to cite an arXiv paper or failing to beat its performance SHOULD NOT be SOLE grounds for rejection.
  • Reviewers SHOULD NOT reject a paper solely because another paper with a similar idea has already appeared on arXiv. If the reviewer suspects plagiarism or academic dishonesty, they are encouraged to bring these concerns to the attention of the Program Chairs.
  • It is acceptable for a reviewer to suggest that an author should acknowledge or be aware of something on arXiv.

Q. How should I treat the supplementary material?
A. The supplementary material is intended to provide details of derivations and results that do not fit within the paper format or space limit. Ideally, the paper should indicate when to refer to the supplementary material, and you need to consult the supplementary material only if you think it is helpful in understanding the paper and its contribution. According to the Author Guidelines, the supplementary material MAY NOT include results obtained with an improved version of the method (e.g., following additional parameter tuning or training), or an updated or corrected version of the submission PDF. If you find that the supplementary material violates these guidelines, please contact the Program Chairs.

Q. What about arXiv papers?
A. The field has decided that dissemination on arXiv facilitates the rapid spread of information within the field. arXiv papers are not “published” but are understood to be “pre-publications.” This open pre-publication process provides a form of community review where problems can be detected (much like formal peer review). arXiv papers are often corrected and modified; the site is set up to support this scientific process of revision.

Q. May the authors build a project website related to their arXiv paper?
A. Yes, the authors may, as long as the project website itself does not contain any information that would link it to their WACV submission.

Q. A paper is using a withdrawn dataset, such as DukeMTMC-ReID or MS-Celeb-1M. How should I handle this?
A. Reviewers are advised that the choice to use a withdrawn dataset, while not in itself grounds for rejection, should invite very close scrutiny. Reviewers should flag such cases in the review form for further consideration by ACs and PCs. Consider questions such as: Do the authors explain why they had to do this? Is this explanation compelling? Is there really no alternative dataset that could have been used? Remember, authors might simply not know the dataset had been withdrawn. If you believe the paper could be accepted without the authors’ use of a withdrawn dataset, then it is natural to advise the authors to remove the experiments associated with this dataset.

Q. If a paper did not evaluate on a withdrawn dataset, can I ask the authors to do so?
A. It is a violation of policy for a reviewer or Area Chair to require comparison on a withdrawn dataset without detailed consultation with the PCs or DEI chairs.