Oct 31st, 2017
The results are now available.

Oct 30th, 2017
The solutions are now available.

Sept 8th, 2017
An update for Challenge 15 is available, but it will not count in the evaluation.

Sept 4th, 2017
Updated mailing list and submission information.

Aug 23rd, 2017
The preliminary results have been sent out to participants, and are now available.

July 9th, 2017
We fixed the intensities in the TSV archive for challenges 046-243.

June 22nd, 2017
We added Category 4 for a subset of the data files.

May 22nd, 2017
We have improved challenges 29, 42, 71, 89, 105, 106 and 144.

April 26th, 2017
The rules and challenges of CASMI 2017 are now public!

Jan 20th, 2017
Organisation of CASMI 2017 is underway, stay tuned!

Contest Rules

The goal for participants is to determine the correct molecular structure based on the data provided by the contest organisers. Spectral data will be available as peak lists in MGF and TSV formats. For challenges 1-45, additional metadata is available in CSV format. This year the contest has four categories, with different aims and different restrictions on the information that may be used. Within each category, the performance of different approaches can be compared.
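For illustration, here is a minimal sketch (not official CASMI tooling) of reading peak lists from an MGF file in Python. The `BEGIN IONS`/`END IONS` delimiters and `KEY=value` header lines are standard MGF conventions; the specific metadata keys and spectrum values shown are hypothetical and may differ from the CASMI files.

```python
def parse_mgf(text):
    """Parse MGF text into a list of spectra, each a dict with
    header parameters and a list of (m/z, intensity) peaks."""
    spectra, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "BEGIN IONS":
            current = {"params": {}, "peaks": []}
        elif line == "END IONS":
            spectra.append(current)
            current = None
        elif current is not None and line:
            if "=" in line and not line[0].isdigit():
                # header line, e.g. PEPMASS=285.0795
                key, value = line.split("=", 1)
                current["params"][key] = value
            else:
                # peak line: m/z and intensity, whitespace-separated
                mz, intensity = line.split()[:2]
                current["peaks"].append((float(mz), float(intensity)))
    return spectra

# Hypothetical example spectrum for demonstration only
example = """BEGIN IONS
PEPMASS=285.0795
CHARGE=1+
100.0757 1200.5
153.0659 850.0
END IONS
"""
spectra = parse_mgf(example)
```

The TSV peak lists can be read even more simply, as two whitespace-separated columns of m/z and intensity per line.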

The CASMI contest comes with a set of rules that aim to clarify how the submissions will be evaluated and ranked. This ensures that the evaluation criteria are transparent, objective and known in advance. All contributions should follow the principles of reproducible research and accurately describe how the results were achieved. From the submissions, several result pages and tables will be created automatically, so the submissions need to obey a particular format. The details are also given on the Submission and Evaluation tab.

This year, we introduce a two-stage submission procedure, inspired by the DREAM Challenges: teams send in an initial submission, which undergoes the CASMI evaluation. We will resolve any technical issues in the submissions with the participants, and publish a summarised, pseudonymised leaderboard showing the current overall performance, where only participants know their own pseudonym. Incomplete submissions are possible, although the summary will then not reflect the full potential of the approach on the complete contest dataset. All participants can update their submissions before the final deadline.

Participants can enter at most three submissions per approach and category if they wish to assess the influence of different strategies on the outcome. Only the best submission counts towards the declaration of the winners and the follow-up ranking. The rationale behind the different submissions must be described clearly in the abstract.