News

Sept 8th, 2017
An update for Challenge 15 is available, but it will not count in the evaluation.

Sept 4th, 2017
Updated mailing list and submission information.

Aug 23rd, 2017
The preliminary results have been sent out to participants, and are now available.

July 9th, 2017
We fixed the intensities in the TSV archive for challenges 046-243.

June 22nd, 2017
We added Category 4 on a subset of the data files.

May 22nd, 2017
We have improved challenges 29, 42, 71, 89, 105, 106 and 144.

April 26th, 2017
The rules and challenges of CASMI 2017 are now public!

Jan 20th, 2017
Organisation of CASMI 2017 is underway; stay tuned!


Participant summary of the preliminary evaluation
The contest is still ongoing, with the actual contest submission deadline coming up in September. We have received preliminary submissions from several participants and provide the results here in pseudonymised form, where each participant knows only their own pseudonym. The purpose is to resolve any technical issues in the submissions together with the participants, including the correct classification into the categories.

Category 1: Best Structure Identification on Natural Products

Participant  F1 score  Mean rank  Median rank  Top  Top3  Top10  Misses  TopPos  TopNeg  Mean RRP  Median RRP
b511deec          399       5.61          1.5    8    17     19      21       8       0     0.735       0.848
ed955ba9          469     235.02          4.0   11    19     28       4       9       2     0.965       0.999
cd2a89ef          266      11.48          5.0    3     9     18      14       2       1     0.939       0.963
b224a812          141       4.71          4.5    0     3     12      33       0       0     0.512       0.559
The above table is also available as a CSV download.

Category 2: Best Automatic Structural Identification - In Silico Fragmentation Only

Participant  F1 score  Mean rank  Median rank  Top  Top3  Top10  Misses  TopPos  TopNeg  Mean RRP  Median RRP
a9c019c6          449     407.38         64.5    3    17     35       0       2       1     0.942       0.984
33505ce4          526     283.32         43.0    7    15     44       0       4       3     0.880       0.964
3e04b006          577     221.61         34.0    6    20     48       0       4       2     0.916       0.969
The above table is also available as a CSV download.

Category 3: Best Automatic Structural Identification - Full Information

Participant  F1 score  Mean rank  Median rank  Top  Top3  Top10  Misses  TopPos  TopNeg  Mean RRP  Median RRP
b1232f13         3707       4.33          2.0   91   148    193      34      52      39     0.644       0.750
e05c630b         1704      15.20          6.0   29    63    107      72      15      14     0.755       0.846
5f5a05e3         1430      11.59          6.0   30    52     92     110      18      12     0.760       0.867
The above table is also available as a CSV download.

Category 4: Best Automatic Candidate Ranking

Participant  F1 score  Mean rank  Median rank  Top  Top3  Top10  Misses  TopPos  TopNeg  Mean RRP  Median RRP
fcc22746         1458     738.79          3.0   43    61     69       0      43       0     0.922       0.999
e64c7749          532    4193.25        515.5   14    19     29       0      14       0     0.603       0.858
da808ddf         2306     157.80          3.0   66    91    119      22      46      20     0.955       0.999
b2bebc35         2028     137.37          3.0   57    81    106      41      41      16     0.953       0.999
8cad5559           13    4158.76       2794.5    0     0      3       0       0       0     0.593       0.581
5d2657ec           13    4396.53       4118.8    0     0      0       0       0       0     0.500       0.500
01143d4e          584    4038.04        470.5   14    24     29       0      14       0     0.612       0.879
69ef5021          449     407.38         64.5    3    17     35       0       2       1     0.942       0.984
f4e2a378          526     283.32         43.0    7    15     44       0       4       3     0.880       0.964
c668a8da          577     221.61         34.0    6    20     48       0       4       2     0.916       0.969
The above table is also available as a CSV download.

Table legend:

F1 score
The Formula 1 score awards points for each challenge based on the rank of the correct solution, using a scheme similar to the points system in F1 racing. In the participant table, these points are summed over all challenges. Please note that the F1 score is therefore not necessarily comparable across categories.
Mean/Median rank
Mean and median rank of the correct solution. For tied ranks with other candidates, the average rank of the ties is used.
Top, Top3, Top10
Number of challenges where the correct solution is ranked first, within the top 3, or within the top 10, respectively.
Misses
Number of challenges where the correct solution is missing.
TopPos, TopNeg
Number of challenges where the correct solution is ranked first, in positive or negative ionization mode respectively.
Mean/Median RRP
The relative ranking position of the correct solution, which also takes the length of the candidate list into account. A sketch of how these statistics can be computed follows below.
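
For readers who want to reproduce these summary statistics for their own candidate rankings, the following Python sketch shows one possible way to compute them. It is not the organisers' evaluation script: the points table F1_POINTS, the handling of tied ranks in the point lookup and the Top counts, and the exact RRP formula (scaled here so that 1.0 means the correct candidate is ranked first and 0.0 means it is ranked last) are illustrative assumptions.

# Illustrative sketch only -- not the official CASMI evaluation code.
from statistics import mean, median

# Assumed points-per-rank table, mirroring the current F1 racing scheme;
# the rules only state that points are awarded "similar to F1 racing".
F1_POINTS = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]


def rank_of_correct(candidate_scores, correct_index):
    """Rank of the correct candidate; tied candidates share the average
    rank of the tied block (two candidates tied for first -> rank 1.5)."""
    s = candidate_scores[correct_index]
    better = sum(1 for x in candidate_scores if x > s)
    tied = sum(1 for x in candidate_scores if x == s)  # includes the correct one
    return better + (tied + 1) / 2


def relative_ranking_position(rank, n_candidates):
    """Assumed RRP definition: 1.0 if the correct candidate is first,
    0.0 if it is last; unlike the raw rank, this accounts for the
    length of the candidate list."""
    if n_candidates <= 1:
        return 1.0
    return (n_candidates - rank) / (n_candidates - 1)


def summarise(challenge_results):
    """Aggregate per-challenge results into one participant-table row.

    `challenge_results` is a list of (rank, n_candidates) tuples, one per
    challenge, with rank = None when the correct solution is a miss.
    """
    solved = [(r, n) for r, n in challenge_results if r is not None]
    ranks = [r for r, _ in solved]
    rrps = [relative_ranking_position(r, n) for r, n in solved]
    return {
        # Tied (fractional) ranks are rounded before the point lookup and
        # counted strictly for Top/Top3/Top10; both choices are assumptions.
        "F1 score": sum(F1_POINTS[round(r) - 1]
                        for r in ranks if round(r) <= len(F1_POINTS)),
        "Mean rank": mean(ranks),
        "Median rank": median(ranks),
        "Top": sum(r <= 1 for r in ranks),
        "Top3": sum(r <= 3 for r in ranks),
        "Top10": sum(r <= 10 for r in ranks),
        "Misses": sum(r is None for r, _ in challenge_results),
        "Mean RRP": round(mean(rrps), 3),
        "Median RRP": round(median(rrps), 3),
    }

With this RRP definition, a rank of 5 among 10 candidates (RRP 0.56) is worse than a rank of 50 among 10,000 candidates (RRP 0.995), which is why participants searching large candidate lists can show RRP values close to 1 despite large mean ranks.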