A complete and successful submission to this challenge will require the following within the deadlines:

  1. A successful run of your algorithm container in the final testing phase
  2. Providing editor access to your final submission containers to the username "aneeqzia_isi" (this is required for us to re-run your container for evaluation on follow-up challenge datasets)
  3. A detailed methodology report covering all the requirements (e.g. a GitHub repo link with documentation on how to run and reproduce the algorithm container, links to any private datasets used, etc.), sent to isi.challenges@intusurg.com
  4. A three-minute (max.) recorded presentation detailing your methodology (sent with the report). Template for the ppt slides: https://www.dropbox.com/scl/fi/nvy5qdm3adzb79twn4roo/VisionChallenge_template-teams.pptx?rlkey=5ggcl86mp00yj5pzp39yfkjvg&e=1&st=8c6mp2ty&dl=0

Additional details on each of these requirements are given below:

Please note that these algorithm submission instructions are adapted from the SurgToolLoc 2022/23 challenges.

The challenge submission process is based on Grand Challenge's automated Docker submission and evaluation protocol. Each team will need to produce an “algorithm” container (a different one for each category) and submit it on the challenge website. Please check out the official Grand Challenge documentation on building your own algorithm at https://grand-challenge.org/documentation/create-your-own-algorithm/

GitHub repos with sample submission algorithm containers:

The following GitHub repositories contain example submission containers along with detailed instructions for algorithm submission:

Category 1: https://github.com/rgnespolo-intusurg/surgvu2024-category1

Category 2: https://github.com/rgnespolo-intusurg/surgvu2024-category2-submission

Challenge Phases:

Submission to the challenge is divided into two phases, which are detailed below.

Preliminary testing phase:

This phase will allow teams to test their algorithm containers on a small dataset and debug any issues with their submissions. Each team will be allowed a maximum of 10 attempts to get their algorithm container working within the Grand Challenge environment. Teams can post questions about failed submissions on the forum to get help from other participating teams and the organizing team. The aim of this phase is to let teams get accustomed to the Grand Challenge submission process and arrive at a working algorithm container.

Final testing phase:

In this phase, all teams that produced a working algorithm container in the preliminary phase will submit their final algorithm container, which will be run on the complete testing dataset. Only two submissions are allowed per team in this phase, so teams will have to ensure that they have a working code/container from the preliminary phase. The better score of the two submissions will be used for the leaderboard (any subsequent submissions will be discarded).

Prediction format:

Category 1: Surgical tool detection

For bounding box detection, the model (packaged into the algorithm container) will need to generate a dictionary (as a JSON file) containing predictions for each frame in the videos. The output needs to be a dictionary containing the set of tools detected in each frame with their corresponding bounding box corners (x, y), producing a single JSON file for each video, as shown below:

{
    "type": "Multiple 2D bounding boxes",
    "boxes": [
        {
            "corners": [
                [54.7, 95.5, 0.5],
                [92.6, 95.5, 0.5],
                [92.6, 136.1, 0.5],
                [54.7, 136.1, 0.5]
            ],
            "name": "slice_nr_1_needle_driver",
            "probability": 0.452
        },
        {
            "corners": [
                [54.7, 95.5, 0.5],
                [92.6, 95.5, 0.5],
                [92.6, 136.1, 0.5],
                [54.7, 136.1, 0.5]
            ],
            "name": "slice_nr_2_monopolar_curved_scissor",
            "probability": 0.783
        }
    ],
    "version": { "major": 1, "minor": 0 }
}

Please note that the third value of each corner coordinate is not needed for predictions but must always be kept at 0.5 to comply with the Grand Challenge automated evaluation system (which was built to also handle datasets of 3D images). To standardize the submissions, the first corner should be the top-left corner of the bounding box, with the subsequent corners following in clockwise order. The “type” and “version” entries are required by the Grand Challenge automated evaluation system. Please use the "probability" entry to report the confidence score for each detected bounding box.
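
As an illustration, the sketch below shows one way to assemble such a file in Python. The helper names (make_box, write_predictions) and the output filename are hypothetical; the exact input/output paths expected inside the container are documented in the example repositories above.

import json

def make_box(slice_nr, tool_name, x_min, y_min, x_max, y_max, probability):
    # One bounding-box entry: top-left corner first, then clockwise.
    # The third coordinate is fixed at 0.5, as required by the evaluation system.
    return {
        "corners": [
            [x_min, y_min, 0.5],  # top left
            [x_max, y_min, 0.5],  # top right
            [x_max, y_max, 0.5],  # bottom right
            [x_min, y_max, 0.5],  # bottom left
        ],
        "name": f"slice_nr_{slice_nr}_{tool_name}",
        "probability": probability,
    }

def write_predictions(detections, out_path):
    # detections: iterable of (slice_nr, tool_name, x_min, y_min, x_max, y_max, probability)
    output = {
        "type": "Multiple 2D bounding boxes",
        "boxes": [make_box(*d) for d in detections],
        "version": {"major": 1, "minor": 0},
    }
    with open(out_path, "w") as f:
        json.dump(output, f, indent=4)

# Example with dummy detections for one video:
write_predictions(
    [(1, "needle_driver", 54.7, 95.5, 92.6, 136.1, 0.452),
     (2, "monopolar_curved_scissor", 54.7, 95.5, 92.6, 136.1, 0.783)],
    "video_1_predictions.json",
)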

Category 2: Surgical step recognition

Similar to category 1, for surgical step recognition the model (packaged into the algorithm container) will need to generate a list (as a JSON file) containing a surgical step prediction for each frame in the videos. There needs to be a single JSON file for each video, like the one given below:

[
   {
      "frame_nr":0,
      "surgical_step":3
   },
   {
      "frame_nr":1,
      "surgical_step":4
   },
   {
      "frame_nr":2,
      "surgical_step":3
   },
   {
      "frame_nr":3,
      "surgical_step":0
   }
]
where the mapping for surgical steps (from names to indices) is given below:
"range_of_motion" -> 0
"rectal_artery_vein" -> 1
"retraction_collision_avoidance" -> 2
"skills_application" -> 3
"suspensory_ligaments" -> 4
"suturing" -> 5
"uterine_horn" -> 6
"other" -> 7

Note: Any unannotated parts of the videos are to be treated as the "other" label (index 7) as well.
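
Analogously, a minimal Python sketch for writing the per-video step predictions is given below. The helper name (write_step_predictions), the dummy per-frame labels, and the output filename are hypothetical; see the category 2 example repository for the exact container interface.

import json

# Mapping from surgical step names to indices, as specified above.
STEP_TO_INDEX = {
    "range_of_motion": 0,
    "rectal_artery_vein": 1,
    "retraction_collision_avoidance": 2,
    "skills_application": 3,
    "suspensory_ligaments": 4,
    "suturing": 5,
    "uterine_horn": 6,
    "other": 7,  # also used for any unannotated frames
}

def write_step_predictions(step_names_per_frame, out_path):
    # step_names_per_frame: list of predicted step names, one per video frame
    predictions = [
        {"frame_nr": frame_nr, "surgical_step": STEP_TO_INDEX[name]}
        for frame_nr, name in enumerate(step_names_per_frame)
    ]
    with open(out_path, "w") as f:
        json.dump(predictions, f, indent=3)

# Example with dummy per-frame predictions (matching the sample output above):
write_step_predictions(
    ["skills_application", "suspensory_ligaments", "skills_application", "range_of_motion"],
    "video_1_steps.json",
)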

Final Report:

Along with your algorithm submission, all teams are asked to submit a final report explaining their chosen methodology and the results they obtained. Only those individuals listed in the final report will be considered official team members. In the interest of transparency, this report must also contain a link to your code and to any data beyond what was made available through this challenge (e.g. public datasets, or additional labels you have created for the training data) that was used to train your model. This information may be used to verify model results. Your report will be especially important if your team is one of the finalists, in which case your model will be presented at the MICCAI read-out and in our subsequent publication on the results of the challenge. Team submissions will not be eligible for cash prizes without a suitable final report.

To help you with the report, we've created a rough guide for you to follow, available here.