DeepGrow is what we have called an interactive segmentation method we recently presented at RSNA. It takes clicks together with image data as input and allows corrections, all based on deep learning. It was trained on 46 of the training cases in the Promise12 dataset and currently runs on a laptop with a GPU. It was developed and used by a radiologist with no previous experience in prostate segmentation. The approach/algorithm is to be published; more info to follow.

This run is my second try, where I allowed myself more time and clicks, since the first try yesterday was not enough for 1st place. I also used window leveling this time, which I only used once or twice during the first run.

Today's segmentation "procedure" took 44 minutes (~1.5 minutes per prostate) and a total of 825 mouse clicks. The number of segmented slices was 451 (~15 slices per prostate), which means a mean of 1.83 mouse clicks per slice. If anyone is interested in how this looks, feel free to write to me; I have the process recorded :).

Info about the first run (done one day earlier): total time was 27 minutes for all segmentations in one run. Result: 4th place, score 89.1587. No info on the number of clicks.

Thanks to RIL (Radiology Informatics Lab at Mayo Clinic, led by Dr. Bradley Erickson)!

T. Sakinis, MD; P. Kostandy, MD; Z. Akkus, PhD; K. Philbrick; P. Korfiatis, PhD; B. J. Erickson, MD, PhD

Tomas Sakinis, Radiologist at Oslo University Hospital, Rikshospitalet.

(Deep Learning + Radiologist) > Deep Learning = True
(Deep Learning + Radiologist) > Radiologist = True
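For anyone curious how the per-prostate and per-slice averages above fall out of the raw totals, a quick sanity-check, assuming a 30-case test set (the number of cases is not stated in the post, so that is my assumption):

```python
# Sanity-check of the run statistics quoted above.
# ASSUMPTION: num_cases = 30 is inferred, not stated in the original post.
total_minutes = 44    # duration of the whole segmentation "procedure"
total_clicks = 825    # total mouse clicks used
total_slices = 451    # total segmented slices
num_cases = 30        # assumed number of prostates in the test set

minutes_per_case = total_minutes / num_cases    # matches "~1.5 minutes per prostate"
slices_per_case = total_slices / num_cases      # matches "~15 slices per prostate"
clicks_per_slice = total_clicks / total_slices  # matches the quoted mean of 1.83

print(f"{minutes_per_case:.2f} min/case, "
      f"{slices_per_case:.1f} slices/case, "
      f"{clicks_per_slice:.2f} clicks/slice")
```

Under that assumption the numbers come out to roughly 1.47 min/case, 15.0 slices/case, and 1.83 clicks/slice, consistent with the figures quoted above.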