Over 200 hackers from all over Northern Ireland and Ireland came together a few days ago to attend the second hackathon organized by the AI Northern Ireland community. Each team had to take on one of five different challenges created by different companies attending. One of these challenges was the Peltarion challenge, which focused on detecting early-stage skin cancer. Here's a little bit more about the challenge and how one of the teams tackled it!
Interview with hackathon team @ AI Northern Ireland Datathon
The Peltarion challenge: Detecting early-stage skin cancer
Every year, we lose nine million people to various forms of cancer, and skin cancer is one of them. Although skin lesions are visible to the naked eye, early-stage melanomas may be difficult to distinguish from benign skin lesions with a similar appearance. We cannot stop cancer just yet. But what we can do is help doctors do their very best for patients in a better, more effective and more efficient way. Hackers were tasked with creating a deep learning model for image segmentation on the Peltarion Platform to detect these early signs of skin cancer.
The participants were tasked with building, training and deploying a deep learning model that could accurately generate segmentation masks for the images in a test subset, with the aim of beating the model created by our own data science team. High stakes, with some of Belfast's best hackers competing against some of Stockholm's best data scientists!
To evaluate the models, our data science team put together a script that computes each model's Intersection over Union (IoU) score. Our own team's model scored 78% - setting the bar high for the hackers. How did the teams approach the challenge? We had the chance to ask Enric Moreu a few questions about how he and his team tackled it. Enjoy!
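For readers unfamiliar with the metric, Intersection over Union divides the number of pixels marked as lesion in both the predicted and the ground-truth mask by the number of pixels marked in either of them. The evaluation script itself was not published, so the following is only an illustrative sketch in plain Python, with binary masks represented as nested lists of 0/1:

```python
def iou_score(pred_mask, true_mask):
    """Intersection over Union for two binary masks (nested lists of 0/1)."""
    intersection = 0  # pixels marked in both masks
    union = 0         # pixels marked in at least one mask
    for pred_row, true_row in zip(pred_mask, true_mask):
        for p, t in zip(pred_row, true_row):
            if p and t:
                intersection += 1
            if p or t:
                union += 1
    return intersection / union if union else 1.0  # two empty masks agree

pred = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
true = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
print(iou_score(pred, true))  # 2 shared pixels / 6 marked pixels ≈ 0.333
```

A score of 78% thus means that, averaged over the test set, the predicted and true lesion areas overlapped far more than they disagreed.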
A few words with Enric Moreu, challenge accepter
Hi Enric! Tell us a little bit about yourself.
My name is Enric Moreu and I’m a PhD student at Dublin City University. My research focuses on the intersection of Generative Adversarial Networks and 3D animation. I was raised in sunny Barcelona where I studied Telecommunications Engineering at the Universitat Politècnica de Catalunya.
So, what was your team’s approach when tackling the challenge?
We started by trying out Peltarion's pre-processed data, but we quickly realized that we could do a whole lot more if we pre-processed additional data ourselves, generating even more examples to train the model on. To do this, we used an image augmentation library in Python to flip both the images and their masks. Our new dataset gave us 10,000 images instead of the 2,000 we started with.
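The crucial detail when augmenting segmentation data is that every geometric transform must be applied identically to the image and its mask, or the labels stop lining up with the pixels. A toy sketch of the idea in plain Python, with images and masks as nested lists (the team used an off-the-shelf augmentation library on the actual image files, not this code):

```python
def flip_horizontal(grid):
    """Mirror a 2D image or mask left to right."""
    return [list(reversed(row)) for row in grid]

def flip_vertical(grid):
    """Mirror a 2D image or mask top to bottom."""
    return [list(row) for row in reversed(grid)]

def augmented_pairs(image, mask):
    """Yield the original (image, mask) pair plus its flipped variants.

    Each transform is applied to image and mask together, so the
    segmentation labels stay aligned with the pixels.
    """
    yield image, mask
    yield flip_horizontal(image), flip_horizontal(mask)
    yield flip_vertical(image), flip_vertical(mask)
    yield flip_vertical(flip_horizontal(image)), flip_vertical(flip_horizontal(mask))

image = [[9, 1],
         [2, 3]]
mask = [[1, 0],
        [0, 0]]
print(len(list(augmented_pairs(image, mask))))  # 4 training examples from 1 original
```

Combining flips with other transforms (rotations, crops) is how a 2,000-image dataset can grow to 10,000 examples.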
We started by trying out the Tiramisu snippet, but we quickly noticed that the results weren't good enough. So we tried the U-Net snippet instead, which gave us much better results. Anna from Peltarion ran this model on the Peltarion Platform to see how it performed, and we got an Intersection over Union score of 71%. Not too bad, but still far from the Peltarion data scientists' 78%. So we added a 2D convolution layer to the model, trained the new model and ran it again. This time we got a score of 74% - a score we were happy enough with, seeing as the submission deadline was fast approaching!
The mask we got back from the deployed model was encoded in base64, so we found a decoder to extract it. To be able to demo our model, we built a Telegram bot: you send it a picture of a skin lesion and it replies with the predicted mask.
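Decoding such a response comes down to Python's standard base64 module. A minimal round trip with dummy mask bytes (the actual payload format of the deployment API is not shown here, so the details are illustrative):

```python
import base64

def decode_mask(b64_string):
    """Decode a base64-encoded mask payload back into raw bytes.

    In practice the resulting bytes would be loaded as an image;
    here we just round-trip some dummy mask data.
    """
    return base64.b64decode(b64_string)

raw_mask = bytes([0, 255, 255, 0])                    # dummy 2x2 binary mask
encoded = base64.b64encode(raw_mask).decode("ascii")  # what an API might return
print(decode_mask(encoded) == raw_mask)  # True
```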
Now that the deadline has passed and you’ve submitted your presentation, what do you think you would have done if you had more time?
Interesting question! We would probably have opened up the U-Net to add more blocks and layers to it. We might also have tried different data augmentation methods, like applying different filters to the image data before feeding it into the platform.
Nice! What did you think were some of the highlights of the Peltarion Platform?
Well, for starters, it’s really nice that you can launch multiple experiments at once. For example, we didn’t know which optimizer to use so we just tried several different ones and ran the models alongside each other to figure out what worked best.
It's also great that you can add comments and give the models different names so that you remember what you did differently in each one. Personally, I'm also a big fan of the visualization element in the platform.
Great! And what do you think could be made better?
It would be nice to be able to do data augmentation in the platform. Also, having pre-trained models would be great - for example, if the U-Net was trained based on medical image data beforehand. Maybe there could be a drop-down when selecting the U-Net model and you could choose what type of data it should be pre-trained on. Personally, I would also love it if there was GAN functionality, as this is what I use in my research.