
Creating Gordon RamsAI, a robot Ramsay ready to roast the world

April 5, 2021 / 4 min read

Gordon Ramsay is a world-famous chef, renowned as much for his brutal roasts as for his brilliant cooking. Every day, hundreds of people reach out to Gordon on social media, begging him to review their food. But only very few are lucky enough to get a response. After all, he’s only human. Until now. We’ve combined an image-recognition AI with a language AI trained on Gordon Ramsay’s dialogue to create Gordon RamsAI, a robot Ramsay ready to roast the whole world. You’re welcome, chefs...sort of.

Disclaimer: This is a guest blog post. The information and views presented in this blog are those of the author(s). The Gordon RamsAI app-generated texts do not reflect the views or values of Peltarion.

02/ The dataset

We collected images and relied on the folder structure to categorize them. We started with Kaggle’s Food-101 dataset, but its categories didn’t cover everything we needed, so we followed up with a lot of grunt work pulling in various food images and expanding the dataset by another 10 GB.
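The folder-as-label approach can be sketched as follows. This is a minimal illustration, not our actual ingestion script: the folder names and file layout here are made up, and Food-101 already ships with one folder per category.

```python
# Sketch: deriving class labels from the folder structure.
# A path like dataset/pizza/img001.jpg gets the label "pizza".
import os
import tempfile

def index_dataset(root):
    """Map each image path to the name of its parent folder (its label)."""
    labels = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                labels[os.path.join(dirpath, name)] = os.path.basename(dirpath)
    return labels

# Tiny demo tree standing in for the real 10+ GB dataset
root = tempfile.mkdtemp()
for category in ("pizza", "ramen"):
    os.makedirs(os.path.join(root, category))
    open(os.path.join(root, category, "img001.jpg"), "w").close()

labels = index_dataset(root)
print(sorted(set(labels.values())))  # ['pizza', 'ramen']
```

The appeal of this convention is that adding a new category is just adding a new folder, with no separate label file to keep in sync.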

03/ Building the model

We used the Peltarion platform to build the model specifically because of the built-in patterns we could use to get started. None of us had much experience building models, so using the wizard to narrow down the options was an incredible benefit.

After a number of tests, we ended up using the platform’s data formatting to crop and resize everything to 128x128, and trained a DenseNet. Some of this was arbitrary (we honestly don’t know whether one of the other architectures would have been better). The image size was the product of running several tests and either hitting a wall on training time (images too big) or getting inconsistent results (images too small). 128x128 seemed to be a good middle ground between speed and results.
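The crop-and-resize step can be illustrated with a stdlib-only sketch: center-crop to a square, then scale to 128x128. This is just an illustration of the idea (nearest-neighbor sampling on a nested list); the platform’s actual resampling method is its own detail.

```python
# Sketch of the preprocessing: center-crop to a square, then resize to 128x128.
def center_crop_square(pixels):
    """Crop a 2D pixel grid to its largest centered square."""
    h, w = len(pixels), len(pixels[0])
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    return [row[left:left + side] for row in pixels[top:top + side]]

def resize(pixels, size=128):
    """Nearest-neighbor resize of a 2D pixel grid to size x size."""
    h, w = len(pixels), len(pixels[0])
    return [
        [pixels[y * h // size][x * w // size] for x in range(size)]
        for y in range(size)
    ]

# A dummy 300x200 grayscale "image"
image = [[(x + y) % 256 for x in range(300)] for y in range(200)]
square = center_crop_square(image)  # 200x200
small = resize(square)              # 128x128
print(len(small), len(small[0]))    # 128 128
```

Cropping before resizing avoids squashing non-square photos, at the cost of throwing away the edges of the frame.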

04/ Model performance

Since this is a more entertainment-focused product, we feel the model did well. In the wild, we expect a lot of intentionally bad food photography, so we don’t mind that the model gets confused by those images. (In fact, it’s somewhat humorous when the model misidentifies something, since the whole point of the tool is to provide scathing critiques.)

In our testing on the images we would expect the model to understand, it performs very well. The evaluation charts were very useful during training, and the runs with the lowest loss performed the best (as we hoped).

Figure: The evaluation charts when training the model. As hoped, the runs with the lowest loss performed the best.

05/ Deployment

We used a combination of tools to deploy the model. The public-facing elements were built in Bubble, which handles the image upload and conversion to base64 before sending it to Peltarion’s hosted endpoint. The roast text was pre-generated separately with GPT-2 and placed in a lookup table; at runtime, a comment about the returned food type is combined at random with a general critique.
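The glue between the pieces can be sketched like this. The payload field names and the roast lines below are hypothetical (we are not reproducing the Peltarion API schema or the actual GPT-2 output), but the flow is the same: base64-encode the image for the request, then pick a pre-generated line keyed on the predicted food type.

```python
# Sketch of the deployment glue; payload field names and roast lines are made up.
import base64
import json
import random

ROASTS = {  # stand-ins for pre-generated GPT-2 lines, keyed by predicted food type
    "pizza": ["This pizza is so raw it's still doing laps around the oven."],
    "unknown": ["I've seen better presentation in a bin."],
}

def build_payload(image_bytes):
    """Encode the uploaded image as base64 JSON, as Bubble does before sending."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

def roast(predicted_label, rng=random):
    """Pick a pre-generated line for the predicted food type at random."""
    lines = ROASTS.get(predicted_label, ROASTS["unknown"])
    return rng.choice(lines)

payload = json.loads(build_payload(b"\xff\xd8fake-jpeg-bytes"))
print(roast("pizza"))
```

Pre-generating the text offline means the app never has to run GPT-2 at request time; serving a roast is just a dictionary lookup.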

06/ Final thoughts

No one on this team had ever done machine learning before (although we’re all fans of the space and try to read about it). Peltarion was an incredible boost in helping us understand the real-world requirements for gathering data and what constitutes successful training, and it let us easily use the model through an API.
