Transfer Learning: Using Pre-Trained Models for Image Classification (ResNet50) in Keras
The idea here is to use pre-trained models that are already out there on the Internet, available for anyone to use. For a lot of common problems, we can just import a pre-trained model that somebody else did all the hard work of putting together and optimizing. For example, if we are trying to do image classification, there are pre-trained models out there that we can just import. We can use these pre-trained models as is, or we can use them as a starting point if we want to extend or build on them for more specific problems. This is called transfer learning.
Now, we can find many of these pre-trained models in what are called model zoos. Model Zoo, for example, is a machine learning model deployment platform with a focus on ease of use that lets you deploy a model to an HTTP endpoint with a single line of code.
Another way of using these pre-trained models is through Keras. Keras Applications are deep learning models that are made available alongside pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning. Keras is a layer on top of TensorFlow that makes deep learning a lot easier. All we need to do is start off by importing the things we need to run our model, so we're going to import the Keras library and some specific modules from it. For our example, we're going to use the ResNet50 model through Keras.
So first we are going to load our image:
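A minimal sketch of that step might look like this. The filename `my_image.jpg` is just a placeholder; here we write a dummy photo to disk so the snippet runs end to end, but in practice you'd point it at any picture of your own:

```python
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing import image

# Write a dummy 400x300 photo to disk; in practice, use any photo
# you want to classify (the filename here is hypothetical).
Image.fromarray(np.uint8(np.random.rand(300, 400, 3) * 255)).save('my_image.jpg')

img = image.load_img('my_image.jpg')  # loads the file as a PIL image
print(img.size)                       # (width, height) of the original photo
```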
Then we are going to load up our model. We are going to import the ResNet50 model that is built into Keras (alongside several other models). We're also going to import some image pre-processing tools, both from Keras itself and from the ResNet50 module. Lastly, we are going to import numpy, because we need to manipulate the image data into a numpy array, which is ultimately what we feed into the neural network.
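The imports described above might look like this (a sketch, using the `tf.keras` module paths):

```python
# The pre-trained ResNet50 model plus its matching pre-processing helpers,
# the generic Keras image utilities, and numpy for array manipulation.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
```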
When you look up the ResNet50 model description, you'll find it has some requirements and limitations. For example, input images have to be 224 by 224 pixels, and the output is limited to one of 1,000 possible categories, which might not sound like a lot to some people. For those reasons, we have to scale the image down to 224 by 224 while loading it, convert it to a numpy array, and then use the model's preprocess_input function to further normalize the image data before feeding it in as input:
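A sketch of that preprocessing pipeline follows. Again, `my_image.jpg` is a stand-in for your own photo; we write a dummy file first so the snippet is runnable on its own:

```python
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input

# Dummy photo so the snippet runs without an existing file (filename hypothetical).
Image.new('RGB', (640, 480)).save('my_image.jpg')

img = image.load_img('my_image.jpg', target_size=(224, 224))  # scale down on load
x = image.img_to_array(img)    # PIL image -> numpy array, shape (224, 224, 3)
x = np.expand_dims(x, axis=0)  # add a batch dimension -> (1, 224, 224, 3)
x = preprocess_input(x)        # normalize the pixel values the way ResNet50 expects
```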
Now we’ll load up the actual model itself.
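Loading the model is a single call. Note that on the first run Keras downloads the pre-trained weights and caches them locally:

```python
from tensorflow.keras.applications.resnet50 import ResNet50

# weights='imagenet' pulls in weights pre-trained on the ImageNet data set
# (downloaded and cached automatically on first use).
model = ResNet50(weights='imagenet')
```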
The weights='imagenet' argument tells the model to use weights learned from the ImageNet data set.
Now all we have to do is call model.predict(x) and see what it comes back with, and then call the decode_predictions() function that comes with the ResNet50 module to turn the raw output into human-readable labels:
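A sketch of that step; here `x` stands for the preprocessed (1, 224, 224, 3) batch from earlier, faked with zeros so the snippet stands alone:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, decode_predictions

model = ResNet50(weights='imagenet')
x = np.zeros((1, 224, 224, 3))   # stand-in for the preprocessed image batch

preds = model.predict(x)         # shape (1, 1000): one score per ImageNet category
# decode_predictions maps the 1,000 class indices to human-readable labels.
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(label, score)
```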
And that’s it. We just have to run and test our model on an image and see if its prediction really works.
To simplify this, we can write a little convenience function to reduce our code a bit:
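One way to sketch such a helper (the function name `classify` and its signature are our own choice, not a Keras API):

```python
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import (ResNet50, preprocess_input,
                                                    decode_predictions)

model = ResNet50(weights='imagenet')

def classify(img_path, top=3):
    """Load an image file, preprocess it for ResNet50, and return the
    top predicted (class_id, label, score) tuples."""
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return decode_predictions(model.predict(x), top=top)[0]
```

With that in place, classifying a photo comes down to a call like `classify('my_image.jpg')` (filename hypothetical).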
Now we can use one simple line of code to classify our image and test it:
Let’s try another one:
And another one:
As we can see, our model was pretty accurate in classifying the images, and so far ResNet50 has worked well for my photos. There are other models included with Keras, such as Inception and MobileNet, that you might want to try out if you want to play around. You do need to know what input image dimensions each model expects, or it won't work at all.
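If you're unsure what a given model expects, one option is to instantiate it and inspect its input shape directly. A sketch using MobileNet, with weights=None so nothing is downloaded:

```python
from tensorflow.keras.applications import MobileNet

# weights=None builds the architecture without downloading pre-trained weights,
# which is enough to check the expected input dimensions.
model = MobileNet(weights=None)
print(model.input_shape)  # the default MobileNet expects 224x224 RGB images
```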
You can refer to the documentation for ResNet50 and the other pre-trained models at https://keras.io/applications/ or at https://www.tensorflow.org/api_docs/python/tf/keras/applications.
As we can see, transfer learning can be very easy, and not only for image classification but also for other types of work, like natural language processing. There are pre-trained models available for that as well, such as word2vec and GloVe, that we can use to teach our computer how to read with just a few lines of code. All we have to do is transfer an existing trained model, built by somebody else who did all the hard work, into our own application, which is why it's called "transfer learning".