In our previous tutorial, you learned how to train your first Convolutional Neural Network using the Keras library. However, you might have noticed that each time you wanted to evaluate your network or test it on a set of images, you first needed to train it before you could do any type of evaluation. This requirement can be quite the nuisance.
We are only working with a shallow network on a small dataset that can be trained relatively quickly, but what if our network was deep and we needed to train it on a much larger dataset, thus taking many hours or even days to train? Would we have to invest this amount of time and resources to train our network each and every time? Or is there a way to save our model to disk after training is complete and then simply load it from disk when we want to classify new images?
You bet there's a way. The process of saving and loading a trained model is called model serialization and is the primary topic of this tutorial.
To learn how to serialize a model, just keep reading.
Configuring your development environment
To follow this guide, you need to have OpenCV, TensorFlow, scikit-learn, imutils, and matplotlib installed on your system.
Luckily, all of these packages are pip-installable:
$ pip install opencv-contrib-python tensorflow scikit-learn imutils matplotlib
If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide; it will have you up and running in a matter of minutes.
Having problems configuring your development environment?
All that said, are you:
- Short on time?
- Learning on your employer's administratively locked system?
- Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
- Ready to run the code right now on your Windows, macOS, or Linux system?
Then join PyImageSearch University today!
Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab's ecosystem right in your web browser! No installation required.
And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!
Serializing a Model to Disk
Using the Keras library, model serialization is as simple as calling model.save on a trained model and then loading it later via the load_model function. In the first part of this tutorial, we'll modify the ShallowNet training script from the previous tutorial to serialize the network after it has been trained on the Animals dataset. We'll then create a second Python script that demonstrates how to load our serialized model from disk.
Let's get started with the training portion. Open up a new file, name it shallownet_train.py, and insert the following code:
# import the necessary packages
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from pyimagesearch.preprocessing import ImageToArrayPreprocessor
from pyimagesearch.preprocessing import SimplePreprocessor
from pyimagesearch.datasets import SimpleDatasetLoader
from pyimagesearch.nn.conv import ShallowNet
from tensorflow.keras.optimizers import SGD
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
Lines 2-13 import our required Python packages. Much of the code in this example is identical to shallownet_animals.py from "A gentle guide to training your first CNN with Keras and TensorFlow." We'll review the entire file for the sake of completeness, and I'll be sure to call out the important changes made to accomplish model serialization, but for a detailed review of how to train ShallowNet on the Animals dataset, please refer to the previous tutorial.
Next, letās parse our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
	help="path to input dataset")
ap.add_argument("-m", "--model", required=True,
	help="path to output model")
args = vars(ap.parse_args())
Our previous script only required a single switch, --dataset, which is the path to the input Animals dataset. However, as you can see, we've added another switch here: --model, the path to where we would like to save the network after training is complete.
We can now grab the paths to the images in our --dataset, initialize our preprocessors, and load our image dataset from disk:
# grab the list of images that we'll be describing
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset"]))

# initialize the image preprocessors
sp = SimplePreprocessor(32, 32)
iap = ImageToArrayPreprocessor()

# load the dataset from disk then scale the raw pixel intensities
# to the range [0, 1]
sdl = SimpleDatasetLoader(preprocessors=[sp, iap])
(data, labels) = sdl.load(imagePaths, verbose=500)
data = data.astype("float") / 255.0
The next step is to partition our data into training and testing splits, along with encoding our labels as vectors:
# partition the data into training and testing splits using 75% of
# the data for training and the remaining 25% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
	test_size=0.25, random_state=42)

# convert the labels from integers to vectors
trainY = LabelBinarizer().fit_transform(trainY)
testY = LabelBinarizer().fit_transform(testY)
Training ShallowNet is handled via the code block below:
# initialize the optimizer and model
print("[INFO] compiling model...")
opt = SGD(learning_rate=0.005)  # "learning_rate" replaces the deprecated "lr" argument
model = ShallowNet.build(width=32, height=32, depth=3, classes=3)
model.compile(loss="categorical_crossentropy", optimizer=opt,
	metrics=["accuracy"])

# train the network
print("[INFO] training network...")
H = model.fit(trainX, trainY, validation_data=(testX, testY),
	batch_size=32, epochs=100, verbose=1)
Now that our network is trained, we need to save it to disk. This process is as simple as calling model.save and supplying the path to where our output network should be saved:
# save the network to disk
print("[INFO] serializing network...")
model.save(args["model"])
The .save method serializes the model architecture, weights, and optimizer state to disk; because our output path ends in .hdf5, Keras writes the model in HDF5 format. Loading the model from disk is just as easy as saving it.
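If you want to convince yourself the round trip works, a quick sanity check (just a sketch, not part of shallownet_train.py) is to reload the file with load_model and confirm the reloaded network produces the same predictions as the model still in memory:

# sanity check sketch: reload the serialized model and verify that its
# predictions match those of the in-memory model
from tensorflow.keras.models import load_model

reloaded = load_model(args["model"])
assert np.allclose(model.predict(testX[:8]),
	reloaded.predict(testX[:8]), atol=1e-6)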
From here we evaluate our network:
# evaluate the network
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=32)
print(classification_report(testY.argmax(axis=1),
	predictions.argmax(axis=1), target_names=["cat", "dog", "panda"]))
As well as plot our loss and accuracy:
# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, 100), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, 100), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, 100), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, 100), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend()
plt.show()
To run our script, simply execute the following command:
$ python shallownet_train.py --dataset ../datasets/animals \
	--model shallownet_weights.hdf5
After the network has finished training, list the contents of your directory:
$ ls
shallownet_load.py  shallownet_train.py  shallownet_weights.hdf5
You will see a file named shallownet_weights.hdf5 in the listing; this file is our serialized network. The next step is to take this saved network and load it from disk.
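The complete loading script is covered separately, but as a preview, here is a minimal sketch of what a script like shallownet_load.py might look like. Note that this sketch resizes images with cv2.resize directly rather than the pyimagesearch preprocessors, and it assumes the same 32×32 RGB inputs and three Animals class labels used during training:

# import the necessary packages
from tensorflow.keras.models import load_model
from imutils import paths
import numpy as np
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
	help="path to input dataset")
ap.add_argument("-m", "--model", required=True,
	help="path to pre-trained model")
args = vars(ap.parse_args())

# initialize the class labels for the Animals dataset
classLabels = ["cat", "dog", "panda"]

# grab a random sample of ten image paths from the dataset
print("[INFO] sampling images...")
imagePaths = np.array(list(paths.list_images(args["dataset"])))
idxs = np.random.randint(0, len(imagePaths), size=(10,))
imagePaths = imagePaths[idxs]

# load the serialized network from disk
print("[INFO] loading pre-trained network...")
model = load_model(args["model"])

# loop over the sampled image paths
for imagePath in imagePaths:
	# load the image, resize it to the 32x32 pixels ShallowNet
	# expects, and scale the pixel intensities to [0, 1]
	image = cv2.imread(imagePath)
	resized = cv2.resize(image, (32, 32)).astype("float") / 255.0

	# add a batch dimension and classify the image
	preds = model.predict(np.expand_dims(resized, axis=0))
	label = classLabels[preds.argmax(axis=1)[0]]

	# draw the prediction on the original image and display it
	cv2.putText(image, "Label: {}".format(label), (10, 30),
		cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
	cv2.imshow("Image", image)
	cv2.waitKey(0)

You would run such a sketch just as you did the training script, supplying the same dataset and model paths:

$ python shallownet_load.py --dataset ../datasets/animals \
	--model shallownet_weights.hdf5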
Summary
Congratulations, you now know how to save your Keras and TensorFlow model to disk. As you saw, it's as simple as calling model.save and supplying the path to where the output network should be saved. Now you know how to train a model and save it, so you don't need to start from scratch each and every time you want to evaluate it or classify new images.
What's next? I recommend PyImageSearch University.
30+ total classes • 39h 44m video • Last updated: 12/2021
★★★★★ 4.84 (128 Ratings) • 3,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That's not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 30+ courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 30+ Certificates of Completion
- ✓ 39h 44m on-demand video
- ✓ Brand new courses released every month, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser ā works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 500+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
Comment section
Hey, Adrian Rosebrock here, author and creator of PyImageSearch. While I love hearing from readers, a couple years ago I made the tough decision to no longer offer 1:1 help over blog post comments.
At the time I was receiving 200+ emails per day and another 100+ blog post comments. I simply did not have the time to moderate and respond to them all, and the sheer volume of requests was taking a toll on me.
Instead, my goal is to do the most good for the computer vision, deep learning, and OpenCV community at large by focusing my time on authoring high-quality blog posts, tutorials, and books/courses.
If you need help learning computer vision and deep learning, I suggest you refer to my full catalog of books and courses; they have helped tens of thousands of developers, students, and researchers just like yourself learn Computer Vision, Deep Learning, and OpenCV.
Click here to browse my full catalog.