Machine learning models are powerful tools for making predictions based on available data. To make these models useful, they need to be deployed so that others can easily access them through an API (application programming interface) to make predictions. This can be done using Flask and Heroku: Flask is a micro web framework that does not require particular tools or libraries to create web applications, and Heroku is a cloud platform that can host web applications.
To successfully deploy a machine learning model with Flask and Heroku, you will need the files: model.pkl, app.py, requirements.txt, and a Procfile. This article will go through how to create each of these required files and finally deploy the app on Heroku. The main sections of this post are as follows:
- Create GitHub Repository (optional)
- Create and Pickle a Model Using Titanic Data
- Create Flask App
- Test Flask App Locally (optional)
- Deploy to Heroku
- Test Working App
Feel free to skip over any of them depending on where you are at in your process!
Create GitHub Repository (optional)
For easier deployment to Heroku later, you'll want to create a GitHub repository for this project and clone it for local use. To create a new repository, click on your profile icon in the top right corner, click repositories, and then click new. Give your repository a name, initialize the repository with a README, and add a license like my example below:

To clone the repository, go to the repository page, click “clone or download” and copy the link. In your terminal, go to the folder where you want to clone the repository and type the command:
git clone link_to_repository
Running this command will clone the repository to your local machine and you can check to make sure that the folder was created in the right location. Now that the repo is set up, we can create the machine learning model and pickle it.
Create and Pickle a Machine Learning Model
First, let’s create a simple Logistic Regression model using the Titanic dataset. This model will predict whether someone survived the Titanic given information about class, age, number of siblings, and fare. If you’d like to follow along with my examples, you can find the csv file for the dataset here. Create a Jupyter notebook inside your working directory and create the model by running the code below:
import pandas as pd
from sklearn.linear_model import LogisticRegression

# create df
train = pd.read_csv('titanic.csv') # change file path

# drop null values
train.dropna(inplace=True)

# features and target
target = 'Survived'
features = ['Pclass', 'Age', 'SibSp', 'Fare']

# X matrix, y vector
X = train[features]
y = train[target]

# model
model = LogisticRegression()
model.fit(X, y)
model.score(X, y)
The code above creates a pandas dataframe from the csv data, drops null values, defines the features and target for the model, splits the data into a feature matrix and a target vector, fits a logistic regression model, and then scores it.
This creates a model that can predict the survivorship of Titanic passengers with ~70% accuracy which can then be pickled using:
import pickle
pickle.dump(model, open('model.pkl', 'wb'))
The pickle file can be found inside the same directory as your Jupyter notebook.
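As an optional sanity check (not part of the original steps), you can load the pickle back in and confirm it still predicts; this assumes X from the modeling code above is still defined in your notebook:
import pickle

# load the pickled model back into memory
loaded_model = pickle.load(open('model.pkl', 'rb'))

# predict on the first row of the feature matrix to confirm the model survived pickling
print(loaded_model.predict(X[:1]))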
Create Flask App
Open up your IDE (I use PyCharm) and create a new .py file inside the working directory named app.py.

The code for the Flask application can be found below (Python 3.7).
import pandas as pd
from flask import Flask, jsonify, request
import pickle

# load model
model = pickle.load(open('model.pkl', 'rb'))

# app
app = Flask(__name__)

# routes
@app.route('/', methods=['POST'])
def predict():
    # get data
    data = request.get_json(force=True)

    # convert data into dataframe
    data.update((x, [y]) for x, y in data.items())
    data_df = pd.DataFrame.from_dict(data)

    # predictions
    result = model.predict(data_df)

    # send back to browser
    output = {'results': int(result[0])}

    # return data
    return jsonify(results=output)

if __name__ == '__main__':
    app.run(port=5000, debug=True)
The structure of the code follows:
- Load pickled model
- Name the Flask app
- Create a route that receives JSON inputs, uses the trained model to make a prediction, and returns that prediction in a JSON format, which can be accessed through the API endpoint.
Inside the route, I converted the JSON data to a pandas dataframe object because I found that this works with most (not all!) types of models that you will want to use to make a prediction. You can choose to convert the inputs using your preferred method as long as it works with the .predict() method for your model. The order of inputs must match the order of columns in the dataframe you used to train your model; otherwise, you will get an error when you try to make a prediction. If the inputs you receive are not in the correct order, you can easily reorder them after you create the dataframe, as in the sketch below.
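For example, one minimal way to enforce the column order (assuming the same features list used in the training notebook) is to reindex the dataframe before predicting:
# inside predict(), after creating data_df
features = ['Pclass', 'Age', 'SibSp', 'Fare']  # same order used to train the model
data_df = data_df[features]                    # reorder the columns to match what the model expects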
The takeaway here is that you need to convert the JSON data that you get from the request to a data structure that the model can use to make a prediction. However you get there is up to you.
Once you paste this into your app.py file, you can run the Flask app from the command line (or, if you're using PyCharm, just run the code). To run the app from the command line, use:
python app.py
If done correctly, you will see something like this:

Note on errors: If this is your first time creating a Flask app, you might get an error saying that you need to install Flask into your Python environment. Run
pip install flask
(or !pip install flask from inside a Jupyter notebook) and try again.
When your Flask app is up and running, click on the link in blue. Seeing a "Method Not Allowed" error page here is normal, since the route only accepts POST requests while your browser sends a GET request:

Optional, but you really should: Test that the app works
Import requests and json in your Jupyter notebook, then create a variable with your local server URL if it's different from below:
# local url
url = 'http://127.0.0.1:5000' # change to your url
Create sample data and convert to JSON:
# sample data
data = {'Pclass': 3, 'Age': 2, 'SibSp': 1, 'Fare': 50}
data = json.dumps(data)
Post the sample data and check the response code using requests.post(url, data). You want to get a response code of 200 to make sure that the app is working:

Then you can print the JSON of the request to see the model’s prediction:

The model predicted 1, which means the passenger survived 🙌
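If the screenshots above don't come through, the whole local test looks roughly like this (a minimal sketch, assuming the Flask app is still running on port 5000):
import requests
import json

# local Flask server
url = 'http://127.0.0.1:5000'

# sample passenger data
data = json.dumps({'Pclass': 3, 'Age': 2, 'SibSp': 1, 'Fare': 50})

# post the data and inspect the response
send_request = requests.post(url, data)
print(send_request)         # <Response [200]> means the app is working
print(send_request.json())  # the model's prediction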
Shut down the Flask app by typing Ctrl+C when you're done testing.
Create Procfile
A Procfile specifies the commands that are executed by a Heroku app on startup. To create one, open up a new file named Procfile (no extension) in the working directory and paste the following line, which tells Heroku to serve the app with gunicorn, using the Flask object named app inside app.py:
web: gunicorn app:app
And that’s it. Save and close. ✅
Create requirements.txt
The requirements.txt file will contain all of the dependencies for the Flask app. To create it, run the following in your terminal from the working directory:
pip freeze > requirements.txt
If you’re not working from a new environment, this file will contain all requirements from your current environment. If you get errors later when deploying the app, you can just delete the requirements that give you an error.
At the bare minimum for this project, your requirements.txt should contain:
Flask==1.1.1
gunicorn==19.9.0
pandas==0.25.0
requests==2.22.0
scikit-learn==0.21.2
scipy==1.3.1
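If you'd rather not trim the file by hand, one option (not the only way) is to generate requirements.txt from a fresh virtual environment that only has the packages the app actually needs:
# create and activate a clean environment (on Windows use venv\Scripts\activate)
python3 -m venv venv
source venv/bin/activate

# install only what the app needs
pip install flask gunicorn pandas requests scikit-learn

# write the pinned versions to requirements.txt
pip freeze > requirements.txt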
Optional: Commit Files to GitHub
Run the following commands to commit the files to git:
# add all files from the working directory
git add .
then commit with your message:
git commit -m 'add flask files'
and finally, push changes to GitHub using the following command. You may be asked to enter your GitHub username and password. If you have 2FA set up, you will need to use a personal access token as the password.
git push origin master
At minimum, your GitHub repo should now contain:
- app.py
- model.pkl
- Procfile
- requirements.txt
Note: all of these files should be at the working directory level and not in another folder.
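In other words (leaving out the notebook and csv you may also have committed), the top level of the repo should look roughly like this:
your-repo/
├── app.py
├── model.pkl
├── Procfile
└── requirements.txt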
Deploy to Heroku
If you don’t have one already, create a free account at www.heroku.com.
Create a new app simply by choosing a name and clicking “create app”. This name doesn’t matter but it does have to be unique.

You have a few options for how to deploy the app. I've tried both the Heroku CLI and GitHub, and I personally prefer GitHub, but I'll show both, so pick whichever one you want to follow.
Deploying with GitHub
Connect your GitHub account by clicking the GitHub icon below:

Search for the correct repository and click connect:

And then just scroll to the bottom of the page and click “Deploy Branch”

If everything worked correctly you should see this message 🎉🎉

If something went wrong, check your requirements.txt, delete any dependencies that are giving you problems, and try again ¯\_(ツ)_/¯
Deploying with Heroku CLI
In the Heroku CLI section, you will see instructions to follow for deployment. Paste each command into your terminal and follow any prompts, such as logging in. Pay attention to any commands you will need to modify, such as cd my-project/, where my-project/ should actually be your project directory. The git remote must be set to your Heroku app name EXACTLY.
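Since the screenshot of those instructions may not come through here, they typically look something like the following (a sketch; replace titanic-flask-model with your exact Heroku app name and my-project/ with your project directory):
heroku login
cd my-project/
git init                                  # skip this if the folder is already a git repository
heroku git:remote -a titanic-flask-model  # the remote must match your Heroku app name exactly
git add .
git commit -m "deploy to heroku"
git push heroku master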

If you were successful in following these instructions, you should see build succeeded on the overview page 🎉🎉
If not, you can check to see what went wrong by running heroku logs --tail from the command line.
Test the Deployed Model & Generate Prediction
If you already tested your Flask app locally, these instructions will be very similar, except you'll now use the Heroku app URL. Import requests and json in your Jupyter notebook, then create a variable to store the Heroku app URL (you can find it by clicking "open app" in the top right corner of the app page on Heroku). Then create some sample data and convert it to JSON:
# heroku url
heroku_url = 'https://titanic-flask-model.herokuapp.com' # change to your app name

# sample data
data = {'Pclass': 3, 'Age': 2, 'SibSp': 1, 'Fare': 50}
data = json.dumps(data)
Check the response code using the following code. A response code of 200 means everything is running correctly.
send_request = requests.post(heroku_url, data)
print(send_request)
Output: <Response [200]>
And finally, look at the model’s prediction:
print(send_request.json())
Output: {'results': {'results': 1}}
Your output results will vary if you’re using different sample data. The result of 1 in this case means that the model predicts the passenger survived — and more importantly the API works!
Now people can access your API endpoint with the Heroku URL and use your model to make predictions in the real world 🌐
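For example, anyone could call the endpoint straight from the command line with something like this (a sketch, using the example app URL from above; because the route uses get_json(force=True), the body is parsed as JSON even without an explicit Content-Type header):
curl -X POST https://titanic-flask-model.herokuapp.com/ \
  -d '{"Pclass": 3, "Age": 2, "SibSp": 1, "Fare": 50}'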
Here is the GitHub repo containing all files and code required to deploy this API.
Find me on Twitter @elizabethets or connect with me on LinkedIn!
Sources: