Using Peak for rapid data science model deployment

By Jimmy Stammers on July 14, 2021 - 5 Minute Read

One of the main challenges facing data science projects is the need to deliver outputs that are reliable, interpretable and accurate. Failing to do this can cause the project itself to fail, as stakeholders begin to lose trust in the solution.

The best way to ensure this doesn’t happen is to make sure that the solution can be deployed quickly and safely, using the most appropriate computing infrastructure. Using Peak’s Decision Intelligence platform, data scientists can deploy solutions for a variety of different use cases, ranging from something as straightforward as a daily spreadsheet of results to a web application that generates thousands of predictions per second.

In this blog post, I give an overview of some of the commonly encountered solutions that can be deployed from Peak. Because the platform removes the need to worry about a project’s back-end infrastructure, data scientists are free to spend more time developing the best possible solution – and less time worrying whether or not the model will work once training has finished!

Training a model

As my colleague Vanessa covered in a recent blog post, the first step of every data science project involves ingesting the data and performing some exploratory data analysis, so that it can be cleaned and transformed into a more usable format.
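As an illustration, a first pass at this stage might look something like the following Python sketch; the file name and column names are hypothetical examples, not taken from any particular project.

```python
import pandas as pd

# Load the raw data ("transactions.csv" and its columns are hypothetical)
df = pd.read_csv("transactions.csv", parse_dates=["order_date"])

# Quick exploratory checks: shape, column types and missing values
print(df.shape)
print(df.dtypes)
print(df.isna().sum())

# Basic cleaning: drop duplicate rows and fill missing quantities with zero
df = df.drop_duplicates()
df["quantity"] = df["quantity"].fillna(0)
```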

Once this has been done, we’re ready to start training a machine learning model to predict our target value. This is an iterative process: it requires choosing the most appropriate type of model for the task at hand, splitting the data so that the model’s performance can be evaluated on previously unseen data, and tuning the model’s parameters so that it learns as much as possible about the target from the input data.
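In scikit-learn, for example, the core loop of splitting, fitting and evaluating looks roughly like the sketch below. Synthetic data stands in for a real project’s cleaned dataset, and the choice of model and hyperparameters is purely illustrative.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the cleaned project data
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)

# Hold out a test set so performance is measured on previously unseen data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a model; in practice the model type and its hyperparameters
# would be tuned iteratively
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out data
print(mean_absolute_error(y_test, model.predict(X_test)))
```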

Without delving too deep into the weeds, I’m going to assume that all of our hard work has paid off and we have a model that is ready to be used in production.

Deploying a machine learning model using Docker

Just as training a model requires a degree of planning and forethought, the same is true for deploying it into production. If not done correctly, the model will fail to perform as required and potentially cause further problems down the line. The main challenge comes from the fact that a production environment is not the same as the environment the model was initially developed in. If the production environment does not contain all the additional software that the model depends on, then there’s no chance that the model will work once deployed.

Of course, a simple solution to this problem is to make sure that all the additional software is available in the production environment. Docker is a service designed to do exactly this. Using Docker, all of the required software is installed into an 'image' that can be run on virtually any platform. This makes it easy to take a machine learning model that has been developed in a workspace and run it elsewhere. This approach also helps to reduce the amount of storage space required, as only what is essential for running the model is deployed in the production environment.

All of the management of Docker images can be done within Peak. From the point-of-view of a data scientist, all they need to do is write a script to install their code within a Docker image, and Peak will handle the rest.
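Such a script is typically a Dockerfile. As a minimal sketch (the base image, file names and entry point here are illustrative, not a prescribed Peak layout):

```dockerfile
# Start from a minimal Python base image
FROM python:3.9-slim

# Install only the packages the model needs at runtime
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model code into the image and define how it runs
COPY src/ /app/src/
WORKDIR /app
CMD ["python", "src/predict.py"]
```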

After building an image, it will be stored securely within the Peak platform, where it can be used in other solutions – for example, as part of a workflow or to host a web application. One useful feature of the platform’s image management is the ability to specify additional environment variables within a Docker image. These can be used to configure a single Docker image for different use cases, or even across multiple tenants, whilst ensuring that no further development is required to get it to work in a new environment. Taking a solution that has been tested and developed for one application, and applying it to another, is a great way to build upon existing work and discover new use cases.
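For instance, code inside the image can read its configuration from the environment at runtime, so the same image behaves appropriately for each tenant. A minimal Python sketch, with hypothetical variable names:

```python
import os

# Read tenant-specific settings from the environment; the variable names
# and defaults here are hypothetical examples
tenant = os.getenv("TENANT_NAME", "default")
table = os.getenv("FORECAST_TABLE", "forecasts")
model_version = os.getenv("MODEL_VERSION", "latest")

print(f"Running model {model_version} for tenant {tenant}, writing to {table}")
```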

Understanding model outputs using hosted web apps

After a model has been trained and deployed, the next step is to understand its predictive outputs and communicate them back to the relevant project stakeholders. Part of the work of a data scientist involves developing tools that make these results interpretable by subject matter experts, who understand the business needs of the project but not necessarily the technical ins-and-outs. This communication is crucial: if it’s not possible to convince others of the business value of the project, then it’s almost certainly destined for failure!

When attempting to present complex information, one of the best ways to ensure the right message gets across is to allow users to explore and visualize it for themselves. In recent years, the growing demand for data science has led to the development of software designed specifically for interactive data visualization.

Due to their popularity within the data science community, frameworks have been developed in both Python and R that allow data scientists to create interactive data visualizations. Perhaps the most widespread of these are Dash (Python) and Shiny (R), both of which can be used to create web applications hosted within the Peak platform.

The advantage of using one of these frameworks for visualizing outputs is that it allows code that has already been written for exploratory data analysis to be used again. There’s no point in spending additional time and effort to recreate something in another language if the work has already been done.

One of the benefits of using a web-based platform like Peak to manage the entire data science project life cycle is that it means each component can interact seamlessly with one another. For example, a workflow could be scheduled to forecast upcoming demand at the start of every week. If these forecasts are then stored within a database, a web app could be used to query this database and present an interactive summary to an end user. When it comes to managing complex software projects, one of the major pain points is making sure that each component can interact correctly with each other. Managing them all using Peak makes this effortless.
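As a sketch, a minimal Dash app along these lines might read the stored forecasts and plot them; the connection string, table and column names below are all hypothetical.

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Query the stored forecasts (the connection string and table are hypothetical)
forecasts = pd.read_sql(
    "SELECT week, predicted_demand FROM forecasts",
    "postgresql://user:password@host/warehouse",
)

# Serve a simple interactive line chart of the weekly forecasts
app = Dash(__name__)
app.layout = html.Div([
    html.H2("Weekly demand forecast"),
    dcc.Graph(figure=px.line(forecasts, x="week", y="predicted_demand")),
])

if __name__ == "__main__":
    app.run_server(debug=True)
```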

Real-time decision making

When it comes to delivering accurate, state-of-the-art predictions, machine learning models really come into their own when those predictions are needed in real time. A familiar example of this is Google’s search bar, which predicts what you are searching for and suggests subsequent text as you type. The need for rapid decision making can be found across many business contexts: retailers may want to recommend items to a customer based on what they currently have in their basket, warehouses may need to redistribute their stock based on the available supply, and manufacturers may need to change their production plans due to a change in upcoming demand.

Whilst the business goals may differ, they all share a common requirement: the decision needs to be made at a future point in time, using information that may not be known at present. To achieve this, a model needs to be deployed in a way that allows other systems to communicate with it, so that input data can be sent to the model and output data can be retrieved.

Peak’s platform enables this by giving users the ability to deploy a model behind a REST API, so that it can be integrated with other systems that communicate over the web. Once an API has been deployed, Peak registers it as a URL endpoint which can be accessed by external services, provided they are able to authenticate themselves.

Before releasing an API to the outside world, however, it is a good idea to test its performance. This can be done within the platform by sending example input data in the form of a JSON payload. Once this has been processed, the API will return a response containing the model’s prediction. If, for any reason, this is not the expected output, then the data logged by the API can be viewed within Peak to provide additional insight into the error.
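From a client’s point of view, such a test request might look like the following sketch; the endpoint URL, token and payload fields are hypothetical stand-ins.

```python
import requests

# The endpoint URL, token and payload fields below are hypothetical examples
url = "https://api.example-tenant.peak/models/demand-forecast/predict"
headers = {"Authorization": "Bearer <api-token>"}
payload = {"store_id": 42, "sku": "A1234", "week": "2021-07-19"}

response = requests.post(url, json=payload, headers=headers)

# Check the status code and inspect the model's prediction
print(response.status_code)
print(response.json())
```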

To make sure that an API meets all of its performance requirements, Peak also manages additional infrastructure behind the scenes. For example, during periods of high activity, Peak will deploy multiple instances of the API to handle the increased load and keep response times as low as possible. If a model is retrained and the API needs to be updated, Peak will manage the redeployment without interrupting the live service. This ensures that the API is always performing as it should, without requiring any additional work from data scientists or software engineers.

There are many ways in which data science can provide useful outputs for a business. To get the most value from these outputs, though, they need to be accessible to stakeholders from different parts of an organization.

Peak’s platform enables this by making it quick and easy for data scientists to productionize their outputs, making use of state-of-the-art cloud infrastructure. In addition, these solutions can be tested, monitored and redeployed to ensure that they continue to deliver reliable and timely predictions from the moment they first go live.
