Peak AI engineering update: 2024 so far

By Stuart Davie on September 3, 2024 - 5 Minute Read

Our AI Engineering team has accomplished a lot in recent months, increasing the size of our team and accelerating the roadmap of our AI products.

You may have seen our recent Peak product update — now it’s our team’s turn! We’re eager to share some of the exciting developments we’ve made so far this year, and provide insight into what’s coming next.

Co:Driver

The star of our recent releases is Co:Driver, which uses generative AI to drive a leap forward in how customers interact with Peak’s AI products. We wanted Co:Driver to cover three distinct use cases on release: model output explainability (e.g., why is my recommended safety stock X?), domain understanding (e.g., what is the difference between cycle service level and fill rate?) and analytics support (e.g., where is product X overstocked, and where is it understocked?).

To maximize the performance of AI systems, it’s critical to use the right tool for the right job. Sometimes this means building an AI system that can leverage the strengths of an ensemble of approaches, delivering the right results for the right problems. For narrow, well-defined spaces like model explainability, tight control over the presentation of critical information is crucial for adoption and trust. Knowing this, our new release uses deterministic, templated outputs as a foundation. We expect this to satisfy around 80% of customer needs, while also surfacing when a deeper look into the data is warranted.
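As a rough illustration of the pattern (not Peak’s actual templates; the wording and field names here are invented), a deterministic explanation can be as simple as a fixed template rendered from model outputs:

```python
# A minimal sketch of deterministic, templated explainability.
# The template wording and fields are hypothetical, for illustration only.
SAFETY_STOCK_TEMPLATE = (
    "Recommended safety stock for {sku} is {units} units, because demand "
    "variability over the {lead_time}-day lead time is {demand_std:.1f} "
    "units/day and the target cycle service level is {csl:.0%}."
)

def explain_safety_stock(recommendation: dict) -> str:
    """Render a model recommendation into a fixed, auditable explanation."""
    return SAFETY_STOCK_TEMPLATE.format(**recommendation)

print(explain_safety_stock({
    "sku": "SKU-1042", "units": 180, "lead_time": 14,
    "demand_std": 9.5, "csl": 0.95,
}))
```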

For broader inquiries relating to domain understanding, we are using Peak’s substantial knowledge base, developed over years of delivering products and custom solutions in these domains, to augment foundational LLMs. Under the hood, we have designed the application to support a range of foundation LLMs, including the suite available on Amazon Bedrock and OpenAI, through straightforward configuration.
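In practice, that configuration layer can be as thin as a small factory over interchangeable chat-model wrappers. The sketch below uses LangChain’s OpenAI and Bedrock integrations purely as an illustration; the config shape and model IDs are our assumptions, not Peak’s actual setup:

```python
# Sketch: choose a foundation model via configuration rather than code.
# Assumes the langchain-openai and langchain-aws packages; the config
# format and model IDs are illustrative.
from langchain_aws import ChatBedrock
from langchain_openai import ChatOpenAI

def build_llm(config: dict):
    if config["provider"] == "openai":
        return ChatOpenAI(model=config["model"], temperature=0)
    if config["provider"] == "bedrock":
        return ChatBedrock(model_id=config["model"], model_kwargs={"temperature": 0})
    raise ValueError(f"Unknown provider: {config['provider']}")

# Swapping foundation models is then a one-line configuration change.
llm = build_llm({"provider": "bedrock",
                 "model": "anthropic.claude-3-sonnet-20240229-v1:0"})
```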

The state of the art in this space is fast moving, in both performance and cost, and Peak believes it wise to build with the expectation of needing to change foundation models from time to time. Our initial release achieves suitable results through a relatively straightforward RAG implementation, enabling Co:Driver to ground its responses in this Peak-specific knowledge, but we have also been exploring fine-tuning to see if we can push model performance even further.
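In outline, a RAG flow of this kind retrieves relevant knowledge-base passages and constrains the model to answer from them. This is a generic sketch of the technique, not Peak’s implementation; retriever and llm stand for any LangChain-style retriever and chat model:

```python
# Sketch of a basic RAG step: retrieve knowledge-base passages relevant to
# the question, then ground the LLM's answer in that context.
def answer_with_rag(question: str, retriever, llm) -> str:
    docs = retriever.get_relevant_documents(question)  # vector search
    context = "\n\n".join(doc.page_content for doc in docs)
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.invoke(prompt).content
```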

For complex analytical tasks, we’ve implemented an agentic workflow, mostly with LangChain. Agentic workflows define and execute steps in sequence, allowing for multi-step reasoning and data analysis. Our system breaks user queries down into actionable steps, and leverages tools like a restricted SQL engine to navigate our data model and provide answers on demand.
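As a rough sketch of this pattern, here is a minimal agent with a single read-only SQL tool, using LangChain’s classic agent API. The guard logic, database and model choice are all illustrative assumptions, not Peak’s implementation:

```python
# Sketch: an agent that decomposes a question and answers it via a
# restricted (read-only) SQL tool. Everything here is illustrative.
import sqlite3

from langchain.agents import AgentType, Tool, initialize_agent
from langchain_openai import ChatOpenAI

def run_readonly_sql(query: str) -> str:
    """Reject anything that is not a plain SELECT before executing."""
    if not query.strip().lower().startswith("select"):
        return "Rejected: only SELECT statements are allowed."
    with sqlite3.connect("inventory.db") as conn:  # stand-in for the real data model
        return str(conn.execute(query).fetchall())

sql_tool = Tool(
    name="sql_query",
    func=run_readonly_sql,
    description="Run a read-only SQL query against the inventory data model.",
)

agent = initialize_agent(
    tools=[sql_tool],
    llm=ChatOpenAI(model="gpt-4o", temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
# agent.run("Where is product X overstocked, and where is it understocked?")
```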

The development of viable agentic workflows for solving arbitrary tasks represents one of the most intriguing outcomes of the generative AI boom, and these are likely to completely transform how problems are solved in some industries.

Co:Driver’s scope is broad, and it was no surprise that our tests using a single foundation model to solve all of these problems directly were unsuccessful. To manage performance over such a vast space of potential queries, we’ve developed a classification model that routes questions to appropriate response models (or returns templated responses when necessary).
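Conceptually, the routing layer is a small classifier sitting in front of the three response paths described above. A simplified sketch (the labels mirror our use cases; the classifier interface is an assumption):

```python
# Sketch: classify an incoming question, then dispatch to the matching
# response path. The Route labels mirror Co:Driver's three use cases.
from enum import Enum

class Route(Enum):
    EXPLAINABILITY = "explainability"  # deterministic, templated answers
    DOMAIN = "domain"                  # RAG over the knowledge base
    ANALYTICS = "analytics"            # agentic SQL workflow

def route_query(question: str, classifier) -> Route:
    """classifier is any text classifier returning one of the Route labels."""
    return Route(classifier.predict(question))
```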

This approach helps us balance the need for accuracy with the flexibility to handle a wide range of user inputs. We’ve also integrated Langfuse for performance monitoring and iteration, making it easier to see how our AI system is performing in real-world scenarios.
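Instrumenting for that kind of visibility can be light-touch. Assuming the Langfuse v2 Python SDK (our assumption; the handler and its arguments are illustrative stand-ins), a single decorator records a trace for each query:

```python
# Sketch: trace a query handler so inputs, outputs and latency show up in
# Langfuse. Assumes the langfuse v2 SDK; route_query is the sketch above,
# and classifier/handlers are illustrative stand-ins.
from langfuse.decorators import observe

@observe()
def handle_user_query(question: str, classifier, handlers) -> str:
    route = route_query(question, classifier)
    return handlers[route](question)
```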

Pricing AI

Our Pricing AI product got off to a fast start in 2024, with significant releases for our Quote Pricing, List Price Optimizer, Markdown and Promotions modules. We also re-architected substantial parts of these modules to make the most of powerful new Peak platform features relating to how we manage APIs and product front ends.

In the manufacturing and B2B space, we have incorporated gradient boosted tree models into parts of our Quote Pricing module as an optional configuration. Gradient boosted tree implementations, such as XGBoost and LightGBM, can be particularly attractive for B2B price optimization, for a number of reasons:

  • Manufacturing and B2B price data sets are often significantly smaller than retail datasets, and tree-based machine learning models perform exceptionally well in ‘small’-data regimes.
  • Manufacturing and B2B price data sets can vary significantly in the types and attributes of data they capture, which means optimal model features can vary significantly between businesses. However, tree-based algorithms are flexible and can accommodate such differences out of the box, providing a robust baseline that we can tune during deployment.
  • Tree-based models are especially effective at capturing the complex, non-linear relationships in price optimization without the need to explicitly account for confounding variables.
  • Modern gradient-boosting algorithms make it easy to incorporate monotonic constraints: restrictions applied to models to enforce a specific relationship between an input feature and the predicted outcome. This is incredibly useful for price modeling, because the nature of these relationships is usually known in advance (and usually follows the Law of Demand); see the sketch after this list.
  • From a systems-performance point of view, these models can also speed up the computation step: tree construction parallelizes well, so the available hardware can be used more efficiently.
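For example, with XGBoost the Law of Demand can be declared in a single argument: a constraint of -1 on the price feature forces predicted demand to be non-increasing in price. A minimal sketch on synthetic data (the features and numbers are invented):

```python
# Sketch: enforce a monotonic decreasing relationship between price and
# predicted demand via XGBoost's monotone_constraints. Data is synthetic.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
price = rng.uniform(10, 50, 500)
promo = rng.integers(0, 2, 500).astype(float)
X = np.column_stack([price, promo])
y = 200 - 3 * price + 20 * promo + rng.normal(0, 5, 500)  # demand

model = XGBRegressor(
    n_estimators=200,
    monotone_constraints=(-1, 0),  # demand decreasing in price; promo unconstrained
)
model.fit(X, y)
```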

We wrapped up the half with the release of our List Price Optimizer module on Press. This was an amazing effort from the team, progressing from design to release in as little as three months. One of the key challenges in trying to optimize list prices in a B2B business is the ubiquity of set-and-forget pricing strategies in that domain. This results in limited historical price data variance, making it difficult to accurately determine price elasticity or extract meaningful pricing signals.

Normally, our Professional Services team will support configuration of our pricing solutions to ensure we can adequately sample the price-demand surface of our customers’ products, but we wanted the new release of List Price Optimizer to include default sampling approaches out of the box. Unlike a classical multi-armed bandit problem, here we are not trying to optimize our exploration to maximize value; rather, we are trying to minimize our uncertainty in the price-demand curve, subject to business guardrail and risk-appetite constraints.

Pricing is a very sensitive part of a business, and it’s critical for adoption that pricing teams are comfortable with exploration strategies. To this end, and to get started quickly, we have included the ability to sample from defined distributions, including the Kumaraswamy and trapezoidal distributions, because they are easy to parameterize to reflect exploration behaviors that are both intuitive and acceptable to our end users.
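As an illustration, the Kumaraswamy distribution can be sampled with a two-line inverse-CDF transform, and its two shape parameters directly control where exploration concentrates. The parameter values and guardrail band below are invented:

```python
# Sketch: draw price-exploration multipliers from a Kumaraswamy(a, b)
# distribution via inverse-transform sampling. Parameters are illustrative.
import numpy as np

def sample_kumaraswamy(a: float, b: float, size: int, seed: int = 0) -> np.ndarray:
    """Inverse CDF: x = (1 - (1 - u)^(1/b))^(1/a) for u ~ Uniform(0, 1)."""
    u = np.random.default_rng(seed).uniform(size=size)
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

# Map the [0, 1] draws onto a guardrailed band, e.g. +/-5% around list price.
draws = sample_kumaraswamy(a=2.0, b=2.0, size=1000)
price_multipliers = 0.95 + 0.10 * draws
```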

Outside of manufacturing and B2B pricing, we have ramped an additional squad onto our retail pricing modules, recognizing the unique challenges and opportunities in this space. This team will be instrumental in driving innovation and accelerating our roadmap. One of the first focus areas for this squad was to re-architect parts of Markdown and Promotions’ backend to harness more of Snowflake’s improved Snowpark capabilities, granting our customers the ability to run more of our solution within their own stack if they prefer.
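Concretely, running inside the customer’s stack means expressing transformations through the Snowpark DataFrame API so they execute in Snowflake rather than on our side. A generic sketch (the connection parameters and all table and column names are invented):

```python
# Sketch: push a markdown-candidate query down into Snowflake via Snowpark.
# connection_parameters and all table/column names are illustrative.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import avg, col

session = Session.builder.configs(connection_parameters).create()

markdown_candidates = (
    session.table("SALES_HISTORY")
    .filter(col("WEEKS_OF_COVER") > 12)  # slow-moving stock
    .group_by("PRODUCT_ID")
    .agg(avg("SELL_THROUGH_RATE").alias("AVG_SELL_THROUGH"))
)
markdown_candidates.write.save_as_table("MARKDOWN_CANDIDATES", mode="overwrite")
```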

Snowpark has come a long way in a few short years, and it was impressive to see how much of the application backend could be rearchitected to leverage it. Unfortunately (though perhaps not surprisingly), we found the tools available don’t yet support the full range of packages and libraries we require for solving the sorts of metaheuristic, multi-objective optimization problems common in this space, but we have a good partnership with Snowflake and are keeping a keen eye on their new releases.

Another big achievement this year involved bringing our Pricing and Customer products closer together. We have long had the vision of interconnected AI — where businesses can be optimized holistically, and even AI point solutions can perform more effectively thanks to contextual awareness of their place within a broader AI system. This latest release represents a significant stride towards this vision, unifying the underlying retail pricing and customer intelligence data models, and allowing access to core customer intelligence functionality from within our Pricing AI product.

This will allow merchandising teams to collaborate more effectively with marketing teams, leveraging shared insights to drive targeted strategies. For instance, when marketing initiates a reactivation campaign, merchandising can swiftly design and implement promotions tailored to appeal specifically to the characteristics of the identified population of lapsed customers. Conversely, during a merchandising-led close-out event, marketing gains immediate access to data on customers who are in-market for those products. This allows for the deployment of targeted recommendations through optimal channels, enhancing awareness and driving conversion.

Inventory AI

Similar to pricing, we have scaled up our investment in our inventory product, adding an extra engineering squad to help accelerate the roadmap. In addition, we have added dedicated R&D capacity for deeper research, to help make sure we are laying the foundations needed to achieve our vision for 2025 and beyond.

We have a few really exciting projects here, supported by collaborations with the University of Manchester and University College London, including a generic framework for simulating arbitrary supply chains that we call the Multi-Echelon Supply SImulation AlgoritHm (Messiah), which we will share more about soon. Two things we can share more detail on now are the introduction of fill rate to our Dynamic Inventory module, and how we have made that module scale much more gracefully.

As mentioned, we made a significant enhancement to our Dynamic Inventory module by introducing fill rate as an optional service level metric, configurable out of the box alongside our existing cycle service level. This addition represents a more nuanced approach to measuring inventory policy performance, catering to the diverse needs of our customers.

Cycle service level, our default metric, measures the probability of avoiding stockouts between order replenishments. However, for certain suppliers, particularly those in the consumer packaged goods space, fill rate is a more relevant and useful metric. Fill rate quantifies the proportion of true demand fulfilled between order cycles, providing a different perspective on inventory performance.

The introduction of fill rate optimization addresses a critical issue we observed: when optimizing for cycle service level, the system often recommends holding excess inventory, resulting in a higher-than-targeted fill rate. This discrepancy can lead to inefficiencies and increased carrying costs for businesses that primarily track fill rate.
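The gap between the two metrics is easy to see in a small simulation: for the same stock level, the fraction of demanded units served (fill rate) typically exceeds the fraction of cycles with no stockout (cycle service level). A sketch with synthetic demand, not our optimization code:

```python
# Sketch: cycle service level vs fill rate for the same inventory policy,
# on synthetic Poisson demand. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
demand = rng.poisson(100, size=10_000)  # demand per replenishment cycle
stock = 110                             # units available each cycle

csl = np.mean(demand <= stock)  # share of cycles with no stockout
fill_rate = np.minimum(demand, stock).sum() / demand.sum()  # share of units served

print(f"cycle service level: {csl:.1%}")
print(f"fill rate:           {fill_rate:.1%}")
```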

To implement this feature, we turned to the literature, particularly the extensive body of work by Guijarro & Babiloni at the Universitat Politècnica de València. While the fill rate calculations proved to be computationally more intensive than cycle service level, we found this trade-off acceptable given the feature’s targeted application in lower SKU-count scenarios. This dual-metric approach allows our customers to choose the service level definition that best aligns with their business model and industry standards, further enhancing the flexibility and precision of our inventory optimization capabilities. It’s another step towards our goal of providing tailored, intelligent solutions that drive real-world business value.

Another fun engineering challenge we have had relates to how our products scale. As Peak has grown, so has the size of our customers, and we now have requirements to be able to generate on the order of a million daily forecasts, and support hundreds of concurrent end business users, all within a single tenant. This growth is a fantastic problem to have, but it has required us to revisit and refine our product architecture.

This was a full team effort, with significant support from both our Professional Services and Platform Engineering teams, and the teams rose to the challenge admirably. We are now more careful about identifying and limiting cases of recomputation, and doing so has also helped us identify areas that can be further parallelized. We are also making better use of the Peak platform’s flexibility, allocating additional compute power to specific workflow bottlenecks to ensure optimal resource utilization.

We utilize a range of time series models, and many of these (particularly statistical ones) can be made incremental, allowing for much more efficient updates as new data becomes available. In the database, we have optimized our range joins and are making better use of cluster keys, improving query performance, enhancing retrieval efficiency and reducing data processing times. Finally, it was our Platform team that unlocked improvements to our front-end scale, rearchitecting through Kubernetes how we serve APIs and web apps, and how we handle resource management in general.
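As a flavor of the incremental modeling point: simple exponential smoothing, for instance, needs only its last level to fold in a new observation, so updates are O(1) rather than a refit over full history. A minimal sketch:

```python
# Sketch: an incremental update for simple exponential smoothing. Only the
# current level is stored, so each new observation is O(1) to incorporate.
class IncrementalSES:
    def __init__(self, alpha: float, initial_level: float):
        self.alpha = alpha          # smoothing weight on new observations
        self.level = initial_level  # running smoothed level

    def update(self, y: float) -> None:
        """Fold in one new observation without touching historical data."""
        self.level = self.alpha * y + (1 - self.alpha) * self.level

    def forecast(self) -> float:
        """One-step-ahead forecast is just the current level."""
        return self.level
```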

The cumulative effect of these efforts has been substantial, with Dynamic Inventory data processing and optimization times more than halved. The improvements to the Peak platform naturally benefit all our products and customers, and have roughly halved our infrastructure costs for these services as well. This positions us well to handle this higher level of scale and sets a solid foundation for future growth.

Around the team

Beyond our product delivery milestones, the team has been actively engaged in a variety of initiatives that underscore our commitment to improving representation and diversity in tech, and fostering the next generation of AI talent. I firmly believe that to make meaningful progress in this area, we need to raise awareness of the exciting opportunities a tech career offers, help provide clear pathways to success, and start these efforts as early as possible in people’s careers or education journeys.

One highlight so far was the Peak hackathon, which we successfully hosted for the third consecutive year. This event brought together 48 bright minds from the University of Manchester and Edge Hill University, providing them with hands-on experience in GitHub skills, data analysis and forecasting. It’s always great to see the creativity and innovation that emerges from these events, and I’d like to extend special congratulations to Cyrus, Ashley and Samar for their winning contributions, and special thanks to Kira and Simona for coordinating such a fantastic event. I’d also like to give a shout-out to our Data Science mentoring scheme, which is designed to help students build confidence and take control of their own progress in data science and AI.

On the academic front, we’re currently hosting five MSc students (from three universities) who are working on cutting-edge projects, ranging from global optimization methods for inventory policies to active learning for price discovery. This is our eighth year hosting MSc students from the local community, and we’re proud to provide this opportunity for students to develop real-world data science experience. We also collaborated with UCL PhD students from the Centre for Doctoral Training on a group project, using Messiah to explore meta reinforcement learning for supply chain optimization. This went really well, and we’re exploring more exciting ways to collaborate with UCL moving forward.

These initiatives, while separate from our core product development, are integral to our mission of making Peak a company that everyone loves being part of.

Conclusion

It has been a busy year so far, and beyond the delivery highlights listed above, improvements to our team structure and our processes have laid the foundations for a really exciting future. If you want more information about any of the above, don’t hesitate to reach out.

Ready for your next challenge?

If you’re passionate about AI, data science and technology, and are interested in being part of our team, you can apply via the link below.
