Finetune Overview

Fine-tuning AI Agents

August 2024

Recent advancements in AI agents have driven workflow automation across industries. At Finetune, we expect this trend to continue and, within the next decade, for there to be more AI agents than humans.

Finetune is the easiest way for developers to fine-tune AI agents. LlamaIndex, CrewAI, LangChain, AutoGen, Haystack, and other agent frameworks will all be supported. NVIDIA NIMs, Anthropic, OpenAI, Mistral, and other model providers will be integrated from day one.

Developers can enter a fine-tuning session to create synthetic users that reflect their customers, run their agent against those users, and receive session reports along with a weighted execution graph, all available in a Virtual Private Cloud (VPC). After each run, Finetune will judge the execution, assign a weight based on the outcome, and store it in the graph.
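To make this concrete, here is a minimal Python sketch of how a weighted execution graph and a fine-tuning session could fit together. The class names, the agent interface, and the judging rule are illustrative assumptions on our part, not Finetune's published API.

```python
from dataclasses import dataclass, field


@dataclass
class ExecutionNode:
    """One recorded agent execution and its judged weight."""
    prompt: str
    actions: list[str]
    outcome: str
    weight: float = 0.0


@dataclass
class WeightedExecutionGraph:
    """Stores judged executions so later runs can retrieve them as context."""
    nodes: list[ExecutionNode] = field(default_factory=list)

    def add(self, node: ExecutionNode) -> None:
        self.nodes.append(node)

    def low_weight(self, threshold: float = 0.0) -> list[ExecutionNode]:
        """Executions judged poorly; candidates for developer feedback."""
        return [n for n in self.nodes if n.weight < threshold]


def judge(outcome: str) -> float:
    """Toy judge: reward successful outcomes, penalize failures."""
    return 1.0 if outcome == "success" else -1.0


def run_fine_tuning_session(agent, synthetic_users, graph: WeightedExecutionGraph) -> None:
    """Run the agent against each synthetic user and store the judged execution."""
    for user_prompt in synthetic_users:
        actions, outcome = agent.run(user_prompt)  # assumed agent interface
        graph.add(ExecutionNode(prompt=user_prompt, actions=actions,
                                outcome=outcome, weight=judge(outcome)))
```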

From there, developers can enter a feedback session. Finetune's agent will identify which executions received a low weight during the fine-tuning session and present them to the developer for feedback, which Finetune then uses to update the weighted execution graph.
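Continuing the sketch above, a feedback session might look something like the following; the collect_feedback callback and the re-weighting rule are hypothetical placeholders.

```python
def feedback_session(graph: WeightedExecutionGraph, collect_feedback) -> None:
    """Surface low-weight executions and fold the developer's verdict back into the graph."""
    for node in graph.low_weight():
        verdict = collect_feedback(node)  # e.g. returns "acceptable" or "bad"
        # Hypothetical re-weighting rule: promote accepted runs, demote the rest.
        node.weight = 0.5 if verdict == "acceptable" else -2.0
```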

Finetune will also let developers easily deploy their execution graph to a VPC, with root access to perform CRUD operations while their agents run in production. Each time the agent runs, it first queries the weighted execution graph to retrieve similar, positively weighted executions, which serve as context for the model when generating its next action list.
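As a rough illustration of that retrieval step, the sketch below (building on the graph defined earlier) ranks stored executions against the incoming prompt and returns the top positively weighted ones. The token-overlap similarity is purely a placeholder; the document does not specify how Finetune actually matches executions.

```python
def similarity(a: str, b: str) -> float:
    """Crude token-overlap score between two prompts (placeholder metric)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)


def retrieve_context(graph: WeightedExecutionGraph, prompt: str, k: int = 3) -> list[ExecutionNode]:
    """Return the k positively weighted executions most similar to the prompt."""
    positive = [n for n in graph.nodes if n.weight > 0]
    positive.sort(key=lambda n: similarity(n.prompt, prompt), reverse=True)
    return positive[:k]
```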

We expect Finetune to be the state-of-the-art (SOTA) service for making agents that span enterprise-level chains reliable in production.

The Finetune Team 🦙🦾

Contact us

Request a feature