Model Fine-tuning
Once the hyperparameters for fine-tuning are set, Texti.ai handles all remaining steps automatically through a streamlined process: preparing your data, training the model, and displaying evaluation metrics for the fine-tuned model.
Here's a breakdown of what happens during these steps:
This step involves preparing your data for fine-tuning, including splitting it into training and validation sets and uploading the files to OpenAI's servers.
Data splitting: The uploaded data is split into separate training and validation sets, using an 80/20 split unless you set the ratio manually, to ensure that the model is trained on a representative sample of the data. For classification tasks, the training and validation datasets are automatically balanced across categories, and the category that is ultimately chosen is the one with the most votes.
Training file upload: As the name suggests, this is where the training data is uploaded to the model provider's servers (OpenAI), or to our servers if you choose an open-source model.
Validation file upload: Similarly, the validation data is uploaded to the AI model provider's servers (see the sketch below).
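To make the split-and-upload flow concrete, here is a minimal sketch of what this step corresponds to under the hood, assuming the chat-style JSONL format and the official openai Python client. The file names, example records, and plain random split are illustrative assumptions, not Texti.ai's actual implementation (for classification tasks the platform additionally balances the categories).

```python
import json
import random

from openai import OpenAI  # official OpenAI Python client (assumed here)

# Hypothetical examples prepared from your uploaded data; the exact JSONL
# schema depends on the base model you selected.
examples = [
    {"messages": [{"role": "user", "content": "I loved this product"},
                  {"role": "assistant", "content": "positive"}]},
    {"messages": [{"role": "user", "content": "Terrible experience"},
                  {"role": "assistant", "content": "negative"}]},
    # ... more examples ...
]

# 80/20 train/validation split (the default when no ratio is set manually).
# A plain random split is shown for brevity.
random.shuffle(examples)
cut = int(len(examples) * 0.8)
train, valid = examples[:cut], examples[cut:]

def write_jsonl(path, rows):
    """Write one JSON object per line, the format expected for fine-tuning files."""
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

write_jsonl("train.jsonl", train)
write_jsonl("valid.jsonl", valid)

# Upload both files to OpenAI's servers with purpose="fine-tune".
client = OpenAI()  # reads OPENAI_API_KEY from the environment
train_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
valid_file = client.files.create(file=open("valid.jsonl", "rb"), purpose="fine-tune")
print(train_file.id, valid_file.id)
```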
This step involves fine-tuning the selected base model on the task you have chosen.
Model training: The platform trains the model on the data you have provided, using the hyperparameters you selected in the previous step (see the sketch after this list).
Deployment: Once the model is trained, it is deployed to the platform so that you can begin using it for inference.
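Texti.ai launches and monitors the training job and deploys the resulting model for you; the sketch below shows roughly what this corresponds to on OpenAI's side. The base model name, the n_epochs value, and the file IDs are illustrative assumptions, not values the platform necessarily uses.

```python
import time

from openai import OpenAI

client = OpenAI()

# Start a fine-tuning job using the previously uploaded files. The base model
# and hyperparameters shown here are placeholders; in Texti.ai they come from
# the choices you made in the hyperparameter step.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",            # assumed base model
    training_file="file-abc123",      # ID returned by the training file upload
    validation_file="file-def456",    # ID returned by the validation file upload
    hyperparameters={"n_epochs": 3},  # example hyperparameter
)

# Poll until the job reaches a terminal state.
while True:
    job = client.fine_tuning.jobs.retrieve(job.id)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)

# Once the job succeeds, the fine-tuned model is available for inference
# like any other model.
if job.status == "succeeded":
    response = client.chat.completions.create(
        model=job.fine_tuned_model,
        messages=[{"role": "user", "content": "I loved this product"}],
    )
    print(response.choices[0].message.content)
```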
This step involves evaluating the performance of your model using various metrics.
Evaluation metrics: The platform provides several metrics to help you evaluate the performance of your fine-tuned model, including accuracy, precision, recall, and F1 score. These metrics can be used to determine the effectiveness of your model for the task at hand.
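The platform computes these metrics for you; as a reference for how they are defined, here is a minimal sketch that derives accuracy, precision, recall, and F1 score from a model's predictions on a validation set using scikit-learn. The labels and predictions are made-up values for illustration.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical ground-truth labels from the validation set and the
# fine-tuned model's predictions for the same examples.
y_true = ["positive", "negative", "positive", "negative", "positive"]
y_pred = ["positive", "negative", "negative", "negative", "positive"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)

print(f"accuracy:  {accuracy:.2f}")
print(f"precision: {precision:.2f}")
print(f"recall:    {recall:.2f}")
print(f"f1 score:  {f1:.2f}")
```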