Model Fine-tuning

Once the hyperparameters are set, Texti.ai handles the remaining fine-tuning steps automatically through a streamlined process: preparing the data, training the model, and displaying evaluation metrics for the fine-tuned model.

Here's a breakdown of what happens during these steps:

Data Preparation

This step prepares your data for fine-tuning, splitting it into training and validation sets and uploading the files to OpenAI's servers.

  1. Data splitting: The uploaded data is split into separate training and validation sets, using an 80/20 split unless you set the ratio manually, so that the model is trained and validated on representative samples. For classification tasks, the training and test datasets are automatically balanced, and the category that is finally chosen is the one with the most votes. Read more in Automatic Dataset Balancing For Classification Tasks. (A splitting sketch follows this list.)

  2. Training file upload: As the name suggests, the training data is uploaded to the model provider's servers (OpenAI), or to our servers if you choose an open-source model.

  3. Validation file upload: The validation data is uploaded to the AI model provider's servers in the same way. (Both upload calls are sketched after this list.)
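
Below is a minimal sketch of the splitting step, written under a few assumptions: each record is a JSON line with a class label field, and the balancing is approximated by a simple stratified 80/20 split. The field names and file paths are illustrative placeholders, not Texti.ai's internal implementation; the platform's actual balancing logic is described in Automatic Dataset Balancing For Classification Tasks.

```python
# Illustrative stratified 80/20 split; field names and paths are placeholders.
import json
import random
from collections import defaultdict

def stratified_split(records, label_key="completion", train_frac=0.8, seed=42):
    """Split records 80/20 while keeping each class's proportion roughly equal."""
    random.seed(seed)
    by_label = defaultdict(list)
    for rec in records:
        by_label[rec[label_key]].append(rec)
    train, valid = [], []
    for group in by_label.values():
        random.shuffle(group)
        cut = int(len(group) * train_frac)
        train.extend(group[:cut])
        valid.extend(group[cut:])
    return train, valid

def write_jsonl(records, path):
    """Write records in the JSONL format used for fine-tuning uploads."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

records = [json.loads(line) for line in open("dataset.jsonl")]
train_set, valid_set = stratified_split(records)
write_jsonl(train_set, "train.jsonl")
write_jsonl(valid_set, "valid.jsonl")
```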
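
When the provider is OpenAI, each upload step maps onto a single API call. The sketch below assumes the official openai Python package (v1 or later) and an OPENAI_API_KEY in the environment; Texti.ai runs the equivalent calls for you.

```python
# Illustrative upload of the training and validation files to OpenAI.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)
validation_file = client.files.create(
    file=open("valid.jsonl", "rb"),
    purpose="fine-tune",
)

print("training file id:", training_file.id)
print("validation file id:", validation_file.id)
```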

Model Fine-tuning

This step fine-tunes the selected base model for the chosen task.

  1. Model training: The platform trains the model on the data you provided, using the hyperparameters you selected in the previous step. (A job-creation sketch follows this list.)

  2. Deployment: Once the model is trained, it is deployed to the platform so that you can begin using it for inference. (An inference sketch follows this list.)
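
For reference, starting a fine-tuning job against OpenAI looks roughly like the sketch below. The base model name, file ids, and hyperparameter values are illustrative placeholders for whatever you selected in the previous step; Texti.ai submits the job on your behalf.

```python
# Illustrative fine-tuning job creation with the openai Python package (v1+).
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",            # base model selected for fine-tuning
    training_file="file-abc123",      # id returned when the training file was uploaded
    validation_file="file-def456",    # id returned when the validation file was uploaded
    hyperparameters={"n_epochs": 3},  # example value; use the hyperparameters you chose
)

print("job id:", job.id, "status:", job.status)
```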
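
Once the job finishes, the fine-tuned model can be called like any other model. A minimal sketch, assuming an OpenAI-hosted model and a placeholder fine-tuned model id:

```python
# Illustrative inference call against a fine-tuned model.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:my-org::abc123",  # placeholder; use the id shown on the platform
    messages=[
        {"role": "user", "content": "Classify the sentiment: 'The battery died after a day.'"}
    ],
)
print(response.choices[0].message.content)
```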

Model Evaluation

This step involves evaluating the performance of your model using various metrics.

  • Evaluation metrics: The platform reports several metrics to help you evaluate the performance of your fine-tuned model, including accuracy, precision, recall, and F1 score. These metrics indicate how effective the model is for the task at hand, as sketched below.
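
As a reference for how these numbers are typically computed, the sketch below uses scikit-learn on a toy set of gold labels and model predictions; the labels shown are illustrative only.

```python
# Illustrative computation of accuracy, precision, recall, and F1 with scikit-learn.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = ["positive", "negative", "negative", "positive"]  # gold labels (toy data)
y_pred = ["positive", "negative", "positive", "positive"]  # model predictions (toy data)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, pos_label="positive"))
print("recall   :", recall_score(y_true, y_pred, pos_label="positive"))
print("F1       :", f1_score(y_true, y_pred, pos_label="positive"))
```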
