Model Selection

You can currently use the following models to build an AI Co-Pilot with In-Context Learning (a usage sketch follows the list):

  1. GPT-3.5-turbo (16K context window) - The most capable model in the GPT-3.5 series from OpenAI.

  2. GPT-4 (32K context window) - The most capable GPT model series to date. It can handle complex tasks, but is slower to respond.

  3. LLaMA-65B (8K context window) - This model currently has limited availability. LLaMA-65B is part of a more efficient model series that emphasizes reduced compute demands.
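
The sketch below shows what In-Context Learning with one of the GPT options looks like in practice, assuming the official OpenAI Python SDK. The classification prompt, placeholder API key, and example messages are illustrative assumptions rather than Texti.ai-specific values; LLaMA-65B is not served through the OpenAI API and would require a different endpoint.

```python
# A minimal sketch, assuming the official OpenAI Python SDK (`pip install openai`).
from openai import OpenAI

# See "How to Get an OpenAI API Key & Organization ID" for credentials.
client = OpenAI(api_key="YOUR_API_KEY")

# In-Context Learning: labeled examples live in the prompt itself,
# so no fine-tuning step is required.
messages = [
    {"role": "system", "content": "You are a support co-pilot. Classify each ticket as 'billing' or 'technical'."},
    # Few-shot examples supplied in context (hypothetical data):
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The app crashes when I upload a file."},
    {"role": "assistant", "content": "technical"},
    # The new input to classify:
    {"role": "user", "content": "My invoice shows the wrong plan."},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",  # or "gpt-4-32k" for the larger context window
    messages=messages,
    temperature=0,
)
print(response.choices[0].message.content)  # expected: "billing"
```

The larger GPT-4 context window matters when many few-shot examples must fit in the prompt; the trade-off is the slower response time noted above.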
