Texti.ai
In-Context Learning

Hyperparameters

Embeddings:

In the context of Large Language Models, embeddings are mathematical representations of text in a high-dimensional space. They capture the semantic relationships between words and serve as a foundation for understanding and generating text. Currently, Texti supports only OpenAI's "text-embedding-ada-002" model, but many domain-specific open-source models will soon be available.
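As a sketch of how a text is turned into an embedding vector, the snippet below uses the OpenAI Python SDK (v1.x). It assumes an `OPENAI_API_KEY` environment variable is set; the client and method names follow the official SDK, but check them against your installed version.

```python
# Sketch: fetching an embedding with the OpenAI Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment.
import os

EMBEDDING_MODEL = "text-embedding-ada-002"  # the model Texti currently supports

def embed(text: str) -> list[float]:
    """Return the embedding vector for `text` (1536 dimensions for ada-002)."""
    from openai import OpenAI  # imported lazily so the sketch loads without the SDK
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.embeddings.create(model=EMBEDDING_MODEL, input=text)
    return response.data[0].embedding

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    vector = embed("Embeddings map text into a high-dimensional space.")
    print(len(vector))  # ada-002 vectors have 1536 dimensions
```

The network call is guarded so the module can be imported and read without credentials.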

Similarity Measure:

The similarity measure defines how the model gauges the closeness between two pieces of text. Cosine similarity compares the angle between their embedding vectors, ignoring magnitude, so two texts score close to 1 when their embeddings point in the same direction. Currently, Texti supports only "Cosine" as a similarity measure.
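A minimal cosine similarity over two vectors can be written with only the standard library; the toy 3-dimensional vectors below are illustrative, since real ada-002 embeddings have 1536 dimensions.

```python
# Cosine similarity: dot product divided by the product of the norms.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score close to 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # close to 1.0
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0
```

Because the measure ignores vector length, it is well suited to embeddings, where direction rather than magnitude carries the semantic signal.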

Chunk Size (Tokens):

This refers to the number of tokens (words or subwords) processed together as a single block or 'chunk' when documents are converted into embeddings.
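The chunking step can be sketched as follows. A whitespace split stands in for a real tokenizer here as a simplifying assumption; production code would use a subword tokenizer such as tiktoken, so token counts will differ.

```python
# Sketch: splitting a document into fixed-size token chunks before embedding.
def chunk_tokens(text: str, chunk_size: int) -> list[list[str]]:
    # Simplification: one token per whitespace-separated word.
    tokens = text.split()
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

doc = "Chunk size controls how many tokens are embedded together as one block"
chunks = chunk_tokens(doc, chunk_size=5)
print(len(chunks))   # 12 tokens in chunks of 5 -> 3 chunks
print(chunks[0])     # the first 5 tokens
```

Each resulting chunk is then converted into its own embedding, so the chunk size trades off context per embedding against the number of embeddings stored.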

