In-Context Learning

What is In-Context Learning?

In-Context Learning (for building AI Co-Pilots) using the Retrieval Augmented Generation (RAG) pattern enhances large language models (LLMs) by integrating them with external information retrieval systems. Rather than relying exclusively on an LLM's pre-trained knowledge, RAG taps into external sources to provide contextually rich responses. This is crucial because the knowledge within an LLM is confined to a specific training cutoff date. Building AI Co-Pilots with this approach demands considerable technical expertise, given requirements such as creating embeddings, devising chunking strategies, and establishing a vector database.
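
To make these requirements concrete, the sketch below shows a minimal RAG flow: split a document into chunks, embed the chunks, retrieve the ones most similar to a question, and pass them to the model as context. It is an illustrative example, not Texti.ai's implementation: it assumes the official `openai` Python client (version 1.0 or later) with an `OPENAI_API_KEY` set in the environment, the model names `text-embedding-3-small` and `gpt-4o-mini` are placeholder choices, and the in-memory list of vectors stands in for the vector database a production co-pilot would use.

```python
# Minimal RAG sketch (illustrative only): chunk, embed, retrieve, then answer with context.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunk(text: str, size: int = 500) -> list[str]:
    """Naive fixed-size chunking; real systems often split on sentences or tokens."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(texts: list[str]) -> list[list[float]]:
    """Embed a batch of texts with a placeholder embedding model."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def answer(question: str, document: str, top_k: int = 3) -> str:
    """Retrieve the top_k most relevant chunks and ask the model to answer from them."""
    chunks = chunk(document)
    chunk_vectors = embed(chunks)        # in-memory stand-in for a vector database
    query_vector = embed([question])[0]
    ranked = sorted(
        zip(chunks, chunk_vectors),
        key=lambda cv: cosine(query_vector, cv[1]),
        reverse=True,
    )
    context = "\n\n".join(c for c, _ in ranked[:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

In practice, each of the pieces simplified here is a design decision of its own: the chunking strategy, the embedding model, and the vector store all affect how relevant the retrieved context is and, in turn, the quality of the co-pilot's responses.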

Please refer to this mindmap to understand how the responses of foundation models change depending on the chosen customization technique.