In-Context Learning

What is In-Context Learning?

In-Context Learning for building AI Co-Pilots with the Retrieval-Augmented Generation (RAG) pattern enhances large language models (LLMs) by integrating them with external information retrieval systems. Rather than relying exclusively on an LLM's pre-trained knowledge, RAG pulls in external sources to provide contextually rich responses. This matters because an LLM's built-in knowledge stops at its training cutoff date. Building AI Co-Pilots with this approach demands considerable technical expertise, since it requires creating embeddings, devising chunking strategies, and setting up a vector database. Please refer to this mindmap to understand how the responses of foundational models change depending on the chosen customization technique.
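The sketch below is a minimal, self-contained illustration of the RAG flow just described: chunk external documents, embed the chunks, store them in a simple in-memory index, retrieve the most relevant chunks for a query, and prepend them to the prompt. The bag-of-words "embedding", the `retrieve` helper, and the prompt template are assumptions for illustration only; a real pipeline would use an embedding model and a vector database rather than these stand-ins.

```python
# Minimal RAG sketch: chunking, embedding, retrieval, and prompt assembly.
# The embedding here is a toy bag-of-words vector standing in for a real
# embedding model; `retrieve` stands in for a vector-database lookup.
from collections import Counter
import math


def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size chunking strategy (real systems split on document structure)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, index: list[tuple[Counter, str]], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (the vector-store lookup step)."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]


# Index external documents once (embedding + storage).
documents = [
    "Our refund policy allows returns within 30 days of purchase with a receipt.",
    "Support hours are 9am to 5pm on weekdays, excluding public holidays.",
]
index = [(embed(c), c) for doc in documents for c in chunk(doc)]

# At query time, retrieved chunks are prepended to the prompt so the LLM
# answers from up-to-date context rather than only its pre-trained knowledge.
question = "How long do customers have to return an item?"
context = "\n".join(retrieve(question, index))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this augmented prompt would then be sent to the LLM
```

The key design point is that the model itself is unchanged: freshness comes from what is retrieved and placed in the context window at query time, which is why embedding quality, chunking strategy, and the vector store all matter for answer quality.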
