Paul Iusztin (Decoding ML): An End-to-End Framework for Production-Ready LLM Systems by Building Your LLM Twin. From data gathering to productionizing LLMs using LLMOps good practices. Mar 16, 2024
Leonie Monigatti (TDS Archive): Intro to DSPy: Goodbye Prompting, Hello Programming! How the DSPy framework solves the fragility problem in LLM-based applications by replacing prompting with programming and compiling. Feb 27, 2024
Aman Kumar: Host Llama 2 for free using Cloudflare AI. Learn how to host the LLaMA 2 language model for free with Cloudflare Workers. This step-by-step guide covers everything from creating a Clo… Feb 13, 2024
Alexander Sniffin: Building An OpenAI GPT with Your API: A Step-by-Step Guide. On November 6th at OpenAI’s DevDay, a new product called GPTs was announced. These GPTs offer a quick and easy way to build a ChatGPT… Nov 12, 2023
Claire Longo: Meet ClaireBot — a Conversational RAG LLM App with Social Media Context Data. A true vanity project. Oct 6, 2023
Ali Arsanjani: The Generative AI Lifecycle Patterns. Part 2: Maturing GenAI: Patterns, Cycles and Strategies of Increasing Sophistication. Sep 18, 2023
Ahmed Besbes (TDS Archive): Why Your RAG is Not Reliable in a Production Environment. And how you should tune it properly 🛠️. Oct 12, 2023
Sophia Yang, Ph.D. (TDS Archive): Advanced RAG 01: Small-to-Big Retrieval. Child-Parent RecursiveRetriever and Sentence Window Retrieval with LlamaIndex. Nov 4, 2023
Woyera (Artificial Intelligence in Plain English): Is Hosting Your Own LLM Cheaper than OpenAI? Hint: It Could Be. Dive into some simple heuristics to help you make this decision. Sep 12, 2023
Kelvin Lu: Hosting A Text Embedding Model That is Better, Cheaper, and Faster Than OpenAI’s Solution. With a little technical effort, we can get a text embedding model that is superior to the OpenAI solution. Jul 23, 2023
Karan Kakwani: Build and run llama2 LLM locally. P.S.: These instructions are tailored for macOS and have been tested on a Mac with an M1 chip. Aug 15, 2023
Assaf Elovic: The Ultimate Tech Stack for Building AI Products. A Peek Into the Tech Stack That Powered My Viral AI Web Application. Jun 20, 2023
Heiko Hotz (TDS Archive): RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application? The definitive guide for choosing the right method for your use case. Aug 24, 2023
Ahmed Besbes (Level Up Coding): You Can Now Build A Chatbot To Talk To Your Internal Knowledge Base. Designing and implementing this system with Langchain. Aug 6, 2023
Cobus Greyling: Emerging Large Language Model (LLM) Application Architecture. Due to the highly unstructured nature of Large Language Models (LLMs), there are thought and market shifts taking place on how to implement… Aug 11, 2023
Davis Treybig (Innovation Endeavors): The biggest bottleneck for large language model startups is UX. Applied large language model startups have exploded in the past year. Enormous advances in underlying language modeling technology, coupled… Nov 1, 2022
Leonie Monigatti: Understanding LLMOps: Large Language Model Operations. How LLMs are changing the way we build AI-powered products and the landscape of MLOps. May 2, 2023
Jeremy Arancio: Fine-tune an LLM on your personal data: create a “The Lord of the Rings” storyteller. You can now fine-tune an LLM on your own private data, and keep control over your personal information without depending on OpenAI GPT-4. May 23, 2023
Simon Attard: Building a memory layer for GPT using Function Calling. It is now easy to build a memory store using the new GPT function calling feature in conjunction with a vector store such as Chroma. Jun 20, 2023
Cobus Greyling: Flux Is An Open-Source Tool For LLM Prompt Completion, Exploration & Mapping. Flux is described as a power tool, providing the ability to interface with expansive LLM prompts. Jun 9, 2023