Evaluating and Tracking LLM Apps
How to test and track experiments to iteratively develop LLM apps
There’s a better, more scalable way to get feedback about your LLM apps in development
Building LLM apps that combine powerful LLMs with vector databases, agents, and more? If you’re developing with a framework like LlamaIndex or LangChain; an LLM from a provider like OpenAI or Hugging Face; or a vector database from Pinecone or Chroma, this workshop is for you.
Learn how to measure the performance and quality of your LLM-based applications using feedback functions. This workshop explores what feedback functions are, how to use them, and why they can make all the difference.
Join us as Anupam Datta, former Carnegie Mellon professor and TruEra Chief Scientist, gives you an overview of how to quickly improve the LLM apps that you are developing.
During the one-hour workshop, we will cover:
- The challenges with LLM app development today
- What a feedback function is and how it works
- How to put feedback functions to good use as you are developing LLM apps
- Tracking performance and quality across LLM app versions and chains
- A demo of TruLens for LLM Apps, an open-source software toolkit that uses feedback functions
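The core idea behind a feedback function can be sketched in plain Python. The example below is purely illustrative and is not the TruLens API: the function name `keyword_relevance` and its lexical-overlap heuristic are our own stand-ins. The point is the shape of the contract: a feedback function takes an app's input and output and returns a quality score between 0 and 1 that can be logged and compared across app versions.

```python
def keyword_relevance(prompt: str, response: str) -> float:
    """Toy feedback function: the fraction of prompt keywords echoed in the response.

    Real feedback functions (such as those in TruLens) typically call an LLM
    or a classifier to judge relevance, groundedness, or toxicity; this
    simple lexical overlap merely stands in for that idea.
    """
    keywords = {w.strip("?.,!").lower() for w in prompt.split() if len(w.strip("?.,!")) > 3}
    if not keywords:
        return 1.0  # nothing to check against
    hits = sum(1 for w in keywords if w in response.lower())
    return hits / len(keywords)

# Score each (prompt, response) pair produced by one app version, so the
# scores can be tracked and compared when the app's prompts or chain change.
records = [
    ("What is vector search?", "Vector search finds similar embeddings."),
    ("Explain retrieval augmentation", "It retrieves context before generation."),
]
scores = [keyword_relevance(p, r) for p, r in records]
```

Because every feedback function returns a number on a common scale, many of them (relevance, groundedness, safety, and so on) can be run side by side over the same records, which is what makes version-over-version tracking practical.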
To view now, simply fill out the form and click Submit.
Meet the Speakers
President, Chief Scientist, and Co-founder, TruEra
Co-founder and CTO, TruEra
TruEra provides AI Quality solutions that analyze machine learning, drive model quality improvements, and build trust. Powered by enterprise-class Artificial Intelligence (AI) Explainability technology based on six years of research at Carnegie Mellon University, TruEra’s suite of solutions provides much-needed model transparency and analytics that drive high model quality and overall acceptance, address unfair bias, and ensure governance and compliance.