Decoding LLM Uncertainties for Better Predictability
S1 E14 · Grounded Truth

Welcome to another riveting episode of "Grounded Truth"! In this episode, your host John Singleton, co-founder and Head of Success at Watchful, is joined by Shayan Mohanty, CEO of Watchful. Together, they embark on a deep dive into the intricacies of Large Language Models (LLMs).

In Watchful's exploration of language models, we've uncovered insights into putting the "engineering" back into prompt engineering. Our latest research introduces meaningful observability metrics to deepen our understanding of language model behavior. If you'd like to explore on your own, try the demo here: https://uncertainty.demos.watchful.io/ The repo can be found here: https://github.com/Watchfulio/uncertainty-demo

💡 What to expect in this episode:  

- A recap of our last exploration, in which we unveiled the role of perceived ambiguity in LLM prompts and how it aligns with the "ground truth."

- An introduction to two critical measures: Structural Uncertainty (computed via normalized entropy) and Conceptual Uncertainty (revealing internal cohesion through cosine distances); see the code sketch after this list.

- Why these measures matter: Assessing predictability in prompts, guiding decisions on fine-tuning versus prompt engineering, and setting the stage for objective model comparisons.  
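
For the curious, here is a minimal sketch of how these two measures might be computed. This is an illustration, not the implementation from the repo linked above: it assumes you can obtain per-token probability distributions from the model and embedding vectors for several sampled completions, and all function names are hypothetical.

```python
import numpy as np

def normalized_entropy(probs):
    """Shannon entropy of one next-token distribution, scaled to [0, 1]
    by dividing by log(vocab size): 0 = fully certain, 1 = uniform."""
    p = np.asarray(probs, dtype=np.float64)
    p = p / p.sum()  # guard against minor normalization drift
    return float(-np.sum(p * np.log(p + 1e-12)) / np.log(len(p)))

def structural_uncertainty(token_dists):
    """Mean normalized entropy across every generated token's distribution."""
    return float(np.mean([normalized_entropy(d) for d in token_dists]))

def conceptual_uncertainty(embeddings):
    """Mean pairwise cosine distance among embeddings of sampled
    completions; larger values mean less internal cohesion."""
    E = np.asarray(embeddings, dtype=np.float64)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    sims = E @ E.T                                    # pairwise cosine sims
    i, j = np.triu_indices(len(E), k=1)               # unique pairs only
    return float(np.mean(1.0 - sims[i, j]))
```

Under these assumptions, a prompt whose completions show low structural uncertainty (confident token choices) and low conceptual uncertainty (semantically similar samples) should behave more predictably.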

🚀 Join John and Shayan on this quest to make language model interactions more transparent and predictable. The episode aims to unravel complexities, provide actionable insights, and pave the way for a clearer understanding of LLM uncertainties.  
