Trust Issues Newsletter

The latest in trustworthy AI research and practice

Do you have Trust Issues?

Welcome to the Trust Issues newsletter! Every month, we’ll tell you about the latest research in the trustworthy and explainable AI space. We hope this newsletter can be the start of a conversation about how to build trust in machine learning systems.

Stand out from the data scientists who all too often take a "set it and forget it" approach to machine learning. With this newsletter, you'll stay up to date on:

  • How interpretability can enhance model performance
  • Different aspects and methods of explanation
  • Where different AI systems fall on the sliding scale of societal impact
  • Other ways to advance trust, such as how humans can work effectively with AI systems

Who are we?

We’re the research team at TruEra, where we help data scientists and AI practitioners build, debug, and monitor trustworthy ML models. In short, we think about trustworthy AI every day.

If you're interested in joining the conversation, please subscribe to the Trust Issues email newsletter.

Subscribe Here



About TruEra

TruEra provides AI Quality solutions that analyze machine learning models, drive model quality improvements, and build trust. Powered by enterprise-class AI explainability technology based on six years of research at Carnegie Mellon University, TruEra's suite of solutions delivers much-needed model transparency and analytics that drive high model quality and broad acceptance, address unfair bias, and support governance and compliance.