The Latest in Trustworthy ML Research and Practice
Trust Issues
November 10, 2022 • Epoch 2
Welcome to Epoch 2. If you’re new, we’re the research team at TruEra and we use this newsletter to share the latest research in the trustworthy and explainable ML space. If you stumbled upon this newsletter, make sure to subscribe to keep receiving it. We’re hoping that the recommender system that led you here hasn’t led you astray 🤔…
Recently we’ve been thinking a lot about what makes an explainability method good. We read Teach Me to Explain, which proposed a lexicon of different explanation methods for NLP, from highlights to free-text explanations.
Dave's take: Comprehensiveness isn’t necessary for a valid highlight. It is a means to quantify faithfulness.
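For readers new to the metric: comprehensiveness measures how much a model's confidence drops when the highlighted tokens are removed from the input, so a large drop suggests the highlight really drove the prediction. Here is a minimal sketch of that idea; the toy sentiment scorer below is our own stand-in for a real classifier, not anything from the paper.

```python
def toy_model(tokens):
    """Toy stand-in for a classifier: returns a confidence score
    that rises with the number of positive words present.
    (An assumption for illustration, not a real model.)"""
    positive = {"great", "good", "excellent"}
    hits = sum(1 for t in tokens if t in positive)
    return min(1.0, 0.5 + 0.25 * hits)

def comprehensiveness(tokens, highlight_indices, model):
    """Comprehensiveness = model(x) - model(x without the highlight).
    Larger values mean the highlighted tokens mattered more to the
    prediction, i.e. the highlight is more faithful."""
    full_score = model(tokens)
    reduced = [t for i, t in enumerate(tokens) if i not in highlight_indices]
    return full_score - model(reduced)

tokens = ["the", "movie", "was", "great"]
# Highlighting "great" (index 3): removing it drops the toy score
# from 0.75 to 0.5, giving a comprehensiveness of 0.25.
drop = comprehensiveness(tokens, {3}, toy_model)
```

Note how the metric quantifies faithfulness without requiring the highlight to cover every influential token, which is Dave's point above.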
Interested in participating in our public reading group? Don’t have enough Trustworthy ML hot takes in your life 🔥? Join here.
Stack Overflow is going offline, in a good way: its content is being made available for offline use, reaching a huge population of coders without internet access 🌎, from students in Cameroon to remote researchers in Antarctica to students behind bars. This is a big step for tech accessibility that has the potential to lift all boats.
The generative AI train continues to pick up steam, with recent funding rounds in both text and image generation. Hype abounds about everything this new technology can do in this new wild west. Can we translate some of the ways we think about trustworthy machine learning to generative AI 🧠? And where does trustworthy AI in the generative space diverge?
Thanks for reading Trust Issues. Keep the conversation going in our community, the AI Quality Forum on Slack :)