"Pixel Art of a Rabbit on the Moon," generated by DALLE-2
Interested in participating in our public reading group? Don't have enough Trustworthy ML hot takes in your life 🔥? Join the reading group here.
In the news 📰
OpenAI blew everyone away with ChatGPT. Here are some hot takes (that are prompted to get successively hotter) on trustworthy ML and generative AI, from ChatGPT itself:
🌶️ Explainable AI will be a key factor in building trust in machine learning systems. As machine learning systems become more complex and are used in high-stakes decision making, it will be essential for them to provide clear and understandable explanations for their decisions.
🌶️🌶️ The reliance on "black box" machine learning models will eventually erode public trust in technology. As more and more people become aware of the potential flaws and biases in these models, there will be a growing skepticism of their use in society, leading to a loss of confidence in technology as a whole.
🌶️🌶️🌶️ The lack of accountability in many machine learning models will eventually lead to a revolt against their use in society. As people become increasingly aware of the potential consequences of using opaque algorithms to make important decisions, there will be growing calls for greater transparency and accountability in the development and use of these models.
That got dark quickly. If you didn't have Trust Issues with Generative AI before, you might now… at least it's self-aware? Many people have noted how confidently these responses can assert factually incorrect statements. Misinformation erodes trust in humans too – but for an AI system, it could lead to a loss of confidence in the technology as a whole (as ChatGPT itself suggested).