An overview of the "unintended consequences" of AI, covering why we need rules for a technology that moves faster than our laws.
If you feed a machine data from an unfair world, it becomes an unfair machine: an exploration of how AI inherits human prejudices in hiring, lending, and law.
A look at the root cause of bias: how incomplete or historically skewed datasets (like 50 years of male-dominated resumes) "teach" the AI to be discriminatory.
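The mechanism described above can be sketched in a few lines of Python. All the numbers and the "model" here are invented for illustration: a toy historical dataset in which qualified women were hired far less often than equally qualified men, and a naive model that simply reproduces the hire rates in its training labels.

```python
import random

random.seed(0)

# Hypothetical historical hiring data: decades of decisions in which
# qualified women were hired much less often than qualified men.
# (The 30% figure is made up for this sketch.)
def past_decision(gender, qualified):
    if not qualified:
        return 0
    if gender == "M":
        return 1
    return 1 if random.random() < 0.3 else 0

data = [(g, q, past_decision(g, q))
        for g in ("M", "F") for q in (0, 1) for _ in range(1000)]

# A naive "model" that memorises historical hire rates per gender
# inherits the discrimination baked into its training labels.
def hire_rate(gender):
    hired = [h for g, q, h in data if g == gender and q == 1]
    return sum(hired) / len(hired)

print(f"qualified men hired:   {hire_rate('M'):.0%}")
print(f"qualified women hired: {hire_rate('F'):.0%}")
```

Nothing in the code says "discriminate"; the skew lives entirely in the labels, which is exactly why cleaning or rebalancing training data matters.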
A deeper dive into "proxies": how an AI can discriminate against a group even when race or gender is hidden from it (e.g., using a zip code as a secret stand-in for race).
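A minimal sketch of the proxy effect, with invented numbers: in a hypothetical segregated city, each zip code is dominated by one demographic group, so a lending rule that only ever sees the zip code still produces sharply different approval rates by group.

```python
import random

random.seed(1)

# Hypothetical segregated city: zip 90001 is ~90% group A,
# zip 90002 is ~90% group B (percentages are illustrative).
def sample_person():
    zip_code = random.choice(["90001", "90002"])
    if zip_code == "90001":
        group = "A" if random.random() < 0.9 else "B"
    else:
        group = "B" if random.random() < 0.9 else "A"
    return zip_code, group

people = [sample_person() for _ in range(10_000)]

# This rule never sees "group" at all, only the zip code...
def approve(zip_code):
    return zip_code == "90001"

# ...yet approval rates split almost perfectly along group lines,
# because the zip code acts as a stand-in for group membership.
def approval_rate(group):
    approved = [approve(z) for z, g in people if g == group]
    return sum(approved) / len(approved)

print(f"group A approval rate: {approval_rate('A'):.0%}")
print(f"group B approval rate: {approval_rate('B'):.0%}")
```

This is why simply deleting the protected-attribute column is not enough: any feature correlated with it can smuggle the bias back in.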
The most granular level: the technical process of "red teaming" and mathematical fairness checks used to catch these hidden biases before the AI is ever deployed.
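One common example of such a mathematical fairness check is a disparate-impact ratio in the style of the "four-fifths rule": compare selection rates across groups and flag the model if the lowest rate falls below 80% of the highest. The decisions below are made up for illustration; a real audit would run this over the model's actual outputs per group.

```python
def disparate_impact(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    outcomes maps group name -> list of 0/1 decisions.
    A ratio below 0.8 is a common red flag for hidden bias
    (the "four-fifths rule" heuristic).
    """
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Audit a hypothetical model's decisions before deployment.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 40% selected
}

ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
print("flag for review" if ratio < 0.8 else "passes the 0.8 threshold")
```

Checks like this are cheap to run on every candidate model, which is why they pair naturally with red teaming: the math catches aggregate disparities, while red teamers probe for failure modes the metrics miss.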