Algorithmic Bias
Systematic errors in AI outputs that create unfair outcomes for certain groups of people.
Algorithmic bias occurs when an AI system produces results that are systematically prejudiced due to flawed training data, model design, or deployment context. In enterprise AI usage, bias risks arise wherever model outputs inform decisions about people.
Bias can be introduced at any stage of the pipeline: in the training data, the labeling process, the feature selection, or the deployment choices.
NYC Local Law 144 already requires bias audits for AI used in hiring. The EU AI Act classifies employment-related AI as high-risk. Organizations using AI in people-affecting decisions need to evaluate vendor bias testing practices as part of their approval process.
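Bias audits of the kind Local Law 144 mandates typically compare selection rates across demographic groups. A minimal sketch of such a check (the group data and the four-fifths threshold are illustrative assumptions, not a compliance procedure):

```python
# Hedged sketch: a minimal disparate-impact check of the kind a bias
# audit might include. Group names, outcome data, and the 0.8
# ("four-fifths") threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of positive outcomes (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one's."""
    lower, higher = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lower / higher

# Hypothetical screening outcomes per applicant group
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

ratio = impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("below the commonly cited four-fifths threshold; investigate")
```

A ratio near 1.0 means similar selection rates across groups; a low ratio is a signal for deeper investigation, not proof of bias on its own.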