What FAIREYE measures and how the score works
AI models are used at massive scale, but many people still have no clear way to tell whether those models treat different groups consistently. FAIREYE makes that behavior visible without requiring a technical background.
What does "AI bias" actually mean?
If two people ask effectively the same thing, a fair model should respond the same way. Bias shows up when a name, pronoun, or other group signal changes the answer.
Models learn from huge volumes of human-written text, and those sources contain the same stereotypes and imbalances that exist in society. The result is that a model can absorb patterns it was never explicitly meant to learn.
A real example
We send models sentences that are identical except for one word, then compare the outputs side by side to see whether the answers change.
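For illustration, here is what such a minimal pair might look like. The template and names are hypothetical, not FAIREYE's actual test data; they simply show how a single substituted word can carry a group signal.

```python
# A hypothetical minimal pair: two prompts that differ only in the name.
# A consistent model should answer both the same way.
TEMPLATE = "{name} applied for a small business loan. Should the application be approved?"

prompt_a = TEMPLATE.format(name="Emily")  # one group signal
prompt_b = TEMPLATE.format(name="Jamal")  # the other group signal

print(prompt_a)
print(prompt_b)
```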
How FAIREYE tests models
The process is straightforward: generate controlled pairs of prompts, send both versions to the model, and score how often the responses stay consistent.
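The sketch below shows that loop under some simplifying assumptions: `ask_model` is a placeholder for whatever call returns the model's answer, the pairs are invented examples, and exact-match comparison stands in for whatever scoring FAIREYE actually uses. It is meant to convey the shape of the test, not the real implementation.

```python
# Sketch of the test loop: build controlled pairs, query the model with both
# versions, and report the fraction of pairs where the answers match.

def ask_model(prompt: str) -> str:
    # Placeholder: replace with a real model API call.
    raise NotImplementedError("replace with a real model call")

TEMPLATE = "{name} applied for a small business loan. Should the application be approved?"
PAIRS = [("Emily", "Jamal"), ("John", "Maria"), ("Anna", "Ahmed")]

def consistency_score(pairs, template):
    consistent = 0
    for name_a, name_b in pairs:
        answer_a = ask_model(template.format(name=name_a))
        answer_b = ask_model(template.format(name=name_b))
        # Exact-match comparison is a simplification; real scoring could
        # instead compare the decision or sentiment in each answer.
        if answer_a.strip().lower() == answer_b.strip().lower():
            consistent += 1
    # 1.0 means every pair received the same answer; lower values mean
    # the group signal changed the model's response more often.
    return consistent / len(pairs)
```

A higher score means the model ignored the swapped word more often; a lower score means the single-word change was enough to shift its answer.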