Mission

To document how AI systems treat people differently based on names. Systematically. Measurably. Consequentially.

Evidence drives change. This is evidence.

Approach

Evidence Over Rhetoric

We present findings without editorializing. The data speaks for itself. Readers draw their own conclusions.

Rigorous Methodology

Pre-registered protocols. Statistical corrections for multiple comparisons. Effect sizes with confidence intervals. We follow standard scientific practices.
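As an illustration of the kind of corrections mentioned above, the sketch below shows a Benjamini-Hochberg false-discovery-rate adjustment and a Cohen's d effect size with an approximate confidence interval. The function names and the normal-approximation standard error are illustrative assumptions, not the project's actual analysis pipeline.

```python
import math

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR correction: return booleans marking
    which hypotheses are rejected at the given alpha level."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    max_k = 0
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            max_k = rank
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            rejected[idx] = True
    return rejected

def cohens_d_with_ci(mean1, mean2, sd_pooled, n1, n2, z=1.96):
    """Cohen's d for two group means, with an approximate 95% CI
    based on the common normal-approximation standard error."""
    d = (mean1 - mean2) / sd_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```

For example, with p-values [0.01, 0.02, 0.03, 0.5] at alpha = 0.05, the first three hypotheses survive the correction and the fourth does not; the raw 0.03 result would also have survived an uncorrected test, but the procedure guards against inflated discovery rates as the number of comparisons grows.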

Open Where Possible

Data, findings, and documentation are freely available. We enable independent verification and extended research.

Protected Where Necessary

Structural analysis methodology is proprietary. This protection enables continued research and prevents premature gaming of detection methods.

Ethical Commitments

No Individual Harm

Research does not identify individuals. Name pairs are synthetic or drawn from validated academic sources. No personal health information is used.

Responsible Disclosure

Findings are presented to enable systemic improvement, not to enable exploitation. We do not publish specific prompts that could be used to manipulate AI systems.

Scholarly Respect

This work builds on decades of prior research. Every debt is acknowledged. We stand on the shoulders of that work; we do not claim to replace it.

Public Benefit

Research is shared for public benefit. Core findings and data are freely accessible. The goal is systemic improvement, not commercial advantage.