How we calculate scores

Our Methodology

Transparency is at the core of what we do. This page explains exactly how we collect data, calculate scores, and ensure accuracy across the demographics.fyi platform.

Data Sources

Our scores are derived from two complementary data sources, which we cross-reference to maximize reliability.

Anonymous Crowdsourced Submissions

Verified tech employees submit anonymous data about their workplace demographics, PIP experiences, departure circumstances, and cultural observations. Submissions are tied to verified work email domains but never linked to individual identities. This gives us granular, real-time insight into day-to-day workplace practices that public filings cannot capture.

Federal EEO-1 Filings

Public companies with 100 or more employees are required by federal law to file annual EEO-1 reports with the U.S. Equal Employment Opportunity Commission. These filings break down workforce composition by race, ethnicity, gender, and job category. We incorporate this data as a baseline for validating crowdsourced submissions.

Cross-referencing

Where both sources are available for a company, we compare crowdsourced demographic breakdowns against official EEO-1 data. Significant discrepancies trigger a review. If crowdsourced data aligns within a reasonable margin of the federal filing, it increases our confidence score. This dual-source approach helps us catch both inaccurate submissions and outdated public filings.
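As a concrete illustration, the cross-check might look like the following sketch. The 10-percentage-point margin, the group names, and the `crosscheck` helper are assumptions for illustration, not the platform's actual thresholds:

```python
# Hypothetical sketch of the dual-source check: compare crowdsourced
# demographic percentages against EEO-1 figures and flag large gaps.

def crosscheck(crowd_pct: dict, eeo1_pct: dict, margin: float = 10.0):
    """Return (confidence_boost, flagged_groups).

    Groups whose crowdsourced share deviates from the federal filing
    by more than `margin` percentage points are flagged for review;
    if nothing is flagged, the comparison boosts confidence.
    """
    flagged = [
        group for group in crowd_pct
        if group in eeo1_pct and abs(crowd_pct[group] - eeo1_pct[group]) > margin
    ]
    return (len(flagged) == 0, flagged)

crowd = {"women": 38.0, "underrepresented minorities": 22.0}
eeo1 = {"women": 35.5, "underrepresented minorities": 34.0}
boost, review = crosscheck(crowd, eeo1)
# "underrepresented minorities" deviates by 12 points, so it is flagged
```

In practice the review step is manual (see Verification & Moderation below); the function only decides what gets routed there.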

Scoring System

Each company receives three independent scores on a 0-100 scale. Scores are calculated using weighted composites of underlying metrics, with each component contributing a defined share.

Diversity Score

0 - 100

Measures how well a company's workforce reflects broad demographic representation. It evaluates racial and ethnic diversity, gender balance across all roles, equitable distribution across seniority levels, and how the company compares to industry-wide benchmarks.

Racial / Ethnic Diversity: 35%
Gender Balance: 25%
Level Equity: 25%
Department Spread: 15%
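The weighted-composite calculation described above can be sketched as follows. Only the weights are taken from this page; the component values and function names are hypothetical:

```python
# Sketch of a weighted composite on a 0-100 scale, using the
# Diversity Score weights listed on this page. Component values
# below are invented for illustration.

DIVERSITY_WEIGHTS = {
    "racial_ethnic": 0.35,
    "gender_balance": 0.25,
    "level_equity": 0.25,
    "department_spread": 0.15,
}

def composite(components: dict, weights: dict) -> float:
    """Weighted average of 0-100 component metrics; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return round(sum(components[k] * w for k, w in weights.items()), 1)

score = composite(
    {"racial_ethnic": 72, "gender_balance": 65,
     "level_equity": 58, "department_spread": 80},
    DIVERSITY_WEIGHTS,
)
# 72*0.35 + 65*0.25 + 58*0.25 + 80*0.15 = 67.95, so roughly 68
```

The Retention and Culture composites follow the same pattern with their own weight tables.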

Retention Score

0 - 100

Captures how effectively a company retains its employees and whether departure patterns suggest systemic issues. It factors in the ratio of voluntary to involuntary departures, how frequently PIPs are used and their outcomes, average employee tenure, and whether employees are disproportionately terminated before their equity vests (fire-before-vest patterns).

Departure Rates (Voluntary vs. Involuntary): 30%
PIP Practices: 25%
Tenure Data: 25%
Vest Patterns: 20%

Culture Score

0 - 100

Reflects the subjective and structural elements of workplace culture as reported by employees. It combines employee satisfaction ratings, inclusion ratings, diversity within management and leadership, and hire-to-fire timeline patterns -- whether new hires are terminated unusually quickly, which can indicate poor hiring practices or hostile onboarding environments.

Employee Satisfaction: 30%
Inclusion Ratings: 30%
Management Diversity: 20%
Hire-to-Fire Timeline Patterns: 20%

Score Color Ranges

80 - 100: High
60 - 79: Medium
0 - 59: Low
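A minimal sketch of the band mapping, assuming integer scores and the inclusive ranges above (the function name is illustrative):

```python
# Map a 0-100 score to its display band, per the ranges on this page.

def score_band(score: int) -> str:
    if score >= 80:
        return "High"
    if score >= 60:
        return "Medium"
    return "Low"
```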

Data Quality & Confidence

Not all company profiles have the same amount of data behind them. To make this clear, we assign a confidence level to each company based on the number of verified submissions we have received.

Low: fewer than 10 submissions

Medium: 10 - 49 submissions

High: 50 - 199 submissions

Very High: 200+ submissions

Companies with fewer than 10 submissions are marked with a low-confidence indicator, and their scores should be interpreted with caution. We require a minimum of 5 submissions before publishing any score at all. Statistical significance thresholds are applied when comparing subgroup breakdowns -- demographic splits with insufficient sample sizes are not displayed, to prevent misleading conclusions.
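Putting the publishing floor and the confidence tiers together, the logic can be sketched as follows. The 5-submission floor and tier boundaries follow this page; the function name is illustrative:

```python
# Sketch of the confidence tiers and publishing floor described above.

MIN_PUBLISH = 5  # no score is published below this submission count

def confidence_level(n_submissions: int):
    """Return the confidence tier, or None if the score is withheld."""
    if n_submissions < MIN_PUBLISH:
        return None
    if n_submissions < 10:
        return "Low"
    if n_submissions < 50:
        return "Medium"
    if n_submissions < 200:
        return "High"
    return "Very High"
```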

Verification & Moderation

Every submission goes through a multi-layer review process before it influences company scores.

Automated Anomaly Detection

Incoming submissions are checked against statistical models of expected data ranges. Outliers -- such as an impossibly high PIP rate or demographically implausible distributions -- are flagged for manual review rather than being immediately incorporated.
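One simple form of such a check is a z-score test against historical submissions. The 3-sigma cutoff, the PIP-rate field, and the sample values below are illustrative assumptions, not the production model:

```python
# Illustrative anomaly check: flag a submitted value if it lies more
# than 3 standard deviations from the historical mean for that metric.

from statistics import mean, stdev

def is_outlier(value: float, history: list, z_cutoff: float = 3.0) -> bool:
    """Flag `value` for manual review if its z-score exceeds the cutoff."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_cutoff

# Historical PIP rates hovering around 5% (invented sample data)
pip_rates = [0.04, 0.05, 0.06, 0.05, 0.04, 0.06, 0.05]
is_outlier(0.60, pip_rates)   # implausibly high rate: flagged for review
is_outlier(0.05, pip_rates)   # within expected range: incorporated directly
```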

Manual Review by Moderators

Flagged submissions are reviewed by trained moderators who evaluate context, consistency, and plausibility. Moderators can approve, reject, or request additional information.

Cross-referencing with Public Data

When available, crowdsourced data is compared against EEO-1 filings, published diversity reports, and other public disclosures to validate directional accuracy.

Rate Limiting and Spam Prevention

Submissions are rate-limited per verified email domain to prevent coordinated campaigns from skewing data. Duplicate detection and behavioral analysis further reduce the risk of manipulation.
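A sliding-window counter is one common way to implement per-domain rate limiting. The 24-hour window and the cap of 20 submissions per domain are invented example values, not the platform's real quota:

```python
# Sketch of per-domain rate limiting with a sliding 24-hour window.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 3600
MAX_PER_WINDOW = 20  # hypothetical cap per verified email domain

_log = defaultdict(deque)  # domain -> timestamps of accepted submissions

def allow_submission(domain: str, now=None) -> bool:
    """Admit a submission unless the domain exceeded its window quota."""
    now = time.time() if now is None else now
    window = _log[domain]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop timestamps that fell out of the window
    if len(window) >= MAX_PER_WINDOW:
        return False
    window.append(now)
    return True
```

Duplicate detection and behavioral analysis would layer on top of this; the limiter alone only bounds volume per domain.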

Limitations & Transparency

No dataset is perfect. We believe in being upfront about the limitations of our approach so that users can interpret scores with appropriate context.

Self-Selection Bias

Employees who choose to submit data may not represent the full workforce. People with strongly positive or negative experiences are often more motivated to respond than those with neutral ones. We account for this by weighting confidence levels and cross-referencing with public data where possible, but the bias cannot be fully eliminated.

Sample Size Limitations

Smaller companies or those newer to the platform may have very few submissions, making their scores less statistically reliable. Confidence levels are displayed alongside every score to signal this. We encourage users to pay close attention to the submission count before drawing conclusions.

Inability to Verify Individual Claims

While we verify that submitters have active work email addresses at the companies they report on, we cannot independently verify every claim made within a submission. Our moderation pipeline catches clear outliers, but some inaccuracies may persist in the data.

How We Minimize These Issues

We continuously invest in improving data quality through better anomaly detection models, expanding our EEO-1 coverage for cross-referencing, increasing submission volumes through outreach, and publishing confidence metrics alongside every score. Our goal is to make the limitations visible, not to hide them.

Methodology Updates

Our methodology is not static. We periodically review and update our scoring models, data validation processes, and confidence thresholds to reflect new research, user feedback, and changes in available data sources. When material changes are made, we document them here and recalculate historical scores for consistency.

Last updated: February 2026. Adjusted PIP practice weighting within Retention Score and expanded EEO-1 cross-referencing coverage.