Overall vs Current Era
What this shows: How the picture changes when you strip away historical backfiles and judge publishers only on content from the last two years.
The overall leaderboard averages current and backfile metadata. Publishers with large historical catalogs get dragged down by old content they can't retroactively fix. The current-era leaderboard ranks purely on recent content, showing who's doing the best work right now.
| Metric | Overall | Current Era | Change |
|---|---|---|---|
| Average score | 19 | 23 | +4 |
| Grade A publishers | 2 | 11 | +9 |
| Grade B publishers | 41 | 251 | +210 |
| Grade F publishers | 19,547 | 17,665 | -1,882 |
2,844 publishers (12.4%) earn a higher grade on current content than overall. The industry is improving — it's just buried under decades of legacy metadata.
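The grade bands behind these counts aren't stated explicitly. A minimal sketch, assuming thresholds inferred from the scores quoted throughout this piece (A at 80+, B at 65+, C at 50+, D at 35+, F below that; the real Nexus Score cutoffs may differ), shows how the overall-vs-current comparison classifies a publisher:

```python
# Hypothetical grade bands inferred from the scores quoted in this
# article (e.g. 81 -> A, 68 -> B, 61 -> C, 48 -> D, 34 -> F); the
# actual Nexus Score thresholds may differ.
BANDS = [(80, "A"), (65, "B"), (50, "C"), (35, "D"), (0, "F")]

def grade(score: int) -> str:
    """Map a 0-100 score onto a letter grade."""
    for cutoff, letter in BANDS:
        if score >= cutoff:
            return letter
    return "F"

def era_change(overall: int, current: int) -> str:
    """Describe how a publisher's grade shifts between eras."""
    return f"{grade(overall)} -> {grade(current)}"

# American Physical Society: 58 overall, 81 on current-era content.
print(era_change(58, 81))  # C -> A
```

A publisher "earns a higher grade on current content" whenever the second letter outranks the first, which is how the 2,844 upgrades above would be counted.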
The Biggest Transformations
These large publishers look completely different when judged on recent work:
| Publisher | Current Works | Overall | Current | Jump |
|---|---|---|---|---|
| American Physical Society | 55K | 58 (C) | 81 (A) | C→A |
| American Society for Microbiology | 15K | 67 (B) | 86 (A) | B→A |
| American Chemical Society | 210K | 48 (D) | 70 (B) | D→B |
| American Meteorological Society | 4K | 41 (D) | 66 (B) | D→B |
| IEEE | 883K | 34 (F) | 41 (D) | F→D |
| SAGE Publications | 234K | 48 (D) | 61 (C) | D→C |
| BMJ | 64K | 33 (F) | 47 (D) | F→D |
| Wolters Kluwer | 237K | 26 (F) | 35 (D) | F→D |
APS is the standout — a C-overall publisher producing A-grade metadata right now (score 81, #6 among all active publishers). ASM jumps from B to A (#3 in current era, score 86). Only 135 publishers (0.6%) actually score lower on current content than overall.
By Content Type
What this shows: Aggregate scores mix content types with fundamentally different metadata expectations. The content-type filter on both leaderboards lets you rank publishers by specific types — and the rankings shift dramatically.
The Aggregate Lies
A publisher registering journal articles, peer reviews, components, and corrections gets one blended score — but peer reviews don't have abstracts by design, and components rarely carry funding metadata. The aggregate punishes publishers for depositing more content types.
| Publisher | Aggregate | Journal Articles | The Diluter |
|---|---|---|---|
| eLife | 31 (F) | 97 (A) | Peer Reviews: 13 (F) |
| APS | 81 (A) | 81 (A) | Proceedings: 7 (F) |
| MDPI | 68 (B) | 71 (B) | Consistent across types |
eLife is the most dramatic example. Their aggregate current score is 31/F — but filter to journal articles and they're 97/A, jumping from #2,581 to #2 in the entire leaderboard. The aggregate was being destroyed by peer reviews (13/F) — content that by design doesn't carry abstracts or funding metadata.
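One way to see the dilution: if the aggregate is a works-weighted average of per-type scores (an assumption; the actual blending formula isn't given here), a publisher whose peer-review DOIs outnumber its article DOIs gets pulled far below its article score. The work counts below are hypothetical, chosen only so the blend reproduces eLife's quoted 97/13/31 split:

```python
def weighted_aggregate(per_type: dict[str, tuple[int, int]]) -> float:
    """Works-weighted mean of per-content-type scores.

    per_type maps content type -> (score, number of works).
    """
    total_works = sum(n for _, n in per_type.values())
    return sum(score * n for score, n in per_type.values()) / total_works

# Hypothetical work counts (NOT eLife's real deposit volumes), picked
# only so the blend lands on the aggregate score quoted in the text.
elife = {
    "journal-article": (97, 3_000),
    "peer-review": (13, 11_000),
}
print(round(weighted_aggregate(elife)))  # 31
```

Under this assumption, depositing more low-scoring types mechanically lowers the aggregate even when article metadata is near perfect, which is exactly the distortion the content-type filter removes.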
Rankings Shift Dramatically
Six of the journal-article top 10 weren't even in the overall top 2,000. The content-type filter doesn't just adjust scores — it tells a completely different story about who's actually doing well.
Pipeline Per Type, Not Per Discipline
The finding is consistent across every publisher analyzed: metadata quality is driven by the deposit pipeline per content type, not by the discipline of the research. When APS invests in their journal article pipeline, it shows immediately at 81/A. Their proceedings pipeline, untouched, sits at 7/F. Same publisher, same era, two completely different investments.
By Dimension
What this shows: The five dimensions of the Nexus Score are not equally adopted. Some are nearly solved, others are essentially empty — even on current content. This is where the gaps are.
| Dimension | What It Measures | Avg (Current) | Status |
|---|---|---|---|
| Access | Licenses, full-text links, abstracts | 47/100 | Improving |
| People | ORCID IDs for authors | 28/100 | Uneven |
| Provenance | References, update policies, Similarity Check | 25/100 | Uneven |
| Organizations | Affiliations, ROR IDs | 7/100 | Near Empty |
| Funding | Funder registry IDs, award numbers | 2/100 | Near Empty |
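How these five averages relate to the headline number isn't spelled out, but under a simple equal-weighting assumption (hypothetical; the real Nexus Score may weight dimensions differently) their mean lands close to the 23-point current-era average quoted earlier:

```python
# Current-era dimension averages from the table above.
dimensions = {
    "Access": 47,
    "People": 28,
    "Provenance": 25,
    "Organizations": 7,
    "Funding": 2,
}

# Equal weighting is an assumption -- the actual Nexus Score formula
# may weight dimensions differently.
equal_weight_mean = sum(dimensions.values()) / len(dimensions)
print(equal_weight_mean)  # 21.8
```

The arithmetic also makes the leverage obvious: Organizations and Funding together contribute fewer than two points of the average, so they are where the most headroom sits.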
Access (47/100) — The Closest to Solved
Licenses are widely deposited. Full-text links are common. Abstracts are the weak spot — some publishers are actively restricting abstract access due to AI concerns, while others are expanding it. This dimension shows the most variation publisher to publisher.
People (28/100) — ORCID Adoption Is Uneven
Some publishers (APS, eLife, MDPI) have near-universal ORCID coverage on current content. Others haven't started. When a publisher turns on ORCID deposits, coverage jumps overnight — this is a pipeline switch, not a gradual adoption curve.
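A publisher's ORCID coverage on recent content can be estimated from Crossref's public REST API, which documents `has-orcid` and `from-pub-date` as works filters. A sketch, with a placeholder member ID and an assumed "current era" cutoff date:

```python
from urllib.parse import urlencode

CROSSREF = "https://api.crossref.org"

def works_count_url(member_id: int, extra_filters: str = "") -> str:
    """URL whose JSON response carries message['total-results'].

    Uses Crossref's public REST API; `has-orcid` and `from-pub-date`
    are documented works filters, and rows=0 returns only the count.
    The cutoff date is an assumption, not the leaderboard's definition.
    """
    filters = "from-pub-date:2024-01-01"
    if extra_filters:
        filters += "," + extra_filters
    query = urlencode({"filter": filters, "rows": 0})
    return f"{CROSSREF}/members/{member_id}/works?{query}"

def orcid_coverage(with_orcid: int, total: int) -> float:
    """Share of recent works with at least one ORCID-linked author."""
    return with_orcid / total if total else 0.0

# Fetch both counts for a publisher's Crossref member ID, then divide:
#   works_count_url(MEMBER_ID)                      -> total recent works
#   works_count_url(MEMBER_ID, "has-orcid:true")    -> works with ORCIDs
```

Because the switch is in the deposit pipeline, re-running this ratio month over month shows the step change, not a slope, when a publisher turns ORCID deposits on.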
Provenance (25/100) — References Are Strong, Policies Are Not
Reference deposits are high across the industry. But update policies (CrossMark) and similarity checking (Similarity Check / iThenticate) remain low. These are publisher service subscriptions, not metadata deposits — harder to change at scale.
Organizations (7/100) — ROR Is Still Early
Institutional identifiers are the newest metadata field. ROR adoption is growing but from a near-zero base. Affiliations as text strings are more common, but structured ROR IDs are what make the data machine-readable. OJS 3.5 now enables smaller publishers to deposit ROR — expect this to accelerate.
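The text-string vs. structured-ID distinction is visible in the record shape itself. The sketch below mirrors how affiliation identifiers appear in Crossref's REST output as I understand recent schema versions (illustrative; verify against a live record, and the ROR ID shown is hypothetical):

```python
def has_ror(author: dict) -> bool:
    """True if any affiliation on this author carries a structured ROR ID.

    The JSON shape mirrors Crossref REST API output for affiliation
    identifiers (illustrative; check a live record before relying on it).
    """
    for aff in author.get("affiliation", []):
        for ident in aff.get("id", []):
            if ident.get("id-type") == "ROR":
                return True
    return False

# Text-only affiliation: human-readable, not machine-resolvable.
text_only = {"affiliation": [{"name": "Example University"}]}

# ROR-tagged affiliation: resolvable to a registry entry.
ror_tagged = {"affiliation": [{
    "name": "Example University",
    "id": [{"id": "https://ror.org/00x0x0x00",  # hypothetical ROR ID
            "id-type": "ROR", "asserted-by": "publisher"}],
}]}

print(has_ror(text_only), has_ror(ror_tagged))  # False True
```

Only the second record can be aggregated by institution without fuzzy name matching, which is why the structured ID, not the string, is what moves this dimension.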
Funding (2/100) — The Biggest Gap
Funder registry IDs and award numbers are essentially absent across the industry. This is the single largest gap in scholarly metadata. Absent funding metadata does not mean absent funding — many funded papers simply lack the deposit. For humanities publishers where research is often unfunded, this dimension is structurally penalizing. Use the content-type filter to see funding coverage for journal articles specifically.
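What "funder registry IDs and award numbers" look like in practice: Crossref work records carry a `funder` field where registry-linked funders have a Funder Registry DOI (the 10.13039 prefix) alongside award numbers. A sketch classifying a record's funding metadata, with hypothetical example records:

```python
def funding_quality(work: dict) -> str:
    """Classify a Crossref work record's funding metadata.

    Field shape follows Crossref's REST API `funder` field
    (illustrative): registry-linked funders carry a 10.13039/... DOI,
    and award numbers sit in an `award` list.
    """
    funders = work.get("funder", [])
    if not funders:
        return "absent"
    if any("DOI" in f and f.get("award") for f in funders):
        return "registry ID + award number"
    return "partial"  # e.g. funder named as text only

# Hypothetical records: one with no deposit, one fully deposited.
bare = {}
full = {"funder": [{
    "name": "Example Science Foundation",  # hypothetical funder
    "DOI": "10.13039/501100099999",        # hypothetical registry DOI
    "award": ["ABC-1234"],                 # hypothetical award number
}]}
print(funding_quality(bare), funding_quality(full))
```

Note that "absent" here means the deposit is missing, which, as above, says nothing about whether the underlying research was funded.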
By Publisher Type
What this shows: Who is leading on metadata quality — scholarly societies, commercial publishers, or small independents? The answer depends on how you measure.
Scholarly Societies Are Quietly Leading
US-based scholarly societies dominate the current-era large publisher rankings. ASM, APS, AAS, PNAS, AGU, and ACS all score B or higher — while the commercial giants (Elsevier, Springer, Wiley) remain D's. These societies, not the publishing conglomerates, are setting the standard for metadata quality at scale.
Commercial Publishers: Improved, But Still D's
| Publisher | Current Works | Current Score | Grade |
|---|---|---|---|
| MDPI | 632K | 68 | B |
| SAGE | 234K | 61 | C |
| IOP Publishing | 117K | 55 | C |
| Wiley | 895K | 48 | D |
| Springer Nature | 2.0M | 47 | D |
| Elsevier | 3.0M | 42 | D |
| IEEE | 883K | 41 | D |
| OUP | 451K | 29 | F |
MDPI remains the only commercial-scale publisher to earn a B. OUP is the worst performer among major publishers even on current content — still an F at 29.
South Korea Still Dominates the Top 50
33 of the top 50 current-era publishers are South Korean — nearly identical to the overall leaderboard. The pattern holds across eras and content types.
Small Publishers: High Scores, Different Challenge
Many top-scoring publishers have fewer than 10,000 DOIs. Achieving 100% metadata coverage on 172 articles is a different challenge than on 24 million. Small publishers have a structural advantage in rankings — but the ones actively working to improve (like i-manager Publications at score 25) demonstrate that awareness of the gap is the first step.
The Bottom Line
Every lens reveals something different:
- Overall vs Current: The industry is getting better — 2,844 publishers earn a higher grade on recent content. The backfile drags the picture down.
- By Content Type: Aggregate scores mislead. eLife is an F in aggregate but an A on journal articles. The pipeline per content type, not the discipline, determines quality.
- By Dimension: Access is nearly solved. ORCIDs are unevenly adopted. Organizations and Funding are essentially empty across the board — the two biggest opportunities for the industry.
- By Publisher Type: Scholarly societies lead. Commercial giants are stuck in D territory. Small publishers score high but at low volume.
Use the content-type filter on the leaderboard to explore these patterns for any publisher.