
Leaderboard Insights

Four lenses on the same data, each telling a different story

1. Overall vs Current Era

What this shows: How the picture changes when you strip away historical backfiles and judge publishers only on content from the last 2 years.

The overall leaderboard averages current and backfile metadata. Publishers with large historical catalogs get dragged down by old content they can't retroactively fix. The current era leaderboard ranks purely on recent content, showing who's doing the best work right now.

Metric | Overall | Current Era | Change
Average score | 19 | 23 | +4
Grade A publishers | 2 | 11 | +9
Grade B publishers | 41 | 251 | +210
Grade F publishers | 19,547 | 17,665 | -1,882

2,844 publishers (12.4%) earn a higher grade on current content than overall. The industry is improving — it's just buried under decades of legacy metadata.
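If you want to reproduce this current-vs-backfile split for a specific publisher, the sketch below pulls coverage from the public Crossref members endpoint, which breaks coverage down by era and content type. The member ID, the chosen fields, and the journal-article focus are illustrative assumptions; this is not the leaderboard's actual scoring code.

```python
# Minimal sketch: compare a publisher's current vs backfile metadata coverage
# via the public Crossref members endpoint. Member ID and fields are
# illustrative; this is not the leaderboard's scoring code.
import requests

def coverage_by_era(member_id: int, content_type: str = "journal-article") -> dict:
    """Return {"current": {...}, "backfile": {...}} coverage fractions (0.0-1.0)."""
    url = f"https://api.crossref.org/members/{member_id}"
    msg = requests.get(url, timeout=30).json()["message"]
    by_type = msg.get("coverage-type", {})
    return {era: by_type.get(era, {}).get(content_type, {})
            for era in ("current", "backfile")}

if __name__ == "__main__":
    # Hypothetical member ID; look yours up via
    # https://api.crossref.org/members?query=<publisher name>
    for era, cov in coverage_by_era(1234).items():
        print(era, {k: cov.get(k, 0.0) for k in ("abstracts", "orcids", "references")})
```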

The Biggest Transformations

These large publishers look completely different when judged on recent work:

Publisher | Current Works | Overall | Current | Jump
American Physical Society | 55K | 58 (C) | 81 (A) | C→A
American Society for Microbiology | 15K | 67 (B) | 86 (A) | B→A
American Chemical Society | 210K | 48 (D) | 70 (B) | D→B
American Meteorological Society | 4K | 41 (D) | 66 (B) | D→B
IEEE | 883K | 34 (F) | 41 (D) | F→D
SAGE Publications | 234K | 48 (D) | 61 (C) | D→C
BMJ | 64K | 33 (F) | 47 (D) | F→D
Wolters Kluwer | 237K | 26 (F) | 35 (D) | F→D

APS is the standout — a C-overall publisher producing A-grade metadata right now (score 81, #6 among all active publishers). ASM jumps from B to A (#3 in current era, score 86). Only 135 publishers (0.6%) actually score lower on current content than overall.

2. By Content Type

What this shows: Aggregate scores mix content types with fundamentally different metadata expectations. The content-type filter on both leaderboards lets you rank publishers by specific types — and the rankings shift dramatically.

The Aggregate Lies

A publisher registering journal articles, peer reviews, components, and corrections gets one blended score — but peer reviews don't have abstracts by design, and components rarely carry funding metadata. The aggregate punishes publishers for depositing more content types.

Publisher | Aggregate | Journal Articles | The Diluter
eLife | 31 (F) | 97 (A) | Peer Reviews: 13 (F)
APS | 81 (A) | 81 (A) | Proceedings: 7 (F)
MDPI | 68 (B) | 71 (B) | Consistent across types

eLife is the most dramatic example. Their aggregate current score is 31/F — but filter to journal articles and they're 97/A, jumping from #2,581 to #2 in the entire leaderboard. The aggregate was being destroyed by peer reviews (13/F) — content that by design doesn't carry abstracts or funding metadata.
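To see the dilution mechanic concretely, here is a tiny works-weighted blend. The counts and per-type scores are made up to mirror the eLife pattern, not taken from their actual deposits, and the leaderboard's real aggregation formula may differ.

```python
# Minimal sketch of aggregate dilution: a works-weighted blend of per-type
# scores. Counts and scores below are illustrative, not any publisher's data.

def blended_score(per_type: dict[str, tuple[int, float]]) -> float:
    """per_type maps content type -> (number of works, score 0-100)."""
    total_works = sum(n for n, _ in per_type.values())
    return sum(n * score for n, score in per_type.values()) / total_works

example = {
    "journal-article": (3_000, 97.0),  # strong deposit pipeline
    "peer-review":     (9_000, 13.0),  # no abstracts or funding by design
}
print(f"aggregate: {blended_score(example):.0f}")  # dragged far below 97
```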

Rankings Shift Dramatically

Six of the journal-article top 10 weren't even in the overall top 2,000. The content-type filter doesn't just adjust scores — it tells a completely different story about who's actually doing well.

Pipeline Per Type, Not Per Discipline

The finding is consistent across every publisher analyzed: metadata quality is driven by the deposit pipeline per content type, not by the discipline of the research. When APS invests in their journal article pipeline, it shows immediately at 81/A. Their proceedings pipeline, untouched, sits at 7/F. Same publisher, same era, two completely different investments.

3. By Dimension

What this shows: The five dimensions of the Nexus Score are not equally adopted. Some are nearly solved, others are essentially empty — even on current content. This is where the gaps are.

Dimension | What It Measures | Avg (Current) | Status
Access | Licenses, full-text links, abstracts | 47/100 | Improving
People | ORCID IDs for authors | 28/100 | Uneven
Provenance | References, update policies, similarity check | 25/100 | Uneven
Organizations | Affiliations, ROR IDs | 7/100 | Near Empty
Funding | Funder registry IDs, award numbers | 2/100 | Near Empty

Access (47/100) — The Closest to Solved

Licenses are widely deposited. Full-text links are common. Abstracts are the weak spot — some publishers are actively restricting abstract access due to AI concerns, while others are expanding it. This dimension shows the most variation publisher to publisher.

People (28/100) — ORCID Adoption Is Uneven

Some publishers (APS, eLife, MDPI) have near-universal ORCID coverage on current content. Others haven't started. When a publisher turns on ORCID deposits, coverage jumps overnight — this is a pipeline switch, not a gradual adoption curve.

Provenance (25/100) — References Are Strong, Policies Are Not

Reference deposits are high across the industry. But update policies (CrossMark) and similarity checking (Similarity Check / iThenticate) remain low. These are publisher service subscriptions, not metadata deposits — harder to change at scale.

Organizations (7/100) — ROR Is Still Early

Institutional identifiers are the newest metadata field. ROR adoption is growing but from a near-zero base. Affiliations as text strings are more common, but structured ROR IDs are what make the data machine-readable. OJS 3.5 now lets smaller publishers deposit ROR IDs, so expect adoption to accelerate.

Funding (2/100) — The Biggest Gap

Funder registry IDs and award numbers are essentially absent across the industry. This is the single largest gap in scholarly metadata. Absent funding metadata does not mean absent funding — many funded papers simply lack the deposit. For humanities publishers where research is often unfunded, this dimension is structurally penalizing. Use the content-type filter to see funding coverage for journal articles specifically.
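As a rough illustration of how per-field coverage rolls up into these five dimensions, the sketch below averages Crossref-style coverage fields per dimension. The field-to-dimension mapping follows the "What It Measures" column above; the field names, the equal weighting, and the example fractions are assumptions, not the Nexus Score's actual formula.

```python
# Minimal sketch: roll per-field coverage fractions up into the five dimensions.
# Field names, equal weighting, and example values are assumptions.

DIMENSIONS = {
    "Access":        ["licenses", "resource-links", "abstracts"],
    "People":        ["orcids"],
    "Provenance":    ["references", "update-policies", "similarity-checking"],
    "Organizations": ["affiliations", "ror-ids"],
    "Funding":       ["funders", "award-numbers"],
}

def dimension_scores(coverage: dict[str, float]) -> dict[str, float]:
    """coverage maps a metadata field to a coverage fraction 0.0-1.0."""
    return {
        dim: 100 * sum(coverage.get(field, 0.0) for field in fields) / len(fields)
        for dim, fields in DIMENSIONS.items()
    }

# Illustrative coverage for one publisher's current journal articles
print(dimension_scores({"licenses": 0.9, "abstracts": 0.4, "orcids": 0.8,
                        "references": 0.95, "funders": 0.05}))
```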

4. By Publisher Type

What this shows: Who is leading on metadata quality — scholarly societies, commercial publishers, or small independents? The answer depends on how you measure.

Scholarly Societies Are Quietly Leading

US-based scholarly societies dominate the current-era large publisher rankings. ASM, APS, AAS, PNAS, AGU, and ACS all score B or higher — while the commercial giants (Elsevier, Springer, Wiley) remain D's. These societies, not the publishing conglomerates, are setting the standard for metadata quality at scale.

Commercial Publishers: Improved, But Still D's

Publisher | Current Works | Current Score | Grade
MDPI | 632K | 68 | B
SAGE | 234K | 61 | C
IOP Publishing | 117K | 55 | C
Wiley | 895K | 48 | D
Springer Nature | 2.0M | 47 | D
Elsevier | 3.0M | 42 | D
IEEE | 883K | 41 | D
OUP | 451K | 29 | F

MDPI remains the only commercial-scale publisher to earn a B. OUP is the worst performer among major publishers even on current content — still an F at 29.

South Korea Still Dominates the Top 50

33 of the top 50 current-era publishers are South Korean — nearly identical to the overall leaderboard. The pattern holds across eras and content types.

Small Publishers: High Scores, Different Challenge

Many top-scoring publishers have fewer than 10,000 DOIs. Achieving 100% metadata coverage on 172 articles is a different challenge than on 24 million. Small publishers have a structural advantage in rankings — but the ones actively working to improve (like i-manager Publications at score 25) demonstrate that awareness of the gap is the first step.

The Bottom Line

Every lens reveals something different:

  • Overall vs Current: The industry is getting better — 2,844 publishers earn a higher grade on recent content. The backfile drags the picture down.
  • By Content Type: Aggregate scores mislead. eLife is an F in aggregate but an A on journal articles. The pipeline per content type, not the discipline, determines quality.
  • By Dimension: Access is nearly solved. ORCIDs are unevenly adopted. Organizations and Funding are essentially empty across the board — the two biggest opportunities for the industry.
  • By Publisher Type: Scholarly societies lead. Commercial giants are stuck in D territory. Small publishers score high but at low volume.

Use the content-type filter on the leaderboard to explore these patterns for any publisher.


Data sourced from the Crossref API. "Current" = last 2 calendar years per Crossref's definition.
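For reference, one minimal way to count a publisher's recent works with that API is shown below. The member ID is hypothetical, and the cutoff year is whatever definition of "current" you choose to apply.

```python
# Minimal sketch: count a member's works published on/after a cutoff date,
# using the Crossref works endpoint. Member ID is hypothetical.
from datetime import date
import requests

def works_since(member_id: int, first_year: int) -> int:
    """Number of works the member published on/after Jan 1 of first_year."""
    resp = requests.get(
        f"https://api.crossref.org/members/{member_id}/works",
        params={"filter": f"from-pub-date:{first_year}-01-01", "rows": 0},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["total-results"]

print(works_since(1234, date.today().year - 2))  # adjust cutoff to your definition
```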