Scoring modes
How Publi-Score calculates a score depending on the available data, with empirical data from 18 articles.
Comparison table
| | ⚡ Quick | 🔬 Partial AI (abstract) | 📝 Full manual | 🤖 Full AI (PDF) |
|---|---|---|---|---|
| Input | PMID/DOI | PMID only | PMID + PDF | PMID + PDF |
| Criteria coverage | ~53/100 | ~85/100 | 100/100 | 100/100 |
| Integrity coverage | ~80% | ~80% | 100% | 100% |
| Duration | ~2–5 sec | ~30–60 sec | ~10–30 min | ~30–60 sec |
| Objectivity | ✅ Auto | ✅ LLM | ⚠️ Human bias | ✅ LLM |
| Published in catalogue | ✅ | ✅ | ✅ | ✅ |
| Account required | No | Yes (free) | No | Yes (free) |
| Quota | Unlimited | Shared with Full AI (same quota) | Unlimited | 5/month (free) |
What the quick mode covers (and doesn't cover)
✅ What it covers
- §2.3 Bibliometric impact – 100% (citations, h-index)
- §2.6 Freshness – ~69% (publication date)
- §2.1 Level of evidence – ~57% (study type, randomisation)
- Retractions – 100% (PubMed API + Retraction Watch)
- Alert signals – 100% (predatory journals, Expressions of Concern)
Total: ~53% of criteria · ~80% integrity
❌ What quick mode doesn't cover
- Real ITT (intention-to-treat) analysis
- Compliance with the pre-registered protocol
- Raw data sharing
- Clinical benefit/risk ratio
- §2.7 Reporting quality – 0% (requires PDF)
~47% of criteria not evaluable without PDF
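The ~53% total is a points-weighted sum of the per-section coverage fractions listed above. A minimal sketch of that arithmetic, assuming hypothetical criterion weights (the real Publi-Score grid is not reproduced here; only the coverage fractions come from this page):

```python
# Hypothetical weights (pts out of a 100-pt grid) paired with the
# quick-mode coverage fractions listed above. The weights are
# illustrative assumptions, not the real Publi-Score grid.
quick_mode = {
    "2.3 bibliometric impact": (14, 1.00),  # citations, h-index
    "2.6 freshness":           (12, 0.69),  # publication date
    "2.1 level of evidence":   (20, 0.57),  # study type, randomisation
    "retractions":             (10, 1.00),  # PubMed API + Retraction Watch
    "alert signals":           (9,  1.00),  # predatory journals, EoC
    "2.7 reporting quality":   (3,  0.00),  # requires PDF
    "PDF-only criteria":       (32, 0.00),  # ITT, protocol, raw data...
}

# Points observable from APIs alone, summed over all criteria.
covered_pts = sum(pts * frac for pts, frac in quick_mode.values())
print(round(covered_pts))  # ≈ 53 of 100 criterion points
```

With these invented weights the API-visible share comes out at ~53 pts, matching the coverage figure quoted above; the exact number depends on the real grid.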
The Partial AI mode without PDF
Partial AI (abstract) mode uses only the abstract and the article metadata, without the PDF. The AI evaluates every criterion accessible from these sources, covering ~85% of the total score.
Key figures from the calibration corpus:
- ~85% of criteria covered
- +3 pts: average gap vs. full mode with PDF (measured on 18 corpus articles)
- 1/12: only one tier change across 12 articles (TOGETHER: B→A)
- 0/3 pts: the single criterion never evaluable without the PDF (§2.7)
Why ~85% and not 100%?
- §2.7 Reporting quality (3 pts) – evaluates the clarity of results, tables and figures. Inaccessible without the full PDF: always 0 pt.
- §2.4 Reproducibility & transparency – some sub-criteria (code sharing, raw data) are partially inferable from the abstract, but without certainty. The AI scores them conservatively.
This mode produces a detailed score with per-criterion justifications: it is partial, not degraded. It remains significantly more reliable than quick mode (~53% coverage) and activates automatically when the PDF is not open access.
Activation: automatic fallback if no open-access PDF is available. Measured on 18 articles, COVID-19 and Vaccination clusters (Publi-Score calibration corpus).
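The automatic fallback can be sketched as a small dispatch function. This is a hedged reconstruction from the comparison table, not actual Publi-Score code; the function and mode names are invented:

```python
def choose_mode(pdf_open_access: bool, has_account: bool) -> str:
    """Pick an automatic scoring mode (illustrative reconstruction).

    Full manual mode is user-initiated and never auto-selected,
    so it does not appear here.
    """
    if not has_account:
        return "quick"        # API-only score, no account needed
    if pdf_open_access:
        return "full_ai"      # full AI analysis of the PDF
    return "partial_ai"       # automatic fallback: abstract + metadata

print(choose_mode(pdf_open_access=False, has_account=True))  # partial_ai
```

The key design point from the text is the last branch: with an account but no open-access PDF, the system degrades to abstract-based scoring rather than refusing to score.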
Empirical data
4 critical overestimation cases in quick mode
These 4 articles are among the most viewed in the corpus. Quick mode assigns them tier A or B, while Full AI mode reveals tier D.
| Article | ⚡ Quick | 🔬 Partial AI (abstract) | 🤖 Full AI (PDF) | Gap Q→F | Main reason |
|---|---|---|---|---|---|
| Polack/Pfizer – NEJM 2020 | A | E | D | −51 pts | Major industrial COI + short editorial delay not captured |
| Voysey/AZ – Lancet 2021 | B | D | D | −38 pts | AstraZeneca COI + adaptive design + data not shared |
| Hammond/Paxlovid – NEJM 2022 | B | E | D | −37 pts | Industry-only trial + raw data unavailable |
| Molnupiravir – NEJM 2022 | B | D | D | −33 pts | Merck/Ridgeback trial, non-public data |
Why quick mode overestimates: it normalises the score over only the criteria accessible via APIs. §2.3 (bibliometric impact) carries ~14 pts and is maxed out for NEJM/Lancet articles, which are often industrial trials (Pfizer, AZ, Merck) whose strong COI only becomes visible on in-depth analysis.
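The renormalisation effect can be made concrete with a toy example. All numbers below are invented to mirror the pattern in the table (strong on API-visible criteria, weak on PDF-only ones); they are not real Publi-Score weights or article scores:

```python
# Hypothetical 100-pt grid; weights and earned points are illustrative.
weights = {"bibliometric": 14, "freshness": 12, "evidence": 20,
           "retractions": 10, "alerts": 9,
           "reporting": 3, "reproducibility": 16, "coi": 16}
api_visible = {"bibliometric", "freshness", "evidence",
               "retractions", "alerts"}

# A fictional NEJM-style industry trial: perfect bibliometrics and
# evidence level, but poor data sharing and heavy COI, which are
# visible only with the PDF.
earned = {"bibliometric": 14, "freshness": 12, "evidence": 20,
          "retractions": 10, "alerts": 9,
          "reporting": 0, "reproducibility": 3, "coi": 2}

def score(covered):
    # Normalise over the covered criteria only, as quick mode does.
    return round(100 * sum(earned[c] for c in covered)
                 / sum(weights[c] for c in covered))

quick_score = score(api_visible)  # 100: the visible criteria are all maxed
full_score = score(weights)       # 70: COI and data sharing now count
```

Because quick mode divides by the maximum of only the criteria it can see, an article that aces those criteria scores 100 regardless of what the hidden 35 pts would reveal; here the full score drops by 30 pts, the same direction as the gaps in the table above.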
Which mode to choose?
| Context | Recommended mode |
|---|---|
| First exploration, monitoring | ⚡ Quick |
| Clinical decision, citation, teaching | 🤖 Full AI (PDF) |
| PDF unavailable (NEJM, Lancet…) | 🔬 Partial AI (abstract) |
| Personal learning, methodological exploration | 📝 Full manual |
