API Documentation

AiCitationChecker REST API — POST /api/verify and supporting endpoints.

Contents

- Authentication
- POST /api/verify
- Status Codes
- Confidence Levels
- Reading Status + Confidence Together
- Known Behaviours & Gotchas
- Code Examples
- Tier Limits

Authentication

All API requests require a Bearer token in the Authorization header.

Get a token

POST /api/token
Content-Type: application/json

{"email": "your@email.com", "password": "your-password"}

Response: {"api_key": "…43-char token…", "username": "…", "tier": "…"}
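Building that request with only the standard library might look like the sketch below. The helper name is our own; the send step is left to `urllib.request.urlopen` so no credentials go out when the snippet itself runs:

```python
import json, urllib.request

def token_request(email, password):
    """Build (but do not send) the POST /api/token request."""
    payload = json.dumps({"email": email, "password": password}).encode("utf-8")
    return urllib.request.Request(
        "https://aicitationchecker.org/api/token",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = token_request("your@email.com", "your-password")
# Send with urllib.request.urlopen(req); the JSON response carries
# api_key, username, and tier as shown above.
```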

Rotate your key

POST /api/generate_key
Authorization: Bearer <current-key>

The old key is immediately invalidated.

Pass the key in every subsequent request:

Authorization: Bearer <api-key>

POST /api/verify

Verify a batch of bibliographic references against CrossRef, Semantic Scholar, and OpenAlex.

Request body

{
  "references": [
    "Smith, J.; Jones, A. Title of paper. Journal 2020, 10, 123.",
    "Doe, J. Another title. Journal 2021, 5, 456-470, https://doi.org/10.xxxx/yyyy."
  ],
  "mode": "all",
  "citation_style": "MDPI"
}

| Field | Type | Required | Options |
| --- | --- | --- | --- |
| references | array of strings | yes | One reference per string |
| mode | string | no (default all) | all · doi_only · no_doi_only |
| citation_style | string | no (default APA) | APA · MDPI · Chicago · Harvard · IEEE · Vancouver |

Mode values:

- all: verify every reference (default)
- doi_only: verify only references that already contain a DOI
- no_doi_only: verify only references without a DOI

Response body

{
  "credits_used": 10.0,
  "credits_remaining": 98432.0,
  "results": [
    {
      "index": 0,
      "input": "Smith, J.; Jones, A. Title of paper. Journal 2020, 10, 123.",
      "status": "OK",
      "confidence": "EXCELLENT",
      "doi": "10.xxxx/yyyy",
      "matched_title": "Title of paper",
      "formatted_citation": "Smith, J.; Jones, A. Title of paper. Journal 2020, 10, 123. https://doi.org/10.xxxx/yyyy",
      "page_mismatch": false,
      "pages_ref": null,
      "pages_crossref": "123-131"
    }
  ]
}

index corresponds to the position in the input references array (0-based).

page_mismatch is true when the page range extracted from the source differs from CrossRef — even for OK results. When true, use pages_crossref (the value in formatted_citation) as the authoritative page range.

Status Codes

| Status | Meaning |
| --- | --- |
| OK | Reference verified — DOI resolves and author/title match CrossRef. Page numbers are not checked by the status; always inspect page_mismatch and formatted_citation. |
| E1 | DOI cannot be resolved. Common causes: truncated DOI (journal-prefix only), DOI not indexed in CrossRef (some regional/Persian journals), or broken resolver. When a title is available the API attempts a title-based fallback and includes a suggestion. |
| E2 | DOI resolves but first author differs significantly — strong hallucination indicator. |
| E3 | Year mismatch >1 year. Preprint vs. final publication is tolerated (±1 year). |
| E4 | Title similarity below 70%. May also occur when a DOI resolves to a journal landing page rather than a specific article. |
| WRONG_DOI | The correct article was found by title search, but the DOI in the source points to a different article. Can be a false positive — verify manually before changing the DOI. |
| WRONG_AUTHOR | Title matches a real paper but authors do not — hallucination indicator. |
| WRONG_JOURNAL | Title and authors match but the journal differs. |
| SUGGESTION | Possible match found for a reference without a DOI. Confidence level (HIGH/MEDIUM/LOW) indicates reliability. LOW confidence for web/government sources usually means the search misfired — ignore it. |
| SUGGESTION_S2 | Match found via Semantic Scholar (broader coverage including arXiv and conference papers). Verify manually. |
| NO_DOI | No DOI found and no match in databases. For web, institutional, or legislative references this is expected and correct. |

Confidence Levels

| Confidence | Meaning |
| --- | --- |
| EXCELLENT | ≥95% title similarity — high certainty |
| GOOD | 85–95% — solid match with minor uncertainty |
| ACCEPTABLE | 70–85% — review the formatted citation carefully |
| WEAK | 50–70% — treat with caution; verify manually |
| CRITICAL | <50% — requires manual review regardless of status |
| NOT_FOUND | No match at all |

Reading Status + Confidence Together

| Combination | Recommended action |
| --- | --- |
| OK + EXCELLENT | Accept as-is; also check page_mismatch |
| OK + GOOD | Accept; spot-check if the reference is critical |
| E1 + suggestion present | DOI is broken/truncated but article was found by title — verify the suggestion, then correct the DOI |
| E4 + CRITICAL | Check both the DOI and the formatted citation carefully |
| WRONG_DOI + GOOD | Verify manually — may be a false positive |
| SUGGESTION + LOW | Ignore if source has no DOI (misfired); verify manually otherwise |
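The table above can be folded into a small helper for batch post-processing. A sketch, where the action strings are paraphrases of the recommendations and any unlisted combination falls back to manual review:

```python
def recommended_action(status, confidence, has_suggestion=False):
    """Map a (status, confidence) pair to a review action."""
    if status == "OK" and confidence == "EXCELLENT":
        return "accept; check page_mismatch"
    if status == "OK" and confidence == "GOOD":
        return "accept; spot-check if critical"
    if status == "E1" and has_suggestion:
        return "verify the suggestion, then correct the DOI"
    if status == "E4" and confidence == "CRITICAL":
        return "check DOI and formatted citation"
    if status == "WRONG_DOI" and confidence == "GOOD":
        return "verify manually: possible false positive"
    if status == "SUGGESTION" and confidence == "LOW":
        return "ignore if source has no DOI; otherwise verify"
    return "manual review"

print(recommended_action("WRONG_DOI", "GOOD"))  # verify manually: possible false positive
```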

Known Behaviours & Gotchas

OK status does not guarantee correct page numbers

The status reflects DOI resolution and title match — page numbers are not validated. When the source page range differs from CrossRef the response includes "page_mismatch": true along with pages_ref and pages_crossref. Always apply the page range from formatted_citation (which reflects CrossRef data).
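A defensive helper for picking the page range, assuming the pages_ref/pages_crossref fields shown in the response above (the fallback order for non-mismatch cases is our own choice, not documented behaviour):

```python
def authoritative_pages(result):
    """Pick the page range to trust for one /api/verify result.

    When page_mismatch is true the CrossRef range wins; otherwise
    fall back from the source range to CrossRef's.
    """
    if result.get("page_mismatch"):
        return result.get("pages_crossref")
    return result.get("pages_ref") or result.get("pages_crossref")

r = {"status": "OK", "page_mismatch": True, "pages_ref": "123", "pages_crossref": "123-131"}
print(authoritative_pages(r))  # 123-131
```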

E1 — common causes

| Cause | Example |
| --- | --- |
| Truncated DOI (journal prefix only) | 10.1680/ensu. instead of 10.1680/ensu.14.00017 |
| DOI valid but not indexed in CrossRef | Some Persian journals, IDOSI journals, regional publishers |
| DOI resolver link broken | Journal hosted on an alternative server |

When E1 is returned and the reference contains a title, the API automatically runs a title-based search. If a match is found (≥70% similarity) it is included as a suggestion in the response, even though the status remains E1.
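Handling that case might look like the sketch below. Note that the `suggestion` key is an assumption: the exact response field name is not specified here, so check your actual payloads before relying on it.

```python
def e1_followup(result):
    """Decide the next step for an E1 result.

    NOTE: the `suggestion` key is an assumption; inspect a real
    response to confirm the field name.
    """
    if result.get("status") != "E1":
        return "not an E1 result"
    if result.get("suggestion"):
        return "verify the suggested match, then correct the DOI"
    return "no title fallback available; check the DOI by hand"

print(e1_followup({"status": "E1", "suggestion": "Some matched title"}))
# verify the suggested match, then correct the DOI
```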

Journal-level DOI vs. article DOI

Some publishers assign a DOI to the journal itself (e.g. the journal prefix without an article suffix). The API detects this case and returns E1 with a note, rather than E4. Check whether the PDF of the article actually contains a DOI before trusting the source.

WRONG_DOI false positives

WRONG_DOI with GOOD confidence can be a false positive. Always verify manually before changing the DOI in the document.

et al. expansion

When a reference contains et al., the API expands the full author list in formatted_citation by looking up the DOI in CrossRef. This works reliably and is particularly useful for MDPI compliance.

Input format flexibility

The API handles mixed input styles. If a reference is in APA style but contains a valid DOI, the output is correctly reformatted to the requested citation style regardless of the input format. Author separators, initials, and journal capitalisation are all normalised.

Ahead-of-print / early-access citations

If a reference was cited from an early-access version, the API may return final published metadata with an updated year, volume, and page range. Always apply these corrections from formatted_citation.

References without DOIs (web, government, legislative)

NO_DOI for these sources is expected and correct, not an error. If such a reference returns SUGGESTION with LOW confidence, the search misfired — ignore the result.

ASCE-style DOIs encode the starting page

ASCE journal DOIs use the format 10.1061/(ASCE)…:vol(startpage). The API derives the full page range from CrossRef automatically, even when the source only lists volume and issue number.
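For illustration only (the API already resolves the full range via CrossRef), a regex that pulls the start page out of an ASCE-style DOI. The sample DOI is invented to match the documented pattern:

```python
import re

def asce_start_page(doi):
    """Extract the starting page from an ASCE-style DOI of the form
    10.1061/(ASCE)...:vol(startpage). Returns None if the DOI does
    not end in the :vol(startpage) pattern."""
    m = re.search(r":\d+\((\d+)\)$", doi)
    return m.group(1) if m else None

# Sample DOI invented to match the documented pattern:
print(asce_start_page("10.1061/(ASCE)0733-9372(2004)130:3(273)"))  # 273
```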

Code Examples

curl

curl -s -X POST https://aicitationchecker.org/api/verify \
  -H "Authorization: Bearer <key>" \
  -H "Content-Type: application/json" \
  -d '{
    "references": [
      "Mealy, P.; Teytelboym, A. Economic complexity and the green economy. Res. Policy 2022, 51, 103948, https://doi.org/10.1016/j.respol.2020.103948."
    ],
    "mode": "all",
    "citation_style": "MDPI"
  }'

Python (no third-party libraries)

import json, urllib.request

KEY  = "your-api-key-here"
REFS = [
    "Mealy, P.; Teytelboym, A. Economic complexity and the green economy. "
    "Res. Policy 2022, 51, 103948, https://doi.org/10.1016/j.respol.2020.103948.",
]

payload = json.dumps({
    "references": REFS,
    "mode": "all",
    "citation_style": "MDPI"
}).encode("utf-8")

req = urllib.request.Request(
    "https://aicitationchecker.org/api/verify",
    data=payload,
    headers={
        "Authorization": f"Bearer {KEY}",
        "Content-Type": "application/json"
    }
)
with urllib.request.urlopen(req) as resp:
    data = json.loads(resp.read().decode("utf-8"))

print(f"Credits used: {data['credits_used']}  Remaining: {data['credits_remaining']}")
for r in data["results"]:
    pm = " ⚠ page mismatch" if r.get("page_mismatch") else ""
    print(f"[{r['index']+1}] {r['status']:12s} {r.get('confidence',''):10s}  DOI: {r.get('doi','—')}{pm}")

Batch from a .docx file

import json, urllib.request
from docx import Document

DOCX  = "path/to/paper.docx"
KEY   = "your-api-key-here"
BATCH = 30   # max per tier (see Tier Limits below)

doc = Document(DOCX)
in_refs = False
refs = []
for para in doc.paragraphs:
    t = para.text.strip()
    if not in_refs:
        if t.lower() == "references":
            in_refs = True
        continue
    if t.lower().startswith(("disclaimer", "appendix", "supplementary")):
        break
    if t:
        refs.append(t)

for i in range(0, len(refs), BATCH):
    batch = refs[i:i+BATCH]
    payload = json.dumps({
        "references": batch,
        "mode": "all",
        "citation_style": "MDPI"
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://aicitationchecker.org/api/verify",
        data=payload,
        headers={"Authorization": f"Bearer {KEY}", "Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    for r in data["results"]:
        global_idx = i + r["index"] + 1
        pm = " ⚠ PAGE MISMATCH" if r.get("page_mismatch") else ""
        print(f"[{global_idx:03d}] {r['status']:12s} {r.get('confidence',''):10s}  {r['input'][:60]}...{pm}")

Tier Limits

| Tier | Runs / hour | Refs / run |
| --- | --- | --- |
| free | 20 | 10 |
| regular | 60 | 30 |

Credits are deducted before processing (batch size is checked first). The operation order is: auth → parse → daily top-up → rate limit → batch size check → estimate cost → credit check → deduct → verify → record → return JSON.
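Because a request can be rejected at several of these steps (rate limit, batch size, credit check), it is worth surfacing HTTP failures explicitly. A sketch: the specific error status codes are not documented above, so this wrapper simply reports whatever the server returns; the opener parameter is only an injection point for testing.

```python
import io, json, urllib.error, urllib.request

def post_verify(req, opener=urllib.request.urlopen):
    """Send a prepared /api/verify request and surface HTTP failures.

    By default the request goes out over the network via urlopen.
    """
    try:
        with opener(req) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as e:
        # Whichever pipeline step rejected the request (rate limit,
        # batch size, credits), report the status code and body.
        body = e.read().decode("utf-8", "replace")
        raise RuntimeError(f"verify failed: HTTP {e.code}: {body}") from e
```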
