A Public Release Audit of Trigeminal Neuralgia Metadata Shows Why Surgical Outcome Prediction Is Not Yet Reproducible
Open neuroimaging datasets can accelerate trigeminal neuralgia research only if their public clinical metadata are complete enough to reproduce outcome analyses. We audited the public OpenNeuro ds005713 release and tested whether its accessible tabular data support a common prognostic question: does coded Sindou neurovascular compression grade predict postoperative pain? The release README describes 119 patients, 53 healthy controls, 64 follow-up MRI records, and 93 telephone follow-ups. In contrast, the participants.tsv file fetched from the public S3 endpoint on April 25, 2026, contained 51 rows, all trigeminal neuralgia patients, while MRIQC group files contained 110–158 unique participant IDs depending on modality. This mismatch means that imaging availability is far broader than public clinical-outcome availability. Within the 51-row clinical table, 49 patients underwent surgery, but immediate postoperative outcome showed a strong ceiling effect: 42 of 49 operated patients (85.7%) had 100% pain reduction. Second follow-up pain scores were available for 46 operated patients, and only 5 had residual pain greater than zero. Sindou grade showed no stable association with follow-up pain (Spearman rho -0.10, 95% bootstrap interval -0.46 to 0.28; Fisher's exact p = 0.50 after dichotomization), and leave-one-out multivariable prediction performed poorly (AUC 0.44 with missing Sindou retained; AUC 0.32 among known Sindou cases). The result is not that neurovascular compression is unimportant. It is sharper: this public release, as currently auditable from its tabular metadata, is not yet sufficient for reproducible surgical prognostic modeling.
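The bootstrap interval reported for the Spearman correlation can be reproduced generically. The sketch below, using only the Python standard library, is a minimal illustration of the procedure (rank both variables with tie-averaged ranks, compute Pearson correlation of the ranks, then resample patient pairs with replacement for a percentile interval); the variable names and the synthetic data are illustrative, not taken from the dataset.

```python
import random
import statistics

def tie_ranks(xs):
    """Assign 1-based ranks, averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of tie-averaged ranks."""
    rx, ry = tie_ranks(x), tie_ranks(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0

def bootstrap_ci(x, y, n_boot=2000, seed=0):
    """Percentile 95% CI for Spearman rho by resampling (x, y) pairs."""
    rng = random.Random(seed)
    n = len(x)
    rhos = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx, by = [x[i] for i in idx], [y[i] for i in idx]
        if len(set(bx)) > 1 and len(set(by)) > 1:  # skip degenerate resamples
            rhos.append(spearman_rho(bx, by))
    rhos.sort()
    return rhos[int(0.025 * len(rhos))], rhos[int(0.975 * len(rhos))]

# Illustrative only: hypothetical Sindou grades and follow-up pain scores.
grades = [1, 2, 3, 1, 2, 3, 2, 1, 3, 2]
pain = [0, 0, 0, 0, 2, 0, 0, 0, 1, 0]
lo, hi = bootstrap_ci(grades, pain)
print(spearman_rho(grades, pain), lo, hi)
```

With an outcome this heavily ceilinged (mostly zeros), the bootstrap interval is typically wide and crosses zero, which is consistent with the instability the audit reports.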
Reviews
This paper makes a narrowly framed, testable contribution: it treats OpenNeuro ds005713 as an object of audit and asks whether the publicly accessible tabular metadata are sufficient to reproduce a basic surgical prognostic analysis (Sindou grade → postoperative pain). The main support is empirical and internally coherent: the author reports a concrete retrieval date and endpoint, and documents a large discrepancy between README-described cohort counts and the publicly fetched participants.tsv (51 rows, all TN) while MRIQC group outputs reference far more unique IDs (110–158). Given the reported severe outcome ceiling effects (very high immediate 100% pain reduction; very few nonzero follow-up pain values), the negative/unstable associations and poor cross-validated discrimination (AUC < 0.5) are plausible consequences of limited outcome variance and potential selection or missingness. The main weakness is that the audit's strongest claim ("public release is not yet sufficient for reproducible surgical prognostic modeling") depends on details that are only partly specified in the excerpt: how IDs were counted across modalities, whether MRIQC ID sets truly correspond to available imaging in the released BIDS tree, how missing Sindou and follow-up values were handled, and which covariates entered the multivariable model. Because the paper provides no references list and only an excerpted methods context, it is hard to verify that the dataset mismatch cannot be explained by versioning, access controls, de-identification, or separate derivative releases. Still, the core conclusion—that this particular public tabular metadata snapshot does not support robust, reproducible outcome prediction—is largely justified given the reported small effective N, high missingness/coverage mismatch, and outcome ceiling effects, but it should be framed explicitly as "as of date X and public endpoint Y; reproducibility limited by metadata release/variance" rather than as a general statement about TN prognosis or the prognostic value of neurovascular compression.
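The review's concern about how IDs were counted across modalities is mechanical enough to illustrate. MRIQC group TSVs carry a `bids_name` column whose entries embed the `sub-<label>` entity, so a per-modality count and a cross-modality union can differ. The sketch below, using only the standard library, shows one plausible counting procedure on toy TSV content; the column name `bids_name` matches MRIQC's group reports, but the example data are invented.

```python
import csv
import io

def unique_subjects(tsv_text):
    """Extract the set of sub-<label> IDs from an MRIQC group TSV's bids_name column."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    subs = set()
    for row in reader:
        # bids_name looks like "sub-01_run-2_T1w"; the subject entity comes first.
        for part in row.get("bids_name", "").split("_"):
            if part.startswith("sub-"):
                subs.add(part)
                break
    return subs

# Toy stand-ins for group_T1w.tsv and group_bold.tsv (invented values).
group_t1w = (
    "bids_name\tsnr\n"
    "sub-01_T1w\t12.3\n"
    "sub-02_T1w\t11.8\n"
    "sub-01_run-2_T1w\t12.0\n"  # repeated subject: rows != subjects
)
group_bold = (
    "bids_name\tfd_mean\n"
    "sub-01_task-rest_bold\t0.10\n"
    "sub-03_task-rest_bold\t0.21\n"
)

t1w_ids = unique_subjects(group_t1w)
bold_ids = unique_subjects(group_bold)
print(len(t1w_ids), len(bold_ids), len(t1w_ids | bold_ids))
```

The point of the sketch is that per-modality counts (here 2 and 2) and the union (3) answer different questions, which is exactly the ambiguity behind a range like "110–158 depending on modality".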