Final Review

Review Nº 15

Factors influencing trust in AI in the healthcare system
AuthorsFrancesco Sergio Blandino, Giorgia Brusa, Alessandro Russo, Sonia Visco, Erika Zhou, Elena Corio, Sarvar Toshmurodov, Elisa Ciani, Bruno Yael Zambrano Rios and Samin Nour Azaran
Score · 27/30
The revision delivers substantial, well-executed improvements on the most-cited weaknesses (inline citations, abstract, conclusion, PRISMA-ScR diagram, interaction section, and a usable checklist), but the still-insufficient practice sources and the absence of any named legal framework prevent a full recovery from the original rejection-level concerns.

The Pros

8 Items
+ Inline citations are now present for almost every framework claim, transforming the text from essay-like to evidence-anchored.
+ A dedicated abstract and a final conclusion now bracket the paper, fixing two ACM formatting gaps.
+ Section 3.5 (Interaction of Factors) provides the cross-dimensional synthesis multiple reviewers requested.
+ Section 3.6 introduces a practical, dimension-mapped checklist with high-risk indicators and a double-check protocol.
+ PRISMA-ScR is explicitly adopted and visualized in Appendix G with full identification, screening, and inclusion counts.
+ The new coding scheme (Appendix H) classifies all 48 sources by Factor, Theme, and Evidence Type, improving transparency.
+ Concrete bias examples (pediatric scans, underrepresented ethnicities, hallucinated diagnoses) now appear in the dentistry use case.
+ The framework diagram (Appendix I) gives a clean visual summary of the four pillars.

The Cons

7 Items
- Only one source is labeled "Practice" in the coding scheme; the three-source minimum is still not visibly met, even though the categorization mechanism now exists.
- No specific legal frameworks (EU AI Act, GDPR, FDA regulatory pathway, WHO guidance) are cited in the regulation discussion, despite three reviewers requesting this.
- The framework's introduction, abstract, and examples remain heavily healthcare-bound; the title is generalized but the body is not.
- Iteration/co-design transparency (Reviewer 2) is still not visible; there is no indication of how the framework evolved between submissions.
- Repetition persists: the black-box concept is re-explained in Sections 3.1, 3.5, and 4.2, and the "support, not replace" point recurs across Sections 3.3, 5, and 6.
- The clinical-error narrative requested by Reviewer 4 (a step-by-step misdiagnosis scenario) is still missing.
- The reflection on AI-tool limitations in the methodology (hallucination/bias risk in the summarization itself) is brief and could be deeper.

Suggested Changes

12 Pointers
01 · High
Location: Whole document (length)
Issue: The main body still spans 5 pages
Suggested Fix: No action needed; the length requirement has been relaxed for the final submission
02 · High
Location: Section 3.4 (Organizational and contextual factors)
Issue: Regulation is discussed in fully abstract language despite three reviewers asking for named frameworks
Suggested Fix: Cite the EU AI Act (high-risk system classification for clinical AI), the MDR/FDA SaMD pathway, and at least one WHO guidance document on AI in health; tie each to one calibration consequence
03 · High
Location: Appendix H (coding scheme) and References
Issue: Only [25] is classified as "Practice"; the three-source practice threshold is not visibly satisfied
Suggested Fix: Reclassify or add at least two practice sources (e.g., the EU AI Act text, the FDA AI/ML SaMD action plan, the WHO 2021 ethics and governance guidance) and update the coding scheme accordingly
04 · High
Location: Title, Abstract, Section 1
Issue: The framework is still introduced through a healthcare lens, undercutting the generality requirement flagged by Reviewers 7 and 12
Suggested Fix: Rewrite the abstract and the first two paragraphs of the introduction in sector-neutral terms (worker / client / professional task / high-stakes domain), then position healthcare as a motivating example rather than the scope
05 · Medium
Location: Sections 3.1, 3.5, and 4.2
Issue: The "black box" concept is explained three times with overlapping content
Suggested Fix: Define the black-box problem once in Section 3.1, including the bias/transparency intersection, then reference it from 3.5 and 4.2 without re-explaining it
06 · Medium
Location: Section 4 (Worked Use Case)
Issue: Reviewer 4's specific request for a concrete clinical-error narrative is still missing
Suggested Fix: Add a 4–5 sentence walkthrough: the AI flags a non-existent caries on a pediatric radiograph (training-data bias), the dentist under time pressure accepts it, the treatment plan is altered, and the error is caught only at the second-opinion stage; map each step to a checklist failure
07 · Medium
Location: Method section / Appendix E
Issue: The mitigation of AI hallucination risk in the team's own use of ChatGPT/Gemini/Copilot is described only briefly; Reviewer 5 asked for explicit human-in-the-loop verification
Suggested Fix: Add 2–3 sentences specifying (a) which steps used AI, (b) what the human verification gate was at each step, and (c) one concrete example of a fabricated reference caught and removed
08 · Medium
Location: Section 3.6 (Checklist)
Issue: The checklist is excellent but uses descriptive answers; Reviewer 2 asked for measurable formats
Suggested Fix: Convert each row to a Yes/No or Low/Medium/High response, and add a fifth row aggregating the four answers into a "Delegate / Verify / Do Not Delegate" recommendation
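To make the suggested aggregation concrete, here is a minimal sketch of one possible decision rule. The function name and the thresholds are hypothetical illustrations, not taken from the submission; the authors would need to calibrate the cut-offs to their four checklist dimensions.

```python
def recommend(answers):
    """Map four Yes/No checklist answers to a delegation recommendation.

    `answers` is a list of four booleans, one per checklist dimension
    (True = the dimension passed). The thresholds below are illustrative:
    all four must pass to delegate, two or three passes mean the output
    should be verified by a human, and fewer than two mean no delegation.
    """
    passed = sum(bool(a) for a in answers)
    if passed == 4:
        return "Delegate"
    if passed >= 2:
        return "Verify"
    return "Do Not Delegate"
```

For example, `recommend([True, True, False, True])` yields "Verify", signalling that the AI output may be used but only with a human check on the failed dimension.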
09 · Medium
Location: Section 3.2 (Human factors)
Issue: The "new skills" healthcare workers need are mentioned but never defined (Reviewer 6)
Suggested Fix: Add one sentence enumerating the specific competencies: AI output interpretation, anomaly detection, calibration awareness, and prompt/query literacy
10 · Low
Location: Whole document (no visible iteration trace anywhere in the paper)
Issue: Reviewer 2 specifically flagged the missing co-design / iteration narrative
Suggested Fix: Add a one-paragraph "Revision Notes" sub-section (or footnote) summarizing what changed between v1 and v2 in response to peer review; this directly addresses a Type-3 expectation
11 · Low
Location: Appendix C (Search Strings)
Issue: Reviewer 6 flagged unprofessionally simple keywords ("AI + orthodontist")
Suggested Fix: Replace them with formal Boolean strings, e.g., ("artificial intelligence" OR "machine learning") AND ("trust" OR "trust calibration") AND ("clinician" OR "healthcare worker"); list one canonical string per database
12 · Low
Location: Section 5 (Gaps)
Issue: Reviewer 5 asked for a prioritized future-work agenda
Suggested Fix: Convert the section's loose list into a short ranked table: Priority 1 (longitudinal trust evolution), Priority 2 (training curricula), Priority 3 (legal accountability frameworks)
Final Review · Submission #15 · The Index · Grandi Sfide · 2026