Silverchair has published its first annual report on the Future of Peer Review – have you read it yet? We found their insights and discussion on AI and research integrity particularly interesting, and the assertion that these tools should be treated as an enhancement to our ability to scale, not a replacement for human judgment, resonates with us strongly. At PA EDitorial we believe that AI works best when it is combined with human judgment and expert knowledge. A human-in-the-loop model lets AI automation do the heavy lifting while people provide the perspective. This approach isn’t just efficient; it’s ethically necessary. Researchers’ careers, reputations, and lives depend on the decisions we make, and while AI can assist, it shouldn’t be the final arbiter. Do you agree? If you’d like to explore how PA EDitorial can support you in navigating this pivotal moment in publishing, get in touch - link in the comments. Access the report here: https://lnkd.in/er-EYS_e #AcademicPublishing #HumanInTheLoop #AIinPublishing
Silverchair's report on the future of peer review and AI's role in publishing.
-
📢 It's here: Silverchair just released its inaugural Future of Peer Review report, a comprehensive look at reviewer behavior, emerging pain points, and how AI will transform the peer review landscape. This report explores what's working, what needs to evolve, and how we can ensure publishing continues to thrive. Check it out! https://lnkd.in/g8gnVh2f
-
GLCE & Peer Review Week 2025 | Panel Discussion on AI in Peer Review We are delighted to share that Prof. Ning Zhang, Editor-in-Chief of Green and Low-Carbon Economy (GLCE), joined the Peer Review Week 2025 Panel Discussion on “AI in Peer Review: Threat or Transformation?” The discussion covered topics such as the value of AI in peer review, as well as its risks and challenges. We are also very pleased to share that another distinguished member of the GLCE editorial board joined as a guest, bringing fresh perspectives to our journal and contributing his ideas reflecting on the event. This panel will explore whether AI represents a threat to traditional peer review — or a transformation that can enhance quality, efficiency, and fairness. This participation reflects GLCE’s commitment to advancing best practices in scholarly publishing and exploring innovative tools that can enhance editorial workflows and peer review quality. We sincerely thank Peer Review Week and Bon View Publishing, and look forward to this meaningful exchange of ideas. #PeerReviewWeek #AI #PeerReview #AcademicPublishing #GreenandLowCarbonEconomy #ScholarlyCommunication #OpenScience
-
Building AI that can actually synthesize knowledge? Here's what you need to measure. Most teams building LLM systems focus on basic accuracy metrics, but knowledge synthesis and generation systems need much deeper evaluation. Some of the metrics that matter:
1. Nugget Coverage - Are you capturing all essential facts?
2. Citation Precision - Do your sources actually support your claims?
3. Reference Coverage - Are you finding the authoritative sources experts expect?
4. Document Importance - Are you prioritizing high-impact, well-cited sources?
Whether you're building systems for academic research, competitive intelligence, or medical literature reviews, these metrics separate impressive demos from production-ready AI that professionals can trust. You can read more about these in the attached blog; a small illustrative sketch of the first two metrics follows below. #AI #MachineLearning #KnowledgeManagement #LLM #ResearchAI
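To ground a couple of these, here is a minimal Python sketch of how Nugget Coverage and Citation Precision could be scored. Everything in it is an assumption for illustration: the Claim structure, the gold-nugget sets, and the per-claim human support judgments are hypothetical stand-ins, not an API from the blog referenced above.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    cited_sources: set[str]       # source IDs the system attached to this claim
    supporting_sources: set[str]  # subset a human judge agreed actually supports it


def nugget_coverage(gold_nuggets: set[str], captured_nuggets: set[str]) -> float:
    # Share of essential facts ("nuggets") that the generated synthesis captured.
    if not gold_nuggets:
        return 1.0
    return len(gold_nuggets & captured_nuggets) / len(gold_nuggets)


def citation_precision(claims: list[Claim]) -> float:
    # Share of attached citations that a judge confirmed support their claim.
    cited = sum(len(c.cited_sources) for c in claims)
    supported = sum(len(c.cited_sources & c.supporting_sources) for c in claims)
    return supported / cited if cited else 1.0


if __name__ == "__main__":
    # Hypothetical annotations for one generated literature summary.
    gold = {"drug X lowers LDL", "effect fades after 12 months", "trial enrolled 2400 patients"}
    found = {"drug X lowers LDL", "trial enrolled 2400 patients"}
    claims = [
        Claim("Drug X lowers LDL.", {"doi:a", "doi:b"}, {"doi:a"}),
        Claim("The trial enrolled 2400 patients.", {"doi:c"}, {"doi:c"}),
    ]
    print(f"nugget coverage:    {nugget_coverage(gold, found):.2f}")   # 0.67
    print(f"citation precision: {citation_precision(claims):.2f}")    # 0.67

Reference Coverage and Document Importance would follow the same set-overlap pattern, scored against an expert-curated bibliography and citation-impact data respectively.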
-
Peer Review Week 2025: "Rethinking Peer Review in the AI Era" From 15–19 September, the global research community comes together to mark Peer Review Week—a time to recognize the crucial role peer review plays in safeguarding the quality and integrity of science. This year’s theme, “Rethinking Peer Review in the AI Era”, calls on scholars, reviewers, and publishers to reflect on how artificial intelligence is changing research and publishing and what that means for the future of peer review. Through events, webinars, and discussions worldwide, the week highlights why rigorous, fair, and transparent review remains the cornerstone of scholarly communication. Read more about Peer Review Week 2025: https://brnw.ch/21wVLgW #MDPI #PeerReviewWeek #AcademicPublishing #ResearchIntegrity #OpenAccess #AIinScience
-
Dr. Kusal Weerasinghe emphasizes that AI should empower and support, not replace, human judgment. A great reminder that the future of research will depend not just on technology but on how responsibly we integrate it, balancing innovation with ethics, expertise, and critical thinking.
Generative AI could radically widen who gets to do health research… but only if we use it wisely. At the 2025 annual RDF Conference, Kusal Weerasinghe from Medway NHS Foundation Trust made the case that, with today’s advances, using AI in research processes could bring in new voices, ideas, and perspectives. The Trust’s AI-powered tool helps researchers navigate designing a protocol, one of the biggest early hurdles in research. But Kusal was clear - this isn’t about replacing human judgment. He pointed out the potential pitfalls of using AI in research:
> The hype effect: using AI for everything just because it’s available.
> The overtrust effect: assuming AI should make the decisions for us.
By constraining the AI’s role, keeping human oversight, and focusing on where it adds genuine value, Medway NHS Foundation Trust’s approach is opening the door to more diverse research without losing the expertise, ethics, and critical thinking that good science depends on. Submit your own work for next year's conference here: https://rdfconference.org Dr Jennifer Teke #HealthcareResearch #UKHealthcare #RDF26
Using AI in Research: Promise, Pitfalls, and Protecting the Human Element
-
⚡ The Future of Medicine Won’t Be About Who Knows More Data; It Will Be About Who Uses It Wisely. In today’s world, AI can already:
🔹 Spot disease patterns invisible to the human eye
🔹 Anticipate deterioration before it’s clinically evident
🔹 Sift through oceans of data in seconds
But here’s the paradox: the value of a doctor in the age of AI won’t lie in competing with algorithms. It will lie in:
❤️ Empathy and trust
⚖️ Ethical decision-making
🧠 Translating machine insights into patient-centered care
As AI grows more capable, being human becomes our greatest competitive advantage. 👉 What do you believe will define the best clinicians of the AI era: technical expertise, or the human touch? #AIinHealthcare #FutureOfMedicine #MedicalInnovation #DigitalHealth #HumanCenteredCare #ICAIM #DoctorsAI #DoctorsAIGlobalSummit2025
-
Peer review is more than a process; it’s the foundation of trustworthy science. As AI transforms scholarly publishing, we must ask:
🔹 What should remain uniquely human in peer review?
🔹 How can AI tools responsibly support reviewers?
🔹 What safeguards ensure integrity?
This #PeerReviewWeek, FASEB Publications is proud to prioritize peer review and research integrity in every decision we make. #PRW2025 #ScholarlyPublishing
-
AI isn't just for sci-fi anymore. It's helping us find answers in vast amounts of data, from medical research to figuring out traffic patterns. #Innovation #Future
-
When I look back at where my academic journey began, one of our research papers, written by my students (Khushi Vora Shrey Verma Aryan Jain), was on AI bias. At that time, I was deeply curious about how something we often call “objective” and “data-driven” could still reflect, and even amplify, human prejudices. That paper wasn’t written with “AI governance” in mind. But in hindsight, we can see how strongly the two are connected. Bias in AI isn’t just a technical flaw that can be fixed with better data or algorithms. It’s a governance issue; it raises questions like:
1. Who gets to decide what is fair?
2. How do we ensure accountability when AI systems make impactful decisions?
3. What structures and safeguards are needed so that AI benefits everyone, not just a few?
Our early work on bias gave us a first glimpse of these bigger questions. And today, as we start thinking and writing more about AI governance, I realize that governance is really about creating the rules, frameworks, and values that prevent problems like bias from taking root in the first place. In many ways, bias is the symptom, while governance is the system of care. If bias exposes the risks of AI, governance is about building trust, transparency, and accountability around it! As we move forward, I want to carry that early curiosity with me, shifting from the micro lens of AI fairness to the broader lens of AI governance. Because ultimately, responsible AI isn’t just about fixing isolated problems; it’s about building a foundation where innovation and ethics can move together! Link to paper here: https://lnkd.in/dpq3MiS5 #aigovernance #ai #artificialintelligence #dataprivacy #governance
-
Our early work on AI bias is what pulled me into XAI: making models explainable is key to ensuring fairness and governance. Excited to see these ideas become central in today’s discussions. Link to the paper: https://lnkd.in/edx8SrTw #XAI #AIGovernance #ResponsibleAI #AI