🌍 How can deep learning transform medical imaging to deliver sharper results, faster scans, and lower barriers to clinical use? Advanced imaging techniques – like MRI, CT, ultrasound, or photoacoustic imaging – are essential to diagnosis and treatment, but they often demand long scan times, high costs, and large amounts of data. At our Data Science Seminar, Navchetan Awasthi will present PA-OmniNet, a deep learning framework for image reconstruction from sparse data. PA-OmniNet produces high-quality images without the need for retraining, making imaging more accurate, efficient, and easier to deploy across different clinical settings.
📅 23 September 2025
🕙 10:00 – 11:00
📍 LAB42, Science Park 900, Amsterdam
👤 Speaker: Navchetan Awasthi, Assistant Professor at the Informatics Institute and member of the QurAI team.
➡️ The seminar is free and open to all disciplines. More information and registration: https://lnkd.in/eZ-Nrbji
#UvADataScienceCentre #datascience #deeplearning #medicalimaging
I’m pleased to present, together with my fellow Guest Editors, the Special Section on Generative AI for Medical Imaging in the Journal of Medical Imaging. Generative models (GANs, VAEs, diffusion models, vision-language systems) are moving beyond research prototypes and becoming tools with real potential to transform medical image reconstruction, synthesis, and interpretation. We look forward to submissions that address both the opportunities and the challenges of bringing these technologies into clinical practice. 📅 Submissions open September 1st 🔗 Call for Papers: https://lnkd.in/dG2Q289n #SPIE #GenAI #MedicalImaging
Call for Papers: Special Section on Generative AI for Medical Imaging from the Journal of Medical Imaging. Generative AI is transforming medical imaging — not only analyzing images, but creating them. From GANs and VAEs to diffusion models and vision-language systems, these technologies are opening new frontiers in image synthesis, reconstruction, multimodal integration, and clinical decision support. Along with the 5 other guest editors of this special section (Jon Tamir, Tomaž Vrtovec, Maria Chiara Fiorentino, Robert Martí, Cheng Li), I'm excited to announce a Special Section on Generative AI for Medical Imaging in the Journal of Medical Imaging (JMI) (Editor-in-Chief: Bennett Landman). Submissions open in Q3 (I will update this post when the exact date is announced). We welcome high-quality, original research and critical reviews that push the boundaries of how generative models can advance imaging modalities and clinical practice. Topics of interest include (but are not limited to):
* Image reconstruction, enhancement, and super-resolution
* Synthetic data generation and domain adaptation
* Multimodal synthesis, cross-domain generation, and VLMs/LLMs for imaging
* Ethical, fairness, and regulatory considerations
* Clinical integration, validation, and workflow applications
Full call for papers: https://lnkd.in/eARwvH58 We also welcome high-quality submissions based on work presented at top conferences such as MICCAI, IPMI, MIDL, CVPR, ICLR, ICML, SPIE Medical Imaging, and others. If your research is advancing the frontiers of Generative AI in Medical Imaging, we would love to see your submission! #SPIE #JMI #GenAI #MedicalImaging #Callforpapers
Excited to see the new Special Issue on "Applications of Computational Medicine" published in the Journal of Informatics and Medical Imaging (@Lidsen Journals). This collection delves into the powerful convergence of data science, AI, and clinical practice. The research covers groundbreaking work in medical imaging informatics, AI-powered diagnostics, computational models for disease progression, and novel drug discovery methods. It's inspiring to see how computational methods are moving from theory to tangible applications that can improve patient outcomes and revolutionize healthcare delivery. A must-read for anyone in health tech, medical research, data science, or clinical practice looking to stay at the forefront of innovation. Check out the issue here: https://lnkd.in/dJxwsuCW #ComputationalMedicine #AIinHealthcare #HealthTech #MedicalImaging #DigitalHealth #AI #MachineLearning #MedTech #Innovation #Research #PrecisionMedicine
Does DINOv3 Set a New Medical Vision Standard? https://lnkd.in/eQSCv2vH The advent of large-scale vision foundation models, pre-trained on diverse natural images, has marked a paradigm shift in computer vision. However, how well the capabilities of frontier vision foundation models transfer to specialized domains such as medical imaging remains an open question. This report investigates whether DINOv3, a state-of-the-art self-supervised vision transformer (ViT) with strong capabilities in dense prediction tasks, can directly serve as a powerful, unified encoder for medical vision tasks without domain-specific pre-training. To answer this, we benchmark DINOv3 across common medical vision tasks, including 2D/3D classification and segmentation on a wide range of medical imaging modalities. We systematically analyze its scalability by varying model sizes and input image resolutions. Our findings reveal that DINOv3 shows impressive performance and establishes a formidable new baseline. Remarkably, it can even outperform medical-specific foundation models like BiomedCLIP and CT-Net on several tasks, despite being trained solely on natural images. However, we identify clear limitations: the model's features degrade in scenarios requiring deep domain specialization, such as Whole-Slide Pathological Images (WSIs), Electron Microscopy (EM), and Positron Emission Tomography (PET). Furthermore, we observe that DINOv3 does not consistently obey scaling laws in the medical domain; performance does not reliably increase with larger models or finer feature resolutions, showing diverse scaling behaviors across tasks. Ultimately, our work establishes DINOv3 as a strong baseline whose powerful visual features can serve as a robust prior for multiple complex medical tasks. This opens promising future directions, such as leveraging its features to enforce multiview consistency in 3D reconstruction. --- Newsletter https://lnkd.in/emCkRuA More story https://lnkd.in/enY7VpM LinkedIn https://lnkd.in/ehrfPYQ6 #AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning #ComputerVision
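For readers who want to picture the benchmarking setup: the standard way to test whether a frozen encoder "directly serves" a new domain is a linear probe, where only a single linear layer is trained on top of frozen features. Below is a minimal PyTorch sketch of that pattern. The torch.hub repository and model names are assumptions modeled on the DINOv2 release, not the report's actual code; check the official DINOv3 repository for the real entry points.

```python
import torch
import torch.nn as nn

# Load a pretrained ViT backbone as a frozen feature extractor.
# ASSUMPTION: the hub repo/model names mirror the DINOv2 release.
backbone = torch.hub.load("facebookresearch/dinov3", "dinov3_vits16")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False         # the encoder stays frozen throughout

num_classes = 2                     # e.g. a binary screening task (assumed)
embed_dim = 384                     # ViT-S embedding width
probe = nn.Linear(embed_dim, num_classes)

optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def probe_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step of the linear probe on a (B, 3, H, W) batch."""
    with torch.no_grad():           # features come from the frozen ViT
        feats = backbone(images)    # (B, embed_dim) global embedding
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

If the features transfer well, this probe alone gets close to task-specific baselines, which is exactly the kind of comparison the report runs across model sizes and input resolutions.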
Balancing image quality with patient comfort has become one of the major challenges in medical imaging, with scan time traded against diagnostic detail. A just-released paper by Maja Schlereth, Moritz Schillinger, and Katharina Breininger tackles this problem. The paper presents a self-supervised neural network for fusing two sets of fast, low-resolution MRI scans from different viewing directions into a single high-resolution image. The ingenuity of their method lies in its two-stage design: first, a generic, coarse "offline" training, then an online "fine-tuning" adaptation performed extremely quickly on specific patient data. The online fine-tuning is very fast (up to 10 times faster than other state-of-the-art methods), which is essential for real clinical application. The model does not require high-resolution images for training, sidestepping a common bottleneck in medical AI: the scarcity of high-resolution training data. The attached figure is an excellent illustration of these results. It compares the output of their method ("Ours") against the ground-truth HR scan and other methods. Methods such as cubic spline and SMORE tend to produce images that are blurred or noisy, whereas another state-of-the-art method, BISR, introduces blocky artifacts. In contrast, the proposed technique preserves fine anatomical structures and strongly suppresses noise, reaching almost the quality of the original HR scan. I think this opens a critical research path. Since the paper demonstrates very good generalization over different datasets and MR sequences, the next step might be to study the fairness and robustness of the model. It would be important to investigate whether the model works well across diverse patient demographics, pathologies, and scanner hardware. Guaranteeing that such powerful AI instruments remain free of hidden biases is imperative for building unbiased diagnostic systems that everyone can use. Kudos to the authors at FAU Erlangen-Nürnberg and Julius-Maximilians-Universität Würzburg! This is a great leap towards efficient and high-quality medical imaging. Preprint: https://lnkd.in/e4Z9ghbg #MedicalImaging #AIinHealthcare #DeepLearning #SuperResolution #MRI #ComputerVision #MachineLearning #SelfSupervisedLearning
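To make the offline/online split concrete, here is a minimal PyTorch sketch of what patient-specific online fine-tuning without high-resolution targets can look like: the only supervision is re-degrading the predicted HR volume back to each acquired low-resolution view. Everything here (the fusion network interface, the trilinear degradation model, the step count) is an illustrative assumption, not the authors' implementation.

```python
import copy
import torch
import torch.nn.functional as F

def adapt_to_patient(pretrained_net, lr_axial, lr_coronal, steps=50, lr=1e-4):
    """Online fine-tuning sketch: adapt a generically pretrained fusion
    network to one patient's pair of low-resolution MRI stacks, each of
    shape (B, C, D, H, W). No HR ground truth is used."""
    net = copy.deepcopy(pretrained_net)   # keep the offline weights intact
    opt = torch.optim.Adam(net.parameters(), lr=lr)

    for _ in range(steps):
        hr_pred = net(lr_axial, lr_coronal)   # fused HR estimate

        # Self-supervised consistency: the HR estimate, downsampled along
        # each scan direction, should reproduce the acquired LR stacks.
        # ASSUMPTION: plain trilinear downsampling as the degradation model.
        down_ax = F.interpolate(hr_pred, size=lr_axial.shape[2:],
                                mode="trilinear", align_corners=False)
        down_co = F.interpolate(hr_pred, size=lr_coronal.shape[2:],
                                mode="trilinear", align_corners=False)
        loss = F.l1_loss(down_ax, lr_axial) + F.l1_loss(down_co, lr_coronal)

        opt.zero_grad()
        loss.backward()
        opt.step()

    return net  # patient-specific model after a few dozen gradient steps
```

The payoff of this pattern is that the expensive generic training happens once, offline; the per-patient loop is short enough to fit into a clinical workflow.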
✨ Excited to share some of our recent research contributions in the field of medical imaging and AI for intracranial aneurysm management! 📌 Journal Publications – "Computationally efficient dilated residual networks for segmentation of major cerebral vessels in MRA." Network Modeling Analysis in Health Informatics and Bioinformatics 14, no. 1 (2025): 95. – "Machine learning analysis of integrated ABP and PPG signals towards early detection of coronary artery disease." Scientific Reports 15, no. 1 (2025): 1-9. – "Computer-Aided Volumetric Quantification of Pre- and Post-Treatment Intracranial Aneurysms in MRA" IET Image Processing, (2025). These works reflect our ongoing efforts in developing AI-driven tools for diagnosis, quantification, and post-treatment monitoring of intracranial aneurysms. Special thanks to my co-authors, supervisors, and neurointerventional radiologists who contributed their invaluable expertise, clinical insights, and continuous support in making this research possible. Together, we are working towards bridging the gap between AI innovation and clinical practice to improve patient outcomes. #AI #MedicalImaging #Aneurysm #DeepLearning #IEEE #Research #Teamwork
📚 From glass slides to digital AI assistants The pathology world is rapidly changing. According to Nature, new AI-powered models are making it possible to analyze slides, order tests, and even generate reports—all while addressing the global pathologist shortage. With EnvisionPath™, Cytovision empowers universities and teaching hospitals to adopt digital slides for teaching, collaboration, and AI readiness. Students and trainees can now access a world-class pathology archive without relying on fragile glass slides. 💡 Digital pathology isn’t just the future—it’s the classroom today. Are you ready for that? https://lnkd.in/dHuQtZDe #NextGenEducation #DigitalPathology #EnvisionPath #Cytovision #WholeSlideImaging
🚀 AI is transforming pathology. A recent Nature article highlights how AI models are reshaping diagnostics. From detecting cancer subtypes to generating pathology reports, these tools promise faster, more accurate, and more collaborative workflows. At Cytovision, we believe this evolution validates the importance of whole slide imaging and digital platforms like EnvisionPath™. By enabling local innovation and bridging AI into diagnostic workflows, we are helping pathologists and researchers in Malaysia and beyond prepare for this next frontier. 👉 Are you ready for AI-powered pathology? #DigitalPathology #AI #HealthcareInnovation #Cytovision
🔬 Researchers at Johns Hopkins, Stanford, and Optosurgical trained SRT-H, a dual-transformer controller that enabled an off-the-shelf da Vinci robot to clip and cut pig gallbladders — without human guidance. 🤯 The system autonomously completed all 17 required surgical steps on 8 specimens with 97% accuracy and 95% error detection. 🧑‍⚕️ Many people think professions like surgery are “safe” from AI… 🤖 But this new research shows a surprising truth: no profession is 100% safe from AI disruption. Curious? Here’s the full article, worth reading 👇 https://hubs.la/Q03CzYQX0 🔗
Introducing KnowledgeBytes: bite-sized insights from DPA members, designed to spark curiosity and deepen your understanding of digital pathology. Our first Byte is live: Dr. Selim Sevim of Oregon Health & Science University explores how 3D whole block imaging (WBI) moves beyond traditional 2D WSI to capture full tissue volume. From pixels to voxels, from ROIs to VOIs, this shift opens the door to deeper diagnostics, improved AI modeling, and a new standard in histopathology. Read his Byte: https://lnkd.in/gdUz-pz2 Have an insight to share? Submit your own: https://lnkd.in/gnKQ8RmG Let’s keep learning from each other, one Byte at a time.
Prompt-Guided Patch UNet-VAE with Adversarial Supervision for Adrenal Gland Segmentation in Computed Tomography Medical Images https://lnkd.in/e-KzC38W Segmentation of small and irregularly shaped abdominal organs, such as the adrenal glands in CT imaging, remains a persistent challenge due to severe class imbalance, poor spatial context, and limited annotated data. In this work, we propose a unified framework that combines variational reconstruction, supervised segmentation, and adversarial patch-based feedback to address these limitations in a principled and scalable manner. Our architecture is built upon a VAE-UNet backbone that jointly reconstructs input patches and generates voxel-level segmentation masks, allowing the model to learn disentangled representations of anatomical structure and appearance. We introduce a patch-based training pipeline that selectively injects synthetic patches generated from the learned latent space, and systematically study the effects of varying synthetic-to-real patch ratios during training. To further enhance output fidelity, the framework incorporates perceptual reconstruction loss using VGG features, as well as a PatchGAN-style discriminator for adversarial supervision over spatial realism. Comprehensive experiments on the BTCV dataset demonstrate that our approach improves segmentation accuracy, particularly in boundary-sensitive regions, while maintaining strong reconstruction quality. Our findings highlight the effectiveness of hybrid generative-discriminative training regimes for small-organ segmentation and provide new insights into balancing realism, diversity, and anatomical consistency in data-scarce scenarios. --- Newsletter https://lnkd.in/emCkRuA More story https://lnkd.in/enY7VpM LinkedIn https://lnkd.in/ehrfPYQ6 #AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning #ComputerVision
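The abstract packs at least five training signals into one objective. Below is a hedged PyTorch sketch of how such a composite generator loss is typically assembled (VAE reconstruction + KL, voxel-wise segmentation, VGG-feature perceptual loss, PatchGAN adversarial term). The loss weights, the VGG layer cut, and all function names are illustrative assumptions, not the paper's actual choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG16 feature extractor for the perceptual term.
# ASSUMPTION: the paper says "VGG features" without naming the layer or
# variant; relu3_3 (features[:16]) is a common choice, used for illustration.
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad = False

def perceptual_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 distance between VGG feature maps. CT patches are single-channel,
    so they are repeated to the 3 channels VGG expects."""
    if pred.shape[1] == 1:
        pred, target = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
    return F.l1_loss(_vgg(pred), _vgg(target))

def generator_loss(recon, x, mu, logvar, seg_logits, seg_target,
                   disc_fake_logits,
                   w_kl=1e-3, w_seg=1.0, w_perc=0.1, w_adv=0.01):
    """Composite objective: reconstruction + KL + segmentation + perceptual
    + adversarial. The weights are illustrative, not the paper's values.
    `disc_fake_logits` are the PatchGAN discriminator's outputs on the
    reconstructed patches; the generator is rewarded for fooling it."""
    rec = F.l1_loss(recon, x)                                      # VAE reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE prior
    seg = F.cross_entropy(seg_logits, seg_target)                  # voxel-wise labels
    perc = perceptual_loss(recon, x)                               # VGG features
    adv = F.binary_cross_entropy_with_logits(                      # PatchGAN term
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return rec + w_kl * kl + w_seg * seg + w_perc * perc + w_adv * adv
```

Balancing these weights is the delicate part in practice: too much adversarial pressure trades anatomical consistency for texture realism, which is exactly the realism/consistency tension the abstract highlights.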