Introducing Giskard-vision, and new integration with NVIDIA
Hi there,
The Giskard team hopes you're having a good week!
Today, we're excited to share some great news for our library users. Our R&D team has been working to help you test and improve your computer vision models.
Our Giskard scan has expanded its capabilities with the release of giskard-vision, our latest module designed specifically for computer vision tasks. This allows you to automatically detect vulnerabilities in image classification, object detection, and landmark detection models.
👉 Read more about Giskard-vision
In addition, we're thrilled to announce our new integration with NVIDIA NeMo Guardrails, bringing robust testing to LLM-based applications.
🎯 Meet us at Big Data & AI Paris
Join us for a live demo of our LLM Evaluation Hub at the upcoming Big Data & AI Paris event. We'd love to show you how our platform can streamline your AI testing process.
🚀 Introducing giskard-vision
Our Giskard scan has expanded its capabilities to computer vision tasks with the release of giskard-vision. This latest module in our open-source library is designed to assess the reliability and safety of machine learning models in computer vision applications.
giskard-vision allows you to:
Identify performance degradation under specific conditions or subsets of data
Detect fairness issues and biases linked to sensitive attributes
Assess robustness against image perturbations like blur or noise
Evaluate model performance across various image attributes such as contrast, brightness, or color
Our scan now supports a wide range of vision tasks, including:
Image Classification
Object Detection
Landmark Detection
How to get started
Install giskard-vision:
2. Wrap your model and dataset:
3. Run the scan:
Giskard-vision documentation 📄
🤝 New Integration: NVIDIA NeMo Guardrails
We're excited to announce our integration with NVIDIA NeMo Guardrails. This collaboration allows developers to create more secure and robust LLM-based applications by combining Giskard's advanced testing capabilities with NeMo Guardrails' control mechanisms.
With this integration, users can now run a Giskard scan on their LLM application and export detected vulnerabilities as Colang rules with a simple Python command. These rules can then be easily incorporated into the NeMo Guardrails configuration, providing immediate protection against identified risks. This streamlined process significantly improves the development and deployment of LLM applications, allowing for better detection of vulnerabilities and automated rail generation.
Read more about the integration ✨
📰 What's the latest news?
Global AI Treaty: EU, UK, US, and Israel sign landmark AI regulation
The Council of Europe has signed the world's first AI treaty marking a significant step towards global AI governance. This Framework Convention on Artificial Intelligence aligns closely with the EU AI Act, adopting a risk-based approach to protect human rights and foster innovation. The treaty impacts businesses by establishing requirements for trustworthy AI, mandating transparency, and emphasizing risk management and compliance.
🔮 What's Next?
We're working hard on improving our LLM Evaluation Hub and our AI Compliance Platform to help you make LLM applications safer and navigate the evolving regulatory landscape. Stay tuned for updates!
Thank you for your continued support! 🫶
Reach out to us today to learn more about how we can help you to ensure your models are safe and reliable.
See you soon!
The Giskard Team 🐢
Directeur Data & IA
10moGreat job !!! Thrilled to test it
VC @Cathay | ex-AWS Startups
10moWoop woop! Super cool!
Co-founder @ Giskard AI | Secure your LLM Agents ⛑️
10moGitHub link: https://github.com/Giskard-AI/giskard-vision Bravo Benoît Malézieux for your excellent #opensource contribution on testing AI #vision models 👏