AI-Human Experiment: Building a Smarter Deposit Model for a Volatile World
Let's be honest: the 2022-2025 rate cycle broke a lot of old models. For those of us in banking, relying on static beta assumptions to predict how deposit rates would react to the Fed's aggressive tightening felt like navigating a hurricane with a paper map. It was clear we needed a fundamentally smarter approach.
This led me to an experiment: What if we paired expert human quants with advanced AI agents to build a next-generation deposit repricing model from the ground up? The goal wasn't just to improve statistical fit, but to build something robust, economically intuitive, and truly fit for purpose in modern asset-liability management (ALM).
The results were fascinating. The AI developed a volatility-adjusted dynamic beta model that not only achieved 97.5% explanatory power and cut forecast errors by 35% during the recent turbulent period but also gave us a powerful new playbook for human-AI collaboration in quantitative finance.
The AI-Powered Workflow: A New Kind of Quant Team
My experiment wasn't about letting an AI run wild. It was a structured collaboration.
The Breakthrough Model: A Look Under the Hood for Practitioners
For banking professionals looking to leverage this work, here is the final functional form of our recommended model. It combines a long-run equilibrium relationship with a short-term error correction mechanism, making it robust for both forecasting and risk management.
1. The Long-Run Cointegrating Relationship: This defines the target equilibrium deposit rate based on market conditions.
ILMDHYLD_t = α + β_t × FEDL01_t + γ₁ × FHLK3MSPRD_t + γ₂ × 1Y_3M_SPRD_t + ε_t
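As a concrete illustration, the long-run equation can be evaluated directly in code. All coefficient values below are hypothetical placeholders for the sketch, not the calibrated values from this project.

```python
# Hypothetical coefficients, for illustration only (not the calibrated values).
ALPHA = 0.10      # α: intercept
GAMMA_1 = 0.50    # γ₁: loading on FHLK3MSPRD_t
GAMMA_2 = 0.25    # γ₂: loading on 1Y_3M_SPRD_t

def equilibrium_rate(beta_t, fedl01, fhlk3m_sprd, sprd_1y_3m):
    """Target equilibrium deposit rate implied by current market conditions."""
    return ALPHA + beta_t * fedl01 + GAMMA_1 * fhlk3m_sprd + GAMMA_2 * sprd_1y_3m

# e.g. with beta = 0.6, fed funds at 5.00%, and 20bp / 10bp spread inputs:
target = equilibrium_rate(0.6, 5.00, 0.20, 0.10)  # 0.10 + 3.00 + 0.10 + 0.025 = 3.225
```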
2. The Dynamic Beta Function (with Volatility Adjustment): This is the core innovation, defining how the sensitivity (beta) of the deposit rate changes.
# First, calculate the beta based on the rate level
β_t_level = β_min + (β_max - β_min) / (1 + exp(-k × (FEDL01_t - m)))

# Then, adjust it for market volatility
β_t = β_t_level × (1 - λ × (σ_t / σ*))
Key Variables:
- β_min, β_max: the floor and ceiling on the repricing beta
- k: the steepness of the logistic transition; m: the rate level at its midpoint
- λ: the volatility dampening parameter
- σ_t: rolling volatility of the policy rate; σ*: its long-run reference level
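A minimal Python sketch of this two-step beta calculation. The parameter values are hypothetical placeholders chosen to make the mechanics visible, not the calibrated values from the article.

```python
import math

# Hypothetical placeholder parameters, for illustration only.
BETA_MIN, BETA_MAX = 0.20, 0.80   # floor and ceiling for the repricing beta
K, M = 1.5, 2.5                   # logistic steepness and midpoint (rate level, %)
LAM = 0.30                        # volatility dampening parameter λ
SIGMA_STAR = 0.50                 # reference (long-run) volatility σ*

def dynamic_beta(fed_funds: float, sigma_t: float) -> float:
    """Level-dependent beta, dampened when rate-path volatility is high."""
    beta_level = BETA_MIN + (BETA_MAX - BETA_MIN) / (
        1.0 + math.exp(-K * (fed_funds - M))
    )
    return beta_level * (1.0 - LAM * sigma_t / SIGMA_STAR)

# Beta rises with the rate level ...
low, high = dynamic_beta(1.0, SIGMA_STAR), dynamic_beta(5.0, SIGMA_STAR)
# ... and is dampened when volatility exceeds its reference level.
calm, turbulent = dynamic_beta(5.0, 0.25), dynamic_beta(5.0, 1.00)
```

The logistic form keeps beta bounded between β_min and β_max by construction, which is what makes the function safe to extrapolate into rate environments outside the estimation sample.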
3. The Short-Run Error Correction Component: This governs how quickly the deposit rate adjusts back to its long-run equilibrium after a shock.
ΔILMDHYLD_t = φ₀ + φ₁ × ΔFEDL01_t + φ₂ × ε̂_{t-1} + ν_t
Key Variables:
- φ₁: the immediate pass-through of a change in the fed funds rate
- φ₂: the speed of adjustment back to equilibrium (expected to be negative)
- ε̂_{t-1}: the lagged residual from the long-run cointegrating equation
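The error-correction step can be sketched in a few lines. The coefficient values below are hypothetical; in practice φ₀, φ₁, φ₂ would be estimated by regressing the change in the deposit rate on the change in fed funds and the lagged long-run residual.

```python
# Hypothetical placeholder coefficients, for illustration only.
PHI_0, PHI_1, PHI_2 = 0.0, 0.35, -0.15   # φ₂ < 0 pulls the rate back to equilibrium

def ecm_step(deposit_prev: float, d_fed_funds: float, eq_error_prev: float) -> float:
    """One-period update: ΔILMDHYLD_t = φ₀ + φ₁·ΔFEDL01_t + φ₂·ε̂_{t-1}."""
    delta = PHI_0 + PHI_1 * d_fed_funds + PHI_2 * eq_error_prev
    return deposit_prev + delta

# A positive lagged error (deposit rate above equilibrium) is partially
# corrected even when the fed funds rate is unchanged:
r = ecm_step(3.00, 0.0, 0.40)   # 3.00 + (-0.15)(0.40) = 2.94
```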
This complete structure ensures that the model is not only statistically robust but also grounded in economic theory, capturing both long-run relationships and short-term dynamics.
Where Human Expertise Was Irreplaceable
The AI agents were phenomenal at computational tasks, but the experiment's success hinged on my ability to guide and interpret their work.
The Surprise Finding: Why Volatility Can Dampen Competition
One of the most counterintuitive yet powerful findings was the role of the volatility dampening parameter (λ). I had assumed, as many would, that volatility should increase repricing betas as depositors shop for yield and liquidity becomes scarce.
However, the model showed that high volatility in the path of interest rates, as distinct from high rate levels, temporarily slows repricing. Why? A plausible reading is that when the rate path is uncertain, banks hold off on committing to higher deposit rates until the direction settles, so pass-through stalls.
The 24-month rolling volatility window captured this perfectly. It created an asymmetric effect: a lag in repricing on the way up during volatile periods and a slightly faster adjustment on the way down as volatility subsided. It’s a sophisticated dynamic that a simpler model would miss entirely.
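The rolling-volatility input can be sketched as follows. The 24-observation window follows the article; the rate path and everything else here is synthetic, purely to show how σ_t ramps up as a hiking cycle enters the window.

```python
import statistics

def rolling_vol(rates, window=24):
    """σ_t for each month with a full trailing window of monthly rate changes."""
    changes = [b - a for a, b in zip(rates, rates[1:])]
    return [
        statistics.pstdev(changes[i - window:i])
        for i in range(window, len(changes) + 1)
    ]

# Synthetic path: 24 flat months, then 12 months of +0.5%/month hikes.
path = [1.0] * 24 + [1.0 + 0.5 * i for i in range(1, 13)]
vols = rolling_vol(path)
# σ_t rises as hikes enter the window, so the dampening term (1 - λ·σ_t/σ*)
# shrinks betas on the way up and recovers only as σ_t subsides.
```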
AI's Next Job: Revolutionizing Model Governance & Maintenance
Developing a great model is only half the battle. The real work, and often the most tedious for quant teams, is the lifecycle management. This is where AI agents are poised to deliver their next big productivity win.
The New Playbook for Quantitative Modeling
My takeaway from this experiment is that the future isn't about replacing human experts with AI; it's about creating a powerful collaborative framework where each plays to its strengths.
The playbook for moving forward is to build on this synergy.
The institutions that master this collaborative approach will not only build better models but will also build them faster and maintain them more effectively. They will gain a sustainable advantage in risk management, regulatory relations, and strategic agility. The future of quantitative finance belongs to teams that perfect this powerful partnership.
What's your take on using AI for model development and governance?
Comments

IRRBB/ALM Consultant | BA/PM | Market Risk | Data Governance
Thanks for sharing, Chih. AI agents are a useful tool for quick checks and for understanding the impact of various factors by generating POCs.

Senior Quantitative Risk Analyst at Handelsbanken - Quantifying the risks of tomorrow
So the central idea of the beta function is that beta increases with the rate level? Then the rise in beta seen during the rate cycle should be read not as a "catch-up" effect, but as a sign that banks' ability to widen margins further is diminishing. What are typical values of k?

Risk Inn | IIT & Tulane Alum | Finance | AI | Quant | ex-Scientist | MS, PhD-ABD | Published Author
Thanks for sharing, Chih.

Team Lead, Software & AI Engineering, Technologist & Futurist
Inspiring and a great experiment!