Quick Tip: Load 10x Faster by Letting Snowflake Scale Out
Snowflake recommends file sizes of 100–250MB for efficient loading—and they’re absolutely right.
But what if you’re batch loading hundreds or even thousands of tables with just a few thousand rows each? They’re never going to hit that size.
Here’s what worked on a migration I helped with:
✔️ 320TB of data
✔️ 60,000+ tables
✔️ Wide variation in file sizes
We ran each COPY command in a separate session, using a multi-cluster warehouse with MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT set. Snowflake took care of the rest—spinning up extra clusters automatically and loading files in parallel.
With MAX_CLUSTER_COUNT = 10 and the warehouse's default MAX_CONCURRENCY_LEVEL of 8 statements per cluster, we were loading up to 80 tables in parallel (10 × 8). No orchestration scripts, no manual coordination. Just raw throughput.
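For reference, here's a minimal sketch of that setup. The warehouse, database, table, and stage names are illustrative (not from the actual migration), and the file format is just an example:

-- Multi-cluster warehouse that scales out on its own as COPY statements queue up
CREATE WAREHOUSE IF NOT EXISTS load_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 10
  SCALING_POLICY    = 'STANDARD'  -- start extra clusters instead of queuing
  AUTO_SUSPEND      = 60
  AUTO_RESUME       = TRUE;

-- Each COPY runs in its own session; Snowflake spreads them across the clusters
USE WAREHOUSE load_wh;
COPY INTO my_db.my_schema.orders
  FROM @my_db.my_schema.landing_stage/orders/
  FILE_FORMAT = (TYPE = PARQUET)
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;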
It’s ideal for exactly this kind of workload: lots of small tables and small files that will never reach the recommended size. No need to batch or consolidate files. Just let Snowflake scale out.
If you found this useful, I send out one like this every Tuesday.
Want them in your inbox? Click here: https://tinyurl.com/snowflake-011
Snowflake Monitoring / Governance | Snowpro Certification SME
Hey John, hope you're well. Try raising MAX_CONCURRENCY_LEVEL for the warehouse to 16 (at your own risk). You might just see 10x faster and 2x cheaper. I've been experimenting with this while loading tables (with hundreds of millions of rows) from SQL Server into Snowflake using the Omnata Snowflake Native App for SQL Server. (A different workload, but possibly similarly compute/memory intensive?) What we're seeing is the same throughput (~500 million rows in 20 minutes), but only half the clusters spin up at 16 vs. the default setting of 8. YMMV
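For anyone who wants to try the commenter's suggestion: MAX_CONCURRENCY_LEVEL is a real warehouse parameter, and it can be set like this (load_wh is the same illustrative warehouse name used above; the value 16 is the commenter's experiment, not a recommendation):

-- Allow more concurrent statements per cluster before new clusters spin up.
-- Fewer clusters can mean lower cost at the same throughput, but each cluster
-- takes on more memory pressure per statement. Test against your own workload.
ALTER WAREHOUSE load_wh SET MAX_CONCURRENCY_LEVEL = 16;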
Principal Data Analyst at Wellington City Council
And now that you can have more than 10 clusters in your warehouse, it would be even faster.