This post was inspired by a tweet Christoph Rumpel put out earlier today, which got me thinking about the work I did last year to make sure one of our systems at work could handle the massive files we get from Stripe:
What are your best practices for importing huge CSV files?
— Christoph Rumpel 🤠 (@christophrumpel) January 26, 2025
One of our systems at work is a very big consumer of the Stripe API; in fact, there's something happening on it pretty much every single minute of the day. Part of that system uses Stripe's API to download files, loop through them and create or update models in the database.
This took some time to get right, because just as I thought I'd got it working, a file far bigger than anything I'd seen before would come through from Stripe, and I'd have to re-engineer parts of the code to prevent problems such as timeouts. These days it can handle absolutely massive files, some of which are close to 4,000,000 rows of data, and I suspect these files will keep getting bigger and bigger.
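The full write-up is on the external site, but as a flavour of the general pattern for this kind of job, here is a minimal Python sketch: stream the file instead of loading it all at once, and write to the database in batches, so memory use stays flat no matter how many rows the file has. This is not the code from the post; the files.stripe.com download endpoint is an assumption about how the file arrives, and `FILE_ID` and `upsert_rows` are hypothetical placeholders.

```python
import csv
import io

import requests

STRIPE_API_KEY = "sk_live_..."  # assumption: your Stripe secret key
FILE_ID = "file_..."            # hypothetical: the id of the Stripe File to import

# Assumption: the file is downloadable from Stripe's file-contents endpoint.
URL = f"https://files.stripe.com/v1/files/{FILE_ID}/contents"

BATCH_SIZE = 1_000


def upsert_rows(rows: list[dict]) -> None:
    """Hypothetical placeholder: replace with your batched create/update logic."""
    ...


def import_file() -> None:
    batch: list[dict] = []
    with requests.get(
        URL,
        headers={"Authorization": f"Bearer {STRIPE_API_KEY}"},
        stream=True,  # don't buffer the whole body in memory
        timeout=60,
    ) as resp:
        resp.raise_for_status()
        resp.raw.decode_content = True  # transparently handle gzip encoding
        # Wrap the byte stream so the csv module reads it row by row.
        text = io.TextIOWrapper(resp.raw, encoding="utf-8", newline="")
        for row in csv.DictReader(text):
            batch.append(row)
            if len(batch) >= BATCH_SIZE:
                upsert_rows(batch)  # flush a batch, then start a fresh one
                batch = []
    if batch:
        upsert_rows(batch)  # flush any leftover rows


if __name__ == "__main__":
    import_file()
```

Batching the writes is the other half of the trick: one database round trip per thousand rows instead of per row keeps a 4,000,000-row import from turning into 4,000,000 queries.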
So how does it work?
continue reading on jonathanpurvis.co.uk
If this post was enjoyable or useful for you, please share it! If you have comments, questions, or feedback, you can send them to my personal email. To get new posts, subscribe via the RSS feed.