Lost $300 due to an API key leak from "vibe coding" – Learn from my mistake

3 points by liulanggoukk a day ago

I just learned an expensive lesson and wanted to share it here so others don’t make the same mistake.

I recently lost $300 because of an API key leak. It started with a surprise $200 charge from Google Cloud, and when I looked into it, I found another $100 charge from the day before. Both were for Gemini API usage that I never intentionally set up.

After digging, I discovered the issue: I had hard-coded an API key in a script that was part of a feature I ended up deprecating. The file was only in the codebase for two days, but that was enough for the key to leak. Google actually sent me alerts about unusual activity, but I missed them because they went to a less-frequently-checked email account.

Here’s what I learned:

Never hardcode API keys - Use environment variables or a .env file, even for temporary code (see the sketch after this list).

Set up billing alerts - Google Cloud (and other providers) let you set up alerts for unexpected charges.

Check all linked emails - Don’t ignore notifications, even if they’re sent to secondary accounts.

Don’t rely solely on GitHub’s secret scanning - It’s useful, but renaming variables can bypass it.
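
For illustration, a minimal sketch of the env-var approach in Python (GEMINI_API_KEY is just an example name - export it in your shell or load it from a .gitignore'd .env file):

    import os

    # The key lives only in the process environment, never in the source tree.
    api_key = os.environ.get("GEMINI_API_KEY")
    if api_key is None:
        raise RuntimeError("GEMINI_API_KEY is not set; refusing to run")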

This happened while I was experimenting with "vibe coding" (letting AI generate code quickly), but I realized too late that human oversight is still crucial, especially for security.

Hope this helps someone avoid the same costly mistake!

TL;DR: Hard-coded an API key in a deprecated script, key leaked, and I got charged $300. Always use environment variables and set up billing alerts!

tkiolp4 an hour ago

Talking about hardcoded API keys, what's the usual approach when dealing with a mobile app that talks to an API? Users don't need auth to use the app (they just log in via an alphanumeric code they get via marketing). I only know how to do this properly via auth flows (user inputs username + password, then the app calls the API for a user JWT, then the app uses the JWT in subsequent calls). I don't think that flow makes sense when the user "logs in" via a simple alphanumeric code (which is of length 5 and anyone could guess)
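
For reference, here's the flow I mean, sketched in Python just for illustration (the endpoint URLs and field names are made up):

    import requests

    # 1. Exchange credentials for a short-lived user JWT.
    resp = requests.post("https://api.example.com/auth/token",
                         json={"username": "alice", "password": "s3cret"})
    jwt = resp.json()["access_token"]

    # 2. Send the JWT on each subsequent call; the server validates it per request.
    profile = requests.get("https://api.example.com/me",
                           headers={"Authorization": f"Bearer {jwt}"})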

fiftyacorn 10 hours ago

I always wish you could attach a kill switch to billing alerts on any cloud service - if spend goes above my prescribed limit, just take the thing offline

  • Someone1234 10 hours ago

    Most support this internally (e.g. AWS's free tier, Microsoft's Monthly Student Credit), but they won't let customers set a hard limit manually. It isn't an oversight that they don't offer this; it's an intentional choice.

    I think this bad choice backfires, though. I spend less time learning cloud services because the risks without a hard limit are too high.

    • scarface_74 3 hours ago

      There was no such thing as a “free tier” on AWS before July of this year. There were some services that allowed free usage, either for the first x months or free up to a certain amount every month indefinitely.

      Now there is an actual free tier that won’t let you go over $250 on AWS.

objcts 13 hours ago

> human oversight is still crucial, especially for security

always always always: code review everything AI makes (CREAM)

it also helps if you understand what it’s writing. the only way to do that is to… review the code

scarface_74 3 hours ago

You learned the wrong lesson.

You should never specify API keys anywhere in your code or env files for GCP or AWS.

https://cloud.google.com/docs/authentication/application-def...

You still risk checking in your env file.

Doing it the correct way, your config lives in your home directory locally, far away from your repo, and the SDK finds the configuration automatically when running on GCP.

Even better, when developing locally, assign temporary access keys to environment variables.

I'm being handwavy because I'm not a GCP guy. But on AWS, you do something similar by running "aws configure" locally and relying on the IAM role attached to the VM, Lambda, etc., so you never need to deploy with access keys.

This isn't meant to be an "AWS does it better" comment. From my brief research, it looks like something similar is also best practice with GCP.
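
For illustration, a rough sketch of what that looks like in Python with GCP's Application Default Credentials (hedged, per the above - google.auth.default() walks the standard chain: the GOOGLE_APPLICATION_CREDENTIALS env var, then the gcloud credentials in your home directory, then the attached service account when running on GCP; assumes google-auth and google-cloud-storage are installed):

    import google.auth
    from google.cloud import storage

    # No key appears anywhere in the code or the repo.
    credentials, project_id = google.auth.default()

    # Client libraries walk the same chain implicitly, so this is equivalent:
    client = storage.Client()
    for bucket in client.list_buckets():
        print(bucket.name)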

giveita 12 hours ago

I hate API keys. We need to get rid of them. Everyone who can influence this ... please do.

The alternative? JWT or suchlike. Authenticate each session with zero trust.

At big-corp work everything is Okta / JWT / YubiKey, etc. Very very occasionally an API key.

  • scarface_74 3 hours ago

    So exactly how would you suggest using a YubiKey in a script that runs automatically and is meant to run unsupervised?

    Wouldn't it be logical that Google knows about zero trust? The problem wasn't the API key; the problem was that the poster didn't use best practices - see my other comment.

    Even if there weren't built-in facilities like the three or four ways to authenticate with GCP or AWS programmatically, and you did have to use long-lived API keys, you could still piggyback off the cloud provider's access I mentioned and have your script read them from a secure cloud-hosted vault using your temporary keys.

    In the case of AWS, read your third-party API key from Secrets Manager, and authenticate to Secrets Manager with the keys in your home directory - or better yet, with the short-lived keys in your environment variables, not a local environment file that you will probably forget to add to .gitignore.
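
    A minimal sketch of that pattern with boto3 (the secret name is hypothetical; boto3 resolves credentials from environment variables, ~/.aws, or the attached IAM role - never from your code):

        import boto3

        # Credentials come from the default chain, not from the source tree.
        secrets = boto3.client("secretsmanager")

        response = secrets.get_secret_value(SecretId="third-party/api-key")
        api_key = response["SecretString"]  # keep it in memory; never write it to disk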

    • giveita an hour ago

      Ideally an unattended script (e.g. CI/CD) is authenticated via a session initially. Yes, under the hood a secret is stored, and you could argue it's morally an API key - but the UX wouldn't be: developer logs in, copies a key to their clipboard, then pastes it hopefully into the CI secrets section but also likely into the code.

      • scarface_74 an hour ago

        I know more about AWS, but from what I read GCP is similar. Best practice is that you authenticate to a web page via SSO and get temporary access keys that you assign to environment variables. The SDK automatically knows how to read them from the environment locally.

        When you run your code on the cloud platform, you attach privileges to the runtime environment (VM, Lambda, Docker runtime, etc.) that are properly scoped for least privilege. The SDK also knows how to pick up those permissions automatically. You never need to worry about your code getting the proper access keys.
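
        A quick way to see that in action with boto3 (STS's get_caller_identity is a real call; the point is that the exact same code resolves the SSO env vars locally and the attached role on AWS):

            import boto3

            # boto3 walks its default chain: env vars, then ~/.aws, then the runtime role.
            sts = boto3.client("sts")
            print(sts.get_caller_identity()["Arn"])  # shows which identity was resolved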

        I've done most of my CI/CD using AWS-native services, where you also attach the role to the runtime. For instance, CodeBuild is really just a Linux or Windows Docker runtime that you can run anything in, and you attach permissions to your CodeBuild project. Of course, your AWS access is ideally controlled via SSO or 2FA.

        I have done some work with Azure DevOps - which doesn't have anything to do with Azure. You can also use it to deploy to AWS: you store your access keys in an Azure-controlled vault, and your pipeline grants AWS permissions to your scripts. I think the same thing works with GitHub Actions.