Secrets in Code: Why Hardcoded Credentials Are Your Biggest Security Risk

March 2026 | 7 min read

In 2023, researchers scanning public GitHub repositories found over 10 million exposed secrets — API keys, database passwords, private keys, and OAuth tokens committed directly into version control. The number was not a spike; it has grown consistently year over year. Despite widespread awareness that hardcoded credentials are dangerous, they remain one of the most commonly found critical vulnerabilities in production codebases.

Understanding why this keeps happening, and how to actually stop it, requires going beyond the advice to "use environment variables."

How Secrets End Up in Code

The most common path is convenience under deadline pressure. A developer needs to test an integration quickly, pastes a real API key directly into the code, and plans to replace it before committing. The replacement does not happen. The key commits.
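The anti-pattern and its fix are both one-liners, which is exactly why the shortcut is so tempting. A minimal sketch (the key value below is fake, and the fail-fast loader is one common convention, not a prescribed API):

```python
import os

# Anti-pattern: a real credential pasted in "temporarily" under deadline
# pressure. Once this line is committed, it lives in git history forever.
API_KEY = "sk_live_51Hxxxxxxxxxxxxxxxxxxxxxx"  # fake placeholder value

# Fix: read the credential from the environment and fail fast when it is
# missing, so a misconfigured deploy is caught at startup instead of
# silently running with no key.
def get_api_key() -> str:
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY is not set; refusing to start")
    return key
```

Failing fast matters: a loader that quietly falls back to a default is the same bug wearing a different hat.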

A second common path is configuration migration. A team moves from one secrets management approach to another and leaves deprecated config files with real credentials in place. The files are not actively used, so nobody notices them until a scanner finds them six months later.

The third path is copy-paste from documentation or examples. Developers copy working example code that includes placeholder credentials and replace the placeholder with a real value without moving it to a proper secrets store.

In each case, the credential enters version control. And version control is permanent.

Why Git History Makes This Worse

Deleting a file or removing a line from source code does not remove it from git history. A credential committed in any past commit remains retrievable indefinitely by anyone with read access to the repository. On a public repository, that means anyone on the internet.

Git history scrubbing tools exist, but they rewrite history in ways that require force pushes, disrupt all open branches, and cannot retroactively protect the window between the initial commit and the remediation. If a secret was ever committed to a public repository, the only safe assumption is that it was harvested during that window. The credential must be revoked and rotated, not just removed from the codebase.

For private repositories, the exposure window extends to anyone who has ever had read access: former employees, contractors, third-party integrations, and CI/CD systems that clone the repository.

How Attackers Find Exposed Secrets

Automated secret harvesting is a solved problem for attackers. Scanners continuously watch GitHub, GitLab, and Bitbucket for high-entropy strings, known secret patterns (AWS access key prefixes, Stripe key formats, Firebase service account JSON structures), and credential-adjacent keywords, feeding every match into attacker-controlled databases in real time.
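The "known secret patterns" part is straightforward regex matching, because many providers use distinctive, publicly documented key formats. A sketch with three illustrative patterns (real scanners maintain lists of hundreds):

```python
import re

# A few widely published credential formats. These regexes are
# illustrative, not a complete or authoritative detector.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_live_secret": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
    "github_pat": re.compile(r"\bghp_[0-9a-zA-Z]{36}\b"),
}

def match_known_patterns(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits
```

Distinctive prefixes like AKIA or ghp_ are a gift to defenders and attackers alike: they make false positives rare, which is why pattern-based alerts fire so fast.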

The gap between a credential appearing in a public repository and it appearing in an attacker's database is measured in seconds to minutes. Some attackers operate scanning infrastructure that indexes new commits faster than the GitHub search API can return them.

For private repositories, insider threat and credential phishing are the primary vectors, but automated scanning of developer machines, CI logs, and package registries also captures secrets that leak through indirect paths.

What Attackers Do With Exposed Credentials

The value of an exposed credential depends on what it accesses. Exposed database credentials enable direct data exfiltration. Exposed cloud provider credentials enable resource abuse — cryptocurrency mining, data extraction, and lateral movement to other services. Exposed payment processor keys enable fraudulent transactions.

Exposed API keys for third-party services (email providers, SMS gateways, analytics platforms) enable impersonation attacks and abuse that generates costs charged to the legitimate account holder. Abuse of a single exposed cloud credential for cryptocurrency mining alone can run into thousands of dollars per day.

Timing matters: many exposed credentials are exploited within hours of exposure. Rotation after the fact does not undo the damage from the exposure window.

The Environment Variable Misconception

"Use environment variables" is correct advice but incomplete. Environment variables move the secret out of the source file, but they introduce their own exposure vectors that teams frequently underestimate.

.env files committed to version control are one of the most commonly found secret exposure patterns in code audits. Developers add .env to .gitignore but forget that .env.example contains real values, or that a CI script echoes environment variables to build logs, or that a framework's debug mode prints environment context to error pages.

Runtime environment variable exposure through application logs, crash reporters, and debugging tools is a persistent risk that environment-variable-only approaches do not address.

A Mature Secrets Management Program

Pre-commit Secret Scanning

Secret scanning at the pre-commit hook stage stops credentials before they enter version control. Tools like git-secrets, detect-secrets, and truffleHog can run locally as a git hook and block commits containing high-entropy strings or known secret patterns. This is the highest-value intervention because it prevents the exposure entirely rather than detecting it after the fact.
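The dedicated tools above are the right choice in practice, but the mechanism is simple enough to sketch: a pre-commit hook scans the staged diff and exits nonzero to abort the commit. A minimal single-pattern version (saved as .git/hooks/pre-commit and made executable; real tools cover far more patterns plus entropy checks):

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook: block commits whose staged diff contains
anything that looks like an AWS access key ID."""
import re
import subprocess
import sys

AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def scan_diff(diff_text: str) -> list[str]:
    """Return added lines in a unified diff that match the secret pattern."""
    return [
        line for line in diff_text.splitlines()
        if line.startswith("+") and AWS_KEY.search(line)
    ]

if __name__ == "__main__":
    # Only staged changes matter: that is exactly what would be committed.
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
    findings = scan_diff(diff)
    if findings:
        print("Refusing to commit; possible secrets in staged diff:")
        for line in findings:
            print("  " + line)
        sys.exit(1)  # any nonzero exit status aborts the commit
```

Scanning only added lines keeps the hook fast and avoids re-flagging history the developer did not touch.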

CI/CD Pipeline Scanning

Pre-commit hooks can be bypassed or disabled on individual machines. CI-level secret scanning provides a second layer that catches what pre-commit misses. GitHub Advanced Security, GitLab Secret Detection, and similar tools scan every push and pull request for known secret patterns and alert on matches before code merges.

Dedicated Secrets Management Infrastructure

For secrets that need to be available at runtime, dedicated secret stores (HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) provide fine-grained access control, audit logging, automatic rotation, and revocation capabilities that environment variables cannot match. Applications retrieve secrets at runtime rather than having them injected at deploy time, which narrows the exposure window from the full lifetime of the deployment to the period each retrieved value is actually held in memory.
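At the application level, runtime retrieval usually means a thin wrapper around the store's SDK plus a short-lived cache, so rotated values are picked up without redeploying. A sketch with the fetch function injected rather than tied to any one store (with AWS Secrets Manager, for example, the fetcher would wrap the SDK's get_secret_value call; other stores differ):

```python
import time
from typing import Callable

class SecretCache:
    """Fetch secrets on demand and re-fetch after a TTL, so a rotation
    in the secret store propagates to the application within one TTL."""

    def __init__(self, fetch: Callable[[str], str], ttl_seconds: float = 300.0):
        self._fetch = fetch          # e.g. a wrapper around the store's SDK
        self._ttl = ttl_seconds
        self._values: dict[str, tuple[str, float]] = {}

    def get(self, name: str) -> str:
        cached = self._values.get(name)
        if cached is not None and time.monotonic() - cached[1] < self._ttl:
            return cached[0]         # still fresh: serve from memory
        value = self._fetch(name)    # stale or missing: hit the store
        self._values[name] = (value, time.monotonic())
        return value
```

The TTL is the trade-off dial: shorter means faster rotation pickup but more calls to the store per instance.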

Regular Secret Rotation

Even without a known exposure event, credentials should be rotated on a scheduled basis. Regular rotation limits the damage from undetected exposures and reduces the value of credentials harvested in historical repository scans. Automated rotation, where the secrets management infrastructure rotates credentials and distributes new values without human intervention, is the mature pattern.

Detecting Existing Exposures in Your Codebase

If your team has not run a comprehensive historical secret scan, there is a meaningful probability that credentials exist in your git history or in configuration files that were once committed and later removed from the working tree. A full history scan examining every commit, not just the current HEAD, is necessary to establish a clean baseline.

The scan should cover high-entropy string detection, pattern matching for known service credential formats, keyword proximity analysis (variables named password, secret, key, or token with adjacent high-entropy values), and configuration file formats commonly used to store credentials.
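The high-entropy check at the core of these scans is Shannon entropy over a candidate string's character frequencies, combined with the keyword proximity test. A sketch (the 4.0-bits-per-character threshold and the regexes are illustrative; real tools tune thresholds per character set):

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, estimated from character frequencies."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

# Keyword proximity: a credential-ish variable name on the same line.
KEYWORDS = re.compile(r"(password|secret|key|token)", re.IGNORECASE)
# Candidate values: quoted literals of 16+ base64ish characters.
CANDIDATE = re.compile(r"['\"]([A-Za-z0-9+/=_\-]{16,})['\"]")

def flag_line(line: str, threshold: float = 4.0) -> bool:
    """Flag lines pairing a credential keyword with a high-entropy literal."""
    if not KEYWORDS.search(line):
        return False
    return any(shannon_entropy(m.group(1)) >= threshold
               for m in CANDIDATE.finditer(line))
```

Requiring both signals is what keeps the false-positive rate workable: random-looking strings are everywhere in code, but random-looking strings assigned to a variable named key are usually exactly what they appear to be.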

Find Exposed Secrets in Your Codebase

MergeProof secret scanning audits examine your full git history and current codebase for exposed credentials, hardcoded API keys, and misconfigured secret handling. Snapshot reports delivered in 48 hours starting at $500.

View Pricing