Starting a Vulnerability Disclosure Program
A vulnerability disclosure program gives security researchers a safe way to report bugs. Here's how to set one up that protects your users and your reputation.
Security researchers will find vulnerabilities in your software. This is not a question of if but when. The question you control is what happens next. Without a disclosure program, a researcher who finds a critical vulnerability has three options: report it through your generic support channel and hope someone takes it seriously, post it publicly, or sell it. None of these outcomes are good for you.
A vulnerability disclosure program provides a clear, documented channel for researchers to report security issues, sets expectations for both parties, and ensures that vulnerabilities are fixed before they are exploited.
Why You Need a Disclosure Program
The argument against disclosure programs is usually some version of "we don't want to invite hackers." This misunderstands the situation. Security researchers are already looking at your application. Penetration testing tools are freely available. Automated scanners probe every internet-facing service continuously. Your choice is not between being tested and not being tested. Your choice is between having a process for handling what researchers find and not having one.
Companies without disclosure programs face a specific risk: a researcher finds a critical vulnerability, emails your support address, and the email is routed to a support agent who does not understand its significance. The researcher waits thirty days with no meaningful response, concludes you do not care, and publishes the vulnerability. Your users are now exposed, and your brand takes a hit that is entirely preventable.
A formal program prevents this by providing a security-specific reporting channel, committing to acknowledgment and response timelines, and establishing safe harbor protections that encourage responsible disclosure. To the researchers already doing this testing, a published program signals that you take security seriously and value their contribution.
Setting Up the Program
Start with a security policy page, linked from your website footer and referenced from a machine-readable security.txt file at /.well-known/security.txt (RFC 9116). The policy should cover four things.
Scope. Define which assets are in scope for testing. Your production web application, your API, your mobile apps — list them explicitly. Exclude assets you do not control, like third-party widgets or infrastructure managed by partners. If you have staging environments, state whether testing against them is permitted.
Rules of engagement. Describe what researchers are allowed to do and what is prohibited. Testing for SQL injection is permitted. Exfiltrating customer data is not. Denial-of-service testing is usually prohibited. Social engineering attacks against your employees are usually out of scope. Be specific so researchers understand the boundaries.
Reporting channel. Provide a dedicated email address — security@yourdomain.com — or a reporting form. If you use encrypted communication, publish your PGP key. Some companies use platforms like HackerOne or Bugcrowd to manage reports, which provides structured submission, triage workflow, and researcher reputation tracking.
Safe harbor. Commit in writing that you will not pursue legal action against researchers who follow your rules of engagement. This is the single most important element of your program. Without safe harbor, many skilled researchers will not report vulnerabilities because the legal risk is not worth it.
# security.txt example (RFC 9116 requires Contact to be a URI,
# so email addresses take the mailto: scheme; Contact and Expires
# are the two mandatory fields)
Contact: mailto:security@example.com
Encryption: https://example.com/.well-known/pgp-key.txt
Preferred-Languages: en
Policy: https://example.com/security-policy
Hiring: https://example.com/careers
Expires: 2027-01-01T00:00:00.000Z
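It is worth checking that your published file actually parses before researchers depend on it. The sketch below is a minimal, hypothetical validator — it handles the field-per-line format shown above but deliberately ignores RFC 9116 details like PGP signatures and canonical URIs:

```python
# Minimal security.txt checker sketch. Field names follow RFC 9116,
# but this is an illustrative helper, not a full implementation.
from datetime import datetime, timezone


def parse_security_txt(text: str) -> dict[str, list[str]]:
    """Parse security.txt field lines into a name -> values mapping."""
    fields: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        name, _, value = line.partition(":")
        fields.setdefault(name.strip().lower(), []).append(value.strip())
    return fields


def validate(fields: dict[str, list[str]]) -> list[str]:
    """Return a list of problems; an empty list means the file looks usable."""
    problems = []
    if "contact" not in fields:
        problems.append("missing required Contact field")
    if "expires" not in fields:
        problems.append("missing required Expires field")
    else:
        # Accept the trailing 'Z' form used in the example above.
        raw = fields["expires"][0].replace("Z", "+00:00")
        if datetime.fromisoformat(raw) < datetime.now(timezone.utc):
            problems.append("file has expired")
    return problems
```

Running `validate(parse_security_txt(...))` against the example above should return an empty list until the stated Expires date passes.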
Triage and Response
When a report arrives, acknowledge it within 24 hours. This does not mean you have assessed it in 24 hours — it means you have confirmed receipt and provided a timeline for initial assessment. Researchers who submit reports and hear nothing become frustrated quickly. A simple "we received your report and will have an initial assessment within five business days" sets expectations and demonstrates professionalism.
Assess the report against your severity framework. Not every report is a critical vulnerability. Some are informational findings, some are low-severity issues, and some are duplicates or non-issues. Have a consistent framework for evaluating severity — CVSS is the industry standard — and communicate the assessed severity to the researcher.
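If you adopt CVSS, the mapping from base score to qualitative rating is fixed by the v3.1 specification, which makes it easy to encode consistently in your triage tooling. A small sketch:

```python
# CVSS v3.1 qualitative severity ratings, per the FIRST.org spec:
# 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
def cvss_rating(base_score: float) -> str:
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"
```

Pinning the thresholds in code, rather than judging severity report by report, is what keeps your assessments defensible when a researcher disputes a rating.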
Fix critical and high-severity vulnerabilities before public disclosure. Coordinate a disclosure timeline with the researcher — 90 days is the industry standard established by Google's Project Zero. If you need more time for a complex fix, communicate proactively. Most researchers are reasonable about timeline extensions when the vendor is demonstrably working on a fix.
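The timelines above (24-hour acknowledgment, five-business-day initial assessment, 90-day coordinated disclosure) are easy to track mechanically. A sketch of the deadline bookkeeping, under those assumed SLAs:

```python
# Deadline bookkeeping for a new report, assuming the timelines in the
# text: 24h acknowledgment, 5 business days to initial assessment,
# and a 90-day coordinated disclosure window. Adjust to your own SLAs.
from datetime import datetime, timedelta


def add_business_days(start: datetime, days: int) -> datetime:
    """Advance `days` business days, skipping Saturdays and Sundays."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 .. Fri=4
            days -= 1
    return current


def report_deadlines(received: datetime) -> dict[str, datetime]:
    return {
        "acknowledge_by": received + timedelta(hours=24),
        "assess_by": add_business_days(received, 5),
        "disclose_by": received + timedelta(days=90),
    }
```

Surfacing these dates in your ticketing system is what turns "we'll respond promptly" from an aspiration into a measurable commitment.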
Bounties: To Pay or Not to Pay
Bug bounty programs — where you pay researchers for valid vulnerability reports — are a natural extension of a disclosure program. They attract more researcher attention and incentivize finding and reporting issues rather than exploiting them.
You do not need to start with a bounty program. A disclosure program with no financial rewards still provides significant value. Many researchers report vulnerabilities out of professional ethics or to build their reputation. Start with a basic disclosure program, build the triage and response muscle, and add bounties when your process is mature enough to handle increased volume.
If you do implement bounties, set reward amounts based on severity and impact. A critical remote code execution vulnerability is worth significantly more than an informational disclosure of server version headers. Be transparent about reward amounts so researchers can assess whether testing your application is worth their time.
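One common way to achieve that transparency is a published reward table keyed by assessed severity. The amounts below are placeholders for illustration, not recommendations — calibrate them to the value of the assets in scope:

```python
# Illustrative bounty tiers keyed by assessed severity. The dollar
# amounts are hypothetical placeholders, not recommended figures.
BOUNTY_TABLE = {
    "critical": 5000,
    "high": 1500,
    "medium": 500,
    "low": 100,
    "informational": 0,  # acknowledged, but no payout
}


def bounty_for(severity: str) -> int:
    """Look up the reward for an assessed severity, case-insensitively."""
    try:
        return BOUNTY_TABLE[severity.lower()]
    except KeyError:
        raise ValueError(f"unknown severity: {severity}") from None
```

A fixed table also protects you: when rewards follow mechanically from the severity assessment, there is no per-report negotiation to go wrong.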
Ensure your incident management process is mature before launching a public bounty program. The increased volume of reports — including duplicate and low-quality submissions — requires a functioning triage process, clear escalation paths, and engineers with allocated time for security fixes.
A vulnerability disclosure program is one of the highest-value, lowest-cost security investments you can make. It costs nothing to publish a security policy and provide a reporting channel. It costs very little to acknowledge reports promptly and fix what is found. And it prevents the scenario every security team dreads: learning about a critical vulnerability from a public blog post rather than a private report.