Lakera AI AG SHALL deal in good faith with Reporters who discover, test, and report vulnerabilities or indicators of vulnerabilities in accordance with these guidelines. If you make a good faith effort to comply with this policy during your security research, we will consider your research to be authorized. We will work with you to understand and resolve the issue quickly, and Lakera AI AG will not recommend or pursue legal action related to your research. Should legal action be initiated by a third party against you for activities that were conducted in accordance with this policy, we will make this authorization known.
General
- Lakera AI AG MAY modify the terms of this policy or terminate the policy at any time.
- Lakera AI AG SHALL use information reported to this program for defensive purposes only: to mitigate or remediate vulnerabilities in our networks or applications, or those of our vendors.
Responsible Research
- Reporters SHALL notify Lakera AI AG as soon as possible after they discover a real or potential security issue.
- Reporters SHALL make every effort to avoid privacy violations, degradation of user experience, disruption to production systems, and destruction or manipulation of data.
- Reporters SHALL only use exploits to the extent necessary to confirm a vulnerability’s presence.
- Reporters SHALL NOT use an exploit to compromise or exfiltrate data, establish persistent command line access, or use the exploit to pivot to other systems.
- Reporters SHALL NOT submit a high volume of low-quality reports.
- Once a Reporter has established that a vulnerability exists or encounters any sensitive data (including personally identifiable information, financial information, or proprietary information or trade secrets of any party), the Reporter MUST stop testing, notify Lakera AI AG immediately, and SHALL NOT disclose this data to anyone else.
In scope
Lakera AI AG is primarily interested in hearing about the following vulnerability categories:
- Sensitive data exposure—cross-site scripting (XSS), SQL injection (SQLi), etc.
- Authentication- or session management-related issues
- Remote code execution
- Particularly clever vulnerabilities or unique issues that don’t fall into explicit categories—show us your fancy footwork!
Out of scope
Reporters MUST avoid the following vulnerability categories, which are outside the scope of our responsible disclosure program:
- Denial of service (DoS)—through network traffic, resource exhaustion, or other methods
- User enumeration
- Issues present only in outdated browsers/plugins or end-of-life software
- Phishing or social engineering of Lakera AI AG employees, users, or clients
- Disclosure of known public files and other information disclosures that aren’t a material risk (e.g., robots.txt)
- Any attack or vulnerability that hinges on a user’s computer first being compromised
- Any potential configuration oversight associated with https://www.lakera.ai (including DNS and certificates) that does not lead directly to remote code execution (RCE)
Case handling
- Lakera AI AG MAY, at our discretion, decline to coordinate or publish a vulnerability report. This decision is generally based on the scope and severity of the vulnerability and our ability to add value to the coordination and disclosure process.
- In the event that Lakera AI AG declines to coordinate a vulnerability report, the Reporter SHOULD proceed to coordinate with any other affected vendor(s). Additionally, the Reporter MAY proceed with public disclosure at their discretion.
- Lakera AI AG SHALL investigate every reported vulnerability and strive to ensure that appropriate steps are taken to mitigate risk and remediate reported vulnerabilities.
- Lakera AI AG SHALL, to the best of our ability, validate the existence of the vulnerability.
- Lakera AI AG SHALL determine an appropriate timeframe for mitigation development and deployment for vulnerabilities reported in systems it controls.
Coordination with reporters
- Lakera AI AG SHALL acknowledge receipt of vulnerability reports via email within 5 business days.
- Lakera AI AG MAY contact the Reporter for further information.
- Lakera AI AG SHALL inform the Reporter of the results of our validation, as appropriate, and MAY accordingly provide status updates as remediation of the vulnerability is underway.
- Lakera AI AG SHALL include credit to the reporter in any published vulnerability report unless otherwise requested by the reporter.
- In the event that Lakera AI AG chooses to publicly disclose the reported vulnerability, Lakera AI AG SHALL recognize your contribution to improving our security if you are the first to report a unique vulnerability, and your report triggers a code or configuration change.
- Lakera AI AG MAY forward the name and contact information of the Reporter to any affected vendors unless otherwise requested by the reporter.
- Lakera AI AG SHALL advise the Reporter of significant changes in the status of any vulnerability they reported, to the extent possible without revealing information provided to us in confidence.
- Lakera AI AG MAY adjust its publication timeframe to accommodate reporter constraints if that timing is otherwise compatible with this policy. In most cases such an adjustment would be expected to represent a delay rather than an acceleration of the publication schedule. Examples include delaying publication to coincide with conference presentations.
- Lakera AI AG SHALL NOT require Reporters to enter into a customer relationship, non-disclosure agreement (NDA) or any other contractual or financial obligation as a condition of receiving or coordinating vulnerability reports.
Coordination with vendors
- In the event that Lakera AI AG determines the reported vulnerability is consequent to a vulnerability in a generally available product or service, Lakera AI AG MAY report the vulnerability to the affected vendor(s), service provider(s), or third party vulnerability coordination service(s) in order to enable the product or service to be fixed.
- Lakera AI AG SHALL make a good faith effort to inform vendors of reported vulnerabilities prior to public disclosure.
- Lakera AI AG SHALL forward vulnerability reports to the affected vendor(s) as soon as practical after we receive the report.
- Lakera AI AG SHALL apprise any affected vendors of our publication plans and negotiate alternate publication schedules with the affected vendors when required.
- Lakera AI AG SHALL provide the vendor the opportunity to include a vendor statement within our public disclosure document.
- Lakera AI AG SHALL NOT withhold vendor-supplied information simply because it disagrees with our assessment of the problem.
- Lakera AI AG SHALL notify affected vendors of any public disclosure plans.
- Lakera AI AG SHALL NOT reveal information provided in confidence by any vendor.
- Lakera AI AG SHALL act in accordance with the expectations of Reporters set forth in this policy when itself acting as a Reporter to other organizations (vendors, coordinators, etc.).
Coordination with others
- Lakera AI AG MAY engage the services of a third party coordination service (e.g., CERT/CC, DHS CISA) to assist in resolving any conflicts that cannot be resolved between the Reporter and Lakera AI AG.
- Lakera AI AG MAY, at our discretion, provide reported vulnerability information to anyone who can contribute to the solution and with whom we have a trusted relationship, including vendors (often including vendors whose products are not vulnerable), service providers, community experts, sponsors, and sites that are part of a national critical infrastructure, if we believe those sites to be at risk.
Public disclosure
- Lakera AI AG SHALL determine the type and schedule of our public disclosure of the vulnerability.
- Lakera AI AG MAY disclose reported vulnerabilities to the public 90 days after the initial report, regardless of the existence or availability of patches or workarounds from affected vendors.
- Lakera AI AG MAY disclose vulnerabilities to the public earlier or later than 90 days due to extenuating circumstances, including but not limited to active exploitation, threats of an especially serious (or trivial) nature, or situations that require changes to an established standard.
- Lakera AI AG MAY consult with the Reporter and any affected vendor(s) to determine the appropriate public disclosure timing and details.
- Lakera AI AG SHALL balance the need of the public to be informed of security vulnerabilities with vendors' need for time to respond effectively.
- Lakera AI AG's final determination of a publication schedule SHALL be based on the best interests of the community overall.
- Lakera AI AG SHALL publish public disclosures via our Blog page.
- Lakera AI AG MAY disclose to the public the prior existence of vulnerabilities already fixed by Lakera AI AG, including potentially details of the vulnerability, indicators of vulnerability, or the nature (but not content) of information rendered available by the vulnerability.
- Lakera AI AG SHALL make our disclosure determinations based on relevant factors such as but not limited to: whether the vulnerability has already been publicly disclosed, the severity of the vulnerability, potential impact to critical infrastructure, possible threat to public health and safety, immediate mitigations available, vendor responsiveness and feasibility for creating an upgrade or patch, and vendor estimate of time required for customers to obtain, test, and apply the patch. Active exploitation, threats of an especially serious nature, or situations that require changes to an established standard may result in earlier or later disclosure.
- In cases where a product is affected and the vendor is unresponsive or fails to establish a reasonable timeframe for remediation, Lakera AI AG MAY disclose product vulnerabilities 90 days after initial contact is made, regardless of the existence or availability of patches or workarounds from affected vendors.