Vulnerability Handling Process (draft)
The Oniro Project aims to build a secure system from the ground up, applying industry best practices for development quality. However, as in every software project, bugs do happen. Some of them can be exploited by an attacker; these are called security vulnerabilities. This process explains how we handle security issues and extends the more generic bug handling process.
We work in the open, including when handling security issues. To protect deployed products, we sometimes need to delay releasing information related to security issues, following industry best practices. However, all information about vulnerabilities eventually becomes publicly available.
How to Report a Vulnerability?
If you think you have found a security issue in our distribution, please contact us immediately by posting a confidential issue in a dedicated security project in our bug tracker.
To do so, log in to our issue tracker, or create a new account if you do not have one yet. Click on New issue, then make sure to check the checkbox at the bottom: ‘This issue is confidential and should only be visible to team members with at least Reporter access’. Please use the ‘Issue’ ticket type and the associated template. Fill in the title, answer the questions in the ‘Description’ field, then click ‘Create issue’.
Your report should contain a description of the issue, the steps you took to reproduce the issue (including the image name), affected versions, and, if known, any mitigations for the issue.
We plan to add a security-related mailing list and a possibility to send GPG-encrypted email in the near future.
We aim to acknowledge receipt within one working day and to respond with a first assessment within three working days. We follow a 90-day disclosure timeline.
We will be happy to acknowledge your work in the vulnerability announcement, and will do so if you do not object.
This first section is included in the SECURITY.md file in our high-level
We use responsible vulnerability disclosure; you can read more about this kind of disclosure in the Vulnerability Disclosure Cheat Sheet from OWASP or the detailed CERT Guide to Coordinated Vulnerability Disclosure.
Security Response Team (SRT)
Our Security Response Team (SRT) reviews reported security issues and updates the security policies. Members of the team are chosen by the project partners and elected by and from the project developers. Ideally, they should have security experience. The SRT has a minimum of two members.
The SRT may decide that the reported issue is indeed a security vulnerability (with an assigned severity), a non-confidential bug, a feature request, or that the feature is working as expected. The team notifies the reporter of the decision and provides explanations. If the issue is classified as a bug, the team converts it to a normal bug. If it is a feature request, the team asks the reporter to create a feature request and closes the issue. If the feature is working as expected, the team closes the security issue. The SRT also sets the issue domain (for example, compiler, base system, etc.).
The SRT also makes an initial decision on whether the issue is in code maintained by the project (issues where we are upstream) or maintained outside the project (issues where we are downstream). This decision can be changed later if new information becomes available.
The SRT holds a weekly status meeting and participates in the general bug triage/prioritization meeting.
Classification of Issues
Security issues are classified as high, medium, or low severity. As a rule of thumb, we map the CVSS v3.1 Base score in the following way:
0 to 3.9 - low severity
4.0 to 6.9 - medium severity
7.0 and above - high severity
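As a sketch only, the rule of thumb above can be expressed as a small function (the function name and error handling are ours, not part of the project's tooling):

```python
def severity(cvss_base: float) -> str:
    """Map a CVSS v3.1 Base score (0.0 to 10.0) to the project's
    severity levels. Illustrative sketch of the rule of thumb above."""
    if not 0.0 <= cvss_base <= 10.0:
        raise ValueError("CVSS Base scores range from 0.0 to 10.0")
    if cvss_base >= 7.0:
        return "high"
    if cvss_base >= 4.0:
        return "medium"
    return "low"
```

Note that the boundaries are inclusive on the lower end of each band, so a score of exactly 4.0 is medium and exactly 7.0 is high.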
When the Issue is in the Code Maintained by the Project
When the source code where the issue originates is maintained by the Project, the SRT creates a confidential ticket about the issue and assigns it to the relevant developer. The security team also verifies which versions are affected.
If the security team judges that the issue could be exploited, they request a CVE number for it and set the embargo duration. The default is 90 days, but it may differ if necessary (for example, if the fix will be complicated to deploy, or if the issue will become known earlier for some reason).
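For illustration only, the default timeline can be sketched as a small helper (the names are ours; the SRT sets the actual dates case by case):

```python
from datetime import date, timedelta

DEFAULT_EMBARGO_DAYS = 90  # project default; the SRT may shorten or extend it

def planned_disclosure_date(report_date: date,
                            embargo_days: int = DEFAULT_EMBARGO_DAYS) -> date:
    """Compute the planned public disclosure date from the report date."""
    return report_date + timedelta(days=embargo_days)
```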
The CVE number is mentioned in the confidential ticket, but should not be used in any other communication until the end of the embargo. Commit messages and documentation should state what was fixed (a NULL pointer, a missing lock, etc.).
The fix should be developed in a private repository, and the reporter may take part in the development if they wish.
When the fix is available, it should be included in the main branch and backported to the release branches. If the issue is of ‘high’ severity, an immediate bugfix release should be produced. If it is of ‘medium’ or ‘low’ severity, the fix waits until the next regular bugfix release. In the case of a critical issue, the security team, together with the release team, may decide to distribute patches directly to the affected users.
Handling Upstream Security Issues
If the issue was identified in upstream code, we report it using the upstream project’s security process. We track the investigation status and the fix in our bug tracking system. When a fix is available, we update the affected source, backporting if necessary.
If the upstream project does not respond, or responds very slowly, we may decide to develop a patch on our own. In this case, the vulnerability is handled using the process for issues where we are upstream.
Our process contains four phases: monitoring, assessment, remedy, and notification.
We actively monitor the ecosystem for potential security issues in the code we develop and in the code we distribute. This includes monitoring the official CVE list and other vulnerability databases, running code analysis tools, and following related blog posts and conference presentations. In addition, a regular bug might be marked as a potential security issue. If a potential issue appears, any project member (or an external observer) may file a security issue.
As we depend on a large amount of upstream code, we also monitor specific mailing lists that announce security issues in those projects, including special notification lists for issues under embargo.
This step has no equivalent in our Bug policy.
When we learn about a potential security issue, we start by acknowledging the information.
If the issue comes from a CVE database, we verify whether we are affected by the vulnerability at all (for example, we are not affected by software that we do not include, either directly or as a dependency).
The SRT reproduces the issue during the assessment process and documents the needed steps, including configuration details (such as package versions), the system (such as the processor architecture), and the commands used.
The SRT declares a security issue if it compromises one or more of the three properties: availability, integrity, or confidentiality.
When assessing an issue, the SRT may confirm it is a security issue or decide it is a regular bug. The team may also decide that a feature is missing, or that the code behaves as intentionally designed and specified.
In all cases, the SRT notifies the reporter of the assessment.
Our aim is to acknowledge receipt within one working day and respond with a first assessment within three working days.
This step is the equivalent of the Triage and Prioritize steps of the Bug process.
When the issue is confirmed as a security issue, the process of developing a fix begins. The reporter may be included in the process if they wish. The SRT also applies for a CVE number and decides whether there will be an embargoed notification before the public release.
The SRT notifies the developers who should know about the issue and who should develop the fix. The communication happens over a private channel.
Developers create a patch and associated test cases in a private branch. They also backport the fix to supported releases. In the case of non-public issues, the developer should describe only what is fixed in the patch description, without any reference to the CVE. A fix might have a title like ‘fix a crash in module X’ or ‘add a missing unlock in module Y’.
They also prepare a release for issues of ‘high’ severity. ‘Medium’ and ‘low’ severity issues are fixed in regular bugfix releases.
We follow the rules of the upstream projects, if applicable.
This step is the equivalent of the Fix step of the Bug process.
If an embargoed notification happens, it is sent between 5 and 30 days before the expected publication date. The actual timeframe depends on the situation and the affected parties. For example, if deployed devices are affected, the SRT may choose a longer lead time to allow patching of the vulnerable devices. The embargoed notification includes the CVE identification number, a description of the issue, the affected versions, the patch itself and the way it will be distributed, the public disclosure date, and the reporter credits. The SRT monitors the responses to the notification messages to fix any outstanding issues.
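The 5-to-30-day window can be sketched as follows (a hypothetical helper; in practice the SRT picks a date within the range based on the situation):

```python
from datetime import date, timedelta

def notification_window(publication_date: date) -> tuple[date, date]:
    """Return the earliest and latest dates for sending the embargoed
    notification: 30 and 5 days before the public disclosure date."""
    return (publication_date - timedelta(days=30),
            publication_date - timedelta(days=5))
```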
When the issue enters this phase, all documentation of the issue needs to be ready. The SRT and developers prepare a security advisory (if appropriate), the information for the release notes and the release announcement.
This step (together with the Publish step described below) is the equivalent of the Release step of the Bug process.
The publication step consists of releasing the information about the issue publicly. The information prepared earlier is published on the public disclosure date. The SRT updates the CVE information.
The release notes contain a list of all vulnerabilities fixed in the release. For issues with an important impact, the SRT might decide to publish a dedicated advisory.
This step (together with the Notify step described above) is the equivalent of the Release step of the Bug process.
CVE (Common Vulnerabilities and Exposures) - a common system for vulnerability naming and referencing. https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures
CVSS (Common Vulnerability Scoring System) - a scoring standard for security vulnerabilities, ranging from 0.0 (no impact) to 10.0 (critical impact). https://en.wikipedia.org/wiki/Common_Vulnerability_Scoring_System
This process was inspired by the OSS vulnerability guide, the OpenSSF Vulnerability Disclosure WG guide to disclosure for OSS projects, other work from the OpenSSF vulnerability-disclosures WG, and the Zephyr project security policy.