This document explains the workflow for anyone working with issues in GitLab Inc. For the workflow that applies to the wider community see the contributing guide.
GitLab Flow

Products at GitLab are built using the GitLab Flow.
We have specific rules around code review.
Reverting a merge request

In line with our values of short toes, making two-way-door decisions, and a bias for action, anyone can propose to revert a merge request. When deciding whether an MR should be reverted, the following should be true:

- The merge request is causing an issue of ~severity::1 or ~severity::2. See severity labels.

Reverting merge requests that add non-functional changes and don't remove any existing capabilities should be avoided in order to prevent designing by committee.
The intent of a revert is never to place blame on the original author. Additionally, it is helpful to inform the original author so they can participate as a DRI on any necessary follow up actions.
The pipeline::expedited label, and the master:broken or master:foss-broken label, must be set on merge requests that fix master, to skip some non-essential jobs in order to speed up the MR pipelines.
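For example, the labels above can be applied with quick actions in a comment on the fix MR (a sketch using the label names from this section):

```
# Apply to the MR that fixes master
/label ~"pipeline::expedited"
/label ~"master:broken"
```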
Broken master

If you notice that pipelines for the master branch of GitLab or GitLab FOSS are failing, returning the build to a passing state takes priority over everything else development related, since any work done while tests are broken may have to be redone.

What is a broken master?

A broken master is an event where a pipeline in master is failing.
The cost to fix test failures increases exponentially as time passes, due to the merged results pipelines we use. Auto-deploys, as well as monthly releases and security releases, depend on gitlab-org/gitlab master being green for tagging and merging of backports.
Our aim should be to keep master free from failures, not to fix master only after it breaks.
Any question or suggestion is welcome in the #g_development_analytics channel, which owns the broken master automation process.
Broken master service level objectives

There are two phases for fixing a broken master incident, each with a target SLO to clarify the urgency. The resolution phase is dependent on the completion of the triage phase.
| Phase | Target SLO | DRI |
| --- | --- | --- |
| Triage | 4 hours from broken master incident creation until assignment | Group labeled on the incident |
| Resolution | 4 hours from assignment to DRI until incident is resolved | Merge request author, or team of merge request author, or dev on-call engineer |
Note: Recurring incidents negatively impact master pipeline stability and development velocity. Any untriaged, recurring incident will be automatically escalated to #dev-escalation following this timeline:
Pipeline incident escalation timeline:

- Pipeline failure incident #1 not recurring in 24 hours and no human activity: auto-closed.
- Any human update on incident #1: labels incident #1 with escalation skipped; does not trigger any group ping or escalation.
- Same job failures recurring in incident #2: closes incident #2 as duplicate of incident #1; labels incident #1 with escalation needed.
- After 10 minutes of inactivity: pings the attributed group channel.
- After 30 minutes of inactivity: second ping to the group channel.
- After 3 hours 40 minutes of inactivity: pings the stage channel.
- After 4 hours of inactivity: escalates to #dev-escalation; labels incident #1 as escalated.
If an incident becomes a blocker for MRs and deployments before being auto-escalated, the impacted team member should refer to the broken master escalation steps to request help from the current engineer on-call as early as needed.
Additional details about the phases are listed below.
Broken master escalation

Recurring broken master incidents are automatically escalated to #dev-escalation unless they are triaged within 4 hours.
If a broken master is blocking your team before auto-escalation (such as when creating a security release), then you should:

- Check whether there is an existing broken master incident with a DRI assigned, and review the discussions there.
- If there is no DRI or no progress, escalate, linking the broken master incident.

Master broken incidents must be manually escalated to #dev-escalation on weekends and holidays if necessary. Without a manual escalation, the service level objective can extend to the next working day; that is, the triage DRI is expected to triage the incident on the next working day. Regardless of when the label was applied, we always consider an incident to be in an escalated state as long as it has the ~"escalation::escalated" label, until the incident is resolved.
Triage DRI responsibilities

If a failed test can be traced to a group through its feature_category metadata, the broken master incident associated with that test will be automatically labeled with this group as the triage DRI through this line of code. In addition, Slack notifications will be posted to the group's Slack channel to notify them about ongoing incidents. The triage DRI is responsible for monitoring, identifying, and communicating the incident.
A notification will be sent to the attributed group's Slack channel and #master-broken.
Monitor
Pipeline failures are sent to the triage DRI's group channel, if one is identified, and will be reviewed by its group members. The failures will also be sent to #master-broken for extra communication. If an incident is announced in a DRI group's Slack channel, a channel member should acknowledge it and assume the triage DRI responsibilities.
If the incident is a duplicate of an existing incident, use the following quick actions to close the duplicate incident:
/assign me
/duplicate #<original_issue_id>
/copy_metadata #<original_issue_id>
If the incident is not a duplicate, and needs some investigation:

/assign me

Then set the incident status to Acknowledged (in the right-side menu). An :ack: emoji reaction should be applied by the triage DRI to signal the linked incident status has been changed to Acknowledged and the incident is actively being triaged.

Identification
Review non-resolved broken master incidents for the same failure. If the broken master is related to a test failure, search for the spec file in the issue search to see if there's a known failure::flaky-test issue.
If this incident is due to non-flaky reasons, communicate in #development, #backend, and #frontend using the Slack Workflow.
Announce that master is fixed by entering /broadcast master fixed in the chat bar of the #master-broken channel to invoke this workflow, and then clicking Continue the broadcast.

If the failure was caused by a database migration, reach out in the #releases channel and discuss whether it's appropriate to create another migration to roll back the first migration, or to turn the migration into a no-op by following the Disabling a data migration steps.

If you identified that master fails for a flaky reason, and it cannot be reliably reproduced (i.e. by running the failing spec locally or retrying the failing job):
Quarantine the failing test to restore pipeline stability within 30 minutes if the flakiness is continuously causing master pipeline incidents.
Alternatively, if the failure does not seem disruptive, and you have a fix that you are confident in, submit the fix MR with the ~"master:broken" label to ensure your pipeline is expedited.
If a flaky test issue already exists, add a comment in it with a link to the failed broken master incident and/or failed job. We have automation in place to create test failure issues automatically. The issue is named after the spec path, which can be a search keyword.
If a flaky test issue doesn't exist, create an issue from the New issue button in the top-right of the failing job page (that will automatically add a link to the job in the issue), and apply the Broken Master - Flaky description template.
Add the appropriate labels to the main incident:
# Add those labels
/label ~"master-broken::flaky-test"
/label ~"failure::flaky-test"
# Pick one of those labels
/label ~"flaky-test::dataset-specific"
/label ~"flaky-test::datetime-sensitive"
/label ~"flaky-test::state leak"
/label ~"flaky-test::random input"
/label ~"flaky-test::transient bug"
/label ~"flaky-test::unreliable dom selector"
/label ~"flaky-test::unstable infrastructure"
/label ~"flaky-test::too-many-sql-queries"
Close the incident
Add the stacktrace of the error to the incident (if it is not already posted by gitlab-bot), as well as a Capybara screenshot from artifacts/tmp/capybara if one is available in the job artifacts.

Identify the merge request that introduced the failures. There are a few possible approaches to try:
- Search the commit history for keywords from the failure (e.g. if a geo spec file is failing, specifically the shard spec, search for those keywords).
- Filter the commit history on the Merge branch text to only see merge commits.
- Use the History or Blame button at the top of a file in the file explorer, e.g. at https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/backup.rb.

If you identified a merge request, assign the incident to its author if they are available at the moment. If they are not available, assign it to the maintainer that approved/merged the MR. If none are available, mention the team's Engineering Manager and seek assistance in the #development Slack channel.
If no merge request was identified, ask for assistance in the #development Slack channel.
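Locally, the commit-history searches above can be approximated with git. The repository, branch, and file names below are invented for illustration; substitute the failing spec path and keywords from the actual failure:

```shell
# Hypothetical demo repo; names are not real GitLab history.
git init -q demo && cd demo
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "initial commit"
git checkout -q -b add-shard-spec
echo "describe 'shard'" > shard_spec.rb
git add shard_spec.rb
git commit -q -m "Add shard spec"
git checkout -q -
git merge -q --no-ff -m "Merge branch 'add-shard-spec'" add-shard-spec

# Merge commits that touched the failing spec file
# (--full-history avoids merges being simplified away):
git log --full-history --merges --oneline -- shard_spec.rb

# Merge commits whose message mentions a keyword from the failure:
git log --merges --grep='shard' --oneline
```

Note that without --full-history, git's history simplification can hide merge commits that are identical to one parent, which is exactly the case for a merge that brought in a new spec file.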
Please set the appropriate ~master-broken:* label from the list below:
/label ~"master-broken::caching"
/label ~"master-broken::ci-config"
/label ~"master-broken::dependency-upgrade"
/label ~"master-broken::external-dependency-unavailable"
/label ~"master-broken::flaky-test"
/label ~"master-broken::fork-repo-test-gap"
/label ~"master-broken::pipeline-skipped-before-merge"
/label ~"master-broken::test-selection-gap"
/label ~"master-broken::need-merge-train"
/label ~"master-broken::gitaly"
/label ~"master-broken::state leak"
/label ~"master-broken::infrastructure"
/label ~"master-broken::infrastructure::failed-to-pull-image"
/label ~"master-broken::infrastructure::frunner-disk-full"
/label ~"master-broken::infrastructure::gitlab-com-overloaded"
/label ~"master-broken::job-timeout"
/label ~"master-broken::multi-version-db-upgrade"
/label ~"master-broken::missing-test-coverage"
/label ~"master-broken::undetermined"
(Optional) Pre-resolution

If the triage DRI believes that there's an easy resolution (for example, reverting the culprit merge request or quarantining the failing test), they can create a merge request, assign it to any available maintainer, and ping the resolution DRI with a @username FYI message. Additionally, a message can be posted in #backend_maintainers or #frontend_maintainers to have a maintainer take a look at the fix ASAP.
If the failures occur only in test-on-gdk jobs, it's possible to stop those jobs from being added to new pipelines while the cause is being fixed. See the runbook for details.
For an initial assessment of what might have contributed to the failure, we can try the experimental AI-assisted root cause analysis feature following this documentation.
To confirm flakiness, you can use the @gitlab-bot retry_job <job_id> or the @gitlab-bot retry_pipeline <pipeline_id> command to retry the failed job(s), even if you are not a project maintainer.

The retry_job command can fail for the following reasons:

- Retrying a job that has already been retried: a repeated retry_job command will result in a failure message, because each failed job can only be retried once.
- If there is no response to the retry commands, you are likely invoking them in non-supported projects. If you'd like to request that the commands be added to your project, please create an issue and inform #g_development_analytics. You are encouraged to self-serve the MR following this example and submit it for review for maximum efficiency.

Resolution

The merge request author of the change that broke master is the resolution DRI. In the event the merge request author is not available, the team of the merge request author will assume the resolution DRI responsibilities. If a DRI has not acknowledged or signaled working on a fix, any developer can assume the resolution DRI responsibilities by assigning themselves to the incident.
The resolution DRI should prioritize broken master incidents over new bug/feature work. Resolution options include:

- Revert the merge request that broke master. If a revert is performed, create an issue to reinstate the merge request and assign it to the author of the reverted merge request.
- The pipeline::expedited label, and the master:broken or master:foss-broken label, must be set on merge requests that fix master, to skip some non-essential jobs in order to speed up the MR pipelines.
- Quarantine the failing test, and add the quarantined test label to the failure::flaky-test issue you previously created during the identification phase. Treat it as a priority::1 severity::1 issue.
- Apply the Pick into auto-deploy label (along with the needed severity::1 and priority::1) to make sure deployments are unblocked.
- If needed, backport the fix from the broken master incident to the maintained stable branches. See stable branches.
- Announce in #master-broken when the fix was merged, using the Broadcast Master Fixed workflow in the #master-broken channel, and click Continue the broadcast to communicate it.
- If the master build was failing and the underlying problem was quarantined / reverted / a temporary workaround created, but the root cause still needs to be discovered, the investigation should continue directly in the incident.
- Review whether the broken master incident could have been prevented in the Merge Request pipeline.

Once the resolution DRI announces that master is fixed:
- Merge request pipelines will include the fix once master has been fixed, since we use merged results pipelines.

Merge requests cannot be merged to master until the incident status is changed to Resolved.
This is because we need to try hard to avoid introducing new failures, since it's easy to lose confidence if master stays red for a long time.
In the rare case where a merge request is urgent and must be merged immediately, team members can follow the process below to have a merge request merged during a broken master.

Merging while master is broken can only be done for:

- Fixes for broken master issues (we can have multiple broken master issues ongoing).
- Urgent merge requests, following the process below.

Requesting to merge during a broken master

First, ensure the latest pipeline has completed less than 2 hours ago (although it is likely to have failed due to gitlab-org/gitlab using merged results pipelines).
Next, make a request on Slack:

- Post in the #frontend_maintainers or #backend_maintainers Slack channels (whichever one is more relevant).
- Mention that the merge is needed during a broken master; optionally add a link to this page in your request.

A maintainer who sees a request to merge during a broken master must follow this process. Note: if any part of the process below disqualifies a merge request from being merged during a broken master, then the maintainer must inform the requestor as to why in the merge request (and optionally in the Slack thread of the request).
First, assess the request:

- Add the :eyes: emoji to the Slack post so other maintainers know it is being assessed. We do not want multiple maintainers to work on fulfilling the request.

Next, ensure that all the following conditions are met:

- The latest pipeline completed less than 2 hours ago (although it is likely to have failed due to gitlab-org/gitlab using merged results pipelines).
- The failed jobs also fail on master.
- The failures match known broken master incidents. See the "Triage DRI Responsibilities" steps above for more details.

Next, add a comment to the merge request mentioning that the merge request will be merged during a broken master, and link to the broken master incident. For example:
Merge request will be merged while `master` is broken.
Failure in <JOB_URL> happens in `master` and is being worked on in <INCIDENT_URL>.
Next, merge the merge request:

- Note that this requires maintainer rights in the gitlab-org/gitlab project.

Broken master mirrors

#master-broken-mirrors was created to remove duplicative notifications from the #master-broken channel, and provides a space for Release Managers and the Developer Experience teams to monitor failures for the following projects:
The #master-broken-mirrors channel is to be used to identify unique failures for those projects; flaky failures are not expected to be retried or reacted to in the same way as in #master-broken.
We run JiHu validation pipelines in some of the merge requests, and they can be broken at times. When this happens, check What to do when the validation pipeline failed for more details.
Stable branches

To guarantee the readiness of any GitLab release, it is fundamental that failures on stable branches are addressed with priority, similar to master branch failures. It is the merge request author's responsibility to backport the following to the maintained stable branches:
Follow the engineering runbook to backport changes to stable branches.
Security Issues

Security issues are managed and prioritized by the security team. If you are assigned to work on a security issue in a milestone, you need to follow the Security Release process.
If you find a security issue in GitLab, create a confidential issue mentioning the relevant security and engineering managers, and post about it in #security.
If you accidentally push security commits to gitlab-org/gitlab, we recommend that you:

- Reach out in #releases. It may be possible to execute a garbage collection (via the Housekeeping task in the repository settings) to remove the commits.

For more information on how the entire process works for security releases, see the documentation on security releases.
Regressions

A ~regression implies that previously verified working functionality no longer works. Regressions are a subset of bugs. The ~regression label is used to imply that the defect caused the functionality to regress. The label tells us that something worked before, and it needs extra attention from Engineering and Product Managers to schedule/reschedule.
The regression label does not apply to bugs for new features for which functionality was never verified as working. These, by definition, are not regressions.
A regression should always have the ~regression:xx.x label on it to designate when it was introduced. If it's unclear when it was introduced, the latest released version should be added.
Regressions should be considered high priority issues that should be solved as soon as possible, especially if they have severe impact on users. When identified in time, for example in a SaaS deployment, fixing them within the same milestone avoids their being included with that release.
Use of the ~regression label on MRs

For better efficiency, it's common for a regression to be fixed in an MR without an issue being created, either through reversion of the original MR or a code change. Regardless of whether there is an issue or not, the MR should also have the ~regression and ~regression:xx.x labels. This allows trends to be accurately measured.
Start working on an issue you’re assigned to. If you’re not assigned to any issue, find the issue with the highest priority and relevant label you can work on, and assign it to yourself. You can use this query, which sorts by priority for the started milestones, and filter by the label for your team.
If you need to schedule something or prioritize it, apply the appropriate labels (see Scheduling issues).
If you are working on an issue that touches on areas outside of your expertise, be sure to mention someone in the other group(s) as soon as you start working on it. This allows others to give you early feedback, which should save you time in the long run.
If you are working on an issue that requires access to specific features, systems, or groups, open an access request to obtain access on staging and production for testing your changes after they are merged.
When you start working on an issue:

- Add the workflow::in dev label to the issue.
- When the work moves into code review, replace it with workflow::in review. If multiple people are working on the issue or multiple workflow labels might apply, consider breaking the issue up. Otherwise, default to the workflow label farthest away from completion.
- Apply workflow::verification to indicate all the development work for the issue has been done and it is waiting to be deployed and verified. We will use this label in cases where the work was requested to be verified by product OR we determined we need to perform this verification in production.
- Once verified, apply workflow::complete and close the issue.

You are responsible for the issues assigned to you. This means each has to ship with the milestone it's associated with. If you are not able to do this, you have to communicate it early to your manager and other stakeholders (e.g. the product manager, other engineers working on dependent issues). In teams, the team is responsible for this (see Working in Teams). If you are uncertain, err on the side of overcommunication. It's always better to communicate doubts than to wait.
You (and your team, if applicable) are responsible for verifying your changes: once a release candidate has been deployed to the staging environment, please verify that your changes work as intended. We have seen issues where bugs did not appear in development but showed up in production (e.g. due to CE-EE merge issues).
Be sure to read general guidelines about issues and merge requests.
Updating Workflow Labels Throughout Development

Team members use labels to track issues throughout development. This gives visibility to other developers, product managers, and designers, so that they can adjust their plans during a monthly iteration. An issue should follow these stages:
- workflow::in dev: A developer indicates they are developing an issue by applying the in dev label.
- workflow::in review: A developer indicates the issue is in code review and UX review by replacing the in dev label with the in review label.
- workflow::verification: A developer indicates that all the development work for the issue has been done and it is waiting to be deployed, then verified.
- workflow::complete: A developer indicates the issue has been verified and everything is working by adding the workflow::complete label and closing the issue.

Workflow labels are described in our Development Documentation and Product Development Flow.
Working in Teams

For larger issues or issues that contain many different moving parts, you'll likely be working in a team. This team will typically consist of a backend engineer, a frontend engineer, a Product Designer, and a product manager.
In the spirit of collaboration and efficiency, members of teams should feel free to discuss issues directly with one another while being respectful of others’ time.
Convention over Configuration

Avoid adding configuration values in the application settings or in gitlab.yml. Only add configuration if it is absolutely necessary. If you find yourself adding parameters to tune specific features, stop and consider how this can be avoided. Are the values really necessary? Could constants be used that work across the board? Could values be determined automatically? See Convention over Configuration for more discussion.
Start working on things with the highest priority in the current milestone. The priority of items is defined under labels in the repository, but you are able to sort by priority.
After sorting by priority, choose something that you're able to tackle and that falls under your responsibility. That means that if you're a frontend developer, you work on something with the label frontend.
To filter very precisely, you could filter all issues for one of the labels CI/CD, Discussion, Quality, frontend, or Platform.

Use this link to quickly set the above parameters. You'll still need to filter by the label for your own team.
If you’re in doubt about what to work on, ask your lead. They will be able to tell you.
Triaging and Reviewing Code from the rest of the Community

It's every developer's responsibility to triage and review code contributed by the rest of the community, and to work with contributors to get it ready for production.
Merge requests from the rest of the community should be labeled with the Community contribution label.
When evaluating a merge request from the community, please ensure that a relevant PM is aware of the pending MR by mentioning them.
This should be part of your daily routine. For instance, every morning you could triage new merge requests from the rest of the community that are not yet labeled Community contribution and either review them or ask a relevant person to review them.
Make sure to follow our Code Review Guidelines.
Working with GitLab.com

GitLab.com is a very large instance of GitLab Enterprise Edition. It runs release candidates for new releases, and sees a lot of issues because of the amount of traffic it gets. There are several internal tools available for developers at GitLab to get data about what's happening in the production system:
Performance Data

There is extensive monitoring publicly available for GitLab.com. For more on this and related tools, see the monitoring handbook.
Error Reporting

Scheduling Issues

GitLab Inc. has to be selective in working on particular issues. We have limited capacity to work on new things. Therefore, we have to schedule issues carefully.
Product Managers are responsible for scheduling all issues in their respective product areas, including features, bugs, and tech debt. Product managers alone determine the prioritization, but others are encouraged to influence the PM's decisions. The UX Lead and Engineering Leads are responsible for allocating people and making sure things are done on time. Product Managers are not responsible for these activities; they are not project managers.
Direction issues are the big, prioritized new features for each release. They are limited to a small number per release so that we have plenty of capacity to work on other important issues, bug fixes, etc.
If you want to schedule an issue with the Seeking community contributions label, please remove the label first.
Any scheduled issue should have a team label assigned, and at least one type label.
Requesting Something to be Scheduled

To request scheduling of an issue, ask the responsible product manager.
We have many more requests for great features than we have capacity to work on. There is a good chance we'll not be able to work on something. Make sure the appropriate labels (such as customer) are applied so every issue is given the priority it deserves.
Teams (Product, UX, Development, Quality) continually work on issues according to their respective workflows. There is no specified process whereby a particular person should be working on a set of issues in a given time period. However, there are specific deadlines that should inform team workflows and prioritization.
With the monthly release date being the third Thursday of the release month, the code cut-off is the Friday prior.
The next milestone begins the Saturday after code cut-off.
All other important dates for a milestone are relative to the release date:
- ~type::maintenance issues per cross-functional prioritization
- ~type::bug issues per cross-functional prioritization
- Issues scheduled for %x.y, with the label ~deliverable applied
- Merge requests scheduled for %x.y, with the label ~deliverable applied
- Milestone %x.y is expired
- ~security issues

Refer to release post content reviews for additional deadlines.
Note that deployments to GitLab.com are more frequent than monthly major/minor releases. See auto deploy transition guidance for details.
KickoffAt the beginning of each release, we have a kickoff meeting, publicly livestreamed to YouTube. In the call, the Product Development team (PMs, Product Designers, and Engineers) communicate with the rest of the organization which issues are in scope for the upcoming release. The call is structured by product area with each PM leading their part of the call.
The Product Kickoff page is updated each month, which follows the content on the livestream.
Milestone Cleanup

Engineering Managers are responsible for capacity planning and scheduling for their respective teams with guidance from their counterpart Product Managers.
To ensure hygiene across Engineering, we run scheduled pipelines to move unfinished work (open issues and merge requests) with an expired milestone to the next milestone, and apply the label ~"missed:x.y" for the expired milestone. Additionally, the label ~"missed-deliverable" is applied whenever ~"Deliverable" is present.
This is currently implemented as part of our automated triage operations. Additionally, issues with the ~Deliverable label which have a milestone beyond current +1 will have the ~Deliverable label removed.
We keep the milestone open for 3 months after it’s expired, based on the release and maintenance policy.
The milestone cleanup is currently applied to the following groups and projects:
Milestone closure is in the remit of the Delivery team. At any point in time a release might need to be created for an active milestone, and once that is no longer the case, the Delivery team closes the milestone.
Milestone cleanup schedule

The milestone cleanup will happen on the milestone due date.
These actions will be applied to open issues:

- Move to the next milestone and apply the label ~"missed:x.y".
- ~"missed-deliverable" will also be added whenever ~"Deliverable" is present.

Milestones are closed when the Delivery team no longer needs to create a backport release for a specific milestone.
Use Group Labels and Group Milestones

When working in GitLab (and in particular, the GitLab.org group), use group labels and group milestones as much as you can. It is easier to plan issues and merge requests at the group level, and it exposes ideas across projects more naturally. If you have a project label, you can promote it to a group label. This will merge all project labels with the same name into the one group label. The same is true for promoting project milestones to group milestones.
Technical debt

We definitely don't want our technical debt to grow faster than our code base. To prevent this from happening, we should consider not only the impact of the technical debt but also how its impact spreads like a contagion. How big and how fast is this problem going to be over time? Is it likely a bad piece of code will be copy-pasted for a future feature? In the end, the amount of resources available is always less than the amount of technical debt to address.
As we innovate our platform, we will have situations where a strategic decision will be made to incur technical debt in order to preserve a higher feature velocity to meet market and customer demand. This accrual of technical debt presents risks due to the long-term implications for the usability, security, reliability, scalability, accessibility, and/or availability of our product and platform.
Therefore technical debt may be accrued, but must be:

- Labeled ~"backlog::prospective" if it is expected to be worked on in the next 18 months, or ~"backlog::no-commitment" otherwise.

If it is to be deferred by more than 6 months, it should be considered for the product and engineering roadmap. Technical debt issues you wish to be closed must not affect the "*abilities" and must include a justification for why they cannot be remediated within the next 18 months (meaning, there is an action plan on the Engineering roadmap).
To help with prioritization and decision-making process here, we recommend thinking about contagion as an interest rate of the technical debt. There is a great comment from the internet about it:
You wouldn’t pay off your $50k student loan before first paying off your $5k credit card and it’s because of the high interest rate. The best debt to pay off first is one that has the highest loan payment to recurring payment reduction ratio, i.e. the one that reduces your overall debt payments the most, and that is usually the loan with the highest interest rate.
Technical debt is prioritized like other technical decisions in product groups by product management.
For technical debt which might span multiple groups, or fall in the gaps between groups, it should be brought up for globally optimized prioritization in retrospectives or directly with the appropriate member of the Product Leadership team. Additional avenues for addressing technical debt outside of product groups are Strategic Priority Codes and working groups.
Deferred UX

Sometimes there is an intentional decision to deviate from the agreed-upon MVC, which sacrifices the user experience. When this occurs, the Product Designer creates a follow-up issue and labels it Deferred UX to address the UX gap in subsequent releases.
For the same reasons as technical debt, we don’t want Deferred UX to grow faster than our code base.
These issues are prioritized like other technical decisions in product groups by product management.
As with technical debt, Deferred UX should be brought up for globally optimized prioritization in retrospectives or directly with the appropriate member of the Product Leadership team.
UI polish

UI polish issues are visual improvements to the existing user interface, touching mainly aesthetic aspects of the UI that are guided by Pajamas foundations. UI polish issues generally capture improvements related to color, typography, iconography, and spacing. We apply the UI polish label to these issues. UI polish issues don't introduce functionality or behavior changes to a feature.
Open merge requests sometimes become idle (not updated by a human in more than a month). Once a month, engineering managers will receive a Merge requests requiring attention triage issue that includes all (non-WIP/Draft) MRs for their group, and use it to determine if any action should be taken (such as nudging the author/reviewer/maintainer). This assists in getting merge requests merged in a reasonable amount of time, which we track with the Open MR Review Time (OMRT) and Open MR Age (OMA) performance indicators.
Open merge requests may also have other properties that indicate that the engineering manager should research them and potentially take action to improve efficiency. One key property is the number of threads, which, when high, may indicate a need to update the plan for the MR or that a synchronous discussion should be considered. Another property is the number of pipelines, which, when high, may indicate a need to revisit the plan for the MR. These metrics are not yet included in the automatically created triage issue.
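The "idle" rule above (open, non-draft, no human update in over a month) is simple enough to sketch as a predicate. This is a hypothetical illustration, not the actual triage bot's implementation; the `state`, `draft`, and `updated_at` fields mirror the shape of merge request objects returned by the GitLab REST API.

```python
from datetime import datetime, timedelta, timezone

IDLE_THRESHOLD = timedelta(days=31)  # "more than a month", approximated

def is_idle(mr: dict, now: datetime) -> bool:
    """An open, non-draft MR counts as idle when it has not been
    updated for more than IDLE_THRESHOLD."""
    if mr.get("state") != "opened" or mr.get("draft"):
        return False
    updated = datetime.fromisoformat(mr["updated_at"])
    return now - updated > IDLE_THRESHOLD

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
stale = {"state": "opened", "draft": False,
         "updated_at": "2024-04-01T00:00:00+00:00"}
fresh = {"state": "opened", "draft": False,
         "updated_at": "2024-05-20T00:00:00+00:00"}
assert is_idle(stale, now)
assert not is_idle(fresh, now)
```

A real triage report would collect such MRs per group and list them in the monthly issue; the exact mechanics belong to the triage automation.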
Security is everyone’s responsibility
Security is our top priority. Our Security Team is raising the bar on security every day to protect users’ data and make GitLab a safe place for everyone to contribute. There are many lines of code, and Security Teams need to scale. That means shifting security left in the Software Development LifeCycle (SDLC). Each team has an Application Security Stable Counterpart who can help you, and you can find more secure development help in the #sec-appsec Slack channel.
Being able to start the security review process earlier in the software development lifecycle means we will catch vulnerabilities earlier, and mitigate identified vulnerabilities before the code is merged. You should know when and how to proactively seek an Application Security Review. You should also be familiar with our Secure Coding Guidelines.
We fix the obvious security issues before every merge and, in doing so, scale the security review process. Our workflow includes a check and validation by the reviewers of every merge request, thereby enabling developers to act on identified vulnerabilities before merging. As part of that process, developers are also encouraged to reach out to the Security Team to discuss the issue at that stage, rather than later on, when mitigating vulnerabilities becomes more expensive. After all, security is everyone’s job. See also our Security Paradigm.
Rapid Engineering Response
From time to time, there are occasions when the engineering team must act quickly in response to urgent issues. This section describes how the engineering team handles certain kinds of such issues.
Scope
Not everything is urgent. See below for a non-exclusive list of things that are in scope and not in scope. As always, use your experience and judgment, and communicate with others.
Performance refinement
A bi-weekly performance refinement session is held by the Development and QE teams jointly to raise awareness and foster wider collaboration about high-impact performance issues. A high impact issue has a direct measurable impact on GitLab.com service levels or error budgets.
Scope
The Performance Refinement issue board is reviewed in this refinement exercise.
Process
- Performance issues are labeled bug::performance.
- An issue is considered unrefined if the Milestone or the label workflow::ready for development is missing.
- Refinement of an issue includes assigning a Milestone and the label workflow::ready for development.
Infradev
The infradev process is established to identify issues requiring priority attention in support of SaaS availability and reliability. These escalations are intended to be handled primarily asynchronously, as timely triage and attention is required. In addition to primary management through the issues themselves, any gaps, concerns, or critical triage is handled in the SaaS Availability weekly standup.
Scope
The infradev issue board is the primary focus of this process.
Roles and Responsibilities

Infrastructure
- Nominate issues to the process by applying the Infradev label.
- Determine Priority and apply the corresponding label as appropriate.
- When follow-up issues are created, apply the Infradev label to the new issues.
- Apply Severity and Priority labels to the new issues. The labels should correspond to the importance of the follow-on work.

(To be completed primarily by Development Engineering Management)
Issues are nominated to the board through the inclusion of the label infradev
and will appear on the infradev board.
- An issue is considered unrefined if the Milestone or the label workflow::ready for development is missing.
- Refinement of an issue includes assigning a Milestone and the label workflow::ready for development.

Issues with the ~infradev ~severity::1 ~priority::1 ~production request labels applied require immediate resolution.
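The refinement and escalation rules above are label checks, so they can be sketched as simple predicates. This is a hypothetical illustration of the stated rules, not the actual triage automation; the `milestone` and `labels` fields mirror the shape of issue objects returned by the GitLab REST API.

```python
READY_LABEL = "workflow::ready for development"

def is_unrefined(issue: dict) -> bool:
    """An infradev issue needs refinement when its Milestone or the
    workflow::ready for development label is missing."""
    has_milestone = issue.get("milestone") is not None
    return not (has_milestone and READY_LABEL in issue.get("labels", []))

def needs_immediate_resolution(issue: dict) -> bool:
    """infradev + severity::1 + priority::1 + production request
    together require immediate resolution."""
    required = {"infradev", "severity::1", "priority::1",
                "production request"}
    return required <= set(issue.get("labels", []))

refined = {"milestone": "17.0", "labels": ["infradev", READY_LABEL]}
urgent = {"milestone": None,
          "labels": ["infradev", "severity::1", "priority::1",
                     "production request"]}
assert not is_unrefined(refined)
assert is_unrefined(urgent)
assert needs_immediate_resolution(urgent)
```

Note that an issue can be both unrefined and in need of immediate resolution, as in the `urgent` example: the escalation labels do not substitute for assigning a Milestone and moving the issue to ready for development.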
~infradev issues requiring a ~“breaking change” should not exist. If a current ~infradev issue requires a breaking change, then it should be split into two issues. The first issue should be the immediate ~infradev work that can be done under current SLOs. The second issue should be the ~“breaking change” work that needs to be completed at the next major release in accordance with deprecation guidance. Agreement from the development DRI as well as the infrastructure DRI should be documented on the issue.
Infradev issues are also shown in the monthly Error Budget Report.
A Guide to Creating Effective Infradev Issues
Triage of infradev issues is intended to occur asynchronously. The points below will ensure that your infradev issues gain maximum traction.
- Do not apply the infradev label to architectural problems, vague solutions, or requests to investigate an unknown root cause.

Code reviews are mandatory for every merge request. You should get familiar with and follow our Code Review Guidelines.