Engineering Workflow

This document explains the workflow for anyone working with issues in GitLab Inc. For the workflow that applies to the wider community see the contributing guide.

GitLab Flow

Products at GitLab are built using the GitLab Flow.

We have specific rules around code review.

Reverting a merge request

In line with our values of short toes, making two-way-door decisions and bias for action, anyone can propose to revert a merge request. When deciding whether an MR should be reverted, the following should be true:

Reverting merge requests that add non-functional changes and don’t remove any existing capabilities should be avoided in order to prevent designing by committee.

The intent of a revert is never to place blame on the original author. Additionally, it is helpful to inform the original author so they can participate as a DRI on any necessary follow up actions.
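For illustration, here is a minimal sketch of preparing a revert locally (the branch name and SHA are placeholders; a revert MR can also be created directly from the GitLab UI):

```shell
# Create a branch and revert the merge commit; -m 1 keeps the mainline parent.
git checkout -b revert-example master
git revert -m 1 <merge-commit-sha>
git push origin revert-example   # then open a merge request for review
```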

The pipeline::expedited label, and the master:broken or master:foss-broken label, must be set on merge requests that fix master, to skip some non-essential jobs and speed up the MR pipelines.
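For example, commenting `/label ~"pipeline::expedited" ~"master:broken"` on the fixing merge request applies both labels via quick actions (a minimal sketch; quick actions require the appropriate permissions on the project).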

Broken master

If you notice that pipelines for the master branch of GitLab or GitLab FOSS are failing, returning the build to a passing state takes priority over all other development work, since everything we do while tests are broken may:

What is a broken master?

A broken master is an event where a pipeline in master is failing.

The cost to fix test failures increases exponentially as time passes, due to the use of merged results pipelines. Auto-deploys, as well as monthly releases and security releases, depend on gitlab-org/gitlab master being green for tagging and merging of backports.

Our aim should be to keep master free from failures, not to fix master only after it breaks.

Questions and suggestions are welcome in the #g_development_analytics channel, which owns the broken master automation process.

Broken master service level objectives

There are two phases for fixing a broken master incident, each with a target SLO to clarify the urgency. The resolution phase depends on the completion of the triage phase.

| Phase | Service level objective | DRI |
| --- | --- | --- |
| Triage | 4 hours from the 2nd occurrence of a broken master incident creation until assignment | Group labeled on the incident |
| Resolution | 4 hours from assignment to DRI until incident is resolved | Merge request author, or team of merge request author, or dev on-call engineer |

Note: Recurring incidents are negatively impacting master pipeline stability and development velocity. Any untriaged, recurring incident will be automatically escalated to #dev-escalation following this timeline:

```mermaid
timeline
  title Pipeline incident escalation
    section Pipeline failure incident #1
      not recurring in 24 hours and no human activity : Auto closed
      any human update on incident #1
        : labels incident #1 with escalation skipped
        : does not trigger any group ping or escalation
      same job failures recurring in incident #2
        : closes incident #2 as duplicate of incident #1
        : labels incident #1 with escalation needed
        : pings attributed group channel after 10 minutes of inactivity
        : 2nd ping to group channel after 30 minutes of inactivity
        : pings stage channel after 3 hours 40 minutes of inactivity
        : escalates to dev-escalation after 4 hours of inactivity
        : labels incident #1 as escalated
```

If an incident becomes a blocker for MRs and deployments before being auto-escalated, the team member being impacted should refer to the broken master escalation steps to request help from the current engineer on-call as early as needed.

Additional details about the phases are listed below.

Broken master escalation

Recurring broken master incidents are automatically escalated to #dev-escalation unless they are triaged within 4 hours.

If a broken master is blocking your team before auto-escalation (for example, when creating a security release), then you should:

  1. See if there is a non-resolved broken master incident with a DRI assigned and check discussions there.
  2. Check discussions on the failure notification in the triage DRI’s group Slack channel to see if anyone is investigating the incident you are looking at. See Triage broken master for information on who the triage DRI is.
  3. If there is not a clear DRI or action to resolve, use the dev escalation process to solicit help in the broken master incident.

Escalation on weekends and holidays

Broken master incidents must be manually escalated to #dev-escalation on weekends and holidays if necessary. Without a manual escalation, the service level objective can extend to the next working day; that is, the triage DRI is expected to triage the incident on the next working day. Regardless of when the label was applied, we always consider an incident to be in an escalated state as long as it has the ~"escalation::escalated" label, until the incident is resolved.

Triage broken master

Definitions

Attribution

If a failed test can be traced to a group through its feature_category metadata, the broken master incident associated with that test will be automatically labeled with this group as the triage DRI through this line of code. In addition, Slack notifications will be posted to the group’s Slack channel to notify them about ongoing incidents. The triage DRI is responsible for monitoring, identifying, and communicating the incident.

A notification will be sent to the attributed group’s Slack channel and #master-broken.
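As a rough illustration of the attribution idea (not the actual automation; the mapping entries below are hypothetical):

```python
# Hypothetical sketch: map a failed test's feature_category metadata to the
# group label used to attribute the triage DRI on a broken master incident.
FEATURE_CATEGORY_TO_GROUP = {
    "code_review_workflow": "group::code review",         # assumed entry
    "pipeline_composition": "group::pipeline authoring",  # assumed entry
}

def triage_dri_label(feature_category: str) -> str:
    """Return the group label to apply to the incident."""
    # Fall back to manual attribution in #master-broken if no match is found.
    return FEATURE_CATEGORY_TO_GROUP.get(feature_category, "group::unknown")
```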

Triage DRI Responsibilities
  1. Monitor

  2. Identification

  3. (Optional) Pre-resolution

Pro-tips for Triage DRI
  1. For an initial assessment of what might have contributed to the failure, we can try the experimental AI-assisted root cause analysis feature following this documentation.

  2. To confirm flakiness, you can use the @gitlab-bot retry_job <job_id> or the @gitlab-bot retry_pipeline <pipeline_id> command to retry the failed job(s), even if you are not a project maintainer.

Resolution of broken master

The merge request author of the change that broke master is the resolution DRI. In the event the merge request author is not available, the team of the merge request author will assume the resolution DRI responsibilities. If a DRI has not acknowledged or signaled working on a fix, any developer can assume the resolution DRI responsibilities by assigning themselves to the incident.

Responsibilities of the resolution DRI
  1. Prioritize resolving recurring broken master incidents over new bug/feature work. Resolution options include:
  2. The resolution DRI must address all failures in the pipeline. Be mindful that the incident issue as initially opened will only list the jobs that have failed so far; after you fix those jobs, other subsequent jobs could fail on the same pipeline you’re triaging. The resolution DRI is responsible for this whole pipeline, not only for the initial failed jobs.
  3. Apply the Pick into auto-deploy label (along with the needed severity::1 and priority::1) to make sure deployments are unblocked.
  4. Backport the fix for the broken master incident to the maintained stable branches. See Stable branches.
  5. Communicate in #master-broken when the fix has been merged.
  6. Once the incident is resolved, select the Broadcast Master Fixed workflow in the #master-broken channel, and click Continue the broadcast to communicate it.
  7. When the master build was failing and the underlying problem was quarantined, reverted, or temporarily worked around, but the root cause still needs to be discovered, the investigation should continue directly in the incident.
  8. Create an issue for the Development Analytics group describing how the broken master incident could have been prevented in the Merge Request pipeline.
  9. When resolution steps are completed and all of the required fixes are merged, close the incident.

Once the resolution DRI announces that master is fixed:

Merging during broken master

Merge requests cannot be merged to master until the incident status is changed to Resolved.

This is because we need to try hard to avoid introducing new failures, since it’s easy to lose confidence in master if it stays red for a long time.

In the rare case where a merge request is urgent and must be merged immediately, team members can follow the process below to have a merge request merged during a broken master.

Criteria for merging during broken master

Merging while master is broken can only be done for:

How to request a merge during a broken master

First, ensure the latest pipeline has completed less than 2 hours ago (although it is likely to have failed due to gitlab-org/gitlab using merged results pipelines).

Next, make a request on Slack:

  1. Post to either the #frontend_maintainers or #backend_maintainers Slack channel (whichever one is more relevant).
  2. In your post, outline why the merge request is urgent.
  3. Make it clear that this would be a merge during a broken master; optionally, add a link to this page in your request.

Instructions for the maintainer

A maintainer who sees a request to merge during a broken master must follow this process.

Note: if any part of the process below disqualifies a merge request from being merged during a broken master, the maintainer must inform the requestor why in the merge request (and optionally in the Slack thread of the request).

First, assess the request:

  1. Add the :eyes: emoji to the Slack post so other maintainers know it is being assessed. We do not want multiple maintainers to work on fulfilling the request.
  2. Assess whether the merge request is urgent or not. If in doubt, ask the requestor for more details in the merge request about why it is urgent.

Next, ensure that all the following conditions are met:

  1. The latest pipeline has completed less than 2 hours ago (although it is likely to have failed due to gitlab-org/gitlab using merged results pipelines).
  2. All of the latest pipeline failures also happen on master.
  3. There is a corresponding non-resolved broken master incident. See the “Triage DRI Responsibilities” steps above for more details.

Next, add a comment to the merge request mentioning that the merge request will be merged during a broken master, and link to the broken master incident. For example:

Merge request will be merged while `master` is broken.

Failure in <JOB_URL> happens in `master` and is being worked on in <INCIDENT_URL>.

Next, merge the merge request:

Broken master mirrors

#master-broken-mirrors was created to remove duplicative notifications from the #master-broken channel, and provides a space for Release Managers and the Developer Experience teams to monitor failures for the following projects:

The #master-broken-mirrors channel is to be used to identify unique failures for those projects; flaky failures are not expected to be retried or reacted to in the same way as in #master-broken.

Broken JiHu validation pipelines

We run JiHu validation pipelines in some of the merge requests, and they can break at times. When this happens, check What to do when the validation pipeline failed for more details.

Stable branches

To guarantee the readiness of any GitLab release, it is fundamental that failures on stable branches are addressed with priority, similar to master branch failures. It is the merge request author’s responsibility to backport the following to the maintained stable branches:

Follow the engineering runbook to backport changes to stable branches.
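The runbook is the authority on the exact steps; as a rough sketch, a backport typically amounts to cherry-picking the fix onto the stable branch (branch names and SHA below are placeholders):

```shell
git fetch origin
git checkout -b backport-my-fix origin/17-0-stable-ee  # hypothetical stable branch
git cherry-pick -x <commit-sha>   # add -m 1 if picking a merge commit
git push origin backport-my-fix   # then open an MR targeting the stable branch
```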

Security Issues

Security issues are managed and prioritized by the security team. If you are assigned to work on a security issue in a milestone, you need to follow the Security Release process.

If you find a security issue in GitLab, create a confidential issue mentioning the relevant security and engineering managers, and post about it in #security.

If you accidentally push security commits to gitlab-org/gitlab, we recommend that you:

  1. Delete the relevant branch ASAP
  2. Inform a release manager in #releases. It may be possible to execute a garbage collection (via the Housekeeping task in the repository settings) to remove the commits.
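For reference, a remote branch can be deleted with `git push origin --delete <branch>` (the branch name is a placeholder); note that the commits may persist on the server until garbage collection runs, which is why informing a release manager matters.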

For more information on how the entire process works for security releases, see the documentation on security releases.

Regressions

A ~regression implies that previously verified, working functionality no longer works. Regressions are a subset of bugs. The ~regression label is used to imply that the defect caused the functionality to regress. The label tells us that something worked before and it needs extra attention from Engineering and Product Managers to schedule/reschedule.

The regression label does not apply to bugs for new features for which functionality was never verified as working. These, by definition, are not regressions.

A regression should always have the ~regression:xx.x label on it to designate when it was introduced. If it’s unclear when it was introduced, the latest released version should be added.

Regressions should be considered high-priority issues that should be solved as soon as possible, especially if they have a severe impact on users. When identified in time, for example in a SaaS deployment, fixing them within the same milestone avoids shipping the regression in that release.

Use of the ~regression label on MRs

For better efficiency, it’s common for a regression to be fixed in an MR without an issue being created, either through reversion of the original MR or a code change. Regardless of whether there is an issue or not, the MR should also have the ~regression and ~regression:xx.x labels. This allows for trends to be accurately measured.

Basics
  1. Start working on an issue you’re assigned to. If you’re not assigned to any issue, find the issue with the highest priority and relevant label you can work on, and assign it to yourself. You can use this query, which sorts by priority for the started milestones, and filter by the label for your team.

  2. If you need to schedule something or prioritize it, apply the appropriate labels (see Scheduling issues).

  3. If you are working on an issue that touches on areas outside of your expertise, be sure to mention someone in the other group(s) as soon as you start working on it. This allows others to give you early feedback, which should save you time in the long run.

  4. If you are working on an issue that requires access to specific features, systems, or groups, open an access request to obtain access on staging and production for testing your changes after they are merged.

  5. When you start working on an issue:

  6. You are responsible for the issues assigned to you. This means it has to ship with the milestone it’s associated with. If you are not able to do this, you have to communicate it early to your manager and other stakeholders (e.g. the product manager, other engineers working on dependent issues). In teams, the team is responsible for this (see Working in Teams). If you are uncertain, err on the side of overcommunication. It’s always better to communicate doubts than to wait.

  7. You (and your team, if applicable) are responsible for:

  8. Once a release candidate has been deployed to the staging environment, please verify that your changes work as intended. We have seen issues where bugs did not appear in development but showed in production (e.g. due to CE-EE merge issues).

Be sure to read general guidelines about issues and merge requests.

Updating Workflow Labels Throughout Development

Team members use labels to track issues throughout development. This gives visibility to other developers, product managers, and designers, so that they can adjust their plans during a monthly iteration. An issue should follow these stages:

Workflow labels are described in our Development Documentation and Product Development Flow.

Working in Teams

For larger issues or issues that contain many different moving parts, you’ll likely be working in a team. This team will typically consist of a backend engineer, a frontend engineer, a Product Designer, and a product manager.

  1. Teams have a shared responsibility to ship the issue in the planned release.
    1. If the team suspects that they might not be able to ship something in time, the team should escalate / inform others as soon as possible. A good start is informing your manager.
    2. It’s generally preferable to ship a smaller iteration of an issue than to ship something a release later.
  2. Consider starting a Slack channel for a new team, but remember to write all relevant information in the related issue(s). You don’t want to have to read up on two threads rather than only one, and Slack channels are not open to the greater GitLab community.
  3. If an issue entails frontend and backend work, consider separating the frontend and backend code into separate MRs and merge them independently under feature flags. This will ensure frontend/backend engineers can work and deliver independently.
    1. It’s important to note that even though the code is merged behind a feature flag, it should still be production ready and continue to hold our definition of done.
    2. A separate MR containing the integration, documentation (if applicable) and removal of the feature flags should be completed in parallel with the backend and frontend MRs, but should only be merged when both the frontend and backend MRs are on the master branch.

In the spirit of collaboration and efficiency, members of teams should feel free to discuss issues directly with one another while being respectful of others’ time.

Convention over Configuration

Avoid adding configuration values in the application settings or in gitlab.yml. Only add configuration if it is absolutely necessary. If you find yourself adding parameters to tune specific features, stop and consider how this can be avoided. Are the values really necessary? Could constants be used that work across the board? Could values be determined automatically? See Convention over Configuration for more discussion.
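As an illustrative sketch of the principle (not GitLab code): derive a sensible value automatically instead of introducing a new setting.

```python
import os

# Convention over configuration: rather than adding a hypothetical
# `compression_workers` setting to gitlab.yml, derive it from the machine.
def compression_workers() -> int:
    cpus = os.cpu_count() or 1
    return max(1, min(cpus - 1, 8))  # leave one CPU free, cap by convention
```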

Choosing Something to Work On

Start working on things with the highest priority in the current milestone. The priority of items is defined under labels in the repository, but you are able to sort by priority.

After sorting by priority, choose something that you’re able to tackle and falls under your responsibility. That means that if you’re a frontend developer, you work on something with the label frontend.

To filter very precisely, you could filter all issues for:

Use this link to quickly set the above parameters. You’ll still need to filter by the label for your own team.

If you’re in doubt about what to work on, ask your lead. They will be able to tell you.

Triaging and Reviewing Code from the rest of the Community

It is every developer’s responsibility to triage and review code contributed by the rest of the community, and to work with contributors to get it ready for production.

Merge requests from the rest of the community should be labeled with the Community contribution label.

When evaluating a merge request from the community, please ensure that a relevant PM is aware of the pending MR by mentioning them.

This should be part of your daily routine. For instance, every morning you could triage new merge requests from the rest of the community that are not yet labeled Community contribution and either review them or ask a relevant person to review them.

Make sure to follow our Code Review Guidelines.

Working with GitLab.com

GitLab.com is a very large instance of GitLab Enterprise Edition. It runs release candidates for new releases, and sees a lot of issues because of the amount of traffic it gets. There are several internal tools available for developers at GitLab to get data about what’s happening in the production system:

Performance Data

There is extensive monitoring publicly available for GitLab.com. For more on this and related tools, see the monitoring handbook.

Error Reporting

Scheduling Issues

GitLab Inc has to be selective in working on particular issues. We have a limited capacity to work on new things. Therefore, we have to schedule issues carefully.

Product Managers are responsible for scheduling all issues in their respective product areas, including features, bugs, and tech debt. Product managers alone determine the prioritization, but others are encouraged to influence the PM’s decisions. The UX Lead and Engineering Leads are responsible for allocating people and making sure things are done on time. Product Managers are not responsible for these activities; they are not project managers.

Direction issues are the big, prioritized new features for each release. They are limited to a small number per release so that we have plenty of capacity to work on other important issues, bug fixes, etc.

If you want to schedule an issue with the Seeking community contributions label, please remove the label first.

Any scheduled issue should have a team label assigned, and at least one type label.

Requesting Something to be Scheduled

To request scheduling an issue, ask the responsible product manager.

We have many more requests for great features than we have capacity to work on. There is a good chance we’ll not be able to work on something. Make sure the appropriate labels (such as customer) are applied so every issue is given the priority it deserves.

Product Development Timeline

Teams (Product, UX, Development, Quality) continually work on issues according to their respective workflows. There is no specified process whereby a particular person should be working on a set of issues in a given time period. However, there are specific deadlines that should inform team workflows and prioritization.

With the monthly release date being the third Thursday of the release month, the code cut-off is the Friday prior.

The next milestone begins the Saturday after code cut-off.
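A minimal sketch of that date arithmetic (illustrative only):

```python
import calendar
from datetime import date, timedelta

def milestone_dates(year: int, month: int) -> tuple[date, date, date]:
    """Release date (third Thursday), code cut-off (the Friday prior),
    and the start of the next milestone (the following Saturday)."""
    thursdays = [d for d in calendar.Calendar().itermonthdates(year, month)
                 if d.month == month and d.weekday() == calendar.THURSDAY]
    release = thursdays[2]
    code_cutoff = release - timedelta(days=6)               # the Friday prior
    next_milestone_start = code_cutoff + timedelta(days=1)  # the Saturday after
    return release, code_cutoff, next_milestone_start
```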

All other important dates for a milestone are relative to the release date:

Refer to release post content reviews for additional deadlines.

Note that deployments to GitLab.com are more frequent than monthly major/minor releases. See auto deploy transition guidance for details.

Kickoff

At the beginning of each release, we have a kickoff meeting, publicly livestreamed to YouTube. In the call, the Product Development team (PMs, Product Designers, and Engineers) communicate with the rest of the organization which issues are in scope for the upcoming release. The call is structured by product area with each PM leading their part of the call.

The Product Kickoff page is updated each month to follow the content of the livestream.

Milestone Cleanup

Engineering Managers are responsible for capacity planning and scheduling for their respective teams with guidance from their counterpart Product Managers.

To ensure hygiene across Engineering, we run scheduled pipelines to move unfinished work (open issues and merge requests) with the expired milestone to the next milestone, and to apply the ~"missed:x.y" label for the expired milestone. Additionally, the ~"missed-deliverable" label is applied whenever ~"Deliverable" is present.

This is currently implemented as part of our automated triage operations. Additionally, issues with the ~Deliverable label which have a milestone beyond current +1, will have the ~Deliverable label removed.
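A rough sketch of what such a cleanup could look like using the python-gitlab library (illustrative, not the actual triage-ops implementation; the token and milestone titles are placeholders):

```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token="<token>")
project = gl.projects.get("gitlab-org/gitlab")

expired, upcoming = "17.0", "17.1"  # hypothetical milestone titles
next_milestone = project.milestones.list(title=upcoming, get_all=True)[0]

# Move each open issue from the expired milestone and label it as missed.
for issue in project.issues.list(milestone=expired, state="opened", iterator=True):
    labels = issue.labels + [f"missed:{expired}"]
    if "Deliverable" in issue.labels:
        labels.append("missed-deliverable")
    issue.labels = labels
    issue.milestone_id = next_milestone.id
    issue.save()
```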

We keep the milestone open for 3 months after it has expired, based on the release and maintenance policy.

The milestone cleanup is currently applied to the following groups and projects:

Milestone closure is in the remit of the Delivery team. At any point in time a release might need to be created for an active milestone, and once that is no longer the case, the Delivery team closes the milestone.

Milestone cleanup schedule

The milestone cleanup will happen on the milestone due date.

These actions will be applied to open issues:

Milestones are closed when the Delivery team no longer needs to create a backport release for a specific milestone.

Use Group Labels and Group Milestones

When working in GitLab (and in particular, the GitLab.org group), use group labels and group milestones as much as you can. It is easier to plan issues and merge requests at the group level, and it exposes ideas across projects more naturally. If you have a project label, you can promote it to a group label. This will merge all project labels with the same name into the one group label. The same is true for promoting project milestones to group milestones.

Technical debt

We definitely don’t want our technical debt to grow faster than our code base. To prevent this from happening, we should consider not only the impact of the technical debt but also how that impact spreads like a contagion. How big and how fast is this problem going to be over time? Is it likely a bad piece of code will be copy-pasted for a future feature? In the end, the amount of resources available is always less than the amount of technical debt to address.

As we innovate our platform, we will have situations where a strategic decision will be made to incur technical debt in order to preserve a higher feature velocity to meet market and customer demand. This accrual of technical debt presents risks due to the long-term implications for the usability, security, reliability, scalability, accessibility, and/or availability of our product and platform.

Therefore, technical debt may be accrued, but it must be:

  1. triaged,
  2. assigned a priority and severity, and
  3. have a plan to remediate within the assigned SLA for all instances of severities S1 and S2, and
  4. assigned a backlog label, ~"backlog::prospective" if it is expected to be worked on in the next 18 months, ~"backlog::no-commitment" otherwise.

If it is to be deferred by more than 6 months, it should be considered for the product and engineering roadmap. Technical debt issues you wish to be closed must not affect the “*abilities” and must include a justification on why they cannot be remediated within the next 18 months (meaning, there is an action plan on the Engineering roadmap).

To help with the prioritization and decision-making process here, we recommend thinking about contagion as the interest rate of the technical debt. There is a great comment from the internet about it:

You wouldn’t pay off your $50k student loan before first paying off your $5k credit card and it’s because of the high interest rate. The best debt to pay off first is one that has the highest loan payment to recurring payment reduction ratio, i.e. the one that reduces your overall debt payments the most, and that is usually the loan with the highest interest rate.
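A toy sketch of that rule, treating contagion as the interest rate (illustrative numbers only):

```python
# Rank debts by interest rate (how fast they grow), not by current size.
debts = [
    {"name": "credit card", "principal": 5_000, "rate": 0.20},
    {"name": "student loan", "principal": 50_000, "rate": 0.05},
]
for d in sorted(debts, key=lambda d: d["rate"], reverse=True):
    print(f"{d['name']}: {d['rate']:.0%} interest -> pay off sooner")
```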

Technical debt is prioritized like other technical decisions in product groups by product management.

Technical debt which might span groups, or fall in the gaps between them, should be brought up for globally optimized prioritization in retrospectives or directly with the appropriate member of the Product Leadership team. Additional avenues for addressing technical debt outside of product groups are Strategic Priority Codes and working groups.

Deferred UX

Sometimes there is an intentional decision to deviate from the agreed-upon MVC, which sacrifices the user experience. When this occurs, the Product Designer creates a follow-up issue and labels it Deferred UX to address the UX gap in subsequent releases.

For the same reasons as technical debt, we don’t want Deferred UX to grow faster than our code base.

These issues are prioritized like other technical decisions in product groups by product management.

As with technical debt, Deferred UX should be brought up for globally optimized prioritization in retrospectives or directly with the appropriate member of the Product Leadership team.

UI polish

UI polish issues are visual improvements to the existing user interface, touching mainly aesthetic aspects of the UI that are guided by Pajamas foundations. UI polish issues generally capture improvements related to color, typography, iconography, and spacing. We apply the UI polish label to these issues. UI polish issues don’t introduce functionality or behavior changes to a feature.

Examples of UI polish

What is not UI polish

Monitor Merge Request Trends

Open merge requests sometimes become idle (not updated by a human in more than a month). Once a month, engineering managers will receive a “Merge requests requiring attention” triage issue that includes all (non-WIP/Draft) MRs for their group, and use it to determine if any action should be taken (such as nudging the author/reviewer/maintainer). This assists in getting merge requests merged in a reasonable amount of time, which we track with the Open MR Review Time (OMRT) and Open MR Age (OMA) performance indicators.

Open merge requests may also have other properties that indicate that the engineering manager should research them and potentially take action to improve efficiency. One key property is the number of threads, which, when high, may indicate a need to update the plan for the MR or that a synchronous discussion should be considered. Another property is the number of pipelines, which, when high, may indicate a need to revisit the plan for the MR. These metrics are not yet included in an automatically created triage issue.

Security is everyone’s responsibility

Security is our top priority. Our Security Team is raising the bar on security every day to protect users’ data and make GitLab a safe place for everyone to contribute. There are many lines of code, and Security Teams need to scale. That means shifting security left in the Software Development LifeCycle (SDLC). Each team has an Application Security Stable Counterpart who can help you, and you can find more secure development help in the #sec-appsec Slack channel.

Being able to start the security review process earlier in the software development lifecycle means we will catch vulnerabilities earlier, and mitigate identified vulnerabilities before the code is merged. You should know when and how to proactively seek an Application Security Review. You should also be familiar with our Secure Coding Guidelines.

We fix the obvious security issues before every merge and, in doing so, scale the security review process. Our workflow includes a check and validation by the reviewers of every merge request, enabling developers to act on identified vulnerabilities before merging. As part of that process, developers are also encouraged to reach out to the Security Team to discuss the issue at that stage, rather than later on, when mitigating vulnerabilities becomes more expensive. After all, security is everyone’s job. See also our Security Paradigm.

Rapid Engineering Response

From time to time, there are occasions when the engineering team must act quickly in response to urgent issues. This section describes how the engineering team handles certain kinds of such issues.

Scope

Not everything is urgent. See below for a non-exclusive list of things that are in-scope and not in-scope. As always, use your experience and judgment, and communicate with others.

Process
  1. Person requesting Rapid Engineering Response creates an issue supplying all known information and applies priority and severity (or security severity and priority) to the best of their ability.
  2. Person requesting Rapid Engineering Response raises the issue to their own manager and the subject matter domain engineering manager (or their delegate if OOO).
    1. In case a specific group cannot be determined, raise the issue to the Director of Engineering (or their delegate if OOO) of the section.
    2. In case a specific section cannot be determined, raise the issue to the Sr. Director of Development (or their delegate if OOO).
  3. The engineering sponsor (subject matter Manager, Director, and/or Sr. Director) invokes all stakeholders of the subject matter as a rapid response task force to determine the best route of resolution:
    1. Engineering manager(s)
    2. Product Management
    3. QE
    4. UX
    5. Docs
    6. Security
    7. Support
    8. Distribution engineering manager
    9. Delivery engineering manager (Release Management)
  4. Adjust priority and severity or security severity and priority if necessary, and work collaboratively on the determined resolution.
Performance Refinement

A bi-weekly performance refinement session is held jointly by the Development and QE teams to raise awareness and foster wider collaboration on high-impact performance issues. A high-impact issue has a direct, measurable impact on GitLab.com service levels or error budgets.

Scope

The Performance Refinement issue board is reviewed in this refinement exercise.

Process
  1. To participate in the bi-weekly refinement, ask your engineering director to forward the invite for the Performance Refinement meeting, which is held at 15:00 UTC every other Thursday. Here is the meeting agenda.
  2. To nominate issues to the board:
    1. Assign a performance severity on the issue to help assess the priority assignment for the refinement session.
    2. Ensure that the issue clearly explains the problem, the (potential) impact on GitLab.com’s availability, and ideally, clearly defines a proposed solution to the problem.
    3. Use the label bug::performance.
  3. For the issues under the Open column:
    1. An engineering manager will be assigned if either the Milestone or the label workflow::ready for development is missing.
    2. Engineering manager brings assigned issue(s) to the Product Manager for prioritization and planning.
    3. Engineering manager unassigns themselves once the issue is planned for an iteration, i.e. associated with a Milestone and the label workflow::ready for development.
  4. To highlight high-impact issues which need additional discussion, please add an agenda item.
  5. If a guest attendee would be helpful for collaboration, please forward the invite. For example, a CSM or Support Engineer may have information that would be helpful for an upcoming topic.
Infradev

The infradev process is established to identify issues requiring priority attention in support of SaaS availability and reliability. These escalations are intended to be primarily asynchronous, as timely triage and attention are required. In addition to primary management through the issues, any gaps, concerns, or critical triage are handled in the SaaS Availability weekly standup.

Scope

The infradev issue board is the primary focus of this process.

Roles and Responsibilities Infrastructure
  1. Nominate issues by adding Infradev label.
  2. Assess Severity and Priority and apply the corresponding label as appropriate.
  3. Provide as much information as possible to assist development engineering troubleshooting.
Development
  1. Development directors are responsible for triaging Infradev issues regularly by following the triage process below.
  2. Development managers are encouraged to triage issues regularly as well.
  3. Development managers collaborate with their counterpart Product Managers to refine, schedule, and resolve Infradev issues.
  4. Usually, issues are nominated as Infradev issues by SREs or Managers in the Infrastructure Department. Development engineers/managers are not expected to nominate Infradev issues.
    1. However, when it’s necessary to spin off new issues from an existing Infradev issue, development engineers and managers may also apply Infradev label to the new issues.
    2. When development engineers and managers split off new Infradev issues, they must apply Severity and Priority labels to the new issues. The labels should correspond to the importance of the follow-on work.
Product Management
  1. Product Managers perform holistic prioritization of both product roadmap and Infradev issues as one unified backlog.
  2. Product Managers collaborate with their counterpart Development Managers to refine, schedule, and resolve Infradev issues.
Triage Process

(To be completed primarily by Development Engineering Management)

Issues are nominated to the board through the inclusion of the label infradev and will appear on the infradev board.

  1. Review issues in the Open column. Look for issues within your Stage/Group/Category, but also for those which lack a clear assignment or where the assignment may need correction.
  2. Review the severity on the issue to validate appropriate prioritization.
  3. Ensure that the issue clearly explains the problem, the (potential) impact on GitLab.com’s availability, and ideally, clearly defines a proposed solution to the problem.
  4. Assign a Development Manager and a Product Manager to any issue where the Milestone or the label workflow::ready for development is missing.
    1. Development Manager and Product Manager collaborate on the assigned issue(s) for prioritization and planning.
    2. Development Manager and Product Manager unassign themselves once the issue is planned for an iteration, i.e. associated with a Milestone and the label workflow::ready for development.
  5. All issues should be prioritized into the appropriate workflow stage. The intent is to maintain zero Open (un-triaged) items.

Issues with ~infradev ~severity::1 ~priority::1 ~production request labels applied require immediate resolution.

~infradev issues requiring a ~"breaking change" should not exist. If a current ~infradev issue requires a breaking change, then it should be split into two issues. The first issue should be the immediate ~infradev work that can be done under current SLOs. The second issue should be the ~"breaking change" work that needs to be completed at the next major release in accordance with deprecation guidance. Agreement from the development DRI as well as the infrastructure DRI should be documented on the issue.

Infradev issues are also shown in the monthly Error Budget Report.

A Guide to Creating Effective Infradev Issues

Triage of infradev issues is desired to occur asynchronously. The points below will ensure that your infradev issues gain maximum traction.

  1. Use the InfraDev issue template to create the issue on the gitlab-org/gitlab issue tracker.
  2. Clearly state the scope of the problem, and how it affects GitLab SaaS Platforms. Examples could include:
    1. Reliability issues: the problem could cause a widespread outage or degradation on GitLab.com. example
    2. Saturation issues: the problem could lead to increased saturation or latency issues due to resource over-utilization. example
    3. Service-level degradation: the problem is causing our service-level monitoring to degrade, impacting the overall SLA of GitLab.com and potentially leading to SLA violations. example
    4. Unnecessary alerts: the problem does not have a major impact on users, but is leading to extraneous alerts, impacting the ability of SREs to effectively triage incidents due to alerting noise. example
    5. Problems which extend the time to diagnosis of incidents: for example, issues which degrade the observability of GitLab.com, swallow user-impacting errors or logs, etc. These could lead to incidents taking much longer to clear, and impacting availability. example
    6. Deficiencies in our public APIs which lead to customers compensating by generating substantially more traffic to get the required results. example
  3. Quantify the effect of the problem to help ensure that correct prioritization occurs.
    1. Include costs to availability. The Incident Budget Explorer dashboard can help here.
    2. Include the number of times alerts have fired owing to the problem, how much time was spent dealing with the problem, and how many people were involved.
    3. Include screenshots of visualization from Grafana or Kibana.
    4. Always include a permalink to the source of the screenshot so that others can investigate further.
  4. Provide a clear, unambiguous, self-contained solution to the problem. Do not add the infradev label to architectural problems, vague solutions, or requests to investigate an unknown root-cause.
  5. Ensure scope is limited. Each issue should be able to be owned by a single stage group team and should not need to be broken down further. Single task solutions are best.
  6. Ensure a realistic severity is applied: review the availability severity label guidelines and ensure that applied severity matches. Always ensure all issues have a severity, even if you are unsure.
  7. If possible, include ownership labels for more effective triage. The product categories can help determine the appropriate stage group to assign the issue to.
  8. Cross-reference links to Production Incidents, PagerDuty Alerts, Slack Alerts, and Slack Discussions, to help ensure that the team performing the triage has all the available data.
    1. By adding “Related” links on the infradev issue, the Infradev Status Report will display a count of the number of production incidents related to each infradev issue, for easier and clearer prioritization.
  9. Ensure that the issue title is accurate, brief and clear. Change the title over time if you need to keep it accurate.
  10. By adding an infradev label to an issue, you are assuming responsibility and becoming the sponsor/champion of the issue.
  11. Provide a method for validating that the original issue still exists:
    1. Sometimes infradev issues will resolve on their own, or are resolved as a side-effect of an unrelated change.
    2. In the infradev issue description, provide a clear way of checking whether the problem still exists.
    3. Having a way of checking validity can save a great deal of back-and-forth discussion between Infradev Triage participants, including Engineering Managers, Directors, and Product Managers, and make space for other non-resolved issues to get scheduled sooner.
    4. Ideally, provide a link to a Grafana query or an ELK query and clear instructions on how to interpret the results to determine whether the problem is still occurring. Check the “Verification” section in this issue as an example of this.
    5. Alternatively, provide clear instructions on how to recreate or validate the problem.
    6. If an issue has been resolved, use the following process:
      1. Reassign the issue back to the author, or an appropriate owner, requesting that they confirm the resolution, and close the issue if they concur. If not, they should follow up with a note and unassign themselves.

Code reviews are mandatory for every merge request; you should get familiar with and follow our Code Review Guidelines.

HTML: 3.2 | Encoding: UTF-8 | Version: 0.7.4