
Best practices for data layout and access patterns

Certain patterns of data access, and especially data updates, can exacerbate strain on the database. Avoid them if possible.

This document lists some patterns to avoid, with recommendations for alternatives.

High-frequency updates, especially to the same row

Avoid single database rows that are updated by many transactions at the same time.

This pattern often happens when an aggregate is too expensive to compute for each request, so a running tally is kept in the database. If you need such an aggregate, consider keeping a running total in a single row, plus a small working set of recently added data, such as individual increments. New data is inserted into the working set, reads combine the running total with a live aggregate of the working set, and a periodic job folds the working set into the total and clears it in a single transaction.
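
As a minimal sketch of this pattern, assuming a hypothetical counter keyed by counter_id (the tables counter_totals and counter_increments and the column delta are illustrative, not from the GitLab schema):

```sql
-- Running total: one row per counter.
CREATE TABLE counter_totals (
  counter_id bigint PRIMARY KEY,
  total      bigint NOT NULL DEFAULT 0
);

-- Working set: append-only recent increments.
CREATE TABLE counter_increments (
  id         bigserial PRIMARY KEY,
  counter_id bigint NOT NULL,
  delta      bigint NOT NULL
);

-- Hot path: an INSERT instead of an UPDATE, so concurrent
-- transactions never contend on a single row.
INSERT INTO counter_increments (counter_id, delta) VALUES (42, 1);

-- Read path: combine the stored total with a live aggregate
-- of the working set.
SELECT t.total + COALESCE(SUM(i.delta), 0) AS current_total
FROM counter_totals t
LEFT JOIN counter_increments i ON i.counter_id = t.counter_id
WHERE t.counter_id = 42
GROUP BY t.total;

-- Periodic job: fold the working set into the running total and
-- clear it atomically, using a data-modifying CTE.
WITH consumed AS (
  DELETE FROM counter_increments
  WHERE counter_id = 42
  RETURNING delta
)
UPDATE counter_totals
SET total = total + (SELECT COALESCE(SUM(delta), 0) FROM consumed)
WHERE counter_id = 42;
```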

Wide tables

PostgreSQL organizes rows into 8 KB pages, and operates on one page at a time. Minimizing the width of rows in a table lets more rows fit per page, so queries read fewer pages for the same number of rows and the buffer cache holds more useful data.

Mitigating wide tables is one part of the database team’s 100 GB table initiative, as wider tables can fit fewer rows in 100 GB.
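
To gauge which tables are widest, one option is a rough bytes-per-row estimate from the system catalogs. This is a heuristic sketch (it includes page and tuple overhead, and reltuples is a planner estimate, so run ANALYZE first):

```sql
-- Approximate bytes per row for the largest tables in the public schema.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS table_size,
       reltuples::bigint                     AS est_rows,
       CASE WHEN reltuples > 0
            THEN (pg_relation_size(oid) / reltuples)::int
       END                                   AS est_bytes_per_row
FROM pg_class
WHERE relkind = 'r'
  AND relnamespace = 'public'::regnamespace
ORDER BY pg_relation_size(oid) DESC
LIMIT 10;
```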

When adding columns to a table, consider whether you intend to access the data in the new columns on its own, in a one-to-one relationship with the other columns of the table. If so, the new columns could be a good candidate for splitting out into a new table.
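
A minimal sketch of such a split, with hypothetical column names: keying the new table by the parent's primary key enforces the one-to-one relationship at the schema level.

```sql
-- Hypothetical extraction of rarely accessed columns from a wide
-- `projects` table into a one-to-one side table.
CREATE TABLE project_settings (
  project_id    bigint PRIMARY KEY
                REFERENCES projects (id) ON DELETE CASCADE,
  build_timeout integer,
  merge_method  smallint
);
```

Because project_id is both the primary key and the foreign key, each project can have at most one settings row, and queries that only touch the wide table never pay for these columns.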

Several tables have already been split in this way. For example, user_details was split out of users, and project_settings out of projects.

Data model trade-offs

Certain tables, like users, namespaces, and projects, can get very wide. These tables are usually central to the application, and used very often.

Why is this a problem?

Because PostgreSQL's MVCC writes a complete new row version on every update, updating even a single column of a wide row rewrites the whole row. On wide, heavily used tables this multiplies WAL traffic, table and index bloat, and cache pressure, exactly where the application can least afford it.

A possible solution to this problem is to keep only the most important columns on the main table, and extract the rest into separate tables with a one-to-one relationship to the main table. Good candidates are columns that are either very frequently updated, for example last_activity_at, or columns that are rarely updated or used, like activation tokens.

The trade-off that comes with such an extraction is that index-only scans covering the extracted columns are no longer possible. Instead, the application must either join to the new table or execute an additional query. The performance impact of this should be weighed against the benefits of the vertical table split.
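
A hedged illustration of the trade-off, with hypothetical index and table names: before the split, a covering index can answer the query alone; after the split, the same read needs a join or a second query.

```sql
-- Before the split: a covering index allows an index-only scan,
-- so the heap is never touched for this query.
CREATE INDEX index_users_on_id_and_last_activity_at
  ON users (id, last_activity_at);

EXPLAIN SELECT last_activity_at FROM users WHERE id = 1;
-- Expected plan shape:
--   Index Only Scan using index_users_on_id_and_last_activity_at on users

-- After extracting the column into a hypothetical user_activity
-- table, the same lookup requires a join or an extra round trip.
SELECT a.last_activity_at
FROM users u
JOIN user_activity a ON a.user_id = u.id
WHERE u.id = 1;
```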

The PostgresFM podcast has a very good episode on this subject, in which @NikolayS of PostgresAI and @michristofides of PgMustard discuss it in more depth: https://postgres.fm/episodes/data-model-trade-offs.

Example

Let's look at the users table, which as of the time of writing has 75 columns. A few groups of columns match the above criteria and are good candidates for extraction.

Let’s focus on users.incoming_email_token: every user on GitLab.com has one set, and the token is rarely updated.

To extract it from users into a new table, we have to do the following:

  1. Release M
  2. Release N
  3. Release N + 1
  4. Release N + 2
  5. Release N + 3

While this is a lengthy process, it's needed to perform the extraction without disrupting the application. Once it's completed, the original column and its related index no longer exist on the users table, which results in improved performance.
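
As a condensed sketch of what these releases might contain (table and index names are illustrative, and in practice each step ships as its own migration):

```sql
-- Release M: create the new table alongside the old column.
CREATE TABLE user_email_tokens (
  user_id              bigint PRIMARY KEY
                       REFERENCES users (id) ON DELETE CASCADE,
  incoming_email_token text
);

-- Release N: the application writes to both places while a batched
-- background migration backfills existing rows.
INSERT INTO user_email_tokens (user_id, incoming_email_token)
SELECT id, incoming_email_token
FROM users
WHERE id BETWEEN 1 AND 10000  -- one batch of many
ON CONFLICT (user_id) DO NOTHING;

-- Release N + 1: reads switch to the new table.
-- Release N + 2: writes to the old column stop.
-- Release N + 3: the old column and its index are dropped.
DROP INDEX IF EXISTS index_users_on_incoming_email_token;
ALTER TABLE users DROP COLUMN incoming_email_token;
```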

