Showing content from https://www.apollographql.com/docs/react/caching/cache-field-behavior below:

Customizing the behavior of cached fields

You can customize how a particular field in your Apollo Client cache is read and written. To do so, you define a field policy for the field. A field policy can include:

- A read function that specifies what happens when the field's cached value is read
- A merge function that specifies what happens when the field's cached value is written
- An array of key arguments that help the cache avoid storing unnecessary duplicate data

You provide field policies to the constructor of InMemoryCache. Each field policy is defined inside whichever TypePolicy object corresponds to the field's parent type.

The following example defines a field policy for the name field of a Person type:
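
A minimal sketch of such a field policy, assuming the standard `typePolicies` shape accepted by the `InMemoryCache` constructor (the upper-casing transformation is purely illustrative):

```typescript
// Field policies are plain objects; this object would be passed to the
// cache constructor as: new InMemoryCache({ typePolicies })
const typePolicies = {
  Person: {
    fields: {
      name: {
        // read() receives the field's cached value; here we return it
        // upper-cased (illustrative transformation only).
        read(name: string): string {
          return name.toUpperCase();
        },
      },
    },
  },
};
```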

This field policy defines a read function that specifies what the cache returns whenever Person.name is queried.

The read function

If you define a read function for a field, the cache calls that function whenever your client queries for the field. In the query response, the field is populated with the read function's return value, instead of the field's cached value.

Every read function is passed two parameters:

- The first parameter is the field's currently cached value (if one exists).
- The second parameter is an options object that provides access to several properties and helper functions, including any arguments provided for the field.

The following read function returns a default value of UNKNOWN NAME for the name field of a Person type whenever a value isn't available in the cache. If a cached value is available, it's returned unmodified.
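
A sketch of that read function, using a default parameter to cover the case where no value is cached:

```typescript
// The cached value is undefined when the cache has no data for the
// field, so a default parameter supplies the fallback.
const nameFieldPolicy = {
  read(name: string = "UNKNOWN NAME"): string {
    return name;
  },
};
```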

Handling field arguments

If a field accepts arguments, the read function's second parameter includes an args object that contains the values provided for those arguments.

For example, the following read function checks whether the maxLength argument was provided for the name field. If it was provided, the function returns only the first maxLength characters of the person's name. Otherwise, the person's full name is returned.
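
A sketch of that read function; the second parameter is destructured to access the `args` object (which is null when no arguments were provided):

```typescript
const nameWithMaxLength = {
  read(name: string, { args }: { args: { maxLength?: number } | null }): string {
    if (args && typeof args.maxLength === "number") {
      // Return only the first maxLength characters of the name.
      return name.slice(0, args.maxLength);
    }
    // No maxLength argument: return the full name.
    return name;
  },
};
```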

If a field's value is an object with multiple subfields, you can supply a default object that provides a default value for each subfield.

The following read function assigns a default value of UNKNOWN FIRST NAME to the firstName subfield of a fullName field, and UNKNOWN LAST NAME to its lastName subfield.
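
A sketch of that read function, using a default object parameter:

```typescript
// When fullName is missing from the cache, the default object supplies
// default values for both subfields.
const fullNameFieldPolicy = {
  read(
    fullName: { firstName: string; lastName: string } = {
      firstName: "UNKNOWN FIRST NAME",
      lastName: "UNKNOWN LAST NAME",
    },
  ) {
    return fullName;
  },
};
```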

The following query returns the firstName and lastName subfields from the fullName field:

You can define a read function for a field that isn't even defined in your schema. For example, the following read function enables you to query a userId field that is always populated with locally stored data:
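
A sketch of such a local-only field, assuming the ID is stored in the browser's localStorage (the storage key is hypothetical; the fallback keeps the sketch runnable outside a browser):

```typescript
const userIdFieldPolicy = {
  read(): string | null {
    // localStorage exists only in a browser; fall back to null elsewhere
    // (e.g., during server-side rendering or tests).
    const storage = (globalThis as any).localStorage;
    return storage ? storage.getItem("loggedInUserId") : null;
  },
};
```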

Note that to query for a field that is only defined locally, your query should include the @client directive on that field so that Apollo Client doesn't include it in requests to your GraphQL server.

Other use cases for a read function include:

- Transforming cached data to suit your client's needs, such as rounding floating-point values to the nearest integer
- Deriving local-only fields from one or more cached fields, such as deriving an age field from a birthDate field

For a full list of the options provided to the read function, see the API reference. You will almost never need to use all of these options, but each one has an important role when reading fields from the cache.

The merge function

If you define a merge function for a field, the cache calls that function whenever the field is about to be written with an incoming value (such as from your GraphQL server). When the write occurs, the field's new value is set to the merge function's return value, instead of the original incoming value.

Merging arrays

A common use case for a merge function is to define how to write to a field that holds an array. By default, the field's existing array is completely replaced by the incoming array. In many cases, it's preferable to concatenate the two arrays instead, like so:
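
A sketch of a concatenating merge function:

```typescript
const tasksFieldPolicy = {
  // existing defaults to [] because it's undefined the first time this
  // field is written for a given object.
  merge(existing: unknown[] = [], incoming: unknown[]): unknown[] {
    // Return a new array; the existing (frozen) array must not be mutated.
    return [...existing, ...incoming];
  },
};
```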

This pattern is especially common when working with paginated lists.

Note that existing is undefined the very first time this function is called for a given instance of the field, because the cache does not yet contain any data for the field. Providing the existing = [] default parameter is a convenient way to handle this case.

Your merge function cannot push the incoming array directly onto the existing array. It must instead return a new array to prevent potential errors. In development mode, Apollo Client prevents unintended modification of the existing data with Object.freeze.

Merging non-normalized objects

You can use a merge function to intelligently combine nested objects that are not normalized in your cache, assuming those objects are nested within the same normalized parent.

Example

Let's say our graph's schema includes the following types:

With this schema, our cache can normalize Book objects because they have an id field. However, Author objects have no id field, and they also have no other fields that can uniquely identify a particular instance. Therefore, the cache can't normalize Author objects, and it can't tell when two different Author objects actually represent the same author.

Now, let's say our client executes the following two queries, in order:

When the first query returns, Apollo Client writes a Book object like the following to the cache:

Remember that because Author objects can't be normalized, they're nested directly within their parent object.

Now, when the second query returns, the cached Book object is updated to the following:

The Author's name field has been removed! This is because Apollo Client can't be sure that the Author objects returned by the two queries actually refer to the same author. So instead of merging fields of the two objects, Apollo Client completely overwrites the object (and logs a warning).

However, we are confident that these two objects represent the same author, because a book's author virtually never changes. Therefore, we can tell the cache to treat Book.author objects as the same object as long as they belong to the same Book. This enables the cache to merge the name and dateOfBirth fields returned by different queries above.

To achieve this, we can define a custom merge function for the author field within the type policy for Book:
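
A sketch of that field policy; `mergeObjects` is assumed to be the helper Apollo Client provides on the merge function's options object:

```typescript
const typePolicies = {
  Book: {
    fields: {
      author: {
        merge(
          existing: any,
          incoming: any,
          { mergeObjects }: { mergeObjects: (a: any, b: any) => any },
        ) {
          // mergeObjects combines the two objects' fields and also
          // invokes any merge functions defined for subfields, which a
          // plain object spread would skip.
          return mergeObjects(existing, incoming);
        },
      },
    },
  },
};
```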

Here, we use the mergeObjects helper function to merge values from the existing and incoming Author objects. It's important to use mergeObjects here instead of merging the objects with object spread syntax , because mergeObjects makes sure to call any defined merge functions for subfields of Book.author.

Notice that this merge function has zero Book- or Author-specific logic in it! This means you can reuse it for any number of non-normalized object fields. And because this exact merge function definition is so common, you can also define it with the following shorthand:
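
A sketch of the shorthand, which requests the same mergeObjects-based behavior declaratively:

```typescript
// merge: true is shorthand for a merge function that calls
// mergeObjects(existing, incoming).
const typePolicies = {
  Book: {
    fields: {
      author: {
        merge: true,
      },
    },
  },
};
```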

In summary, the Book.author policy above enables the cache to intelligently merge all of the author objects associated with any particular normalized Book object.

Remember that for merge: true to merge two non-normalized objects, all of the following must be true:

If you require behavior that violates any of these rules, you need to write a custom merge function instead of using merge: true.

Merging arrays of non-normalized objects

Make sure you've read Merging arrays and Merging non-normalized objects first.

Consider what happens if a Book can have multiple authors:

The favoriteBook.authors field contains a list of non-normalized Author objects. In this case, we need to define a more sophisticated merge function to make sure the name and language fields returned by the two queries above are correctly associated with each other.

Instead of replacing the existing authors array with the incoming array, this code concatenates the arrays together, while also checking for duplicate author names. Whenever a duplicate name is found, the fields of the repeated Author objects are merged.
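
A sketch of such a merge function; the `readField` and `mergeObjects` helpers are assumed to come from the merge function's options object:

```typescript
type AuthorLike = Record<string, unknown>;
type Options = {
  readField: <T>(fieldName: string, from: AuthorLike) => T | undefined;
  mergeObjects: (a: AuthorLike, b: AuthorLike) => AuthorLike;
};

const authorsFieldPolicy = {
  merge(
    existing: AuthorLike[] | undefined,
    incoming: AuthorLike[],
    { readField, mergeObjects }: Options,
  ): AuthorLike[] {
    const merged = existing ? existing.slice(0) : [];
    // Map each known author name to its position in the merged array.
    const nameToIndex: Record<string, number> = Object.create(null);
    merged.forEach((author, index) => {
      const name = readField<string>("name", author);
      if (name) nameToIndex[name] = index;
    });
    incoming.forEach(author => {
      const name = readField<string>("name", author);
      if (name && name in nameToIndex) {
        // Duplicate name: merge the repeated Author's fields in place.
        const i = nameToIndex[name];
        merged[i] = mergeObjects(merged[i], author);
      } else {
        // First occurrence: append and remember its position.
        if (name) nameToIndex[name] = merged.length;
        merged.push(author);
      }
    });
    return merged;
  },
};
```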

The readField helper function is more robust than using author.name directly, because it tolerates the possibility that the author is a Reference object referring to data elsewhere in the cache. This is important if the Author type eventually defines keyFields and therefore becomes normalized.

As this example suggests, merge functions can become quite sophisticated. When this happens, you can often extract the generic logic into a reusable helper function:
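
One possible shape for such a helper (the name `mergeArrayByField` is illustrative): a factory that, given a key field, returns a merge function deduplicating array entries by that field.

```typescript
type Entity = Record<string, unknown>;
type FieldFunctionOptions = {
  readField: <T>(fieldName: string, from: Entity) => T | undefined;
  mergeObjects: (a: Entity, b: Entity) => Entity;
};

function mergeArrayByField<T extends Entity>(keyFieldName: string) {
  return (
    existing: T[] | undefined,
    incoming: T[],
    { readField, mergeObjects }: FieldFunctionOptions,
  ): T[] => {
    const merged = existing ? existing.slice(0) : [];
    const keyToIndex = new Map<unknown, number>();
    merged.forEach((item, index) => {
      keyToIndex.set(readField(keyFieldName, item), index);
    });
    incoming.forEach(item => {
      const key = readField(keyFieldName, item);
      const index = keyToIndex.get(key);
      if (index !== undefined) {
        // Duplicate key: merge the repeated entry's fields.
        merged[index] = mergeObjects(merged[index], item) as T;
      } else {
        keyToIndex.set(key, merged.length);
        merged.push(item);
      }
    });
    return merged;
  };
}

// Usage within typePolicies (sketch):
// Book: { fields: { authors: { merge: mergeArrayByField("name") } } }
```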

Now that you've hidden the details behind a reusable abstraction, it no longer matters how complicated the implementation gets. This is liberating, because it allows you to improve your client-side business logic over time, while keeping related logic consistent across your entire application.

Defining a merge function at the type level

In Apollo Client 3.3 and later, you can define a default merge function for a non-normalized object type. If you do, every field that returns that type uses your default merge function unless it's overridden on a field-by-field basis.

You define this default merge function in the type policy for the non-normalized type. Here's what that looks like for the non-normalized Author type from Merging non-normalized objects:
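
A sketch of the type-level policy:

```typescript
// With merge: true on the Author type itself, every field that returns
// an Author (Book.author, Essay.author, ...) gets mergeObjects-based
// merging by default. No field-level merge is needed on Book.author.
const typePolicies = {
  Author: {
    merge: true,
  },
};
```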

As shown above, the field-level merge function for Book.author is no longer required. The net result in this basic example is identical, but this strategy automatically applies the default merge function to any other Author-returning fields you might add in the future (such as Essay.author).

Handling pagination

When a field holds an array, it's often useful to paginate that array's results, because the total result set can be arbitrarily large.

Typically, a query includes pagination arguments that specify:

- Where to start in the list, using either a numeric offset or a starting ID
- The maximum number of elements to return in a single "page"

If you implement pagination for a field, it's important to keep pagination arguments in mind if you then implement read and merge functions for the field:
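
A sketch of cooperating merge and read functions for offset/limit pagination (the argument names `offset` and `limit` are the conventional ones; adjust them to your schema):

```typescript
type Options = { args: { offset?: number; limit?: number } | null };

const paginatedFieldPolicy = {
  // merge writes each incoming page into the existing array at the
  // requested offset.
  merge(existing: unknown[] | undefined, incoming: unknown[], { args }: Options): unknown[] {
    const merged = existing ? existing.slice(0) : [];
    const offset = args?.offset ?? 0;
    for (let i = 0; i < incoming.length; ++i) {
      merged[offset + i] = incoming[i];
    }
    return merged;
  },
  // read returns just the page the query asked for, applying the same
  // arguments in the inverse direction.
  read(existing: unknown[] | undefined, { args }: Options): unknown[] | undefined {
    if (!existing) return undefined;
    const offset = args?.offset ?? 0;
    const limit = args?.limit ?? existing.length;
    return existing.slice(offset, offset + limit);
  },
};
```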

As this example shows, your read function often needs to cooperate with your merge function, by handling the same arguments in the inverse direction.

If you want a given "page" to start after a specific entity ID instead of starting from args.offset, you can implement your merge and read functions as follows, using the readField helper function to examine existing task IDs:
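
A sketch of that cursor-based approach, assuming an `afterId` argument and Task objects with an `id` field:

```typescript
type Task = Record<string, unknown>;
type Options = {
  args: { afterId?: string; limit?: number } | null;
  readField: <T>(fieldName: string, from: Task) => T | undefined;
};

// Find the array position just after the task with the given id, or -1
// if the cursor isn't present in the list.
function offsetFromCursor(
  items: Task[],
  cursor: string | undefined,
  readField: Options["readField"],
): number {
  if (cursor === undefined) return 0; // no cursor: start at the beginning
  for (let i = items.length - 1; i >= 0; --i) {
    if (readField<string>("id", items[i]) === cursor) return i + 1;
  }
  return -1;
}

const tasksFieldPolicy = {
  merge(existing: Task[] | undefined, incoming: Task[], { args, readField }: Options): Task[] {
    const merged = existing ? existing.slice(0) : [];
    let offset = offsetFromCursor(merged, args?.afterId, readField);
    // Unknown cursor: assume the incoming page follows the existing data.
    if (offset < 0) offset = merged.length;
    for (let i = 0; i < incoming.length; ++i) {
      merged[offset + i] = incoming[i];
    }
    return merged;
  },
  read(existing: Task[] | undefined, { args, readField }: Options): Task[] | undefined {
    if (!existing) return undefined;
    const offset = offsetFromCursor(existing, args?.afterId, readField);
    if (offset < 0) return undefined; // cursor not cached: trigger a network request
    return existing.slice(offset, offset + (args?.limit ?? existing.length));
  },
};
```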

Note that if you call readField(fieldName), it returns the value of the specified field from the current object. If you pass an object as a second argument to readField, (e.g., readField("id", task)), readField instead reads the specified field from the specified object. In the above example, reading the id field from existing Task objects allows us to deduplicate the incoming task data.

The pagination code above is complicated, but after you implement your preferred pagination strategy, you can reuse it for every field that uses that strategy, regardless of the field's type. For example:
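
One way to package the strategy for reuse (the factory name `afterIdPagination` and the field names are hypothetical; the merge/read bodies are shortened pass-throughs standing in for your real strategy):

```typescript
function afterIdPagination<T>() {
  return {
    // Your real pagination merge and read functions go here; these
    // shortened bodies just keep the sketch self-contained.
    merge(existing: T[] | undefined, incoming: T[]): T[] {
      return [...(existing ?? []), ...incoming];
    },
    read(existing: T[] | undefined): T[] | undefined {
      return existing;
    },
  };
}

// The same strategy reused for fields of different types:
const typePolicies = {
  Agenda: { fields: { tasks: afterIdPagination() } },
  Query: { fields: { volunteers: afterIdPagination() } },
};
```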

Opting in to the default behavior for non-normalized fields

Apollo Client replaces existing data with incoming data for non-normalized fields by default. When this happens, you will encounter console warnings like "Cache data may be lost when...", even when the default behavior is desirable. You can tell Apollo Client you want this behavior by passing merge: false to a field's FieldPolicy. By opting into this behavior, the console warning is no longer emitted:
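
A sketch of the field-level opt-in:

```typescript
// merge: false keeps the default replace-the-existing-data behavior for
// Book.author while silencing the "Cache data may be lost" warning.
const typePolicies = {
  Book: {
    fields: {
      author: {
        merge: false,
      },
    },
  },
};
```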

In some cases, you might want the same behavior for all occurrences of a particular type. To do so, pass merge: false to the type policy for that type like so:
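
A sketch of the same opt-in at the type level, which applies to every field that returns an Author:

```typescript
const typePolicies = {
  Author: {
    merge: false,
  },
};
```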

Specifying key arguments

If a field accepts arguments, you can specify an array of keyArgs in the field's FieldPolicy. This array indicates which arguments are key arguments that affect the field's return value. Specifying this array can help reduce the amount of duplicate data in your cache.

Example

Let's say your schema's Query type includes a monthForNumber field. This field returns the details of a particular month, given a provided number argument (January for 1, and so on). The number argument is a key argument for this field, because its value affects the field's return value:
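
A sketch of the corresponding field policy, declaring number as the field's only key argument:

```typescript
const typePolicies = {
  Query: {
    fields: {
      monthForNumber: {
        // Only the number argument affects this field's value, so it's
        // the only argument used to identify the stored data.
        keyArgs: ["number"],
      },
    },
  },
};
```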

An example of a non-key argument is an access token, which is used to authorize a query but not to calculate its result. If monthForNumber also accepts an accessToken argument, the value of that argument does not affect which month's details are returned.

By default, all of a field's arguments are key arguments. This means that the cache stores a separate value for every unique combination of argument values you provide when querying a particular field.

If you specify a field's key arguments, the cache understands that the rest of that field's arguments aren't key arguments. This means that the cache doesn't need to store a completely separate value when a non-key argument changes.

For example, let's say you execute two different queries with the monthForNumber field, passing the same number argument but different accessToken arguments. In this case, the second query response will overwrite the first, because both invocations use an identical value for the only key argument.

Providing a keyArgs function

If you need more control over a particular field's keyArgs, you can pass a function instead of an array of argument names. This keyArgs function takes two parameters:

- An args object containing all argument values provided for the field
- A context object providing other relevant details, such as the field's name and any query variables
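
A sketch of such a function (the context shape is simplified here; the returned string identifies the field's storage location in the cache):

```typescript
type KeyArgsContext = { fieldName: string };

const monthForNumberPolicy = {
  keyArgs(args: { number?: number } | null, context: KeyArgsContext): string {
    // Key stored values only by the number argument, ignoring any
    // non-key arguments such as an access token.
    return `${context.fieldName}:${args?.number ?? "none"}`;
  },
};
```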

For details, see KeyArgsFunction in the API reference below.

FieldPolicy API reference

Here are the definitions for the FieldPolicy type and its related types:

