maxlag is a parameter that can be added to any Action API query. When set, the request asks the server whether it is too busy; if it is, the API returns an error instead of processing the request. The error message includes the number of seconds of lag, which can be used to decide how long to wait before retrying the query. The idea is to pause bot operations during times of high server load, giving human users priority. The parameter is optional but recommended, and many major bot frameworks, such as Pywikibot, implement it by default.
If you are running MediaWiki on a replicated database cluster (like Wikimedia is), then high operation rates (especially edits) may cause the replica servers to lag. One way to mitigate replication lag is to have all bots and maintenance tasks automatically stop whenever lag goes above a certain value. In MediaWiki 1.10, the maxlag parameter was introduced, which allows the same thing to be done in client-side scripts. Since 1.27, it only applies to api.php requests.
The maxlag parameter can be passed to api.php through a URL parameter or POST data. It is an integer number of seconds. For example, https://www.mediawiki.org/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=1 shows metadata about the page "MediaWiki" unless the lag is greater than 1 second, while the same query with maxlag=-1 at the end shows you the actual lag without the metadata.
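For instance, here is a minimal sketch of the same query in Python with the requests library (the use of requests and the printed output are illustrative choices, not something the API mandates):

import requests

# Ask for metadata about the page "MediaWiki", but only if the
# replicas are lagged by at most 1 second.
response = requests.get(
    "https://www.mediawiki.org/w/api.php",
    params={
        "action": "query",
        "titles": "MediaWiki",
        "format": "json",
        "maxlag": 1,  # integer number of seconds
    },
)
print(response.json())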
If the specified lag is exceeded at the time of the request, the API returns an error (with a 200 status code; see T33156) like the following:
{ "error": { "code": "maxlag", "info": "Waiting for $host: $lag seconds lagged", "host": $host, "lag": $lag, "*": "See https://www.mediawiki.org/w/api.php for API usage" } }
The following HTTP headers are set:

- Retry-After: a recommended minimum number of seconds that the client should wait before retrying
- X-Database-Lag: the number of seconds of lag of the most lagged replica
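A minimal sketch of checking for this error and honouring these headers in Python (continuing the requests example above; the 5-second fallback used when Retry-After is missing is an assumption, not part of the API):

import time
import requests

response = requests.get(
    "https://www.mediawiki.org/w/api.php",
    params={"action": "query", "titles": "MediaWiki",
            "format": "json", "maxlag": 5},
)
data = response.json()
if data.get("error", {}).get("code") == "maxlag":
    # Retry-After is the recommended minimum wait before retrying;
    # X-Database-Lag reports the lag of the most lagged replica.
    wait = int(response.headers.get("Retry-After", 5))
    lag = response.headers.get("X-Database-Lag")
    time.sleep(wait)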
Recommended usage for Wikimedia wikis is as follows:

- Use maxlag=5 (5 seconds). This is an appropriate non-aggressive value, set as the default value on Pywikibot. Higher values mean more aggressive behaviour; lower values are nicer.
- Interactive tasks (where a user is waiting for the result) may omit the maxlag parameter. Noninteractive tasks should always use it. See also API:Etiquette#The maxlag parameter.

Note that the caching layer (Varnish or Squid) may also generate error messages with a 503 status code, due to timeout of an upstream server. Clients should treat these errors differently, because they may occur consistently when you try to perform a long-running expensive operation. Repeating the operation on timeout would use excessive server resources and may leave your client in an infinite loop. You can distinguish between cache-layer errors and MediaWiki lag conditions using any of the following (a combined sketch follows the list):

- The X-Database-Lag header is distinctive to replication lag errors in MediaWiki.
- Retry-After is absent in Varnish errors.
- The X-Squid-Error header should be present in Squid errors.
- MediaWiki lag errors match the regex /Waiting for [^ ]*: [0-9.-]+ seconds? lagged/.
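Putting the recommendations together, here is a hedged sketch of a polite retry loop in Python (the function name, the retry cap, and the use of requests are illustrative assumptions):

import time
import requests

def polite_api_get(params, url="https://www.mediawiki.org/w/api.php",
                   max_retries=5):
    # Noninteractive tasks should always send maxlag; 5 is the
    # recommended non-aggressive value.
    params = dict(params, format="json", maxlag=5)
    for _ in range(max_retries):
        response = requests.get(url, params=params)
        # A 503 without X-Database-Lag is a cache-layer (Varnish/Squid)
        # error; retrying it could waste server resources, so fail fast.
        if response.status_code == 503 and "X-Database-Lag" not in response.headers:
            response.raise_for_status()
        data = response.json()
        if data.get("error", {}).get("code") == "maxlag":
            # Replicas are lagged: wait at least 5 seconds (or whatever
            # Retry-After recommends) instead of busy-looping.
            time.sleep(max(5, int(response.headers.get("Retry-After", 5))))
            continue
        return data
    raise RuntimeError("Giving up after repeated maxlag errors")

Interactive requests, where a user is waiting for the result, can skip such a wrapper and omit maxlag entirely, as recommended above.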
For testing purposes, you may intentionally make the software refuse a request by passing a negative value, such as in the following URL: //www.mediawiki.org/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1.