Distribute cache concurrency issue · Issue #83 · stefanprodan/AspNetCoreRateLimit

Hi @stefanprodan,
When deploying many API services behind a load balancer (using a distributed cache to store the counter), every instance has to fetch the counter and use an in-process lock to increment the count, as in the code below:

    using (await AsyncLock.WriterLockAsync(counterId).ConfigureAwait(false))
    {
        var entry = await _counterStore.GetAsync(counterId, cancellationToken);

        if (entry.HasValue)
        {
            // entry has not expired
            if (entry.Value.Timestamp + rule.PeriodTimespan.Value >= DateTime.UtcNow)
            {
                // increment request count
                var totalCount = entry.Value.Count + _config.RateIncrementer?.Invoke() ?? 1;

                // deep copy
                counter = new RateLimitCounter
                {
                    Timestamp = entry.Value.Timestamp,
                    Count = totalCount
                };
            }
        }

        // stores: id (string) - timestamp (datetime) - total_requests (long)
        await _counterStore.SetAsync(counterId, counter, rule.PeriodTimespan.Value, cancellationToken);
    }

I think this has a concurrency issue: the counter will not work correctly when many requests arrive at once. Each instance reads the counter, increments it on its own, and then writes it back to the cache store, so the last write overwrites the earlier ones and requests are lost from the count.
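For comparison, one common way to avoid this lost-update problem is to let the distributed store perform the increment atomically instead of doing read-modify-write in each instance. The sketch below is only an illustration, assuming Redis via StackExchange.Redis; AtomicRedisCounterStore and its method names are hypothetical and not part of this library:

    using System;
    using System.Threading.Tasks;
    using StackExchange.Redis;

    public class AtomicRedisCounterStore
    {
        private readonly IDatabase _redis;

        public AtomicRedisCounterStore(IConnectionMultiplexer connection)
        {
            _redis = connection.GetDatabase();
        }

        // Atomically increments the counter in Redis and starts the expiry window
        // on the first request of the period. Returns the total count so far.
        public async Task<long> IncrementAsync(string counterId, TimeSpan period)
        {
            long count = await _redis.StringIncrementAsync(counterId);

            // First request in this period: set the key to expire when the period ends.
            // (A Lua script could combine INCR and EXPIRE into a single atomic step.)
            if (count == 1)
            {
                await _redis.KeyExpireAsync(counterId, period);
            }

            return count;
        }
    }

Because INCR is atomic on the server, concurrent instances cannot overwrite each other's counts, and no cross-instance lock is needed.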

Why not use the Token Bucket algorithm to implement this feature?
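For reference, the token bucket idea looks roughly like the sketch below. This is only an illustrative in-process version with made-up names and parameters; a distributed variant would keep the token count and last-refill timestamp in the shared cache and update them atomically:

    using System;

    public class TokenBucket
    {
        private readonly double _capacity;      // maximum tokens the bucket can hold
        private readonly double _refillPerSec;  // tokens added back per second
        private double _tokens;
        private DateTime _lastRefill;
        private readonly object _sync = new object();

        public TokenBucket(double capacity, double refillPerSec)
        {
            _capacity = capacity;
            _refillPerSec = refillPerSec;
            _tokens = capacity;
            _lastRefill = DateTime.UtcNow;
        }

        // Returns true if the request may proceed, false if it should be rate limited.
        public bool TryConsume(double tokens = 1)
        {
            lock (_sync)
            {
                // Refill tokens based on the time elapsed since the last refill.
                var now = DateTime.UtcNow;
                var elapsed = (now - _lastRefill).TotalSeconds;
                _tokens = Math.Min(_capacity, _tokens + elapsed * _refillPerSec);
                _lastRefill = now;

                if (_tokens < tokens)
                {
                    return false;
                }

                _tokens -= tokens;
                return true;
            }
        }
    }

The appeal is that the bucket state is just two numbers (tokens and last refill time), which is cheap to keep in a distributed cache and can absorb short bursts while still enforcing an average rate.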

