Showing content from https://github.com/serilog/serilog/issues/809 below:

Multithreaded Sink · Issue #809 · serilog/serilog · GitHub

We are using Serilog for our REST service, between our logger class and an Azure queue.
We have a custom ILogEventSink to write the messages to our Azure queue.

Each REST service call results in roughly 100 logging calls on average (various things like Tracing, Counting, Auditing, etc.), and that has noticeably slowed the response time of each web service call for us.

If we remove the [final] step in our sink that actually makes the call to write to the Azure queue, a service call completes 5X faster!
So we have concluded that the performance bottleneck for us is having the web service request thread doing the actual writing to the Azure queue in the sink.
We want to offload that I/O bound work to another thread pool thread, and allow the web request thread to get out of logging as soon as possible.

A typical design pattern for this is the Producer/Consumer pattern, where one thread produces data on a shared [thread safe] memory queue (i.e. BlockingCollection<T>, or BufferBlock<T>), and another thread consumes that data and does work with it.
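As a minimal sketch of that pattern (names and the `Console.WriteLine` stand-in for the slow Azure write are ours, not from any library):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Minimal producer/consumer sketch with BlockingCollection<T>.
// Console.WriteLine stands in for the expensive I/O-bound work.
class ProducerConsumer
{
    static void Main()
    {
        using var queue = new BlockingCollection<string>(boundedCapacity: 1000);

        // Consumer: one background task drains the queue and does the slow work.
        var consumer = Task.Run(() =>
        {
            foreach (var item in queue.GetConsumingEnumerable())
                Console.WriteLine($"wrote: {item}");
        });

        // Producer: the request thread only enqueues, which is cheap.
        for (var i = 0; i < 3; i++)
            queue.Add($"event {i}");

        queue.CompleteAdding(); // no more items; lets the consumer loop exit
        consumer.Wait();        // flush everything before shutdown
    }
}
```

With a single consumer task, events are also delivered in the order they were produced.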

Ideally, with Serilog, this producer/consumer could be implemented as a delegating ILogEventSink configured between the Serilog LoggerConfiguration (producer) and another sink that does the hard (consumer) work (in our case, our Azure sink).
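A rough sketch of such a delegating sink, under our assumptions: `BackgroundSink` and `SlowSink` are hypothetical names, and the real Serilog `ILogEventSink.Emit` takes a `LogEvent`, not a string — a narrowed stand-in interface is declared here so the sketch compiles on its own without the Serilog package.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Stand-in for Serilog.Core.ILogEventSink, narrowed to string
// so this sketch is self-contained.
public interface ILogEventSink
{
    void Emit(string logEvent);
}

// Delegating sink: Emit() only enqueues; one worker task forwards events
// to the wrapped (slow) sink. Dispose() drains the queue before shutdown.
public sealed class BackgroundSink : ILogEventSink, IDisposable
{
    private readonly BlockingCollection<string> _queue;
    private readonly Task _worker;

    public BackgroundSink(ILogEventSink inner, int capacity = 10000)
    {
        _queue = new BlockingCollection<string>(capacity);
        _worker = Task.Run(() =>
        {
            foreach (var evt in _queue.GetConsumingEnumerable())
            {
                try { inner.Emit(evt); }
                catch { /* a failing sink must not kill the worker */ }
            }
        });
    }

    // Runs on the request thread: cheap and non-blocking
    // (drops the event if the queue is full).
    public void Emit(string logEvent) => _queue.TryAdd(logEvent);

    public void Dispose()
    {
        _queue.CompleteAdding();
        _worker.Wait();
    }
}

// Demo: a deliberately slow "Azure" sink behind the background wrapper.
public sealed class SlowSink : ILogEventSink
{
    public int Written;
    public void Emit(string logEvent) { Task.Delay(5).Wait(); Written++; }
}

class Demo
{
    static void Main()
    {
        var slow = new SlowSink();
        using (var background = new BackgroundSink(slow))
            for (var i = 0; i < 5; i++)
                background.Emit($"event {i}"); // returns immediately

        Console.WriteLine(slow.Written); // all events flushed by Dispose
    }
}
```

For what it's worth, the Serilog.Sinks.Async package wraps a sink in much the same way via `WriteTo.Async(...)`.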

Does Serilog accommodate anything like this?

Basically, we just don't want the thread that calls ILogger.* to also be the thread that runs our ILogEventSink.

What would be the Serilog way to solve this problem?

