Unverified commit 90fc6be7 by Bryan Boreham, committed by GitHub

Default to bigger remote_write sends (#5267)

* Default to bigger remote_write sends

Raise the default MaxSamplesPerSend to amortise the cost of remote
calls across more samples. Lower MaxShards to keep the expected max
memory usage within reason.
Signed-off-by: Bryan Boreham <bryan@weave.works>

* Change default Capacity to 2500

To maintain ratio with MaxSamplesPerSend
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
parent f0f8e505
Pipeline #66418 passed with stages in 6 minutes 46 seconds
@@ -104,16 +104,16 @@ var (
 	// DefaultQueueConfig is the default remote queue configuration.
 	DefaultQueueConfig = QueueConfig{
-		// With a maximum of 1000 shards, assuming an average of 100ms remote write
-		// time and 100 samples per batch, we will be able to push 1M samples/s.
-		MaxShards:         1000,
+		// With a maximum of 200 shards, assuming an average of 100ms remote write
+		// time and 500 samples per batch, we will be able to push 1M samples/s.
+		MaxShards:         200,
 		MinShards:         1,
-		MaxSamplesPerSend: 100,
+		MaxSamplesPerSend: 500,
-		// Each shard will have a max of 500 samples pending in it's channel, plus the pending
-		// samples that have been enqueued. Theoretically we should only ever have about 600 samples
-		// per shard pending. At 1000 shards that's 600k.
-		Capacity:          500,
+		// Each shard will have a max of 2500 samples pending in its channel, plus the pending
+		// samples that have been enqueued. Theoretically we should only ever have about 3000 samples
+		// per shard pending. At 200 shards that's 600k.
+		Capacity:          2500,
 		BatchSendDeadline: model.Duration(5 * time.Second),
 		// Backoff times for retrying a batch of samples on recoverable errors.