This project is mirrored from https://github.com/prometheus/prometheus.git.
  1. 06 May, 2020 1 commit
    • M-map full chunks of Head from disk (#6679) · d4b9fe80
      When appending to the head, once a chunk is full it is flushed to disk and m-mapped (memory-mapped) to free up memory.
      
      Prometheus startup now happens in these stages (a sketch of the first stage follows the list):
      - Iterate the m-mapped chunks from disk and build a map from each series reference to its slice of m-mapped chunks.
      - Iterate the WAL as usual. Whenever a new series is created, look up its m-mapped chunks in the map built in the first stage and attach them to that series.
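
      A minimal Go sketch of the first stage; `mmappedChunk` and the callback-based iterator are hypothetical stand-ins, not the tsdb package's actual types:

      ```go
      package headsketch

      // mmappedChunk is a hypothetical stand-in for the per-chunk metadata
      // the head keeps in memory after this change.
      type mmappedChunk struct {
          ref              uint64 // chunk position, encoded as file index + in-file offset
          minTime, maxTime int64
      }

      // buildSeriesChunkMap performs the first startup stage: iterate every
      // m-mapped chunk on disk once and group the chunks by the series
      // reference stored in each chunk's header. iterate is a hypothetical
      // iterator over all chunks in the head chunk files.
      func buildSeriesChunkMap(iterate func(yield func(seriesRef uint64, c mmappedChunk))) map[uint64][]mmappedChunk {
          bySeries := make(map[uint64][]mmappedChunk)
          iterate(func(seriesRef uint64, c mmappedChunk) {
              // Chunks are written in time order, so appending keeps each
              // per-series slice sorted; WAL replay later attaches these
              // slices to the series it creates.
              bySeries[seriesRef] = append(bySeries[seriesRef], c)
          })
          return bySeries
      }
      ```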
      
      If a head chunk is corrupted, the corrupted chunk and all chunks after it are deleted, and the data after the corruption point is recovered from the existing WAL. This means that a corruption in the m-mapped files results in NO data loss.
      
      [M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that an m-mapped chunk also carries its series reference, because there is no index mapping series to these chunks.
      [The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed via the index, which stores the offsets of the chunks in the chunk files - for example, the chunks of a given series might sit at offsets 200, 500, etc. in the chunk files.
      For m-mapped chunks, the offsets are kept in memory and accessed from there. During WAL replay these offsets are restored by iterating all m-mapped chunks as stated above, matching the series ID present in each chunk's header with that chunk's offset in its file. A decoding sketch follows.
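
      A decoding sketch for a single head chunk record, following the layout described in head_chunks.md (series reference, min time, max time, encoding byte, uvarint data length, data, CRC32). This is an illustration under that assumption, not the tsdb package's actual decoder:

      ```go
      package headsketch

      import (
          "encoding/binary"
          "errors"
          "hash/crc32"
      )

      type headChunk struct {
          SeriesRef        uint64
          MinTime, MaxTime int64
          Encoding         byte
          Data             []byte
      }

      // decodeHeadChunk parses one record from a head chunk file and returns
      // the chunk plus the number of bytes consumed.
      func decodeHeadChunk(b []byte) (headChunk, int, error) {
          var c headChunk
          if len(b) < 8+8+8+1 {
              return c, 0, errors.New("record too short")
          }
          c.SeriesRef = binary.BigEndian.Uint64(b[0:8]) // series ref lives in the chunk header
          c.MinTime = int64(binary.BigEndian.Uint64(b[8:16]))
          c.MaxTime = int64(binary.BigEndian.Uint64(b[16:24]))
          c.Encoding = b[24]
          dataLen, n := binary.Uvarint(b[25:])
          if n <= 0 {
              return c, 0, errors.New("invalid data length")
          }
          off := 25 + n
          end := off + int(dataLen)
          if len(b) < end+4 {
              return c, 0, errors.New("record truncated")
          }
          c.Data = b[off:end]
          // Trailing CRC32 (assumed here to cover the whole record; see the
          // format doc for the authoritative definition). A mismatch is what
          // flags corruption and triggers the delete-and-replay recovery.
          want := binary.BigEndian.Uint32(b[end : end+4])
          if crc32.Checksum(b[:end], crc32.MakeTable(crc32.Castagnoli)) != want {
              return c, 0, errors.New("checksum mismatch")
          }
          return c, end + 4, nil
      }
      ```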
      
      **Prombench results**
      
      _WAL Replay_
      
      1h WAL replay time: ~20% less (4m31s vs 3m36s)
      2h WAL replay time: ~15% less (8m16s vs 7m0s)
      
      _Memory During WAL Replay_
      
      High churn:
      10-15% less RAM: 32GB vs 28GB
      20% less RAM after compaction: 34GB vs 27GB
      No churn:
      20-30% less RAM: 23GB vs 18GB
      40% less RAM after compaction: 32.5GB vs 20GB
      
      Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)
      Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
  2. 28 Jan, 2020 1 commit
    • Remove MaxConcurrent from the PromQL engine opts (#6712) · 9adad8ad
      Since d992c36b we use the ActiveQueryTracker to check for concurrency, so it no longer makes sense to keep the MaxConcurrent value as an option of the PromQL engine.
      
      This pull request removes it from the PromQL engine options, sets the max-concurrent metric to -1 if there is no active query tracker, and uses the value from the active query tracker otherwise.
      
      It removes dead code, and it will also inform people who import the promql package that we made this change, as it breaks the EngineOpts struct. A migration sketch follows.
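
      A minimal before/after sketch of what this means for callers; the exact field names and the NewActiveQueryTracker signature are assumptions based on the promql package around this release:

      ```go
      package main

      import (
          "time"

          "github.com/go-kit/kit/log"
          "github.com/prometheus/prometheus/promql"
      )

      func main() {
          logger := log.NewNopLogger()

          // Before: opts := promql.EngineOpts{MaxConcurrent: 20, ...}
          // After: the ActiveQueryTracker is the one place that bounds
          // concurrency (it also logs active queries for crash analysis).
          opts := promql.EngineOpts{
              Logger:     logger,
              MaxSamples: 50000000,
              Timeout:    2 * time.Minute,
              // Tracks queries in a file under the given directory and caps
              // concurrent execution at 20; without a tracker, the engine's
              // max-concurrent metric is now reported as -1.
              ActiveQueryTracker: promql.NewActiveQueryTracker("/tmp/data", 20, logger),
          }
          _ = promql.NewEngine(opts)
      }
      ```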
      Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
  3. 12 Dec, 2019 1 commit
    • *: avoid missed Alertmanager targets (#6455) · cccd5428
      This change makes sure that nearly-identical Alertmanager configurations
      aren't merged together.
      
      The config's identifier was the MD5 hash of the configuration serialized to JSON, but because `relabel.Regexp` has no public fields and doesn't implement the `json.Marshaler` interface, it was always serialized to "{}". The sketch below reproduces the collision.
      
      In practice, the identifier can be based on the index of the
      configuration in the list.
      Signed-off-by: Simon Pasquier <spasquie@redhat.com>
  4. 30 Nov, 2019 1 commit
    • Add time units to storage.tsdb.retention.size flag (#6365) · 0ea3a221
      In an effort to reduce confusion around the `m` unit accepted by `ParseDuration()`, this commit adds the available time units to the help text of the `storage.tsdb.retention.time` flag, to make clear that there is no unit for months (which `m` could be assumed to mean).
      
      If someone were looking to set the retention to six months, they might mistakenly pass `6m`, which would reduce their retention to six minutes. The sketch below demonstrates the pitfall.
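
      A small demonstration of the pitfall, using the duration parser from github.com/prometheus/common/model that Prometheus flags rely on (its supported units: y, w, d, h, m, s, ms; no unit for months):

      ```go
      package main

      import (
          "fmt"

          "github.com/prometheus/common/model"
      )

      func main() {
          d, err := model.ParseDuration("6m")
          if err != nil {
              panic(err)
          }
          // Prints "6m": six *minutes*, not six months.
          fmt.Println(d)

          // There is no month unit; roughly six months must be spelled in
          // weeks or days instead, e.g. 26 weeks.
          d2, _ := model.ParseDuration("26w")
          fmt.Println(d2)
      }
      ```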
      Signed-off-by: Brooks Swinnerton <bswinnerton@gmail.com>