This project is mirrored from https://github.com/prometheus/prometheus.git.
  1. 21 Sep, 2020 1 commit
  2. 25 Aug, 2020 1 commit
  3. 20 Aug, 2020 1 commit
  4. 12 Aug, 2020 1 commit
  5. 06 Aug, 2020 1 commit
  6. 30 Jul, 2020 2 commits
  7. 24 Jul, 2020 2 commits
  8. 23 Jul, 2020 1 commit
  9. 22 Jul, 2020 1 commit
    • promql: Removed global and add ability to have better interval for subqueries if… · a0df8a38
      promql: Removed global and add ability to have better interval for subqueries if not specified (#7628)
      
      * promql: Removed global and add ability to have better interval for subqueries if not specified
      
      ## Changes
      * Refactored tests for better hints testing
      * Added various TODOs in places that could be enhanced later.
      * Moved DefaultEvalInterval global to opts with func(rangeMillis int64) int64 function instead
      
      Motivation: At Thanos we would love to have better control over the subquery step/interval.
      This is important for choosing a proper resolution. I think having a proper step also does not
      harm Prometheus and remote-read users. Especially on a stateless querier we do not know the
      evaluation interval, and in fact assuming a global can even be wrong for Prometheus.
      
      I think ideally we could try to have at least 3 samples within the range, the same
      way the Prometheus UI and Grafana assume.
      
      Anyway, this interface allows deciding on a per-PromQL-user basis.
      
      Open question: Is taking parent interval a smart move?
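      A sketch of what such a func(rangeMillis int64) int64 option could look like, including the "at least 3 samples within the range" heuristic mentioned above. The names and exact behavior here are hypothetical illustrations, not the merged Prometheus API:

```go
package main

import "fmt"

// subqueryIntervalFn is the shape of the hook the commit describes: given a
// subquery range in milliseconds, return the evaluation interval to use when
// the query does not specify one. Name is illustrative, not the real API.
type subqueryIntervalFn func(rangeMillis int64) int64

// atLeastThreeSamples picks an interval yielding at least 3 samples within
// the range, similar to what the Prometheus UI and Grafana assume, with a
// floor of 1s so tiny ranges do not produce a zero interval.
func atLeastThreeSamples(rangeMillis int64) int64 {
	interval := rangeMillis / 3
	if interval < 1000 {
		interval = 1000
	}
	return interval
}

func main() {
	var fn subqueryIntervalFn = atLeastThreeSamples
	fmt.Println(fn(300000)) // a 5m subquery range gets a 100s interval
}
```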
      
      Motivation for removing the global: I spent an hour fighting with:
      
      
      === RUN   TestEvaluations
          TestEvaluations: promql_test.go:31: unexpected error: error evaluating query "absent_over_time(rate(nonexistant[5m])[5m:])" (line 687): unexpected error: runtime error: integer divide by zero
      --- FAIL: TestEvaluations (0.32s)
      FAIL
      
      In the end I found that this fails on most versions, including this master, if you run the test alone. If run together with many
      other tests it passes. This is due to SetDefaultEvaluationInterval(1 * time.Minute)
      in a test that runs before TestEvaluations. Thanks to globals (:
      
      Let's fix it by dropping this global.
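      The failure above is the classic unset-global hazard: unless some earlier test happens to set the default interval, it stays at its zero value and a later division by it panics. A minimal illustration with hypothetical names:

```go
package main

import "fmt"

// defaultEvalIntervalMs mimics the package-level global the commit removes.
// It stays 0 unless something like SetDefaultEvaluationInterval runs first.
var defaultEvalIntervalMs int64

// stepsInRange divides a range by the default interval; without the guard it
// would panic with "integer divide by zero" whenever the global was never
// set, which is exactly what happens when the test runs alone.
func stepsInRange(rangeMs int64) (int64, error) {
	if defaultEvalIntervalMs == 0 {
		return 0, fmt.Errorf("default evaluation interval not set")
	}
	return rangeMs / defaultEvalIntervalMs, nil
}

func main() {
	if _, err := stepsInRange(300000); err != nil {
		fmt.Println("test run alone:", err)
	}
	// What an unrelated earlier test effectively does via
	// SetDefaultEvaluationInterval(1 * time.Minute):
	defaultEvalIntervalMs = 60000
	n, _ := stepsInRange(300000)
	fmt.Println("after another test mutated the global:", n)
}
```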
      Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
      
      * Added issue links for TODOs.
      Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
      
      * Removed irrelevant changes.
      Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
      Bartlomiej Plotka authored
  10. 21 Jul, 2020 1 commit
  11. 16 Jul, 2020 1 commit
  12. 03 Jul, 2020 1 commit
  13. 26 Jun, 2020 1 commit
  14. 24 Jun, 2020 1 commit
    • storage: Adjusted fully storage layer support for chunk iterators: Remote read… · b7889867
      storage: Adjusted fully storage layer support for chunk iterators: Remote read client, readyStorage, fanout. (#7059)
      
      * Fixed nits introduced by https://github.com/prometheus/prometheus/pull/7334
      * Added ChunkQueryable implementation to fanout and readyStorage.
      * Added more comments.
      * Changed NewVerticalChunkSeriesMerger to CompactingChunkSeriesMerger; removed a tiny interface by reusing VerticalSeriesMergeFunc as the overlapping algorithm for
      both chunks and series, for both querying and compacting (!), and made sure duplicates are merged.
      * Added ErrChunkSeriesSet
      * Added a Samples interface for seamless []prompb.Sample to []tsdbutil.Sample conversion.
      * Deprecated the non-chunk SeriesSet based StreamChunkedReadResponses; added a chunk-based one.
      * Improved tests.
      * Split the remote client into Write (old storage) and Read.
      * The queryable client is now SampleAndChunkQueryable. Since we cannot use the nice QueryableFunc, I moved
      all config-based options to sampleAndChunkQueryableClient to avoid boilerplate.
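      The duplicate-merging behavior of a vertical series merge can be sketched on plain sample slices. The real CompactingChunkSeriesMerger operates on chunk iterators; this is only an illustration of the dedup idea, with made-up types:

```go
package main

import "fmt"

type sample struct{ t, v int64 }

// mergeSeries merges two timestamp-sorted sample slices into one sorted
// slice, keeping a single sample per timestamp (duplicates are merged).
// This is the core of a vertical merge usable for both querying and
// compacting.
func mergeSeries(a, b []sample) []sample {
	out := make([]sample, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i].t < b[j].t:
			out = append(out, a[i])
			i++
		case a[i].t > b[j].t:
			out = append(out, b[j])
			j++
		default: // same timestamp: emit one sample, advance both inputs
			out = append(out, a[i])
			i++
			j++
		}
	}
	out = append(out, a[i:]...)
	return append(out, b[j:]...)
}

func main() {
	a := []sample{{1, 10}, {3, 30}}
	b := []sample{{1, 10}, {2, 20}, {4, 40}}
	fmt.Println(mergeSeries(a, b)) // duplicates at t=1 collapse to one sample
}
```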
      
      In next commit: Changes for TSDB.
      Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
      Bartlomiej Plotka authored
  15. 21 Jun, 2020 1 commit
  16. 18 Jun, 2020 1 commit
  17. 15 Jun, 2020 1 commit
  18. 11 May, 2020 1 commit
  19. 07 May, 2020 2 commits
  20. 06 May, 2020 1 commit
    • M-map full chunks of Head from disk (#6679) · d4b9fe80
      When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory mapped) to free up memory.
      
      Prometheus startup now happens in these stages:
      - Iterate the m-mapped chunks from disk and keep a map from each series reference to its slice of m-mapped chunks.
      - Iterate the WAL as usual. Whenever we create a new series, look up its m-mapped chunks in the map created before and add them to that series.
      
      If a head chunk is corrupted, the corrupted chunk and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL, which means that a corruption in m-mapped files results in NO data loss.
      
      [M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk written for m-mapping now also includes the series reference, because there is no index for mapping series to chunks.
      [The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed via the index, which includes the offsets of the chunks in the chunks file - for example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
      In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating all m-mapped chunks as stated above, matching the series ID present in the chunk header and the offset of that chunk in that file.
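      Stage 1 of the startup sequence above (building the series-reference to m-mapped-chunks map) can be sketched as follows. The struct fields are loosely modeled on the head_chunks format linked above and are illustrative, not the actual tsdb types:

```go
package main

import "fmt"

// mmappedChunk records where a flushed head chunk lives on disk; the fields
// are illustrative, loosely following the head_chunks format linked above.
type mmappedChunk struct {
	fileNo     int
	offset     int64
	minT, maxT int64
}

// chunkHeader is what each chunk in the m-mapped files carries: the series
// reference travels with the chunk because there is no index file for head
// chunks.
type chunkHeader struct {
	seriesRef uint64
	chunk     mmappedChunk
}

// buildRefMap is startup stage 1: iterate all m-mapped chunks read from disk
// and group them by series reference, so that WAL replay (stage 2) can
// attach the chunks to each series as it is recreated.
func buildRefMap(chunks []chunkHeader) map[uint64][]mmappedChunk {
	m := make(map[uint64][]mmappedChunk)
	for _, c := range chunks {
		m[c.seriesRef] = append(m[c.seriesRef], c.chunk)
	}
	return m
}

func main() {
	chunks := []chunkHeader{
		{seriesRef: 7, chunk: mmappedChunk{fileNo: 1, offset: 200}},
		{seriesRef: 7, chunk: mmappedChunk{fileNo: 1, offset: 500}},
		{seriesRef: 9, chunk: mmappedChunk{fileNo: 2, offset: 0}},
	}
	m := buildRefMap(chunks)
	fmt.Println(len(m[7]), len(m[9])) // chunks grouped per series reference
}
```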
      
      **Prombench results**
      
      _WAL Replay_
      
      1h WAL replay time:
      30% less WAL replay time - 4m31s vs 3m36s
      2h WAL replay time:
      20% less WAL replay time - 8m16s vs 7m
      
      _Memory During WAL Replay_
      
      High churn:
      10-15% less RAM - 32GB vs 28GB
      20% less RAM after compaction - 34GB vs 27GB
      No churn:
      20-30% less RAM - 23GB vs 18GB
      40% less RAM after compaction - 32.5GB vs 20GB
      
      Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)
      Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
      Ganesh Vernekar authored
  21. 29 Apr, 2020 1 commit
  22. 23 Apr, 2020 1 commit
  23. 11 Apr, 2020 1 commit
  24. 29 Mar, 2020 1 commit
  25. 23 Mar, 2020 1 commit
  26. 19 Mar, 2020 1 commit
  27. 02 Mar, 2020 1 commit
  28. 18 Feb, 2020 1 commit
  29. 17 Feb, 2020 6 commits
  30. 09 Feb, 2020 1 commit
  31. 08 Feb, 2020 2 commits