This project is mirrored from https://github.com/prometheus/prometheus.git.
  1. 25 Aug, 2020 1 commit
  2. 28 Jan, 2019 1 commit
  3. 05 Oct, 2017 1 commit
  4. 07 Feb, 2017 1 commit
  5. 15 Sep, 2015 1 commit
  6. 22 Aug, 2015 1 commit
  7. 21 Aug, 2015 1 commit
  8. 26 Jan, 2015 1 commit
    • More efficient JSON query result format. · d4374a92
      This depends on https://github.com/prometheus/client_golang/pull/51.
      
      For vectors, the result format looks like this:
      
      ```json
      {
         "version": 1,
         "type" : "vector",
         "value" : [
            {
               "timestamp" : 1421765411.045,
               "value" : "65.475000",
               "metric" : {
                  "quantile" : "0.5",
                  "instance" : "http://localhost:9090/metrics",
                  "job" : "prometheus",
                  "__name__" : "http_request_duration_microseconds",
                  "handler" : "/static/",
                  "method" : "get",
                  "code" : "304"
               }
            },
            {
               "timestamp" : 1421765411.045,
               "value" : "5826.339000",
               "metric" : {
                  "quantile" : "0.9",
                  "instance" : "http://localhost:9090/metrics",
                  "job" : "prometheus",
                  "__name__" : "http_request_duration_microseconds",
                  "handler" : "prometheus",
                  "method" : "get",
                  "code" : "200"
               }
            },
            /* ... */
         ]
      }
      ```
      
      For matrices, it looks like this:
      
      ```json
      {
         "version": 1,
         "type" : "matrix",
         "value" : [
            {
               "metric" : {
                  "quantile" : "0.99",
                  "instance" : "http://localhost:9090/metrics",
                  "job" : "prometheus",
                  "__name__" : "http_request_duration_microseconds",
                  "handler" : "/static/",
                  "method" : "get",
                  "code" : "200"
               },
               "values" : [
                  [
                     1421765547.659,
                     "29162.953000"
                  ],
                  [
                     1421765548.659,
                     "29162.953000"
                  ],
                  [
                     1421765549.659,
                     "29162.953000"
                  ],
                  /* ... */
               ]
            }
         ]
      }
      ```
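As a reference point, here is a minimal Go sketch of how a client could decode the vector form of this format (the struct names are illustrative, not the actual client_golang types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vectorSample mirrors one element of the "value" array above.
type vectorSample struct {
	Timestamp float64           `json:"timestamp"`
	Value     string            `json:"value"` // sample values arrive as strings
	Metric    map[string]string `json:"metric"`
}

// vectorResult mirrors the top-level result envelope.
type vectorResult struct {
	Version int            `json:"version"`
	Type    string         `json:"type"` // "vector"
	Value   []vectorSample `json:"value"`
}

func main() {
	raw := []byte(`{"version":1,"type":"vector","value":[{"timestamp":1421765411.045,"value":"65.475000","metric":{"job":"prometheus"}}]}`)
	var res vectorResult
	if err := json.Unmarshal(raw, &res); err != nil {
		panic(err)
	}
	fmt.Println(res.Value[0].Metric["job"], res.Value[0].Value)
}
```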
      Julius Volz authored
  9. 21 Jan, 2015 1 commit
    • Clean up license issues. · 5859b74f
- Moved CONTRIBUTORS.md to the more common AUTHORS.
      - Added the required NOTICE file.
      - Changed "Prometheus Team" to "The Prometheus Authors".
      - Reverted the erroneous changes to the Apache License.
      Bjoern Rabenstein authored
  10. 12 Dec, 2014 1 commit
  11. 25 Nov, 2014 4 commits
  12. 16 Apr, 2014 2 commits
    • Separate storage implementation from interfaces. · 01f652cb
      This was initially motivated by wanting to distribute the rule checker
      tool under `tools/rule_checker`. However, this was not possible without
      also distributing the LevelDB dynamic libraries because the tool
      transitively depended on Levigo:
      
      rule checker -> query layer -> tiered storage layer -> leveldb
      
      This change separates external storage interfaces from the
      implementation (tiered storage, leveldb storage, memory storage) by
      putting them into separate packages:
      
      - storage/metric: public, implementation-agnostic interfaces
      - storage/metric/tiered: tiered storage implementation, including memory
                               and LevelDB storage.
      
      I initially also considered splitting up the implementation into
      separate packages for tiered storage, memory storage, and LevelDB
      storage, but these are currently so intertwined that it would be another
      major project in itself.
      
The query layers and most other parts of Prometheus no longer have any
notion of the storage implementation and just use whatever implementation
they get passed in via interfaces.
      
      The rule_checker is now a static binary :)
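A rough illustration of the resulting layering, using simplified stand-in names rather than the real storage/metric API:

```go
// Package metric sketch: the public, implementation-agnostic side of the
// split. The concrete tiered/LevelDB/memory code lives elsewhere.
package metric

import "time"

// Persistence is a hypothetical stand-in for the storage interface. Tools
// like the rule checker depend only on interfaces like this, so they no
// longer pull in LevelDB (or any other backend) transitively.
type Persistence interface {
	AppendSample(name string, t time.Time, v float64) error
	Close() error
}
```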
      
      Change-Id: I793bbf631a8648ca31790e7e772ecf9c2b92f7a0
      Julius Volz authored
    • Parameterize the buffer for marshal/unmarshal. · 3e969a8c
      We are not reusing buffers yet.  This could introduce problems,
      so the behavior is disabled for now.
      
      Cursory benchmark data:
      - Marshal for 10,000 samples: -30% overhead.
      - Unmarshal for 10,000 samples: -15% overhead.
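A minimal sketch of what parameterizing the buffer looks like in practice (hypothetical helper, not the actual codec code):

```go
package codec

import (
	"encoding/binary"
	"math"
)

// marshalSamples appends the encoded samples to buf and returns the grown
// slice. A caller may pass a previously used buffer to amortize allocations,
// or nil to allocate fresh, which is the behavior while reuse stays disabled.
func marshalSamples(buf []byte, timestamps []int64, values []float64) []byte {
	var scratch [8]byte
	for i := range timestamps {
		binary.BigEndian.PutUint64(scratch[:], uint64(timestamps[i]))
		buf = append(buf, scratch[:]...)
		binary.BigEndian.PutUint64(scratch[:], math.Float64bits(values[i]))
		buf = append(buf, scratch[:]...)
	}
	return buf
}
```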
      
      Change-Id: Ib006bdc656af45dca2b92de08a8f905d8d728cac
      Matt T. Proud authored
  13. 15 Apr, 2014 1 commit
    • Correct size of unmarshalling destination buffer. · 6ec72393
The format header size was not deducted from the size of the byte
      stream when calculating the output buffer size for samples.  I have
      yet to notice problems directly as a result of this, but it is good
      to fix.
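In code, the corrected calculation amounts to something like the following (constant names and sizes are illustrative only):

```go
package codec

const (
	formatVersionSize = 1  // leading format/version byte
	sampleSize        = 16 // 8-byte timestamp + 8-byte value
)

// numSamples derives how many samples a raw chunk holds, deducting the
// format header before dividing by the per-sample size.
func numSamples(raw []byte) int {
	return (len(raw) - formatVersionSize) / sampleSize
}
```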
      
      Change-Id: Icb07a0718366c04ddac975d738a6305687773af0
      Matt T. Proud authored
  14. 11 Mar, 2014 2 commits
    • Convert metric.Values to slice of values. · 86fc13a5
      The initial impetus for this was that it made unmarshalling sample
      values much faster.
      
      Other relevant benchmark changes in ns/op:
      
      Benchmark                                 old        new   speedup
      ==================================================================
      BenchmarkMarshal                       179170     127996     1.4x
      BenchmarkUnmarshal                     404984     132186     3.1x
      
      BenchmarkMemoryGetValueAtTime           57801      50050     1.2x
      BenchmarkMemoryGetBoundaryValues        64496      53194     1.2x
      BenchmarkMemoryGetRangeValues           66585      54065     1.2x
      
      BenchmarkStreamAdd                       45.0       75.3     0.6x
      BenchmarkAppendSample1                   1157       1587     0.7x
      BenchmarkAppendSample10                  4090       4284     0.95x
      BenchmarkAppendSample100                45660      44066     1.0x
      BenchmarkAppendSample1000              579084     582380     1.0x
      BenchmarkMemoryAppendRepeatingValues 22796594   22005502     1.0x
      
      Overall, this gives us good speedups in the areas where they matter
      most: decoding values from disk and accessing the memory storage (which
      is also used for views).
      
      Some of the smaller append examples take minimally longer, but the cost
      seems to get amortized over larger appends, so I'm not worried about
      these. Also, we're currently not bottlenecked on the write path and have
      plenty of other optimizations available in that area if it becomes
      necessary.
      
      Memory allocations during appends don't change measurably at all.
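The conversion itself boils down to a before/after like this (the sample pair type is simplified here):

```go
package metric

// SamplePair is a simplified stand-in for the real sample pair type.
type SamplePair struct {
	Timestamp int64
	Value     float64
}

// Before: a slice of pointers, so every decoded sample is a separate heap
// allocation and a pointer indirection on access.
type valuesByPointer []*SamplePair

// After: a slice of values, so each chunk decodes into one contiguous
// allocation, which is what makes unmarshalling and memory-storage scans
// cheaper.
type Values []SamplePair
```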
      
      Change-Id: I7dc7394edea09506976765551f35b138518db9e8
      Julius Volz authored
    • Add version field to LevelDB sample format. · a7d0973f
      This doesn't add complex discriminator logic yet, but adds a single
version byte to the beginning of each sample chunk. If we ever need to
      change the disk format again, this will make it easy to do so without
      having to wipe the entire database.
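A minimal sketch of the idea (names are illustrative, not the actual encoding code):

```go
package codec

import "fmt"

const formatVersion byte = 1

// encodeChunk prefixes the payload with a single version byte so that future
// format changes can be detected without wiping the database.
func encodeChunk(payload []byte) []byte {
	return append([]byte{formatVersion}, payload...)
}

// decodeChunk checks the version byte before handing back the payload.
func decodeChunk(chunk []byte) ([]byte, error) {
	if len(chunk) == 0 || chunk[0] != formatVersion {
		return nil, fmt.Errorf("unsupported chunk format version")
	}
	return chunk[1:], nil
}
```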
      
      Change-Id: I60c39274256f790bc2da83167a1effaa174588fe
      Julius Volz authored
  15. 09 Mar, 2014 1 commit
  16. 27 Feb, 2014 1 commit
    • Major code cleanup in storage. · 6bc083f3
- Mostly docstring fixes/additions.
  (Please review these carefully; since most of them were missing, I
  had to guess them from an outsider's perspective, which on the other
  hand proves how desperately needed many of these docstrings are.)
      
      - Removed all uses of new(...) to meet our own style guide (draft).
      
      - Fixed all other 'go vet' and 'golint' issues (except those that are
        not fixable (i.e. caused by bugs in or by design of 'go vet' and
        'golint')).
      
      - Some trivial refactorings, like reorder functions, minor renames, ...
      
- Some slightly less trivial refactoring, mostly to reduce code
  duplication by embedding types instead of writing many explicit
  forwarders (a minimal sketch of the embedding pattern follows this
  list).
      
- Cleaned up the interface structure a bit. (Most significant is probably
  the removal of the View-like methods from MetricPersistence. Now they
  are only in View and not duplicated anymore.)
      
      - Removed dead code. (Probably not all of it, but it's a first
        step...)
      
      - Fixed a leftover in storage/metric/end_to_end_test.go (that made
        some parts of the code never execute (incidentally, those parts
        were broken (and I fixed them, too))).
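A minimal sketch of the embedding pattern referenced above (hypothetical types, not the actual storage code):

```go
package storage

// curator is a hypothetical type whose behavior several storages share.
type curator struct{}

func (c curator) Flush() error { return nil }

// diskStorage embeds curator, so Flush is promoted into its method set
// automatically; without embedding, an explicit forwarder would be needed:
//
//	func (d diskStorage) Flush() error { return d.curator.Flush() }
type diskStorage struct {
	curator
}
```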
      
      Change-Id: Ibcac069940d118a88f783314f5b4595dce6641d5
      Bjoern Rabenstein authored
  17. 03 Dec, 2013 1 commit
    • Use custom timestamp type for sample timestamps and related code. · 740d4489
      So far we've been using Go's native time.Time for anything related to sample
      timestamps. Since the range of time.Time is much bigger than what we need, this
      has created two problems:
      
      - there could be time.Time values which were out of the range/precision of the
        time type that we persist to disk, therefore causing incorrectly ordered keys.
        One bug caused by this was:
      
        https://github.com/prometheus/prometheus/issues/367
      
        It would be good to use a timestamp type that's more closely aligned with
        what the underlying storage supports.
      
- sizeof(time.Time) is 192 bits, while Prometheus should be ok with a single 64-bit
        Unix timestamp (possibly even a 32-bit one). Since we store samples in large
        numbers, this seriously affects memory usage. Furthermore, copying/working
        with the data will be faster if it's smaller.
      
      *MEMORY USAGE RESULTS*
      Initial memory usage comparisons for a running Prometheus with 1 timeseries and
      100,000 samples show roughly a 13% decrease in total (VIRT) memory usage. In my
      tests, this advantage for some reason decreased a bit the more samples the
      timeseries had (to 5-7% for millions of samples). This I can't fully explain,
      but perhaps garbage collection issues were involved.
      
      *WHEN TO USE THE NEW TIMESTAMP TYPE*
      The new clientmodel.Timestamp type should be used whenever time
      calculations are either directly or indirectly related to sample
      timestamps.
      
      For example:
      - the timestamp of a sample itself
      - all kinds of watermarks
      - anything that may become or is compared to a sample timestamp (like the timestamp
        passed into Target.Scrape()).
      
      When to still use time.Time:
      - for measuring durations/times not related to sample timestamps, like duration
        telemetry exporting, timers that indicate how frequently to execute some
        action, etc.
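For illustration, a minimal sketch of such a millisecond-based timestamp type (the real clientmodel.Timestamp may differ in detail):

```go
package clientmodel

import "time"

// Timestamp is a millisecond-precision Unix timestamp, small enough to store
// in bulk and aligned with what the on-disk format can represent.
type Timestamp int64

// TimestampFromTime converts a time.Time into a Timestamp.
func TimestampFromTime(t time.Time) Timestamp {
	return Timestamp(t.UnixNano() / int64(time.Millisecond))
}

// Time converts a Timestamp back into a time.Time.
func (ts Timestamp) Time() time.Time {
	return time.Unix(int64(ts)/1000, int64(ts)%1000*int64(time.Millisecond))
}
```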
      
      *NOTE ON OPERATOR OPTIMIZATION TESTS*
We no longer use the operator optimization code, but it still lives on as
dead code. It still has tests, but I couldn't get all of them to
      pass with the new timestamp format. I commented out the failing cases for now,
      but we should probably remove the dead code soon. I just didn't want to do that
      in the same change as this.
      
      Change-Id: I821787414b0debe85c9fffaeb57abd453727af0f
      Julius Volz authored
  18. 25 Jun, 2013 1 commit