  1. 04 Jan, 2021 1 commit
  2. 01 Dec, 2020 1 commit
  3. 20 Aug, 2020 1 commit
  4. 28 Jul, 2020 1 commit
  5. 18 Oct, 2019 1 commit
  6. 04 Apr, 2019 1 commit
  7. 25 Mar, 2019 2 commits
  8. 20 Feb, 2019 1 commit
  9. 29 Jan, 2019 1 commit
  10. 27 Nov, 2018 1 commit
  11. 24 Oct, 2018 1 commit
  12. 13 Sep, 2018 1 commit
  13. 04 Apr, 2018 1 commit
  14. 09 Mar, 2018 1 commit
  15. 08 Jan, 2018 1 commit
  16. 29 Dec, 2017 1 commit
    • Refactor SD configuration to remove `config` dependency (#3629) · ec94df49
      * refactor: move targetGroup struct and CheckOverflow() to their own package
      * refactor: move auth and security related structs to a utility package, fix import error in utility package
      * refactor: Azure SD, remove SD struct from config
      * refactor: DNS SD, remove SD struct from config into dns package
      * refactor: ec2 SD, move SD struct from config into the ec2 package
      * refactor: file SD, move SD struct from config to file discovery package
      * refactor: gce, move SD struct from config to gce discovery package
      * refactor: move HTTPClientConfig and URL into util/config, fix import error in httputil
      * refactor: consul, move SD struct from config into consul discovery package
      * refactor: marathon, move SD struct from config into marathon discovery package
      * refactor: triton, move SD struct from config to triton discovery package, fix test
      * refactor: zookeeper, move SD structs from config to zookeeper discovery package
      * refactor: openstack, remove SD struct from config, move into openstack discovery package
      * refactor: kubernetes, move SD struct from config into kubernetes discovery package
      * refactor: notifier, use targetgroup package instead of config
      * refactor: tests for file, marathon, triton SD - use targetgroup package instead of config.TargetGroup
      * refactor: retrieval, use targetgroup package instead of config.TargetGroup
      * refactor: storage, use config util package
      * refactor: discovery manager, use targetgroup package instead of config.TargetGroup
      * refactor: use HTTPClient and TLS config from configUtil instead of config
      * refactor: tests, use targetgroup package instead of config.TargetGroup
      * refactor: fix targetgroup.Group pointers that were removed by mistake
      * refactor: openstack, kubernetes: drop prefixes
      * refactor: remove import aliases forced due to vscode bug
      * refactor: move main SD struct out of config into discovery/config
      * refactor: rename configUtil to config_util
      * refactor: rename yamlUtil to yaml_config
      * refactor: kubernetes, remove prefixes
      * refactor: move the TargetGroup package to discovery/
      * refactor: fix order of imports
      Shubheksha Jalan authored
  17. 25 Oct, 2017 1 commit
  18. 15 Sep, 2017 1 commit
    • Improve DNS response handling to prevent "stuck" records [Fixes #2799] (#3138) · 33694223
      The problem reported in #2799 was that in the event that all records for a
      name were removed, the target group was never updated to be the "empty" set.
      Essentially, whatever Prometheus last saw as a non-empty list of targets
      would stay that way forever (or at least until Prometheus restarted...).  This
      came about because of a fairly naive interpretation of what a valid-looking
      DNS response actually looked like -- essentially, the only valid DNS responses
      were ones that had a non-empty record list.  That's fine as long as your
      config always lists only target names which have non-empty record sets; if
      your environment happens to legitimately have empty record sets sometimes,
      all hell breaks loose (cleanly shut-down systems trigger up==0 alerts,
      for instance).
      This patch is a refactoring of the DNS lookup behaviour that maintains
      existing behaviour with regard to search paths, but correctly handles empty
      and non-existent record sets.
      RFC 1034 s4.3.1 says there are three ways a recursive DNS server can respond:
      1.  Here is your answer (possibly an empty answer, because of the way DNS
         considers all records for a name, regardless of type, when deciding
         whether the name exists).
      2. There is no spoon (the name you asked for definitely does not exist).
      3. I am a teapot (something has gone terribly wrong).
      Situations 1 and 2 are fine and dandy; whatever the answer is (empty or
      otherwise) is the list of targets.  If something has gone wrong, then we
      shouldn't go updating the target list because we don't really *know* what
      the target list should be.
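      The three situations map naturally onto a small decision function. A
      minimal Go sketch (hypothetical names, not the actual Prometheus code;
      the standard rcode assignments 0 = NOERROR and 3 = NXDOMAIN are assumed):

      ```go
      package main

      // lookupResult classifies a DNS response per RFC 1034 s4.3.1, as
      // described above: a definitive answer (possibly empty), a definitive
      // "no such name", or a server failure that tells us nothing.
      type lookupResult int

      const (
      	resultAnswer lookupResult = iota // use the records, even if empty
      	resultNoName                     // NXDOMAIN: the name does not exist
      	resultError                      // server trouble: keep the status quo
      )

      // classify maps a DNS response code to one of the three outcomes.
      // Standard rcode values are assumed: 0 = NOERROR, 3 = NXDOMAIN;
      // anything else (SERVFAIL, REFUSED, ...) means "something went wrong".
      func classify(rcode int) lookupResult {
      	switch rcode {
      	case 0:
      		return resultAnswer
      	case 3:
      		return resultNoName
      	default:
      		return resultError
      	}
      }
      ```

      The key design point is that only the third outcome leaves the target
      list untouched; the first two both produce a definitive (possibly empty)
      set of targets.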
      Multiple DNS servers to query is a straightforward augmentation; if you get
      an error, then try the next server in the list, until you get an answer or
      run out of servers to ask.  Only if *all* the servers return errors should you
      return an error to the calling code.
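      That fallback loop can be sketched in a few lines of Go (the function
      and callback names here are made up for illustration, not the real
      discovery code):

      ```go
      package main

      import "errors"

      // lookupFn abstracts a single-server DNS query; hypothetical signature.
      type lookupFn func(server, name string) ([]string, error)

      // lookupAny asks each configured server in turn and returns the first
      // successful answer, which may legitimately be an empty record set.
      // Only when every server errors out does the error propagate to the
      // caller.
      func lookupAny(servers []string, name string, lookup lookupFn) ([]string, error) {
      	var lastErr error
      	for _, srv := range servers {
      		records, err := lookup(srv, name)
      		if err == nil {
      			return records, nil // success, possibly empty
      		}
      		lastErr = err // remember the failure and try the next server
      	}
      	if lastErr == nil {
      		lastErr = errors.New("no DNS servers configured")
      	}
      	return nil, lastErr
      }
      ```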
      Where things get complicated is the search path.  In order to be able to
      confidently say, "this name does not exist anywhere, you can remove all the
      targets for this name because it's definitely GORN", at least one server for
      *all* the possible names needs to return either successful-but-empty
      responses, or NXDOMAIN.  If any name errors out, then -- since that one
      might have been the one where the records came from -- you need to say
      "maintain the status quo until we get a known-good response".
      It is possible, though unlikely, that a poorly-configured DNS setup (say,
      one which had a domain in its search path for which all configured recursive
      resolvers respond with REFUSED) could result in the same "stuck" records
      problem we're solving here, but the DNS configuration should be fixed in
      that case, and there's nothing we can do in Prometheus itself to fix the problem.
      I've tested this patch on a local scratch instance in all the various ways I
      can think of:
      1. Adding records (targets get scraped)
      2. Adding records of a different type
      3. Remove records of the requested type, leaving other type records intact
         (targets don't get scraped)
      4. Remove all records for the name (targets don't get scraped)
      5. Shut down the resolver (targets still get scraped)
      There's no automated test suite additions, because there isn't a test suite
      for DNS discovery, and I was stretching my Go skills to the limit to make
      this happen; mock objects are beyond me.
      Matt Palmer authored
  19. 08 Sep, 2017 1 commit
  20. 01 Jun, 2017 1 commit
  21. 17 Mar, 2017 1 commit
  22. 13 Feb, 2017 1 commit
  23. 22 Nov, 2016 1 commit
  24. 18 Nov, 2016 1 commit
  25. 03 Nov, 2016 1 commit
  26. 21 Oct, 2016 1 commit
  27. 07 Sep, 2016 1 commit
  28. 06 Sep, 2016 1 commit
  29. 06 May, 2016 2 commits
  30. 03 Apr, 2016 1 commit
  31. 01 Mar, 2016 2 commits
    • Preserve target state across reloads. · d15adfc9
      This commit moves Scraper handling into a separate scrapePool type.
      TargetSets only manage TargetProvider lifecycles and sync the
      retrieved updates to the scrapePool.
      TargetProviders are now expected to send a full initial target set
      within 5 seconds. The scrapePools preserve target state across reloads
      and only drop targets after the initial set was synced.
      Fabian Reinartz authored
    • Change TargetProvider interface. · 5b30bdb6
      This commit changes the TargetProvider interface to use a
      context.Context and send lists of TargetGroups, rather than
      single ones.
      Fabian Reinartz authored
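      The changed interface shape can be sketched as follows (the `Group`
      type here is a hypothetical stand-in for the real target group type):

      ```go
      package main

      import "context"

      // Group stands in for a target group; a hypothetical stand-in, not
      // the real config/targetgroup type.
      type Group struct {
      	Source  string
      	Targets []string
      }

      // TargetProvider shows the shape of the interface change described
      // above: Run takes a context.Context for cancellation and pushes whole
      // lists of target groups on the channel rather than single groups.
      type TargetProvider interface {
      	Run(ctx context.Context, up chan<- []*Group)
      }

      // staticProvider is a toy implementation that sends one batch and
      // stops if the context is cancelled first.
      type staticProvider struct{ groups []*Group }

      func (p *staticProvider) Run(ctx context.Context, up chan<- []*Group) {
      	select {
      	case up <- p.groups:
      	case <-ctx.Done():
      	}
      }
      ```

      Sending whole lists lets a provider deliver a complete snapshot in one
      message, which is what makes the "full initial target set" contract in
      the previous commit expressible at all.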
  32. 09 Oct, 2015 1 commit
    • Fix SD mechanism source prefix handling. · d88aea7e
      The prefixed target provider changed a pointerized target group that was
      reused in the wrapped target provider, causing an ever-increasing chain
      of source prefixes in target groups from the Consul target provider.
      We now make this bug generally impossible by switching the target group
      channel from pointer to value type and thus ensuring that target groups
      are copied before being passed on to other parts of the system.
      I tried not to let the depointerization leak too far outside of the
      channel handling (both upstream and downstream); I attempted a broader
      change initially and it caused some nasty bugs, which I want to avoid
      repeating.
      Julius Volz authored
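      The value-versus-pointer fix is easy to see in miniature. In this
      sketch (hypothetical names, not the actual provider code), each send
      and receive on a value-typed channel copies the group, so the wrapper's
      mutation can never accumulate in the sender's copy:

      ```go
      package main

      // group is a minimal stand-in for a target group (hypothetical).
      type group struct{ Source string }

      // prefixed simulates the wrapping provider from the commit: it adds a
      // source prefix before passing each group on. Because the channels
      // carry values, g is a fresh copy on every receive, so prefixing it
      // cannot corrupt the group held by the upstream provider.
      func prefixed(prefix string, in <-chan group, out chan<- group) {
      	for g := range in {
      		g.Source = prefix + g.Source // mutates only the local copy
      		out <- g
      	}
      	close(out)
      }
      ```

      With pointer channels, sending the same group twice would have produced
      an ever-growing prefix chain; with value channels both sends come out
      with exactly one prefix.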
  33. 03 Oct, 2015 1 commit
  34. 26 Aug, 2015 1 commit
    • Fix most golint warnings. · 995d3b83
      This is with `golint -min_confidence=0.5`.
      I left several lint warnings untouched because they were either
      incorrect or I felt it was better not to change them at the moment.
      Julius Volz authored
  35. 22 Aug, 2015 1 commit
  36. 21 Aug, 2015 1 commit
  37. 14 Aug, 2015 1 commit