Prometheus is an open-source systems monitoring and alerting toolkit with an active ecosystem. It is the only system directly supported by Kubernetes and the de facto standard across the cloud native ecosystem. See the overview.
See the comparison page.
The main Prometheus server runs standalone as a single monolithic binary and has no external dependencies.
Yes.
Cloud native is a flexible operating model, breaking up old service boundaries to allow for more flexible and scalable deployments.
Prometheus's service discovery integrates with most tools and clouds. Its dimensional data model and its ability to scale into the tens of millions of active series allow it to monitor large cloud-native deployments. There are always trade-offs to make when running services, and Prometheus values reliably getting alerts out to humans above all else.
Yes, run identical Prometheus servers on two or more separate machines. Identical alerts will be deduplicated by the Alertmanager.
Alertmanager supports high availability by interconnecting multiple Alertmanager instances to build an Alertmanager cluster. Instances of a cluster communicate using a gossip protocol managed via the Memberlist library.
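Deduplication works because identical alerts share the same label set, which can be hashed in a stable order to form a fingerprint. Below is a simplified Go sketch of the idea, not Alertmanager's actual implementation; the alert names and labels are made up:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// fingerprint hashes a label set in sorted key order, so the same
// labels always yield the same value regardless of map iteration order.
func fingerprint(labels map[string]string) uint64 {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(labels[k]))
		h.Write([]byte{0})
	}
	return h.Sum64()
}

func main() {
	// The same alert arriving from two redundant Prometheus servers.
	a := map[string]string{"alertname": "HighLatency", "instance": "web-1"}
	b := map[string]string{"instance": "web-1", "alertname": "HighLatency"}

	seen := map[uint64]bool{}
	delivered := 0
	for _, alert := range []map[string]string{a, b} {
		if fp := fingerprint(alert); !seen[fp] {
			seen[fp] = true
			delivered++ // only the first copy triggers a notification
		}
	}
	fmt.Println("notifications sent:", delivered)
}
```

Both copies collapse to one fingerprint, so this prints `notifications sent: 1`.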
This is often more of a marketing claim than anything else.
A single instance of Prometheus can be more performant than some systems positioning themselves as long-term storage solutions for Prometheus. You can run Prometheus reliably with tens of millions of active series.
If you need more than that, there are several options. Scaling and Federating Prometheus on the Robust Perception blog is a good starting point, as are the long storage systems listed on our integrations page.
Most Prometheus components are written in Go. Some are also written in Java, Python, and Ruby.
All repositories in the Prometheus GitHub organization that have reached version 1.0.0 broadly follow semantic versioning. Breaking changes are indicated by increments of the major version. Exceptions are possible for experimental components, which are clearly marked as such in announcements.
Even repositories that have not yet reached version 1.0.0 are, in general, quite stable. We aim for a proper release process and an eventual 1.0.0 release for each repository. In any case, breaking changes will be pointed out in release notes (marked by [CHANGE]) or communicated clearly for components that do not have formal releases yet.
Pulling over HTTP offers a number of advantages:

* You can run your monitoring on your laptop when developing changes.
* You can more easily tell if a target is down.
* You can manually go to a target and inspect its health with a web browser.
Overall, we believe that pulling is slightly better than pushing, but it should not be considered a major point when considering a monitoring system.
For cases where you must push, we offer the Pushgateway.
Short answer: Don't! Use a dedicated log aggregation system such as Grafana Loki instead.
Longer answer: Prometheus is a system to collect and process metrics, not an event logging system. A Grafana blog post provides more details about the differences between logs and metrics.
If you want to extract Prometheus metrics from application logs, Grafana Loki is designed for just that. See Loki's documentation.
Prometheus was initially started privately by Matt T. Proud and Julius Volz. The majority of its initial development was sponsored by SoundCloud.
It's now maintained and extended by a wide range of companies and individuals.
Prometheus is released under the Apache 2.0 license.
After extensive research, it has been determined that the correct plural of 'Prometheus' is 'Prometheis'.
If you cannot remember this, "Prometheus instances" is a good workaround.
Yes, sending SIGHUP to the Prometheus process or an HTTP POST request to the /-/reload endpoint will reload and apply the configuration file. The various components attempt to handle failing changes gracefully.
Yes, with the Alertmanager.
We support sending alerts through email, various native integrations, and a webhook system anyone can add integrations to.
Yes, we recommend Grafana for production usage. There are also Console templates.
To avoid any kind of timezone confusion, especially when the so-called daylight saving time is involved, we decided to exclusively use Unix time internally and UTC for display purposes in all components of Prometheus. A carefully done timezone selection could be introduced into the UI. Contributions are welcome. See the relevant tracking issue for the current state of this effort.
There are a number of client libraries for instrumenting your services with Prometheus metrics. See the client libraries documentation for details.
If you are interested in contributing a client library for a new language, see the exposition formats.
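At its core, a client library renders metrics in the text exposition format, including escaping of label values. A hand-rolled Go sketch of that rendering (illustrative only; consult the exposition formats documentation for the full rules):

```go
package main

import (
	"fmt"
	"strings"
)

type label struct{ name, value string }

// escapeLabelValue applies the text format's escaping rules for label
// values: backslash, newline, and double quote must be escaped.
func escapeLabelValue(v string) string {
	return strings.NewReplacer(`\`, `\\`, "\n", `\n`, `"`, `\"`).Replace(v)
}

// renderSample renders one sample line: name{label="value",...} value
func renderSample(name string, labels []label, value float64) string {
	parts := make([]string, 0, len(labels))
	for _, l := range labels {
		parts = append(parts, fmt.Sprintf(`%s="%s"`, l.name, escapeLabelValue(l.value)))
	}
	if len(parts) == 0 {
		return fmt.Sprintf("%s %g", name, value)
	}
	return fmt.Sprintf("%s{%s} %g", name, strings.Join(parts, ","), value)
}

func main() {
	fmt.Println(renderSample("http_requests_total",
		[]label{{"method", "post"}, {"code", "200"}}, 1027))
}
```

This prints `http_requests_total{method="post",code="200"} 1027`. A real client library additionally handles HELP and TYPE metadata, metric-name validation, and concurrency.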
Yes, the Node Exporter exposes an extensive set of machine-level metrics on Linux and other Unix systems, such as CPU usage, memory, disk utilization, filesystem fullness, and network bandwidth.
Yes, the SNMP Exporter allows monitoring of devices that support SNMP. For industrial networks, there's also a Modbus exporter.
Yes, using the Pushgateway. See also the best practices for monitoring batch jobs.
See the list of exporters and integrations.
Yes, for applications that you cannot instrument directly with the Java client, you can use the JMX Exporter, either standalone or as a Java Agent.
Performance across client libraries and languages may vary. For Java, benchmarks indicate that incrementing a counter or gauge with the Java client takes 12-17 ns, depending on contention. This is negligible for all but the most latency-critical code.
We restrained ourselves to 64-bit floats to simplify the design. The IEEE 754 double-precision format supports integer precision for values up to 2^53. Supporting native 64-bit integers would (only) help if you need integer precision above 2^53 but below 2^63. In principle, support for different sample value types (including some kind of big integer, supporting even more than 64 bits) could be implemented, but it is not a priority right now. A counter, even if incremented one million times per second, will only run into precision issues after over 285 years.
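The 2^53 limit is easy to verify directly, as is the back-of-the-envelope arithmetic behind the 285 years:

```go
package main

import "fmt"

func main() {
	// 2^53 is exactly representable in a float64; 2^53 + 1 is not,
	// so it rounds back to 2^53 and the comparison below holds.
	const maxExact = uint64(1) << 53
	fmt.Println(float64(maxExact) == float64(maxExact+1))

	// At one million increments per second, how long until a counter
	// reaches 2^53 and starts losing integer precision?
	const perSecond = 1e6
	const secondsPerYear = 365.25 * 24 * 3600
	fmt.Printf("%.0f years\n", float64(maxExact)/perSecond/secondsPerYear)
}
```

This prints `true` and `285 years`.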
This documentation is open-source. Please help improve it by filing issues or pull requests.