prometheus-podman-exporter

Prometheus exporter for podman v4.x environments, exposing information about containers, pods, images, volumes and networks.

prometheus-podman-exporter uses the podman v4.x (libpod) library to fetch statistics directly rather than the REST API, so there is no need to enable the podman.socket service.

Installation

Building from source, using the container image, and installing packaged versions are detailed in the install guide.
Usage and Options

Usage:
  prometheus-podman-exporter [flags]

Flags:
  -a, --collector.enable-all               Enable all collectors by default.
  -i, --collector.image                    Enable image collector.
  -n, --collector.network                  Enable network collector.
  -o, --collector.pod                      Enable pod collector.
  -s, --collector.system                   Enable system collector.
  -v, --collector.volume                   Enable volume collector.
  -d, --debug                              set log level to debug
  -h, --help                               help for podman_exporter
      --version                            print version and exit
  -e, --web.disable-exporter-metrics       Exclude metrics about the exporter itself (promhttp_*, process_*, go_*).
  -l, --web.listen-address string          Address on which to expose metrics and web interface. (default ":9882")
  -m, --web.max-requests int               Maximum number of parallel scrape requests. Use 0 to disable. (default 40)
  -p, --web.telemetry-path string          Path under which to expose metrics. (default "/metrics")
By default only the container collector is enabled. To enable all collectors, use --collector.enable-all, or use the --collector.enable-<name> flags to enable other collectors individually.

Example, enabling all available collectors:

  $ ./bin/prometheus-podman-exporter --collector.enable-all
Collectors
The table below lists all existing collectors and their descriptions.

Name      | Description
----------|-----------------------------------
container | exposes containers information
image     | exposes images information
network   | exposes networks information
pod       | exposes pod information
volume    | exposes volume information
system    | exposes system (host) information
Collectors examples output
container
# HELP podman_container_info Container information.
# TYPE podman_container_info gauge
podman_container_info{id="19286a13dc23",image="docker.io/library/sonarqube:latest",name="sonar01",pod_id="",ports="0.0.0.0:9000->9000/tcp"} 1
podman_container_info{id="22e3d69be889",image="localhost/podman-pause:4.1.0-1651853754",name="959a0a3530db-infra",pod_id="959a0a3530db",ports=""} 1
podman_container_info{id="390ac740fa80",image="localhost/podman-pause:4.1.0-1651853754",name="d05cda23085a-infra",pod_id="d05cda23085a",ports=""} 1
podman_container_info{id="482113b805f7",image="docker.io/library/httpd:latest",name="web_server",pod_id="",ports="0.0.0.0:8000->80/tcp"} 1
podman_container_info{id="642490688d9c",image="docker.io/grafana/grafana:latest",name="grafana",pod_id="",ports="0.0.0.0:3000->3000/tcp"} 1
podman_container_info{id="ad36e85960a1",image="docker.io/library/busybox:latest",name="busybox01",pod_id="3e8bae64e9af",ports=""} 1
podman_container_info{id="dda983cc3ecf",image="localhost/podman-pause:4.1.0-1651853754",name="3e8bae64e9af-infra",pod_id="3e8bae64e9af",ports=""} 1
# HELP podman_container_state Container current state (-1=unknown,0=created,1=initialized,2=running,3=stopped,4=paused,5=exited,6=removing,7=stopping).
# TYPE podman_container_state gauge
podman_container_state{id="19286a13dc23"} 2
podman_container_state{id="22e3d69be889"} 0
podman_container_state{id="390ac740fa80"} 5
podman_container_state{id="482113b805f7"} 4
podman_container_state{id="642490688d9c"} 2
podman_container_state{id="ad36e85960a1"} 5
podman_container_state{id="dda983cc3ecf"} 2
# HELP podman_container_block_input_total Container block input.
# TYPE podman_container_block_input_total counter
podman_container_block_input_total{id="19286a13dc23"} 49152
podman_container_block_input_total{id="22e3d69be889"} 0
podman_container_block_input_total{id="390ac740fa80"} 0
podman_container_block_input_total{id="482113b805f7"} 0
podman_container_block_input_total{id="642490688d9c"} 1.41533184e+08
podman_container_block_input_total{id="ad36e85960a1"} 0
podman_container_block_input_total{id="dda983cc3ecf"} 0
# HELP podman_container_block_output_total Container block output.
# TYPE podman_container_block_output_total counter
podman_container_block_output_total{id="19286a13dc23"} 1.790976e+06
podman_container_block_output_total{id="22e3d69be889"} 0
podman_container_block_output_total{id="390ac740fa80"} 0
podman_container_block_output_total{id="482113b805f7"} 8192
podman_container_block_output_total{id="642490688d9c"} 4.69248e+07
podman_container_block_output_total{id="ad36e85960a1"} 0
podman_container_block_output_total{id="dda983cc3ecf"} 0
# HELP podman_container_cpu_seconds_total total CPU time spent for container in seconds.
# TYPE podman_container_cpu_seconds_total counter
podman_container_cpu_seconds_total{id="19286a13dc23"} 83.231904
podman_container_cpu_seconds_total{id="22e3d69be889"} 0
podman_container_cpu_seconds_total{id="390ac740fa80"} 0
podman_container_cpu_seconds_total{id="482113b805f7"} 0.069712
podman_container_cpu_seconds_total{id="642490688d9c"} 3.028685
podman_container_cpu_seconds_total{id="ad36e85960a1"} 0
podman_container_cpu_seconds_total{id="dda983cc3ecf"} 0.011687
# HELP podman_container_cpu_system_seconds_total total system CPU time spent for container in seconds.
# TYPE podman_container_cpu_system_seconds_total counter
podman_container_cpu_system_seconds_total{id="19286a13dc23"} 0.007993418
podman_container_cpu_system_seconds_total{id="22e3d69be889"} 0
podman_container_cpu_system_seconds_total{id="390ac740fa80"} 0
podman_container_cpu_system_seconds_total{id="482113b805f7"} 4.8591e-05
podman_container_cpu_system_seconds_total{id="642490688d9c"} 0.00118734
podman_container_cpu_system_seconds_total{id="ad36e85960a1"} 0
podman_container_cpu_system_seconds_total{id="dda983cc3ecf"} 9.731e-06
# HELP podman_container_created_seconds Container creation time in unixtime.
# TYPE podman_container_created_seconds gauge
podman_container_created_seconds{id="19286a13dc23"} 1.655859887e+09
podman_container_created_seconds{id="22e3d69be889"} 1.655484892e+09
podman_container_created_seconds{id="390ac740fa80"} 1.655489348e+09
podman_container_created_seconds{id="482113b805f7"} 1.655859728e+09
podman_container_created_seconds{id="642490688d9c"} 1.655859511e+09
podman_container_created_seconds{id="ad36e85960a1"} 1.655859858e+09
podman_container_created_seconds{id="dda983cc3ecf"} 1.655859839e+09
# HELP podman_container_mem_limit_bytes Container memory limit.
# TYPE podman_container_mem_limit_bytes gauge
podman_container_mem_limit_bytes{id="19286a13dc23"} 9.713655808e+09
podman_container_mem_limit_bytes{id="22e3d69be889"} 0
podman_container_mem_limit_bytes{id="390ac740fa80"} 0
podman_container_mem_limit_bytes{id="482113b805f7"} 9.713655808e+09
podman_container_mem_limit_bytes{id="642490688d9c"} 9.713655808e+09
podman_container_mem_limit_bytes{id="ad36e85960a1"} 0
podman_container_mem_limit_bytes{id="dda983cc3ecf"} 9.713655808e+09
# HELP podman_container_mem_usage_bytes Container memory usage.
# TYPE podman_container_mem_usage_bytes gauge
podman_container_mem_usage_bytes{id="19286a13dc23"} 1.029062656e+09
podman_container_mem_usage_bytes{id="22e3d69be889"} 0
podman_container_mem_usage_bytes{id="390ac740fa80"} 0
podman_container_mem_usage_bytes{id="482113b805f7"} 2.748416e+06
podman_container_mem_usage_bytes{id="642490688d9c"} 3.67616e+07
podman_container_mem_usage_bytes{id="ad36e85960a1"} 0
podman_container_mem_usage_bytes{id="dda983cc3ecf"} 49152
# HELP podman_container_net_input_total Container network input.
# TYPE podman_container_net_input_total counter
podman_container_net_input_total{id="19286a13dc23"} 430
podman_container_net_input_total{id="22e3d69be889"} 0
podman_container_net_input_total{id="390ac740fa80"} 0
podman_container_net_input_total{id="482113b805f7"} 430
podman_container_net_input_total{id="642490688d9c"} 4323
podman_container_net_input_total{id="ad36e85960a1"} 0
podman_container_net_input_total{id="dda983cc3ecf"} 430
# HELP podman_container_net_output_total Container network output.
# TYPE podman_container_net_output_total counter
podman_container_net_output_total{id="19286a13dc23"} 110
podman_container_net_output_total{id="22e3d69be889"} 0
podman_container_net_output_total{id="390ac740fa80"} 0
podman_container_net_output_total{id="482113b805f7"} 110
podman_container_net_output_total{id="642490688d9c"} 12071
podman_container_net_output_total{id="ad36e85960a1"} 0
podman_container_net_output_total{id="dda983cc3ecf"} 110
# HELP podman_container_pids Container pid number.
# TYPE podman_container_pids gauge
podman_container_pids{id="19286a13dc23"} 94
podman_container_pids{id="22e3d69be889"} 0
podman_container_pids{id="390ac740fa80"} 0
podman_container_pids{id="482113b805f7"} 82
podman_container_pids{id="642490688d9c"} 14
podman_container_pids{id="ad36e85960a1"} 0
podman_container_pids{id="dda983cc3ecf"} 1
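As a client-side illustration (not part of the exporter), the state samples above can be parsed from the exposition text format and filtered, using the encoding documented in the podman_container_state HELP line:

```python
import re

# A few podman_container_state samples, copied from the output above.
SAMPLE = """\
podman_container_state{id="19286a13dc23"} 2
podman_container_state{id="22e3d69be889"} 0
podman_container_state{id="642490688d9c"} 2
"""

# Encoding taken from the HELP text of podman_container_state.
STATES = {-1: "unknown", 0: "created", 1: "initialized", 2: "running",
          3: "stopped", 4: "paused", 5: "exited", 6: "removing", 7: "stopping"}

def running_ids(text: str) -> list[str]:
    """Return ids whose podman_container_state value decodes to 'running'."""
    ids = []
    for m in re.finditer(r'podman_container_state\{id="([0-9a-f]+)"\} (-?\d+)', text):
        if STATES[int(m.group(2))] == "running":
            ids.append(m.group(1))
    return ids

print(running_ids(SAMPLE))  # ['19286a13dc23', '642490688d9c']
```

In a real Prometheus setup the same filtering would normally be done with a PromQL query rather than by scraping and parsing the endpoint by hand.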
pod
# HELP podman_pod_state Pod current state (-1=unknown,0=created,1=error,2=exited,3=paused,4=running,5=degraded,6=stopped).
# TYPE podman_pod_state gauge
podman_pod_state{id="3e8bae64e9af"} 5
podman_pod_state{id="959a0a3530db"} 0
podman_pod_state{id="d05cda23085a"} 2
# HELP podman_pod_info Pod information
# TYPE podman_pod_info gauge
podman_pod_info{id="3e8bae64e9af",infra_id="dda983cc3ecf",name="pod01"} 1
podman_pod_info{id="959a0a3530db",infra_id="22e3d69be889",name="pod02"} 1
podman_pod_info{id="d05cda23085a",infra_id="390ac740fa80",name="pod03"} 1
# HELP podman_pod_containers Number of containers in a pod.
# TYPE podman_pod_containers gauge
podman_pod_containers{id="3e8bae64e9af"} 2
podman_pod_containers{id="959a0a3530db"} 1
podman_pod_containers{id="d05cda23085a"} 1
# HELP podman_pod_created_seconds Pods creation time in unixtime.
# TYPE podman_pod_created_seconds gauge
podman_pod_created_seconds{id="3e8bae64e9af"} 1.655859839e+09
podman_pod_created_seconds{id="959a0a3530db"} 1.655484892e+09
podman_pod_created_seconds{id="d05cda23085a"} 1.655489348e+09
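The *_created_seconds gauges are plain Unix timestamps; a small, purely illustrative conversion of one of the sample values above using the Python standard library:

```python
from datetime import datetime, timezone

# Value of podman_pod_created_seconds for pod01, copied from the sample above.
created = 1.655859839e+09

# Gauge value -> timezone-aware UTC datetime.
ts = datetime.fromtimestamp(created, tz=timezone.utc)
print(ts.isoformat())
```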
image
# HELP podman_image_info Image information.
# TYPE podman_image_info gauge
podman_image_info{id="48565a8e6250",repository="docker.io/bitnami/prometheus",tag="latest"} 1
podman_image_info{id="62aedd01bd85",repository="docker.io/library/busybox",tag="latest"} 1
podman_image_info{id="75c013514322",repository="docker.io/library/sonarqube",tag="latest"} 1
podman_image_info{id="a45fa0117c2b",repository="localhost/podman-pause",tag="4.1.0-1651853754"} 1
podman_image_info{id="b260a49eebf9",repository="docker.io/library/httpd",tag="latest"} 1
podman_image_info{id="c4b778290339",repository="docker.io/grafana/grafana",tag="latest"} 1
# HELP podman_image_created_seconds Image creation time in unixtime.
# TYPE podman_image_created_seconds gauge
podman_image_created_seconds{id="48565a8e6250"} 1.655436988e+09
podman_image_created_seconds{id="62aedd01bd85"} 1.654651161e+09
podman_image_created_seconds{id="75c013514322"} 1.654883091e+09
podman_image_created_seconds{id="a45fa0117c2b"} 1.655484887e+09
podman_image_created_seconds{id="b260a49eebf9"} 1.655163309e+09
podman_image_created_seconds{id="c4b778290339"} 1.655132996e+09
# HELP podman_image_size Image size
# TYPE podman_image_size gauge
podman_image_size{id="48565a8e6250"} 5.11822059e+08
podman_image_size{id="62aedd01bd85"} 1.468102e+06
podman_image_size{id="75c013514322"} 5.35070053e+08
podman_image_size{id="a45fa0117c2b"} 815742
podman_image_size{id="b260a49eebf9"} 1.49464899e+08
podman_image_size{id="c4b778290339"} 2.98969093e+08
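podman_image_size is a byte count; the exposition format merely renders large values in scientific notation. A hypothetical helper, not part of the exporter, to make the sample values readable:

```python
def human_size(num_bytes: float) -> str:
    """Format a byte count using binary (1024-based) units."""
    for unit in ("B", "KiB", "MiB", "GiB"):
        if num_bytes < 1024:
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:.1f} TiB"

print(human_size(5.11822059e+08))  # the prometheus image above -> 488.1 MiB
print(human_size(815742))          # the podman-pause image above -> 796.6 KiB
```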
network
# HELP podman_network_info Network information.
# TYPE podman_network_info gauge
podman_network_info{driver="bridge",id="2f259bab93aa",interface="podman0",labels="",name="podman"} 1
podman_network_info{driver="bridge",id="420272a98a4c",interface="podman3",labels="",name="network03"} 1
podman_network_info{driver="bridge",id="6eb310d4b0bb",interface="podman2",labels="",name="network02"} 1
podman_network_info{driver="bridge",id="a5a6391121a5",interface="podman1",labels="",name="network01"} 1
volume
# HELP podman_volume_info Volume information.
# TYPE podman_volume_info gauge
podman_volume_info{driver="local",mount_point="/home/navid/.local/share/containers/storage/volumes/vol01/_data",name="vol01"} 1
podman_volume_info{driver="local",mount_point="/home/navid/.local/share/containers/storage/volumes/vol02/_data",name="vol02"} 1
podman_volume_info{driver="local",mount_point="/home/navid/.local/share/containers/storage/volumes/vol03/_data",name="vol03"} 1
# HELP podman_volume_created_seconds Volume creation time in unixtime.
# TYPE podman_volume_created_seconds gauge
podman_volume_created_seconds{name="vol01"} 1.655484915e+09
podman_volume_created_seconds{name="vol02"} 1.655484926e+09
podman_volume_created_seconds{name="vol03"} 1.65548493e+09
system
# HELP podman_system_api_version Podman system api version.
# TYPE podman_system_api_version gauge
podman_system_api_version{version="4.1.1"} 1
# HELP podman_system_buildah_version Podman system buildahVer version.
# TYPE podman_system_buildah_version gauge
podman_system_buildah_version{version="1.26.1"} 1
# HELP podman_system_conmon_version Podman system conmon version.
# TYPE podman_system_conmon_version gauge
podman_system_conmon_version{version="2.1.0"} 1
# HELP podman_system_runtime_version Podman system runtime version.
# TYPE podman_system_runtime_version gauge
podman_system_runtime_version{version="crun version 1.4.5"} 1
License
Licensed under the Apache 2.0 license.
Unable to get this working with podman.sock

Struggling to get this running just on localhost to test it with the user podman.sock. I'm not sure what to use for CONTAINER_HOST, so I assumed the unix:// prefix and bind mounted the podman socket. Am I missing something obvious here? Thanks! (Fedora 36, podman 4.1.1)
Exporter does not list all rootless containers running under different users

I have a RHEL8 server with podman, and I have several rootless containers running under different users (this is a security requirement in my environment).

The podman-exporter is currently running in a container under the root account, and it can only list metrics related to the containers running under root.

Is there a way to scrape the metrics from all rootless containers running under different users?
feat: add a metric representing a container's health

Is your feature request related to a problem? Please describe. I would like to be able to query a container's health in addition to its state.

Describe the solution you'd like: I propose to add a new gauge metric named podman_container_health. The metric would map the values defined in podman/libpod/define/healthchecks.go (healthy, unhealthy & starting) to integers, and the help message would help map those values back to their intended meaning (similar to the way podman_container_state is defined).

Additional context: n/a
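A minimal sketch of what such a mapping could look like, mirroring how podman_container_state encodes states as integers. The names and numeric values below are purely illustrative, not an actual implementation:

```python
# Hypothetical encoding for the proposed podman_container_health gauge.
HEALTH_VALUES = {"healthy": 0, "unhealthy": 1, "starting": 2}

def health_gauge(status: str) -> int:
    """Map a libpod health-check status string to a gauge value (-1 = unknown)."""
    return HEALTH_VALUES.get(status, -1)

print(health_gauge("healthy"), health_gauge("starting"), health_gauge("missing"))  # -> 0 2 -1
```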
Add metric for volume storage size
I'd like to be able to monitor the amount of storage currently being utilized by my volumes. Looks like this data should be available: Command line:
podman system df -v
API: https://docs.podman.io/en/latest/_static/api.html#tag/system/operation/SystemDataUsageLibpod https://github.com/containers/prometheus-podman-exporter/blob/38ae02876323cc60759979d4f2c96ed78c1e4228/vendor/github.com/containers/podman/v4/pkg/domain/entities/engine_container.go#L95Bump github.com/prometheus/exporter-toolkit from 0.7.1 to 0.8.1
Bumps github.com/prometheus/exporter-toolkit from 0.7.1 to 0.8.1.

Changelog sourced from github.com/prometheus/exporter-toolkit's changelog.

Commits:
- c6a2415 Fix systemd socket when using a custom kingpin app (#118)
- 7cedc3c Release 0.8.0 (#117)
- b2d6e83 Merge pull request #115 from mrueg/golangci
- 07336a0 Merge pull request #116 from roidelapluie/rel080
- c3bf1e8 Update build to github action & release 0.8.0
- bca43f1 Add systemd socket listener activation (#95)
- 9980373 Update common Prometheus files (#112)
- 38afac5 Rename department of redundancy (#114)
- 67c5ada .golangci.yml: Enable goimports and misspell linters
- 6f56c6f .golangci.yml: Replace deprecated golint with revive

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.

Dependabot commands and options (comment on this PR to trigger them):
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bump github.com/containers/image/v5 from 5.22.0 to 5.23.0
Bumps github.com/containers/image/v5 from 5.22.0 to 5.23.0.

Release notes sourced from github.com/containers/image/v5's releases.

Commits:
- 3443821 Release v5.23.0
- b5f2544 Update to github.com/opencontainers/[email protected]
- 3a6e77d Merge pull request #1665 from containers/dependabot/go_modules/github.com/ope...
- daa38a3 Merge pull request #1666 from containers/dependabot/go_modules/github.com/con...
- f80ddc4 build(deps): bump github.com/containers/storage from 1.42.0 to 1.43.0
- 739cef8 build(deps): bump github.com/opencontainers/selinux
- 2c6464d Merge pull request #1664 from containers/dependabot/go_modules/github.com/kla...
- 815b014 build(deps): bump github.com/klauspost/compress from 1.15.10 to 1.15.11
- 62be088 Merge pull request #1662 from containers/dependabot/go_modules/github.com/doc...
- a095a86 build(deps): bump github.com/docker/docker-credential-helpers

The usual Dependabot rebase note and command list apply (see the first Dependabot PR above).

Exclude infra containers?
Is your feature request related to a problem? Please describe. More of a question, but is it ever useful to monitor infra containers? It's easy enough to filter them out, but if they never do anything useful then it could be nice to prune them here.
Bump github.com/containers/image/v5 from 5.23.0 to 5.23.1

Bumps github.com/containers/image/v5 from 5.23.0 to 5.23.1.

Release notes sourced from github.com/containers/image/v5's releases.

Commits:
- a3bfb86 Release 5.23.1
- d92bac8 Merge pull request #1696 from mtrmac/5.23-backports
- 52a50cf Recognize invalid error responses of registry.redhat.io
- 8481ec9 Make the pseudo-config used in sigstore attachments a bit more valid
- e8566a4 Update branch configuration for a backport branch
- 220aeb5 Merge pull request #1667 from mtrmac/v5.23.0
- f649a19 Bump to v5.23.1-dev

The usual Dependabot rebase note and command list apply (see the first Dependabot PR above).

Bump github.com/prometheus/client_golang from 1.13.1 to 1.14.0
Bumps github.com/prometheus/client_golang from 1.13.1 to 1.14.0.

Release notes sourced from github.com/prometheus/client_golang's releases; changelog sourced from its changelog.

Commits:
- 254e546 Merge pull request #1162 from kakkoyun/cut-1.14.0
- c8a3d32 Cut v1.14.0
- 07d3a81 Merge pull request #1161 from prometheus/release-1.13
- 870469e Test and support 1.19 (#1160)
- b785d0c Fix go_collector_latest_test Fail on go1.19 (#1136)
- 4d54769 Fix float64 comparison test failure on archs using FMA (#1133)
- 5f202ee Merge pull request #1150 from prometheus/sparsehistogram
- fffb76c Merge branch 'main' into sparsehistogram
- e92a8c7 Avoid the term 'sparse' where possible
- 0859bb8 Merge pull request #1152 from jessicalins/update-to-custom-reg

The usual Dependabot rebase note and command list apply (see the first Dependabot PR above).

chore: Fix trivial golangci-lint issues
Hi @navidys,

Firstly, thanks for your project! I ran golangci-lint v1.50.1 against the main branch and made some trivial fixes here. Feel free to review/change any part of the PR. Note: I didn't touch the CI linter version in the Makefile.

BTW, do you have an ETA for a new release?
Bump github.com/spf13/cobra from 1.6.0 to 1.6.1

Bumps github.com/spf13/cobra from 1.6.0 to 1.6.1.

Release notes sourced from github.com/spf13/cobra's releases.

Commits:
- b43be99 Check for group presence after full initialization (#1839) (#1841)

The usual Dependabot rebase note and command list apply (see the first Dependabot PR above).

Exporter only returns stats about pods from current running user?
Describe the bug: (I'm sorry to call this a "bug"; it's likely just a lack of my understanding of how to use this exporter successfully.)

When I run prometheus-podman-exporter, it only exports container-level stats about the containers that are being run as the same user that I'm running the exporter as. Therefore, I don't know how to deploy prometheus-podman-exporter as a server-level monitoring tool -- it seems that I would need to run an instance of it for every user that could run pods?

To Reproduce: podman_container_cpu_seconds_total only reports id=* for containers running as the same user. As root -- even as a superuser -- only root's containers are displayed.

Expected behavior: I expected that running as root would return metrics from all pods. Alternatively, I expected the documentation in the repo to indicate the right way, or the expected runtime model, for prometheus-podman-exporter to be used effectively for monitoring at a server level.
Screenshots N/A; can provide if needed.
Desktop (please complete the following information): NixOS 22.11
Additional context: I'm currently working on packaging prometheus-podman-exporter for NixOS. Building the software was straightforward, but determining how to set up a systemd unit to run it has left me confused because of this. While it could run as a root user, which isn't typically how I'd want it to run anyway, it also wouldn't collect info on all the containers... leaving me a little confused about the recommended packaging approach.
noob question (sorry): How do you filter podman_container_state on value?

Again, sorry for the noob question here. I just want to do this:

  podman_container_state=2

so that it only shows me the running containers. I've looked at examples of other queries, like CPU times, and they just state the metric name and then the value, like cpu_memory_time>300, but I can't get it working with podman_container_state.
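For reference, PromQL compares a series' sample value using a bare comparison operator, so a query such as the following (using the metric name from this exporter) keeps only the series whose current value is 2, i.e. running containers:

```promql
podman_container_state == 2
```

The comparison applies to the sample value, not to a label; series with any other value are dropped from the result.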
podman_container_mem_usage_bytes metrics disappeared

Describe the bug: I used to query podman_container_mem_usage_bytes in Prometheus, but have now noticed it is not exposed anymore. A manual crawl of the /metrics endpoint confirms this.

To Reproduce: Unfortunately I can't really say what has changed. First I assumed a permission problem on the podman socket and created a separate one only for the prometheus-podman-exporter container:
logs (last 7 lines repeat):
testing the socket itself seems OK
Expected behavior: have podman_container_mem_usage_bytes available in the metrics exposed by prometheus-podman-exporter.

Environment:

/bin/podman_exporter --collector.enable-all --collector.store_labels --debug --web.listen-address 127.0.0.1:9882

Additional context: Any help in debugging this is welcome.
Generate multi-arch container images

Is your feature request related to a problem? Please describe. The published container image is only built for the amd64 arch, so it can't run on other arches, such as a Raspberry Pi 4 with arm64.

Describe the solution you'd like: Build and publish the container image for multiple arches.

Additional context: I might work on this, but I don't see any existing pipeline to build and publish the container image.
feat: enhance metrics (include more labels)

Is your feature request related to a problem? Please describe. For example, New Relic does not support joining two metrics with the same label ATOW. This makes it very hard to understand graphs with only a container-id legend.

Describe the solution you'd like: It would be superb to enhance all metrics with the same fields as the podman_container_info metric. This should also include feature request https://github.com/containers/prometheus-podman-exporter/issues/34.

Additional context: Add any other context or screenshots about the feature request here.