Prometheus metrics
Controller metrics
HAProxy Unified Gateway serves Prometheus metrics to aid in monitoring the health, performance, and operations of your services. This guide will focus on HAProxy Unified Gateway controller metrics.
Looking for load balancer metrics?
Go to the guide on using HAProxy stats metrics.
View controller metrics
Controller metrics are served by HAProxy Unified Gateway's controller-runtime metrics server on the controller port, by default port 31060 at the /metrics path. If you need to serve metrics on a different port, refer to the HAProxy Unified Gateway installation guide for how to change the port number.
Send a request to the controller metrics endpoint. For example, using curl:
```nix
curl http://<host>:31060/metrics
```
output

```nix
# HELP certwatcher_read_certificate_errors_total Total number of certificate read errors
# TYPE certwatcher_read_certificate_errors_total counter
certwatcher_read_certificate_errors_total 0
# HELP certwatcher_read_certificate_total Total number of certificate reads
# TYPE certwatcher_read_certificate_total counter
certwatcher_read_certificate_total 0
# HELP controller_runtime_active_workers Number of currently used workers per controller
# TYPE controller_runtime_active_workers gauge
controller_runtime_active_workers{controller="BackendCR"} 0
controller_runtime_active_workers{controller="ConfigMap"} 0
controller_runtime_active_workers{controller="DefaultsCR"} 0
...
```
Tip
When accessing the controller metrics endpoint from a node inside your Kubernetes cluster, you can set <host> to localhost. For example: curl http://localhost:31060/metrics.
Authentication modes
There are three authentication modes: none, kube-rbac, and basic.
Enable none authentication mode
The none authentication mode is the default when deployed via the controller.yaml manifest file (the default is kube-rbac when deployed via Helm); metrics are served over plain HTTP with no authentication. Once enabled, the controller metrics are available at the endpoint http://<host>:31060/metrics.
Use cases include:
- Development and testing environments.
- Environments where the metrics port isn’t exposed outside the cluster.
- Trusted network environments with network policies restricting access.
Example network policy
The controller metrics endpoint is accessible to anyone who can reach its port. If needed for your use case, consider applying a Kubernetes NetworkPolicy to restrict access. For example, here is a NetworkPolicy that limits access from only specific pods within the cluster:
```nix
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-metrics-from-prometheus
  namespace: haproxy-unified-gateway
spec:
  podSelector:
    matchLabels:
      run: haproxy-unified-gateway
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
      ports:
        - port: 31060
          protocol: TCP
```
This NetworkPolicy applies to the example haproxy-unified-gateway namespace: pods in that namespace with the label run: haproxy-unified-gateway accept ingress only from pods in the monitoring namespace, and only on TCP port 31060.
To enable the none authentication mode, confirm that the --metrics-auth=none argument is set in the controller configuration. For example:
```nix
kubectl edit deployment -n haproxy-unified-gateway haproxy-unified-gateway
```

controller.yaml

```nix
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-unified-gateway
  name: haproxy-unified-gateway
  namespace: haproxy-unified-gateway
spec:
  ...
  template:
    metadata:
      labels:
        run: haproxy-unified-gateway
    spec:
      serviceAccountName: haproxy-unified-gateway
      containers:
        - name: haproxy-unified-gateway
          ...
          args:
            ...
            - --metrics-auth=none
```
If you installed HAProxy Unified Gateway without using Helm, edit your controller.yaml file with the configuration above and apply it:
```nix
kubectl apply -f controller.yaml
```
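With none authentication, Prometheus can scrape the endpoint over plain HTTP with no credentials. As a minimal sketch of a scrape job (the job name and target address are placeholders for your environment):

```nix
scrape_configs:
  - job_name: hug-controller    # placeholder job name
    metrics_path: /metrics
    static_configs:
      - targets:
          - <host>:31060        # controller metrics port
```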
Enable kube-rbac authentication mode
The kube-rbac authentication mode is the default if deployed via Helm, and it secures the controller metrics endpoint using Kubernetes’ RBAC (role-based access control) authorization over HTTPS. The controller validates requests with Kubernetes TokenReview and SubjectAccessReview API calls, ensuring only ServiceAccounts with the correct RBAC permissions can access the controller metrics endpoint.
Use cases include:
- Production environments.
- In-cluster Prometheus with a dedicated ServiceAccount.
- Environments requiring an audit trail for the controller metrics endpoint access.
To enable the kube-rbac authentication mode:
1. Configure the controller to use kube-rbac authentication mode by enabling its argument.

   ```nix
   kubectl edit deployment -n haproxy-unified-gateway haproxy-unified-gateway
   ```

   controller.yaml

   ```nix
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       run: haproxy-unified-gateway
     name: haproxy-unified-gateway
     namespace: haproxy-unified-gateway
   spec:
     ...
     template:
       metadata:
         labels:
           run: haproxy-unified-gateway
       spec:
         serviceAccountName: haproxy-unified-gateway
         containers:
           - name: haproxy-unified-gateway
             ...
             args:
               ...
               - --metrics-auth=kube-rbac
   ```

   If you installed HAProxy Unified Gateway without using Helm, edit your controller.yaml file with the configuration above and apply it:

   ```nix
   kubectl apply -f controller.yaml
   ```

2. Prometheus requires a ClusterRole that grants access to the controller metrics endpoint's nonResourceURL and a ClusterRoleBinding to its ServiceAccount. This configuration is found in rbac.yaml (or, if you deployed with Helm, check this configuration with kubectl edit clusterrolebinding haproxy-unified-gateway-metrics-reader):

   ```nix
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: haproxy-unified-gateway-metrics-reader
   rules:
     - nonResourceURLs:
         - "/metrics"
       verbs:
         - get
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: haproxy-unified-gateway-metrics-reader
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: haproxy-unified-gateway-metrics-reader
   subjects:
     - kind: ServiceAccount
       name: prometheus
       namespace: monitoring
   ```

   Optional: Edit the subjects section to match your Prometheus ServiceAccount name and namespace. You can use the configuration above as a template or change it as necessary to meet your required security policies. For example, with kube-prometheus-stack, the ServiceAccount is typically named prometheus-kube-prometheus-prometheus in the monitoring namespace. Apply the configuration change with kubectl apply -f rbac.yaml.

   If you only have namespace-level access, you will need a cluster admin to create the ClusterRole and ClusterRoleBinding resources. Otherwise, consider using basic authentication mode because it's configured entirely within the namespace scope.
Verify kube-rbac authentication
1. Create an ephemeral ServiceAccount bearer token for Prometheus with the kubectl create token command.

   ```nix
   TOKEN=$(kubectl create token SERVICE_ACCOUNT_NAME -n NAMESPACE)
   ```

   Where:

   - The TOKEN variable can be changed to a more descriptive name.
   - Replace SERVICE_ACCOUNT_NAME with your existing ServiceAccount name.
   - Replace NAMESPACE with the namespace where the ServiceAccount exists.

   For example:

   ```nix
   TOKEN=$(kubectl create token prometheus -n monitoring)
   ```
2. Verify by providing the bearer token to access the controller metrics endpoint on any node.

   ```nix
   curl -k -H "Authorization: Bearer $TOKEN" https://<host>:31060/metrics
   ```

   Where:

   - The -k option makes curl skip certificate verification, used here because the controller serves a self-signed certificate.
   - The -H option specifies an extra header, in this case the bearer token for authorization.
   - Note the use of the https protocol.

   For example:

   ```nix
   curl -k -H "Authorization: Bearer $TOKEN" https://<host>:31060/metrics
   ```

   output

   ```nix
   # HELP certwatcher_read_certificate_errors_total Total number of certificate read errors
   # TYPE certwatcher_read_certificate_errors_total counter
   certwatcher_read_certificate_errors_total 0
   # HELP certwatcher_read_certificate_total Total number of certificate reads
   # TYPE certwatcher_read_certificate_total counter
   certwatcher_read_certificate_total 0
   # HELP controller_runtime_active_workers Number of currently used workers per controller
   # TYPE controller_runtime_active_workers gauge
   controller_runtime_active_workers{controller="BackendCR"} 0
   ...
   ```
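For an in-cluster Prometheus scraping in kube-rbac mode, the scrape job must present a ServiceAccount bearer token over HTTPS. A sketch, assuming Prometheus runs in-cluster with its token mounted at the default path (the job name and target are placeholders):

```nix
scrape_configs:
  - job_name: hug-controller    # placeholder job name
    scheme: https
    metrics_path: /metrics
    authorization:
      type: Bearer
      # default in-pod ServiceAccount token path
      credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      # the controller serves a self-signed certificate
      insecure_skip_verify: true
    static_configs:
      - targets:
          - <host>:31060
```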
Enable basic authentication mode
The basic authentication mode secures the controller metrics endpoint using HTTP Basic Authentication over HTTPS. Credentials can be provided via CLI flags or environment variables.
Use cases include:
- A Prometheus deployment external to HAProxy Unified Gateway, with instances scraping from outside the Kubernetes cluster.
- Deploying a quick, secure setup without a Kubernetes-specific configuration.
To enable basic authentication mode:
1. Create a Kubernetes generic type secret from literal values by executing the following command. Set these values:

   - SECRET_NAME is an identifiable name.
   - NAMESPACE is the namespace where HAProxy Unified Gateway is deployed.
   - The --from-literal=username= option sets the username for basic authentication.
   - The --from-literal=password= option sets the password for basic authentication. Always use strong passwords; avoid default or example passwords in production environments.

   ```nix
   kubectl create secret generic SECRET_NAME \
     -n NAMESPACE \
     --from-literal=username=USERNAME \
     --from-literal=password='PASSWORD'
   ```

   An example:

   ```nix
   kubectl create secret generic hug-metrics-auth \
     -n haproxy-unified-gateway \
     --from-literal=username=prometheus \
     --from-literal=password='changeme'
   ```

   output

   ```nix
   secret/hug-metrics-auth created
   ```
Configure the controller to use
basicauthentication mode and enviornment variables from the secret you created.nixkubectl edit deployment -n haproxy-unified-gateway haproxy-unified-gatewaynixkubectl edit deployment -n haproxy-unified-gateway haproxy-unified-gatewaycontroller.yamlnixapiVersion: apps/v1kind: Deploymentmetadata:labels:run: haproxy-unified-gatewayname: haproxy-unified-gatewaynamespace: haproxy-unified-gatewayspec:...template:metadata:labels:run: haproxy-unified-gatewayspec:serviceAccountName: haproxy-unified-gatewaycontainers:- name: haproxy-unified-gateway...args:- --hugconf-crd=haproxy-unified-gateway/hugconf- --metrics-auth=basicenv:...- name: METRICS_BASIC_AUTH_USERvalueFrom:secretKeyRef:name: hug-metrics-authkey: username- name: METRICS_BASIC_AUTH_PASSWORDvalueFrom:secretKeyRef:name: hug-metrics-authkey: passwordcontroller.yamlnixapiVersion: apps/v1kind: Deploymentmetadata:labels:run: haproxy-unified-gatewayname: haproxy-unified-gatewaynamespace: haproxy-unified-gatewayspec:...template:metadata:labels:run: haproxy-unified-gatewayspec:serviceAccountName: haproxy-unified-gatewaycontainers:- name: haproxy-unified-gateway...args:- --hugconf-crd=haproxy-unified-gateway/hugconf- --metrics-auth=basicenv:...- name: METRICS_BASIC_AUTH_USERvalueFrom:secretKeyRef:name: hug-metrics-authkey: username- name: METRICS_BASIC_AUTH_PASSWORDvalueFrom:secretKeyRef:name: hug-metrics-authkey: passwordIf you installed HAProxy Unified Gateway without using Helm, edit your
controller.yamlfile with the configuration above and apply it:nixkubectl apply -f controller.yamlnixkubectl apply -f controller.yaml -
3. Verify by providing the credentials to access the controller metrics endpoint from any node.

   ```nix
   curl -k -u USERNAME:PASSWORD https://<host>:31060/metrics
   ```

   Where:

   - The -k option makes curl skip certificate verification, used here because the controller serves a self-signed certificate.
   - The -u option specifies the username and password to use for server authentication.
   - Note the use of the https protocol.

   An example:

   ```nix
   curl -k -u prometheus:changeme https://<host>:31060/metrics
   ```

   output

   ```nix
   # HELP certwatcher_read_certificate_errors_total Total number of certificate read errors
   # TYPE certwatcher_read_certificate_errors_total counter
   certwatcher_read_certificate_errors_total 0
   # HELP certwatcher_read_certificate_total Total number of certificate reads
   # TYPE certwatcher_read_certificate_total counter
   certwatcher_read_certificate_total 0
   # HELP controller_runtime_active_workers Number of currently used workers per controller
   # TYPE controller_runtime_active_workers gauge
   controller_runtime_active_workers{controller="BackendCR"} 0
   ...
   ```
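A Prometheus scrape job for basic mode supplies the same credentials over HTTPS. A sketch using the example username from above (the job name, target, and password file path are placeholders; password_file keeps the secret out of the configuration file):

```nix
scrape_configs:
  - job_name: hug-controller                           # placeholder job name
    scheme: https
    metrics_path: /metrics
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/hug-metrics-pass  # placeholder path
    tls_config:
      insecure_skip_verify: true                       # self-signed certificate
    static_configs:
      - targets:
          - <host>:31060
```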
Troubleshooting
401 Unauthorized
- If you're accessing the controller metrics endpoint with kube-rbac authentication mode:

  - Verify that Prometheus is sending the Authorization: Bearer <token> header.
  - Confirm the target ServiceAccount exists.

- If you're accessing the controller metrics endpoint with basic authentication mode:

  - Verify that you're including credentials. For example, curl -k https://<host>:31060/metrics is missing -u USERNAME:PASSWORD.
  - Verify the spelling of the credentials.
  - Check that the credentials are loading from the pod environment variables.

    ```nix
    kubectl exec -n <NAMESPACE> <POD_NAME> -- env | grep METRICS_BASIC_AUTH_USER
    ```

    An example:

    ```nix
    kubectl exec -n haproxy-unified-gateway haproxy-unified-gateway-59fc455c6d-vmxhm -- env | grep METRICS_BASIC_AUTH_USER
    ```

    output

    ```nix
    METRICS_BASIC_AUTH_USER=prometheus
    ```
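When debugging a 401 in basic mode, it can also help to know exactly what curl -u sends: HTTP Basic Authentication is the username and password joined by a colon and base64-encoded into an Authorization: Basic header. Using the example credentials from this guide:

```shell
# Reproduce the value that `curl -u prometheus:changeme` places in the
# "Authorization: Basic <value>" request header.
printf 'prometheus:changeme' | base64
# → cHJvbWV0aGV1czpjaGFuZ2VtZQ==
```

You can compare this value against the header Prometheus or curl actually sends (for example, by running curl with -v).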
403 Forbidden
- If you're accessing the controller metrics endpoint with kube-rbac authentication mode:

  The Prometheus ServiceAccount is authenticated but not authorized. With the following command, check that the haproxy-unified-gateway-metrics-reader ClusterRole exists and that the ClusterRoleBinding references the correct ServiceAccount name and namespace:

  ```nix
  kubectl get clusterrolebinding haproxy-unified-gateway-metrics-reader -o yaml
  ```

  You can also test the permission directly with kubectl auth can-i get /metrics --as=system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT_NAME.
Connection refused
- Verify the controller pod is running and the controller metrics port is configured properly.

  ```nix
  kubectl get pod -n haproxy-unified-gateway <POD_NAME> -o yaml
  ```

  example output

  ```nix
  ...
  ports:
    - containerPort: 31080
      name: http
      protocol: TCP
    - containerPort: 31443
      name: https
      protocol: TCP
    - containerPort: 31060
      name: metrics
      protocol: TCP
    - containerPort: 31024
      name: stat
      protocol: TCP
  ```

- Verify the Service's controller metrics port configuration.

  ```nix
  kubectl get service -n haproxy-unified-gateway haproxy-unified-gateway -o yaml
  ```

  example output

  ```nix
  ...
  ports:
    - name: stat
      nodePort: 32375
      port: 31024
      protocol: TCP
      targetPort: 31024
    - name: metrics
      nodePort: 31060
      port: 31060
      protocol: TCP
      targetPort: 31060
  ```

- Check pod logs for startup errors.

  ```nix
  kubectl logs -n NAMESPACE deploy/haproxy-unified-gateway | grep -i metric
  ```
Controller metrics reference
The table below lists each controller metric with its name, metric type, and a description. Controller metrics specific to HAProxy Unified Gateway use the hug_ namespace prefix.
| Name | Type | Description |
|------|------|-------------|
| `certwatcher_read_certificate_errors_total` | counter | Total number of certificate read errors. |
| `certwatcher_read_certificate_total` | counter | Total number of certificate reads. |
| `controller_runtime_active_workers` | gauge | Number of currently used workers per controller. Label: `controller`. |
| `controller_runtime_conversion_webhook_panics_total` | counter | Total number of conversion webhook panics. |
| `controller_runtime_max_concurrent_reconciles` | gauge | Maximum number of concurrent reconciles per controller. Label: `controller`. |
| `controller_runtime_reconcile_errors_total` | counter | Total number of reconciliation errors per controller. Label: `controller`. |
| `controller_runtime_reconcile_panics_total` | counter | Total number of reconciliation panics per controller. Label: `controller`. |
| `controller_runtime_reconcile_time_seconds` | histogram | Length of time per reconciliation per controller. Time series: `_bucket`, `_sum`, `_count`. |
| `controller_runtime_reconcile_timeouts_total` | counter | Total number of reconciliation timeouts per controller. Label: `controller`. |
| `controller_runtime_reconcile_total` | counter | Total number of reconciliations per controller. Labels: `controller`, `result`. |
| `controller_runtime_terminal_reconcile_errors_total` | counter | Total number of terminal reconciliation errors per controller. Label: `controller`. |
| `controller_runtime_webhook_panics_total` | counter | Total number of webhook panics. |
| `go_cgo_go_to_c_calls_calls_total` | counter | Count of calls made from Go to C by the current process. Sourced from the Go `runtime/metrics` package. |
| `go_cpu_classes_gc_mark_assist_cpu_seconds_total` | counter | Estimated total CPU time goroutines spent performing GC tasks to assist the GC and prevent it from falling behind the application. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_gc_mark_dedicated_cpu_seconds_total` | counter | Estimated total CPU time spent performing GC tasks on processors (as defined by `GOMAXPROCS`) dedicated to those tasks. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_gc_mark_idle_cpu_seconds_total` | counter | Estimated total CPU time spent performing GC tasks on spare CPU resources that the Go scheduler couldn't otherwise find a use for. This should be subtracted from the total GC CPU time to obtain a measure of compulsory GC CPU time. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_gc_pause_cpu_seconds_total` | counter | Estimated total CPU time spent with the application paused by the GC. Even if only one thread is running during the pause, this is computed as `GOMAXPROCS` times the pause latency, because nothing else can be executing. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_gc_total_cpu_seconds_total` | counter | Estimated total CPU time spent performing GC tasks. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_idle_cpu_seconds_total` | counter | Estimated total available CPU time not spent executing any Go or Go runtime code. In other words, the part of `go_cpu_classes_total_cpu_seconds_total` that was unused. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_scavenge_assist_cpu_seconds_total` | counter | Estimated total CPU time spent returning unused memory to the underlying platform in response eagerly to memory pressure. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_scavenge_background_cpu_seconds_total` | counter | Estimated total CPU time spent performing background tasks to return unused memory to the underlying platform. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_scavenge_total_cpu_seconds_total` | counter | Estimated total CPU time spent performing tasks that return unused memory to the underlying platform. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_total_cpu_seconds_total` | counter | Estimated total available CPU time for user Go code or the Go runtime, as defined by `GOMAXPROCS`. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_cpu_classes_user_cpu_seconds_total` | counter | Estimated total CPU time spent running user Go code. This may also include some small amount of time spent in the Go runtime. This metric is an overestimate and not directly comparable to system CPU time measurements. Compare only with other `go_cpu_classes` metrics. |
| `go_gc_cleanups_executed_cleanups_total` | counter | Approximate total count of cleanup functions (created by `runtime.AddCleanup`) that the runtime has executed. |
| `go_gc_cleanups_queued_cleanups_total` | counter | Approximate total count of cleanup functions (created by `runtime.AddCleanup`) that the runtime has queued for execution. |
| `go_gc_cycles_automatic_gc_cycles_total` | counter | Count of completed GC cycles generated by the Go runtime. Sourced from the Go `runtime/metrics` package. |
| `go_gc_cycles_forced_gc_cycles_total` | counter | Count of completed GC cycles forced by the application. Sourced from the Go `runtime/metrics` package. |
| `go_gc_cycles_total_gc_cycles_total` | counter | Count of all completed GC cycles. Sourced from the Go `runtime/metrics` package. |
| `go_gc_duration_seconds` | summary | A summary of the wall-time pause (stop-the-world) duration in garbage collection cycles. Time series: quantiles (`quantile` label), `_sum`, `_count`. |
| `go_gc_finalizers_executed_finalizers_total` | counter | Total count of finalizer functions (created by `runtime.SetFinalizer`) that the runtime has executed. |
| `go_gc_finalizers_queued_finalizers_total` | counter | Total count of finalizer functions (created by `runtime.SetFinalizer`) that the runtime has queued for execution. |
| `go_gc_gogc_percent` | gauge | Heap size target percentage configured by the user, otherwise 100. This value is set by the `GOGC` environment variable and the `runtime/debug.SetGCPercent` function. |
| `go_gc_gomemlimit_bytes` | gauge | Go runtime memory limit configured by the user, otherwise `math.MaxInt64`. This value is set by the `GOMEMLIMIT` environment variable and the `runtime/debug.SetMemoryLimit` function. |
| `go_gc_heap_allocs_by_size_bytes` | histogram | Distribution of heap allocations by approximate size. Bucket counts increase monotonically. This doesn't include tiny objects as defined by `go_gc_heap_tiny_allocs_objects_total`, only tiny blocks. Time series: `_bucket`, `_sum`, `_count`. |
| `go_gc_heap_allocs_bytes_total` | counter | Cumulative sum of memory allocated to the heap by the application. Sourced from the Go `runtime/metrics` package. |
| `go_gc_heap_allocs_objects_total` | counter | Cumulative count of heap allocations triggered by the application; this doesn't include tiny objects as defined by `go_gc_heap_tiny_allocs_objects_total`, only tiny blocks. |
| `go_gc_heap_frees_by_size_bytes` | histogram | Distribution of freed heap allocations by approximate size. Bucket counts increase monotonically. This doesn't include tiny objects as defined by `go_gc_heap_tiny_allocs_objects_total`, only tiny blocks. Time series: `_bucket`, `_sum`, `_count`. |
| `go_gc_heap_frees_bytes_total` | counter | Cumulative sum of heap memory freed by the garbage collector. Sourced from the Go `runtime/metrics` package. |
| `go_gc_heap_frees_objects_total` | counter | Cumulative count of heap allocations whose storage was freed by the garbage collector. This doesn't include tiny objects as defined by `go_gc_heap_tiny_allocs_objects_total`, only tiny blocks. |
| `go_gc_heap_goal_bytes` | gauge | Heap size target for the end of the GC cycle. Sourced from the Go `runtime/metrics` package. |
| `go_gc_heap_live_bytes` | gauge | Heap memory occupied by live objects that were marked by the previous GC. Sourced from the Go `runtime/metrics` package. |
| `go_gc_heap_objects_objects` | gauge | Number of objects, live or unswept, occupying heap memory. Sourced from the Go `runtime/metrics` package. |
| `go_gc_heap_tiny_allocs_objects_total` | counter | Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation isn't tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. Sourced from the Go `runtime/metrics` package. |
| `go_gc_limiter_last_enabled_gc_cycle` | gauge | GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of `runtime/debug.SetMemoryLimit`. |
| `go_gc_pauses_seconds` | histogram | Deprecated. Prefer the identical `go_sched_pauses_total_gc_seconds`. Time series: `_bucket`, `_sum`, `_count`. |
| `go_gc_scan_globals_bytes` | gauge | The total amount of global variable space that is scannable. Sourced from the Go `runtime/metrics` package. |
| `go_gc_scan_heap_bytes` | gauge | The total amount of heap space that is scannable. Sourced from the Go `runtime/metrics` package. |
| `go_gc_scan_stack_bytes` | gauge | The number of bytes of stack that were scanned last GC cycle. Sourced from the Go `runtime/metrics` package. |
| `go_gc_scan_total_bytes` | gauge | The total amount of space that is scannable. Sum of all `go_gc_scan_*` metrics. |
| `go_gc_stack_starting_size_bytes` | gauge | The stack size of new goroutines. Sourced from the Go `runtime/metrics` package. |
| `go_godebug_non_default_behavior_allowmultiplevcs_events_total` | counter | The number of non-default behaviors executed by the cmd/go package due to a non-default `GODEBUG=allowmultiplevcs=...` setting. |
| `go_godebug_non_default_behavior_asynctimerchan_events_total` | counter | The number of non-default behaviors executed by the time package due to a non-default `GODEBUG=asynctimerchan=...` setting. |
| `go_godebug_non_default_behavior_containermaxprocs_events_total` | counter | The number of non-default behaviors executed by the runtime package due to a non-default `GODEBUG=containermaxprocs=...` setting. |
| `go_godebug_non_default_behavior_cryptocustomrand_events_total` | counter | The number of non-default behaviors executed by the crypto package due to a non-default `GODEBUG=cryptocustomrand=...` setting. |
| `go_godebug_non_default_behavior_embedfollowsymlinks_events_total` | counter | The number of non-default behaviors executed by the cmd/go package due to a non-default `GODEBUG=embedfollowsymlinks=...` setting. |
| `go_godebug_non_default_behavior_execerrdot_events_total` | counter | The number of non-default behaviors executed by the os/exec package due to a non-default `GODEBUG=execerrdot=...` setting. |
| `go_godebug_non_default_behavior_gocachehash_events_total` | counter | The number of non-default behaviors executed by the cmd/go package due to a non-default `GODEBUG=gocachehash=...` setting. |
| `go_godebug_non_default_behavior_gocachetest_events_total` | counter | The number of non-default behaviors executed by the cmd/go package due to a non-default `GODEBUG=gocachetest=...` setting. |
| `go_godebug_non_default_behavior_gocacheverify_events_total` | counter | The number of non-default behaviors executed by the cmd/go package due to a non-default `GODEBUG=gocacheverify=...` setting. |
| `go_godebug_non_default_behavior_gotestjsonbuildtext_events_total` | counter | The number of non-default behaviors executed by the cmd/go package due to a non-default `GODEBUG=gotestjsonbuildtext=...` setting. |
| `go_godebug_non_default_behavior_gotypesalias_events_total` | counter | The number of non-default behaviors executed by the go/types package due to a non-default `GODEBUG=gotypesalias=...` setting. |
| `go_godebug_non_default_behavior_htmlmetacontenturlescape_events_total` | counter | The number of non-default behaviors executed by the html/template package due to a non-default `GODEBUG=htmlmetacontenturlescape=...` setting. |
| `go_godebug_non_default_behavior_http2client_events_total` | counter | The number of non-default behaviors executed by the net/http package due to a non-default `GODEBUG=http2client=...` setting. |
| `go_godebug_non_default_behavior_http2server_events_total` | counter | The number of non-default behaviors executed by the net/http package due to a non-default `GODEBUG=http2server=...` setting. |
| `go_godebug_non_default_behavior_httpcookiemaxnum_events_total` | counter | The number of non-default behaviors executed by the net/http package due to a non-default `GODEBUG=httpcookiemaxnum=...` setting. |
| `go_godebug_non_default_behavior_httplaxcontentlength_events_total` | counter | The number of non-default behaviors executed by the net/http package due to a non-default `GODEBUG=httplaxcontentlength=...` setting. |
| `go_godebug_non_default_behavior_httpmuxgo121_events_total` | counter | The number of non-default behaviors executed by the net/http package due to a non-default `GODEBUG=httpmuxgo121=...` setting. |
| `go_godebug_non_default_behavior_httpservecontentkeepheaders_events_total` | counter | The number of non-default behaviors executed by the net/http package due to a non-default `GODEBUG=httpservecontentkeepheaders=...` setting. |
| `go_godebug_non_default_behavior_installgoroot_events_total` | counter | The number of non-default behaviors executed by the go/build package due to a non-default `GODEBUG=installgoroot=...` setting. |
| `go_godebug_non_default_behavior_multipartmaxheaders_events_total` | counter | The number of non-default behaviors executed by the mime/multipart package due to a non-default `GODEBUG=multipartmaxheaders=...` setting. |
| `go_godebug_non_default_behavior_multipartmaxparts_events_total` | counter | The number of non-default behaviors executed by the mime/multipart package due to a non-default `GODEBUG=multipartmaxparts=...` setting. |
| `go_godebug_non_default_behavior_multipathtcp_events_total` | counter | The number of non-default behaviors executed by the net package due to a non-default `GODEBUG=multipathtcp=...` setting. |
| `go_godebug_non_default_behavior_netedns0_events_total` | counter | The number of non-default behaviors executed by the net package due to a non-default `GODEBUG=netedns0=...` setting. |
| `go_godebug_non_default_behavior_panicnil_events_total` | counter | The number of non-default behaviors executed by the runtime package due to a non-default `GODEBUG=panicnil=...` setting. |
| `go_godebug_non_default_behavior_randautoseed_events_total` | counter | The number of non-default behaviors executed by the math/rand package due to a non-default `GODEBUG=randautoseed=...` setting. |
| `go_godebug_non_default_behavior_randseednop_events_total` | counter | The number of non-default behaviors executed by the math/rand package due to a non-default `GODEBUG=randseednop=...` setting. |
| `go_godebug_non_default_behavior_rsa1024min_events_total` | counter | The number of non-default behaviors executed by the crypto/rsa package due to a non-default `GODEBUG=rsa1024min=...` setting. |
| `go_godebug_non_default_behavior_tarinsecurepath_events_total` | counter | The number of non-default behaviors executed by the archive/tar package due to a non-default `GODEBUG=tarinsecurepath=...` setting. |
| `go_godebug_non_default_behavior_tls10server_events_total` | counter | The number of non-default behaviors executed by the crypto/tls package due to a non-default `GODEBUG=tls10server=...` setting. |
| `go_godebug_non_default_behavior_tls3des_events_total` | counter | The number of non-default behaviors executed by the crypto/tls package due to a non-default `GODEBUG=tls3des=...` setting. |
| `go_godebug_non_default_behavior_tlsmaxrsasize_events_total` | counter | The number of non-default behaviors executed by the crypto/tls package due to a non-default `GODEBUG=tlsmaxrsasize=...` setting. |
| `go_godebug_non_default_behavior_tlsrsakex_events_total` | counter | The number of non-default behaviors executed by the crypto/tls package due to a non-default `GODEBUG=tlsrsakex=...` setting. |
| `go_godebug_non_default_behavior_tlssha1_events_total` | counter | The number of non-default behaviors executed by the crypto/tls package due to a non-default `GODEBUG=tlssha1=...` setting. |
| `go_godebug_non_default_behavior_tlsunsafeekm_events_total` | counter | The number of non-default behaviors executed by the crypto/tls package due to a non-default `GODEBUG=tlsunsafeekm=...` setting. |
| `go_godebug_non_default_behavior_updatemaxprocs_events_total` | counter | The number of non-default behaviors executed by the runtime package due to a non-default `GODEBUG=updatemaxprocs=...` setting. |
| `go_godebug_non_default_behavior_urlmaxqueryparams_events_total` | counter | The number of non-default behaviors executed by the net/url package due to a non-default `GODEBUG=urlmaxqueryparams=...` setting. |
| `go_godebug_non_default_behavior_urlstrictcolons_events_total` | counter | The number of non-default behaviors executed by the net/url package due to a non-default `GODEBUG=urlstrictcolons=...` setting. |
| `go_godebug_non_default_behavior_winreadlinkvolume_events_total` | counter | The number of non-default behaviors executed by the os package due to a non-default `GODEBUG=winreadlinkvolume=...` setting. |
| `go_godebug_non_default_behavior_winsymlink_events_total` | counter | The number of non-default behaviors executed by the os package due to a non-default `GODEBUG=winsymlink=...` setting. |
| `go_godebug_non_default_behavior_x509keypairleaf_events_total` | counter | The number of non-default behaviors executed by the crypto/tls package due to a non-default `GODEBUG=x509keypairleaf=...` setting. |
| `go_godebug_non_default_behavior_x509negativeserial_events_total` | counter | The number of non-default behaviors executed by the crypto/x509 package due to a non-default `GODEBUG=x509negativeserial=...` setting. |
|
go_godebug_non_default_behavior_x509rsacrt_events_total counter
The number of non-default behaviors executed by the crypto/x509 package due to a non-default |
|
go_godebug_non_default_behavior_x509sha256skid_events_total counter
The number of non-default behaviors executed by the crypto/x509 package due to a non-default |
|
go_godebug_non_default_behavior_x509usefallbackroots_events_total counter
The number of non-default behaviors executed by the crypto/x509 package due to a non-default |
|
go_godebug_non_default_behavior_x509usepolicies_events_total counter
The number of non-default behaviors executed by the crypto/x509 package due to a non-default |
|
go_godebug_non_default_behavior_zipinsecurepath_events_total counter
The number of non-default behaviors executed by the archive/zip package due to a non-default |
|
go_goroutines gauge
Number of goroutines that currently exist.

go_info gauge
Information about the Go environment. Label: version.
go_memory_classes_heap_free_bytes gauge
Memory that is completely free and eligible to be returned to the underlying system, but hasn’t been yet. This metric is the runtime’s estimate of free address space that is backed by physical memory. Sourced from /memory/classes/heap/free:bytes.

go_memory_classes_heap_objects_bytes gauge
Memory occupied by live objects and dead objects that haven’t yet been marked free by the garbage collector. Sourced from /memory/classes/heap/objects:bytes.

go_memory_classes_heap_released_bytes gauge
Memory that is completely free and has been returned to the underlying system. This metric is the runtime’s estimate of free address space that is still mapped into the process, but isn’t backed by physical memory. Sourced from /memory/classes/heap/released:bytes.

go_memory_classes_heap_stacks_bytes gauge
Memory allocated from the heap that is reserved for stack space, whether or not it is currently in-use. Currently, this represents all stack memory for goroutines. It also includes all OS thread stacks in non-cgo programs. Note that stacks may be allocated differently in the future, and this may change. Sourced from /memory/classes/heap/stacks:bytes.

go_memory_classes_heap_unused_bytes gauge
Memory that is reserved for heap objects but isn’t currently used to hold heap objects. Sourced from /memory/classes/heap/unused:bytes.

go_memory_classes_metadata_mcache_free_bytes gauge
Memory that is reserved for runtime mcache structures but not in-use. Sourced from /memory/classes/metadata/mcache/free:bytes.

go_memory_classes_metadata_mcache_inuse_bytes gauge
Memory that is occupied by runtime mcache structures that are currently being used. Sourced from /memory/classes/metadata/mcache/inuse:bytes.

go_memory_classes_metadata_mspan_free_bytes gauge
Memory that is reserved for runtime mspan structures, but not in-use. Sourced from /memory/classes/metadata/mspan/free:bytes.

go_memory_classes_metadata_mspan_inuse_bytes gauge
Memory that is occupied by runtime mspan structures that are currently being used. Sourced from /memory/classes/metadata/mspan/inuse:bytes.

go_memory_classes_metadata_other_bytes gauge
Memory that is reserved for or used to hold runtime metadata. Sourced from /memory/classes/metadata/other:bytes.

go_memory_classes_os_stacks_bytes gauge
Stack memory allocated by the underlying operating system. In non-cgo programs this metric is currently zero. This may change in the future. In cgo programs this metric includes OS thread stacks allocated directly from the OS. Currently, this only accounts for one stack in c-shared and c-archive build modes, and other sources of stacks from the OS aren’t measured. This too may change in the future. Sourced from /memory/classes/os-stacks:bytes.

go_memory_classes_other_bytes gauge
Memory used by execution trace buffers, structures for debugging the runtime, finalizer and profiler specials, and more. Sourced from /memory/classes/other:bytes.

go_memory_classes_profiling_buckets_bytes gauge
Memory that is used by the stack trace hash map used for profiling. Sourced from /memory/classes/profiling/buckets:bytes.

go_memory_classes_total_bytes gauge
All memory mapped by the Go runtime into the current process as read-write. This doesn’t include memory mapped by code called via cgo or via the syscall package. Sum of all metrics in /memory/classes.
go_memstats_alloc_bytes gauge
Number of bytes allocated in heap and currently in use. Equals to /memory/classes/heap/objects:bytes.

go_memstats_alloc_bytes_total counter
Total number of bytes allocated in heap until now, even if released already. Equals to /gc/heap/allocs:bytes.

go_memstats_buck_hash_sys_bytes gauge
Number of bytes used by the profiling bucket hash table. Equals to /memory/classes/profiling/buckets:bytes.

go_memstats_frees_total counter
Total number of heap objects frees. Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.

go_memstats_gc_sys_bytes gauge
Number of bytes used for garbage collection system metadata. Equals to /memory/classes/metadata/other:bytes.

go_memstats_heap_alloc_bytes gauge
Number of heap bytes allocated and currently in use, same as go_memstats_alloc_bytes. Equals to /memory/classes/heap/objects:bytes.

go_memstats_heap_idle_bytes gauge
Number of heap bytes waiting to be used. Equals to /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.

go_memstats_heap_inuse_bytes gauge
Number of heap bytes that are in use. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes.

go_memstats_heap_objects gauge
Number of currently allocated objects. Equals to /gc/heap/objects:objects.

go_memstats_heap_released_bytes gauge
Number of heap bytes released to OS. Equals to /memory/classes/heap/released:bytes.

go_memstats_heap_sys_bytes gauge
Number of heap bytes obtained from system. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.

go_memstats_last_gc_time_seconds gauge
Number of seconds since 1970 of last garbage collection.

go_memstats_mallocs_total counter
Total number of heap objects allocated, both live and gc-ed. Semantically a counter version for go_memstats_heap_objects. Equals to /gc/heap/allocs:objects + /gc/heap/tiny/allocs:objects.

go_memstats_mcache_inuse_bytes gauge
Number of bytes in use by mcache structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.

go_memstats_mcache_sys_bytes gauge
Number of bytes used for mcache structures obtained from system. Equals to /memory/classes/metadata/mcache/inuse:bytes + /memory/classes/metadata/mcache/free:bytes.

go_memstats_mspan_inuse_bytes gauge
Number of bytes in use by mspan structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.

go_memstats_mspan_sys_bytes gauge
Number of bytes used for mspan structures obtained from system. Equals to /memory/classes/metadata/mspan/inuse:bytes + /memory/classes/metadata/mspan/free:bytes.

go_memstats_next_gc_bytes gauge
Number of heap bytes when next garbage collection will take place. Equals to /gc/heap/goal:bytes.

go_memstats_other_sys_bytes gauge
Number of bytes used for other system allocations. Equals to /memory/classes/other:bytes.

go_memstats_stack_inuse_bytes gauge
Number of bytes obtained from system for stack allocator in non-CGO environments. Equals to /memory/classes/heap/stacks:bytes.

go_memstats_stack_sys_bytes gauge
Number of bytes obtained from system for stack allocator. Equals to /memory/classes/heap/stacks:bytes + /memory/classes/os-stacks:bytes.

go_memstats_sys_bytes gauge
Number of bytes obtained from system. Equals to /memory/classes/total:bytes.
go_sched_gomaxprocs_threads gauge
The current runtime.GOMAXPROCS setting, or the number of operating system threads that can execute user-level Go code simultaneously. Sourced from /sched/gomaxprocs:threads.

go_sched_goroutines_created_goroutines_total counter
Count of goroutines created since program start.

go_sched_goroutines_goroutines gauge
Count of live goroutines. Sourced from /sched/goroutines:goroutines.

go_sched_goroutines_not_in_go_goroutines gauge
Approximate count of goroutines running or blocked in a system call or cgo call. Not guaranteed to add up, together with the other per-state counts, to go_sched_goroutines_goroutines.

go_sched_goroutines_runnable_goroutines gauge
Approximate count of goroutines ready to execute, but not executing. Not guaranteed to add up, together with the other per-state counts, to go_sched_goroutines_goroutines.

go_sched_goroutines_running_goroutines gauge
Approximate count of goroutines executing. Always less than or equal to go_sched_gomaxprocs_threads.

go_sched_goroutines_waiting_goroutines gauge
Approximate count of goroutines waiting on a resource (I/O or sync primitives). Not guaranteed to add up, together with the other per-state counts, to go_sched_goroutines_goroutines.

go_sched_latencies_seconds histogram
Distribution of the time goroutines have spent in the scheduler in a runnable state before actually running. Bucket counts increase monotonically. Sourced from /sched/latencies:seconds.

go_sched_pauses_stopping_gc_seconds histogram
Distribution of individual GC-related stop-the-world stopping latencies. This is the time it takes from deciding to stop the world until all Ps are stopped. This is a subset of the total GC-related stop-the-world time (go_sched_pauses_total_gc_seconds).

go_sched_pauses_stopping_other_seconds histogram
Distribution of individual non-GC-related stop-the-world stopping latencies. This is the time it takes from deciding to stop the world until all Ps are stopped. This is a subset of the total non-GC-related stop-the-world time (go_sched_pauses_total_other_seconds).

go_sched_pauses_total_gc_seconds histogram
Distribution of individual GC-related stop-the-world pause latencies. This is the time from deciding to stop the world until the world is started again. Some of this time is spent getting all threads to stop (this is measured directly in go_sched_pauses_stopping_gc_seconds), during which some threads may still be running.

go_sched_pauses_total_other_seconds histogram
Distribution of individual non-GC-related stop-the-world pause latencies. This is the time from deciding to stop the world until the world is started again. Some of this time is spent getting all threads to stop (measured directly in go_sched_pauses_stopping_other_seconds), during which some threads may still be running.

go_sched_threads_total_threads gauge
The current count of live threads that are owned by the Go runtime.

go_sync_mutex_wait_total_seconds_total counter
Approximate cumulative time goroutines have spent blocked on a sync.Mutex, sync.RWMutex, or runtime-internal lock. Sourced from /sync/mutex/wait/total:seconds.

go_threads gauge
Number of OS threads created.
hug_config_diffs_total counter
Total number of HAProxy configuration diffs by operation and resource type.

hug_config_generation_duration_seconds histogram
Duration of HAProxy configuration diff computation in seconds.

hug_config_transfer_duration_seconds histogram
Duration of HAProxy configuration transfer and application in seconds.

hug_event_batch_duration_seconds histogram
Duration of event batch processing in seconds.

hug_event_batch_errors_total counter
Total number of event batch processing errors.

hug_event_batch_size histogram
Number of events in a processed batch.

hug_event_batch_total counter
Total number of event batches processed.

hug_events_processed_total counter
Total number of Kubernetes resource events processed by event type.

hug_haproxy_reload_total counter
Total number of HAProxy configuration reloads.
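The endpoint serves every metric family in a single plain-text payload, so a quick way to isolate the controller-specific series above is to filter on the `hug_` prefix. A minimal sketch: in practice you would pipe `curl -s http://<host>:31060/metrics` (the default port noted earlier) into `grep`; the captured sample payload and its values here are purely illustrative so the pipeline is self-contained.

```shell
# Stand-in for the live scrape output; replace with:
#   curl -s http://<host>:31060/metrics
sample='# HELP hug_haproxy_reload_total Total number of HAProxy configuration reloads.
# TYPE hug_haproxy_reload_total counter
hug_haproxy_reload_total 3
# TYPE go_goroutines gauge
go_goroutines 42'

# Keep only the controller-specific (hug_-prefixed) sample lines.
printf '%s\n' "$sample" | grep '^hug_'
```

With the sample above, only the `hug_haproxy_reload_total 3` line survives the filter; HELP/TYPE comment lines and Go runtime series are dropped.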
process_cpu_seconds_total counter
Total user and system CPU time spent in seconds.

process_max_fds gauge
Maximum number of open file descriptors.

process_network_receive_bytes_total counter
Number of bytes received by the process over the network.

process_network_transmit_bytes_total counter
Number of bytes sent by the process over the network.

process_open_fds gauge
Number of open file descriptors.

process_resident_memory_bytes gauge
Resident memory size in bytes.

process_start_time_seconds gauge
Start time of the process since unix epoch in seconds.

process_virtual_memory_bytes gauge
Virtual memory size in bytes.

process_virtual_memory_max_bytes gauge
Maximum amount of virtual memory available in bytes.
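The process_open_fds and process_max_fds pair is useful for catching file descriptor exhaustion before it causes failures. A sketch of a Prometheus alerting rule built on that ratio; the group name, alert name, 80% threshold, and five-minute duration are illustrative choices, not product defaults:

```yaml
groups:
  - name: hug-controller  # hypothetical rule group name
    rules:
      - alert: ControllerFdsNearLimit
        # Fire when the controller has held more than 80% of its
        # file descriptor budget for five minutes.
        expr: process_open_fds / process_max_fds > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Controller file descriptors above 80% of the limit"
```

Alerting on the ratio rather than the raw process_open_fds count keeps the rule valid across environments with different ulimit settings.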
rest_client_requests_total counter
Number of HTTP requests, partitioned by status code, method, and host. Labels: code, method, host.

workqueue_adds_total counter
Total number of adds handled by workqueue.

workqueue_depth gauge
Current depth of workqueue by workqueue and priority.

workqueue_longest_running_processor_seconds gauge
How many seconds the longest running processor for workqueue has been running.

workqueue_queue_duration_seconds histogram
How long in seconds an item stays in workqueue before being requested.

workqueue_retries_total counter
Total number of retries handled by workqueue.

workqueue_unfinished_work_seconds gauge
How many seconds of work has been done that is in progress and hasn’t been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.

workqueue_work_duration_seconds histogram
How long in seconds processing an item from workqueue takes.
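To collect these metrics continuously rather than via ad-hoc curl requests, point a Prometheus scrape job at the controller endpoint. A minimal static-config sketch; the job name and target address are placeholders for your environment, and 31060 is the default controller metrics port noted at the top of this guide:

```yaml
scrape_configs:
  - job_name: hug-controller  # hypothetical job name
    metrics_path: /metrics
    static_configs:
      # Replace with the reachable address of your controller.
      - targets: ["controller.example.internal:31060"]
```

If you changed the controller port during installation, the target port here must match it.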
See also
- See the Prometheus documentation on metric types.
- Reference the Kubernetes documentation about the kubectl create secret generic command.
- Reference the Kubernetes documentation on using RBAC authorization for more information.