
Get "http://localhost:8080/version?timeout=32s": dial tcp connect: connection refused

Using helm with k3s can cause this error, because helm looks for a kubeconfig in the default location while k3s writes its own. It can be solved with:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
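To make this survive new shells, the export can be added to your shell profile. A minimal sketch, assuming bash (note that on a default k3s install the kubeconfig is only readable by root, so you may need sudo or the --write-kubeconfig-mode install option):

```shell
# Point helm/kubectl at the kubeconfig k3s writes by default.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Persist the setting for future bash sessions.
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc

# helm should now reach the k3s API server instead of localhost:8080, e.g.:
# helm list --all-namespaces
```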

The connection to the server localhost:8080 was refused - did you specify the right host or port?

See previous question

cat somefile.yaml | envsubst | kubectl apply -f -

The most basic method is to create the file first and then apply it with kubectl apply -f somefile.yaml. The command above instead pipes the file through envsubst, which replaces variable placeholders in the yaml with the values of the environment variables we set, before kubectl applies the result.

No such file or directory

Check that you are in the manifests directory (or that you picked the correct subfolder).

Node internal interface

Run the command below to list your interfaces. Look for the one with an address belonging to your private network.

ip a s | grep -i "UP\|inet"
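A more compact alternative is iproute2's brief output format; the grep pattern below is just an illustration of matching the RFC 1918 private ranges:

```shell
# One line per interface: name, state, addresses.
ip -brief addr show

# Keep only lines containing a private (RFC 1918) address;
# "|| true" keeps the exit status clean when nothing matches.
ip -brief addr show | grep -E '10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.' || true
```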

Reset node

To reset a node, run the uninstall script and reboot.

From the Rancher docs, to uninstall K3s from a server node, run:

/usr/local/bin/k3s-uninstall.sh
To uninstall K3s from an agent node, run:

/usr/local/bin/k3s-agent-uninstall.sh
X-Forwarded-For and X-Real-Ip (proxy protocol)

It can be important to know which ip address a request originated from; analytics and rate limiting are two common cases. The proxy protocol is the mechanism that preserves this information across proxies and load balancers.

If we do not enable this in the traefik config, requests sent with the proxy protocol will fail, so we have to activate it in the traefik service. If you use a load balancer, it is also important to activate this setting in the load balancer itself, so the original client ip is passed downstream.

We set deployment.kind to DaemonSet, hostNetwork: true, and web.proxyProtocol.insecure for testing. If you are using a load balancer, it is highly recommended to use proxyProtocol.trustedIPs instead, set to your load balancer's ip.

The DaemonSet and hostNetwork: true settings make sure there is a traefik pod running on every node, so any incoming packet is forwarded by traefik with the proxy protocol. This only matters if you expect incoming traffic on all nodes; alternatively, you could run traefik only on certain nodes and point the load balancer at those nodes only.
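With a load balancer in front, the hardened variant of the entrypoint flags might look like the fragment below (10.0.0.5 is a placeholder; substitute your load balancer's address):

```yaml
# Hypothetical values fragment; replace 10.0.0.5 with your load balancer's ip.
additionalArguments:
  - "--entryPoints.web.proxyProtocol.trustedIPs=10.0.0.5/32"
  - "--entryPoints.websecure.proxyProtocol.trustedIPs=10.0.0.5/32"
```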

For a single node deployment with no external load balancer, it should be sufficient to add/uncomment the following to the traefik-config.yml:

```yaml
  valuesContent: |-
    service:
      spec:
        externalTrafficPolicy: Local
```

The complete traefik-config.yml, with the optional settings left commented out:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    logs:
      general:
        level: INFO
      access:
        enabled: true
    #service:
    #  spec:
    #    externalTrafficPolicy: Local
    #deployment:
    #  enabled: true
    #  kind: DaemonSet
    #hostNetwork: true
    #updateStrategy:
    #  type: RollingUpdate
    #  rollingUpdate:
    #    maxUnavailable: 2
    #    maxSurge:
    #additionalArguments:
    #  - "--entryPoints.web.proxyProtocol.insecure"
    #  - "--entryPoints.websecure.proxyProtocol.insecure"
    #  - "--entryPoints.web.proxyProtocol.trustedIPs="

# See for more examples
# The deployment.kind=DaemonSet and hostNetwork=true settings are there to get the real ip
# and x-forwarded-for, and can be omitted if this is not needed.

# The updateStrategy settings are required for the latest traefik helm version when using hostNetwork,
# see more here:
# but this version is not yet supported by k3s, so leaving it commented out for now.
# The config above has been tested to work with latest stable k3s (v1.25.4+k3s1).
```