F.A.Q.

Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

Using Helm with k3s can cause this error. It can be solved with

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
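
To make this setting survive new shell sessions, one option is to append it to your shell profile (assuming bash here):

echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc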

The connection to the server localhost:8080 was refused - did you specify the right host or port?

See the previous question.

cat somefile.yaml | envsubst | kubectl apply -f -

The most basic method is to first create the file and then apply it with kubectl apply -f somefile.yaml. In this example, however, we use the command above to substitute variables in the YAML file with the contents of the environment variables we have set.
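
As a minimal sketch (the variable name and manifest content are made up for illustration), somefile.yaml could contain a placeholder:

apiVersion: v1
kind: Namespace
metadata:
  name: ${APP_NAMESPACE}

which envsubst fills in from the environment before kubectl sees it:

export APP_NAMESPACE=my-namespace
cat somefile.yaml | envsubst | kubectl apply -f -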

No such file or directory

Make sure you are in the manifests directory (or have picked the correct subfolder).
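
A quick sanity check from the shell:

pwd
ls *.yaml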

Node internal interface

Run the command below to list your interfaces. Look for the one with an address belonging to your private network.

ip a s | grep -i "UP\|inet"
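
The output will look something like this (interface names and addresses below are only illustrative). The address in a private range, for example 10.x.x.x or 192.168.x.x, belongs to the internal interface:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 203.0.113.10/24 ...
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 10.0.0.5/24 ...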

Reset node

To reset a node, run the uninstall script and reboot.

From the Rancher docs: To uninstall K3s from a server node, run:

/usr/local/bin/k3s-uninstall.sh

To uninstall K3s from an agent node, run:

/usr/local/bin/k3s-agent-uninstall.sh

X-Forwarded-For and X-Real-Ip (proxy protocol)

It can be important to know which IP address a request originated from; analytics and rate limiting are two common cases. The proxy protocol is how this information is preserved when requests pass through a proxy or load balancer.

If we do not enable this in the Traefik config, we will see errors when we try to send requests, so we have to activate it in the Traefik service. If you use a load balancer, it is also important to activate this setting in the load balancer itself, so you can benefit from it downstream.

We set deployment.kind to DaemonSet, hostNetwork: true and web.proxyProtocol.insecure for testing. If you are using a load balancer, it is highly recommended to use proxyProtocol.trustedIPs instead, set to your load balancer's IP. See https://doc.traefik.io/traefik/routing/entrypoints/#proxyprotocol

The DaemonSet and hostNetwork: true settings make sure there is a Traefik pod running on every node, meaning any incoming packet will be forwarded by Traefik with the proxy protocol. This is only important if you expect incoming traffic on all nodes; alternatively, you could run Traefik only on certain nodes and point the load balancer at those nodes only.

For a single-node deployment with no external load balancer, it should be sufficient to add/uncomment the following in traefik-config.yaml:

spec:
  valuesContent: |-
    service:
      spec:
        externalTrafficPolicy: Local
traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    logs:
      level: INFO
      access:
        enabled: true
    #service:
    #  spec:
    #    externalTrafficPolicy: Local

    #dashboard:
    #  enabled: true

    #deployment:
    #  kind: DaemonSet
    #hostNetwork: true

    #updateStrategy:
    #  type: RollingUpdate
    #  rollingUpdate:
    #    maxUnavailable: 2
    #    maxSurge:

    #additionalArguments:
    #  - "--entryPoints.web.proxyProtocol.insecure"
    #  - "--entryPoints.websecure.proxyProtocol.insecure"
    #  - "--entryPoints.web.proxyProtocol.trustedIPs=123.123.123.123"

# See https://github.com/traefik/traefik-helm-chart/blob/master/traefik/values.yaml for more examples
# The deployment.kind=DaemonSet and hostNetwork=true settings are there to get the real IP and X-Forwarded-For headers,
# and can be omitted if this is not needed.

# The updateStrategy settings are required for the latest traefik helm version when using hostNetwork.
# see more here: https://github.com/traefik/traefik-helm-chart/blob/v20.8.0/traefik/templates/daemonset.yaml#L12-L14
# but this version is not yet supported by k3s, so it is left commented out for now.
# The config above has been tested to work with latest stable k3s (v1.25.4+k3s1).
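
To check that the real client IP actually reaches your workloads after reconfiguring Traefik, one option is to expose a small echo service and look at the headers it reports. The sketch below is only an example: the traefik/whoami image echoes request headers back in the response body, and the hostname whoami.example.com and file name whoami-test.yaml are placeholders you would adjust to your own cluster.

whoami-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami   # echoes the request headers back in the response body
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  rules:
    - host: whoami.example.com   # placeholder, use a hostname that resolves to your node or load balancer
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80

Apply it and send a request through the ingress:

kubectl apply -f whoami-test.yaml
curl -s http://whoami.example.com/ | grep -i "X-Forwarded-For\|X-Real-Ip"

If the proxy protocol is working, these headers should show the original client address instead of an internal cluster IP.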