Declaratively deploy your Kubernetes manifests, Kustomize configs, and Charts as Helm releases in one shot

  • Last update: Jan 4, 2023
  • Comments: 14

Helmfile


Deploy Kubernetes Helm Charts

About

Helmfile is a declarative spec for deploying helm charts. It lets you...

  • Keep a directory of chart value files and maintain changes in version control.
  • Apply CI/CD to configuration changes.
  • Periodically sync to avoid skew in environments.

To avoid requiring an upgrade for each iteration of helm, the helmfile executable delegates to helm; as a result, helm must be installed.
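Because helmfile only delegates to helm, you can also point it at a specific helm executable via the top-level helmBinary key in helmfile.yaml (the path below is illustrative):

```yaml
# helmfile.yaml
# Which helm executable helmfile delegates to (defaults to `helm` on PATH).
helmBinary: /usr/local/bin/helm
```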

Highlights

Declarative: Write, version-control, and apply the desired state file for visibility and reproducibility.

Modules: Modularize common patterns of your infrastructure and distribute them via Git, S3, etc. to be reused across the entire company (See #648)

Versatility: Manage a cluster consisting of charts, kustomizations, and directories of Kubernetes resources, turning everything into Helm releases (See #673)
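As a sketch of this versatility (the local paths are illustrative and assumed to exist; see #673 for details), a single helmfile.yaml can mix all three source types:

```yaml
releases:
  # an ordinary chart from a Helm repository
  - name: prometheus
    namespace: monitoring
    chart: prometheus-community/prometheus
  # a local kustomization, rendered and managed as a Helm release
  - name: my-kustomized-app
    chart: ./kustomizations/my-app    # directory containing kustomization.yaml
  # a plain directory of Kubernetes manifests
  - name: raw-manifests
    chart: ./manifests/my-app
```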

Patch: JSON/Strategic-Merge Patch Kubernetes resources before helm-installing, without forking upstream charts (See #673)
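A hedged sketch of the patching feature (field names follow the release spec introduced around #673; the target Service name is hypothetical):

```yaml
releases:
  - name: prometheus
    namespace: monitoring
    chart: prometheus-community/prometheus
    # partial manifests merged into the rendered output before installation,
    # matched by apiVersion/kind/metadata.name
    strategicMergePatches:
      - apiVersion: v1
        kind: Service
        metadata:
          name: prometheus-server
        spec:
          type: NodePort
```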

Status

March 2022 Update - The helmfile project has been moved to helmfile/helmfile from the former home roboll/helmfile. Please see roboll/helmfile#1824 for more information.

Even though Helmfile is used in production environments across multiple organizations, it is still in its early stage of development, hence versioned 0.x.

Helmfile complies with Semantic Versioning 2.0.0, in which v0.x means that there could be backward-incompatible changes in every release.

Note that we will try our best to document any backward incompatibility. In practice, helmfile has had no breaking changes for a year or so.

Installation

  • download one of the releases
  • run as a container
  • Archlinux: install via pacman -S helmfile
  • openSUSE: install via zypper in helmfile assuming you are on Tumbleweed; if you are on Leap you must add the kubic repo for your distribution version once before that command, e.g. zypper ar https://download.opensuse.org/repositories/devel:/kubic/openSUSE_Leap_\$releasever kubic
  • Windows (using scoop): scoop install helmfile
  • macOS (using homebrew): brew install helmfile

Getting Started

Let's start with a simple helmfile and gradually improve it to fit your use-case!

Suppose the helmfile.yaml representing the desired state of your helm releases looks like:

repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: prom-norbac-ubuntu
    namespace: prometheus
    chart: prometheus-community/prometheus
    set:
      - name: rbac.create
        value: false

Sync your Kubernetes cluster state to the desired one by running:

helmfile apply

Congratulations! You now have your first Prometheus deployment running inside your cluster.

Iterate on the helmfile.yaml by referencing the documentation.
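A common next iteration is to introduce environments so one helmfile.yaml can drive several clusters; this is a sketch (environment names and file paths are illustrative):

```yaml
environments:
  staging:
    values:
      - env/staging.yaml
  production:
    values:
      - env/production.yaml

releases:
  - name: prom-norbac-ubuntu
    namespace: prometheus
    chart: prometheus-community/prometheus
    values:
      - values/prometheus.yaml.gotmpl   # can template against values from the selected environment
```

Select an environment with `helmfile -e staging apply`.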

Docs

Please read the complete documentation

Contributing

Contributions to make helmfile better are welcome: see the contributing doc

Attribution

We use:

  • semtag for automated semver tagging. We greatly appreciate the author (pnikosis)'s effort in creating it and their kindness in sharing it!

Users

Helmfile has been used by many users in production:

For more users, please see: Users


Comments (14)

  • 1

    `.Release.Labels` are not available within values templates

    Operating system

    macOS Monterey Version 12.5

    Helmfile Version

    0.147.0

    Helm Version

    3.10.1

    Bug description

    Since 0.147.0, it seems that .Release.Labels are no longer available within values templates. The same example works fine with previous releases.

    Example helmfile.yaml

    helmfile.yaml:

    repositories:
      - name: jenkinsci
        url: https://charts.jenkins.io
     
    templates:
      jenkins: &jenkins
        name: jenkins{{`{{.Release.Labels.jenkinsInstance}}`}}
        chart: jenkinsci/jenkins
        version: 4.1.16
        labels:
          group: jenkins
          privateRegistry: 123456789.dkr.ecr.us-west-1.amazonaws.com
        namespace: jenkins{{`{{.Release.Labels.jenkinsInstance}}`}}
        values:
          - jenkins.yaml.gotmpl
    
    releases:
      - labels:
          jenkinsInstance: "1"
          zone: us-west-1b
          branch: master
          jobsTags: ""
        <<: *jenkins
    

    jenkins.yaml.gotmpl:

    agent:
      image: '{{ .Release.Labels.privateRegistry }}/jenkins-agent'
    

    Error message you've seen (if any)

    in ./helmfile.yaml: failed to render values files "jenkins.yaml.gotmpl": failed to render [jenkins.yaml.gotmpl], because of template: stringTemplate:2:21: executing "stringTemplate" at <.Release.Labels.privateRegistry>: map has no entry for key "privateRegistry"

    Steps to reproduce

    helmfile -f ./helmfile.yaml -l name=jenkins1 template

    Working Helmfile Version

    0.146.0

    Relevant discussion

    No response

  • 2

    feat(#507): support assigning `--post-renderer` flags via helmDefaults config and release config when using helm v3

    Signed-off-by: guofutan [email protected]

    This implementation reuses the args from the helmfile args and the helmDefaults config of helmfile.yaml. The PR aims to make as few changes to the helmfile code as possible while preserving forward compatibility of the helmfile.yaml syntax.

  • 3

    feat: don't prepare on list

    This changes the list command so that it doesn't run withPreparedCharts and just lists releases instead.

    See discussion https://github.com/helmfile/helmfile/discussions/344 for reasoning.

    This might break environments which relied on list having to build everything; I can add a backwards-compatible opt-in flag if that's an issue.

    changelog note for this PR

    Add --skip-charts flag to list subcommand, to only list releases without preparing and templating charts for them

  • 4

    Add preapply hook

    As discussed in https://github.com/roboll/helmfile/issues/1940, this PR adds a preapply hook that will run every time the release is applied, even if there are no changes to the release.

  • 5

    function fetchSecretValue fails if it cannot find the secret

    Operating system

    MacOS

    Helmfile Version

    v0.145.3

    Helm Version

    v3.10.0

    Bug description

    Previously I stored secrets in a sops-encrypted file and used the get function to access these secrets. I have quite a lot of environments, and not all secrets exist in every env.

    get function could have default value as described here: https://github.com/helmfile/helmfile/blob/main/docs/writing-helmfile.md https://github.com/roboll/helmfile/pull/1268/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5R535

    so this configuration works perfectly when no auth_token exists in the env, setting auth_token to "" (the empty default value):

    auth_token: "{{ .Values | get "auth_token" "" }}"
    

    but after migration I cannot use fetchSecretValue with the same behaviour

    this code fails if there is no auth_token in Google Secret Manager, or no versions of it:

    auth_token: "{{ .Values | fetchSecretValue "auth_token" }}"
    

    this code fails because the fetchSecretValue function cannot set a default value when it cannot fetch the secret:

    auth_token: "{{ .Values | fetchSecretValue "auth_token" "" }}"
    

    So it would be nice to fix these two issues with fetchSecretValue.

    Example helmfile.yaml

    above

    Error message you've seen (if any)

    err 34: failed processing release xxx: failed to render values files "values.yaml.gotmpl": failed to render [values.yaml.gotmpl], because of template: stringTemplate:1773:16: executing "stringTemplate" at : wrong number of args for fetchSecretValue: want 1 got 2

    err 34: failed processing release xxx: failed to render values files "values.yaml.gotmpl": failed to render [values.yaml.gotmpl], because of template: stringTemplate:114:42: executing "stringTemplate" at <.Values.auth_token>: map has no entry for key "auth_token"

    Steps to reproduce

    above

    Working Helmfile Version

    none

    Relevant discussion

    No response

  • 6

    Selector break repository/chart syntax with OCI

    Operating system

    Ubuntu 20.04

    Helmfile Version

    0.145.3

    Helm Version

    3.9.3

    Bug description

    The sync, diff, and template functions break with OCI charts when using --selector.

    Helm version : 3.8.0 Helmfile version : 0.144.0 Repository : Azure Container Registry

    Here is the templating without the selector:

    
    Logging in to registry
    Login Succeeded
    
    Pulled: xxx.azurecr.io/helm/cfg-shared-configs:1.0.0
    Digest: sha256:f19d9413523a99b0a1a351999e0b1c6ab8f3729623077a8fee65a2b629ddac1b
    
    Exporting xxx.azurecr.io/helm/cfg-shared-configs:1.0.0
    Pulled: xxx.azurecr.io/helm/cfg-shared-configs:1.0.0
    Digest: sha256:f19d9413523a99b0a1a351999e0b1c6ab8f3729623077a8fee65a2b629ddac1b
    
    Templating release=cfg-shared-configs, chart=/tmp/helmfile636816509/cfg-shared-configs/cfg-shared-configs/1.0.0/cfg-shared-configs
    ---
    # Source: cfg-shared-configs/templates/configmaps.yaml
    apiVersion: v1
    kind: ConfigMap
    
    ...
    
    

    Here is the template with --selector (chart=cfg-shared-configs):

    Logging in to registry
    Login Succeeded
    
    Pulling xxx.azurecr.io/helm/cfg-shared-configs:1.0.0
    Exporting xxx.azurecr.io/helm/cfg-shared-configs:1.0.0
    Pulled: xxx.azurecr.io/helm/cfg-shared-configs:1.0.0
    Digest: sha256:f19d9413523a99b0a1a351999e0b1c6ab8f3729623077a8fee65a2b629ddac1b
    
    Exporting npd03cacnacrgen.azurecr.io/helm/cfg-shared-configs:1.0.0
    Pulled: npd03cacnacrgen.azurecr.io/helm/cfg-shared-configs:1.0.0
    Digest: sha256:f19d9413523a99b0a1a351999e0b1c6ab8f3729623077a8fee65a2b629ddac1b
    
    Templating release=cfg-shared-configs, chart=acr/cfg-shared-configs
    in ./helmfile.yaml: command "/usr/local/helm/helm" exited with non-zero status:
    
    PATH:
      /usr/local/helm/helm
    
    ARGS:
      0: helm (4 bytes)
      1: template (8 bytes)
      2: cfg-shared-configs (18 bytes)
      3: acr/cfg-shared-configs (22 bytes)
      4: --version (9 bytes)
      5: 1.0.0 (5 bytes)
      6: --namespace (11 bytes)
      7: perf3 (5 bytes)
      8: --values (8 bytes)
      9: /tmp/helmfile639927867/perf3-cfg-shared-configs-values-b68dc94b7 (64 bytes)
      10: --timeout=30m0s (15 bytes)
    
    ERROR:
      exit status 1
    
    EXIT STATUS
      1
    
    STDERR:
      Error: failed to download "acr/cfg-shared-configs" at version "1.0.0"
    
    COMBINED OUTPUT:
      Error: failed to download "acr/cfg-shared-configs" at version "1.0.0"
    

    Example helmfile.yaml

    Helmfile.yaml:

    repositories:
      - name: acr
        oci: true
        url: {{ requiredEnv "ACR_URL" | quote }}
        username: {{ requiredEnv "ACR_USER" | quote }}
        password: {{ requiredEnv "ACR_PASSWORD" | quote }}
    
    bases:
      # Tools releases template
      - releases/tool-release.gotmpl
    

    tool-release.gotmpl:

    # configmap Release Templating
    
    releases:
    {{- if hasKey .Environment.Values "configmap" }}
    
    # Loop on configmap dictionary to prepare release for each of the configmap-* charts
    {{ range $chart, $params := .Environment.Values.configmap }}
    
      - name: {{ $chart }}
      
        chart: acr/{{ $chart }}
    
        labels:
          chart: {{ $chart }}
    {{- end }}
    {{- end }}
    

    Error message you've seen (if any)

    STDERR: Error: failed to download "acr/cfg-shared-configs" at version "1.0.0"

    COMBINED OUTPUT: Error: failed to download "acr/cfg-shared-configs" at version "1.0.0"

    Steps to reproduce

    Private repository

    Working Helmfile Version

    None

    Relevant discussion

    No response

  • 7

    Implement readDirEntries method

    Implements a feature to read directories from a provided directory path, based on the idea here: https://github.com/helmfile/helmfile/discussions/253

    In the README.md I have also documented the readDir function, which was previously available for use but undocumented.

  • 8

    Refactor 'images' workflow, include Ubuntu image to push

    Until now, the 'images' workflow was separated into two different jobs: one for just building the images, e.g. in pull requests, and the other for building and pushing the images, e.g. after a merge to the 'main' branch, which resulted in code repetition. Also, both jobs used different approaches: one (build) used a 'matrix strategy' based on the file name of the Dockerfile, while the other (build and push) had a separate build and push step for each Dockerfile.

    With this change, both jobs have been unified into a single "build and optionally push" job to remove the repetitions, which now also shares the same approach - a matrix strategy based on the file names of the Dockerfiles.

    The package naming now follows a clear schema based on the file name of the Dockerfile. 'Dockerfile' will result in a 'helmfile' package, 'Dockerfile.ubuntu' will result in a 'helmfile-ubuntu' package and so on. In order to keep the 'helmfile-debian-stable-slim' image package name, the 'Dockerfile.debian' had to be renamed to 'Dockerfile.debian-stable-slim' accordingly.

    Furthermore, the evaluation of the condition whether a push is intended (or not) has been moved directly to the 'push' flag of the 'docker/build-push-action'.

  • 9

    Add the ability to specify a lock file

    I came across this issue in the old repo (https://github.com/roboll/helmfile/issues/779) while trying to achieve exactly the same thing: I would like to be able to apply updates in one environment first and then roll out the same changes to the rest of the environments.

    So, would you accept this kind of change? What else would be good to have in terms of testing / documentation?

  • 10

    helm upgrade flag "--reuse-values" via ARGS has no effect

    Operating system

    Ubuntu 18.04 LTS

    Helmfile Version

    0.145.2

    Helm Version

    v3.9.0

    Bug description

    Passing the helm upgrade flag "--reuse-values" via the "--args" option to "helmfile sync" and "helmfile apply" has no effect.

    Example: This is a simple helm chart containing one config-map with two values "first_name" and "last_name". We want to update these values using helmfile following the steps below:

    The content of "staging-values-01.yaml":

    first_name: "spring"
    last_name: "boot"
    

    The content of "staging-values-02.yaml":

    last_name: "mvc"
    
    1. Run helmfile for the first time:

    helmfile -f helmfile.yaml sync --values staging-values-01.yaml

    As expected, "first-name" will be "spring" and "last_name" will be "boot".

    2. Re-run helmfile with different values:

    helmfile -f helmfile.yaml sync --values staging-values-02.yaml --args "--reuse-values"

    Let's say we want to update only the "last_name", so we omitted "first_name" from the values file, expecting that it will remain the same.

    What happens after running the command is that the "last_name" is updated, but the "first_name" which is removed in the "staging-values-02.yaml" has been restored to the default value of the chart instead of remaining at the previous value.

    So instead of having "first_name: spring" and "last_name: mvc", we got "first_name: lorem" and "last_name: mvc".

    We observed the same behaviour with "helmfile sync" and "helmfile update".

    Otherwise, in case this is expected, how can we achieve the desired result of updating only a subset of values (the behavior of "--reuse-values") using helmfile args/options?

    Example helmfile.yaml

    releases:
      - name: simple-chart
        namespace: staging
        chart: ./simple-chart

    Error message you've seen (if any)

    There is no error message but the result of the execution is not as expected.

    Steps to reproduce

    https://github.com/Hamdiovish/helmfile-args-report

    Relevant discussion

    No response

  • 11

    feat: show live output from the Helm binary

    This PR addresses what has been discussed here: https://github.com/helmfile/helmfile/discussions/283

    To enable workflows where it is useful to see the stdout/stderr of the Helm binary live in the Helmfile output, the --enable-live-output global flag can now be provided to switch away from Helmfile's default behaviour, which is to show the output from the Helm binary only after the command has completed.

  • 12

    Drop Helm v2 support

    Discussed in https://github.com/helmfile/helmfile/discussions/390

    Originally posted by yxxhero, September 26, 2022: drop helm v2 support in helmfile v1.0

  • 13

    Helmfile/helm misses crds from patched resources when using transformers in a release

    Operating system

    macOS Ventura 13.0.1

    Helmfile Version

    helmfile version v0.144.0

    Helm Version

    version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}

    Bug description

    Helmfile/helm misses CRDs from manifests when using any of the kustomize transformers in a release. Example command:

    helmfile --debug --file=helmfile.yml --environment=local-environment diff --validate
    

    If the transformers list is empty, the CRDs are created normally. However, kustomize works as expected when I run the build command manually:

    kustomize build /var/folders/__/.../cert-manager/cert-manager
    

    I am ready to provide additional information as needed

    Example helmfile.yaml

    ---
    filepath:   helmfile.yaml
    helmBinary: helm
    
    environments:
      local-environment:
    
    helmDefaults:
      atomic: true
      wait: true
      waitForJobs: true
      cleanupOnFail: true
      createNamespace: true
    
    repositories:
    - name: jetstack
      url: https://charts.jetstack.io
    
    releases:
    - name: cert-manager
      chart: jetstack/cert-manager
      installed: true
      namespace: cert-manager
      version: 1.9.1
      values:
      - installCRDs: true
    
      transformers:
      - annotations:
          werf.io/show-service-messages: true
        apiVersion: builtin
        kind: AnnotationsTransformer
        metadata:
          name: notImportantHere
        fieldSpecs:
        - create: true
          path: metadata/annotations
    
    

    Error message you've seen (if any)

    I noticed that in the first template command there is an --include-crds option, and the CRDs become available with it.

    running helm fetch jetstack/cert-manager --untar -d /var/folders/__/.../cert-manager-9176/cert-manager --version 1.9.1
    ...
    running helm template --debug=false --output-dir=/var/folders/__/.../cert-manager-9176/cert-manager/cert-manager/helmx.1.rendered --include-crds --validate cert-manager /var/folders/__/.../cert-manager-9176/cert-manager/cert-manager -f /.../values.yaml -f /var/folders/__/... --namespace cert-manager-9176
    ...
    generated and using kustomization.yaml:
    kind: ""
    apiversion: ""
    resources:
    - templates/webhook-config.yaml
    - templates/crds.yaml
    - templates/cainjector-rbac.yaml
    - templates/service.yaml
    - templates/startupapicheck-job.yaml
    - templates/cainjector-serviceaccount.yaml
    - templates/webhook-serviceaccount.yaml
    - templates/cainjector-deployment.yaml
    - templates/startupapicheck-rbac.yaml
    - templates/webhook-service.yaml
    - templates/webhook-deployment.yaml
    - templates/startupapicheck-serviceaccount.yaml
    - templates/rbac.yaml
    - templates/webhook-rbac.yaml
    - templates/webhook-mutating-webhook.yaml
    - templates/webhook-validating-webhook.yaml
    - templates/serviceaccount.yaml
    - templates/deployment.yaml
    transformers:
    - transformers/transformer.0.yaml
    ...
    Detected 43 resources and 6 CRDs
    ...
    1 release(s) found in issue.yml
    

    But there is no --include-crds flag in the second template command. When transformers are empty, this flag is also not set, but the CRDs are fine:

    exec: helm diff upgrade --reset-values --allow-unreleased cert-manager /var/folders/__/.../cert-manager-9176/cert-manager/cert-manager --version 1.9.1 --namespace cert-manager-9176 --values /var/folders/__/.../cert-manager-9176-cert-manager-values-55dfd7ddf4 --debug
    helm:OWYxY> Executing helm version
    helm:OWYxY> Executing helm get manifest cert-manager --namespace cert-manager-9176
    helm:OWYxY> Executing helm template cert-manager /var/folders/__/.../cert-manager-9176/cert-manager/cert-manager --version 1.9.1 --namespace cert-manager-9176 --values /var/folders/__/.../cert-manager-9176-cert-manager-values-55dfd7ddf4 --validate --is-upgrade
    helm:OWYxY> Executing helm get hooks cert-manager --namespace cert-manager-9176
    

    Steps to reproduce

    Working Helmfile Version

    Relevant discussion

    No response

  • 14

    helmDefaults do not seem to override helmDefaults of sub-helmfile

    Operating system

    alpine 3.16.2

    Helmfile Version

    v0.148.1

    Helm Version

    v3.10.2

    Bug description

    I have a parent helmfile with a timeout of 360s and a child helmfile with a 60s timeout.

    In the end, helmfile uses a timeout of 60s instead of 360s on helmfile sync. Is this expected? Is there a way to override the timeout of a sub-helmfile?

    Example helmfile.yaml

    parent helmfile:

    helmDefaults:
      wait: true
      timeout: 360
    
    helmfiles:
      - path: sms-web/helmfile.yaml
    

    sub-helmfile:

    helmDefaults:
      wait: true
      timeout: 60
    
    releases:
      - name: sms-web
        chart: chartmuseum/sms_web
    

    Error message you've seen (if any)

    PATH:
      /usr/local/bin/helm
    ARGS:
    ...
      9: --timeout (9 bytes)
      10: 60s (3 bytes)
    ...
    

    Steps to reproduce

    Sorry, may I provide it a bit later?

    Working Helmfile Version

    haven't yet tried other versions

    Relevant discussion

    No response