{ "version": "https://jsonfeed.org/version/1", "title": "Analogous Dev Blog", "home_page_url": "https://www.analogous.dev", "description": "The analogous developer. Technology topics for newcomers and experts alike.", "icon": "analogous-dev.png", "items": [ { "id": "using-the-kubernetes-python-client-with-aws", "content_html": "\nAs someone who normally just uses `kubectl` and `helm` to talk to my Kubernetes\nclusters, the idea of scripting modifications to my Kubernetes cluster was\nexciting!! I cracked open the\n[`kubernetes-python`](https://github.com/kubernetes-client/python) client and\nstarted playing.\n\n## TL;DR;\n\nWe use the `boto3`, `eks-token`, and `kubernetes` python packages to\ntalk to an EKS cluster without depending on `kubeconfig`.\n\n## Why\n\nFor those of us interactively building and maintaining kubernetes resources, \n`helm` or `kubectl` become our bread and butter. They provide very nice CLI\ninterfaces and have all the bells and whistles one could ask for!\n\nMoreover, when it comes to AWS (Amazon Web Services) and their EKS (Elastic\nKubernetes Service) clusters, they handle authentication smoothly and easily via\na handy CLI command and the `kubeconfig` file (usually stored at `~/.kube/config`:\n\n```\naws eks update-kubeconfig --alias mycluster\n```\n\n(Tip: Don't forget the `--alias` flag!)\n\nMy assume-role settings and environment variables integrate nicely from my shell: everything is dandy!\n\nHowever, in this case, I want to build an _app_ / software program that _itself_\naccesses, modifies, and maintains resources on my Kubernetes cluster. Of course,\none can script a solution around these CLIs, but that is not very portable (requires\nthe CLIs installed) and is prone to failure because `kubeconfig` is user-defined, \nuser-maintained, and hinges entirely on `contexts` which are arbitrary text strings.\n\nSo I headed for the `kubernetes-python` API client which simplifies making API requests\nagainst a Kubernetes cluster. With any luck, I will finish the day with a Python program\nthat creates my Kubernetes resources.\n\n**NOTE**: Before we get started, it is worth noting that I am presuming you have\na way to authenticate to AWS. In my case, I have already provided environment\nvariables, an instance profile, etc. that gives this program access to the AWS\nAPI as the necessary IAM role.\n\n## The Story is Auth\n\nAs with many of my stories, this story will become one largely consisting of\nauthentication. Unraveling the \"magic\" of the `kubeconfig` file and AWS's IAM\nauthentication was not an easy task - as I soon learned, the `kubernetes-python`\nAPI client _also_ depends heavily on `kubeconfig` by default.\n\nMost of [the\nexamples](https://github.com/kubernetes-client/python/tree/master/examples) look\nsomething like this, and if you want to use `kubeconfig`, this works very well.\n\n```python\nfrom kubernetes import config\n\nconfig.load_kube_config()\ncore_v1 = core_v1_api.CoreV1Api()\n```\n\n(Although there is also a cool example of [in-cluster config](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py))\n\nHowever, for portability, I want to bypass `kubeconfig` but will be running outside of a cluster. 
So let's see how the underlying configuration class works.\n\n```python\nconfig.kube_config.Configuration\n```\n\nAfter a bit of flailing, fighting, and digging, I learned that you can\ninitialize an API connection with the API endpoint and a bearer token.\nBasically, you initialize a configuration object directly and then use that to\ninitialize an API client.\n\n\n```python\nimport kubernetes\n\n\ndef k8s_api_client(endpoint: str, token: str, cafile: str) -> kubernetes.client.CoreV1Api:\n    kconfig = kubernetes.config.kube_config.Configuration(\n        host=endpoint,\n        api_key={'authorization': 'Bearer ' + token}\n    )\n    kconfig.ssl_ca_cert = cafile\n    kclient = kubernetes.client.ApiClient(configuration=kconfig)\n    return kubernetes.client.CoreV1Api(api_client=kclient)\n```\n\n\nThere are other useful options in `kconfig`, like `kconfig.proxy` and\n`kconfig.verify_ssl`. Have a look at the class for more details!\n\n### Authenticate to AWS / EKS\n\nNow we need to figure out how to get a bearer token to talk to EKS. I cannot use\nthe magic in `kubeconfig`, which first led me to the [AWS CLI's `aws eks get-token`\ncommand](https://docs.aws.amazon.com/cli/latest/reference/eks/get-token.html). Instead, I\nopted for a pure Python solution in the\n[`eks-token`](https://pypi.org/project/eks-token/) package. You can evaluate the\nsource for yourself [here](https://github.com/peak-ai/eks-token) and [more\nspecifically,\nhere](https://github.com/peak-ai/eks-token/blob/master/eks_token/logics.py).\n\n\nWith this module, we can acquire an EKS token easily:\n\n\n```python\nimport eks_token\n\ncluster_name = 'my-eks-cluster'\nmy_token = eks_token.get_token(cluster_name)\n```\n\n\n**NOTE:** It is worth checking whether the `boto3` or `aws` libraries provide\naccess to this functionality directly from Python as things improve!\n\n### Now TLS\n\nUnfortunately, the `kubernetes-python` client does not allow for inlining\nthe TLS CA certificate. As a result, we have to write it to a temp file\n(which, by inspection, is exactly what the `kubeconfig` approach is doing).\n\n\n```python\nimport base64\nimport tempfile\n\nimport boto3\n\n\ndef _write_cafile(data: str) -> tempfile.NamedTemporaryFile:\n    # delete=False protects the file from automatic deletion\n    cafile = tempfile.NamedTemporaryFile(delete=False)\n    # the CA data returned by the EKS API is base64-encoded\n    cafile.write(base64.b64decode(data))\n    cafile.flush()\n    return cafile\n\n\nbclient = boto3.client('eks')\ncluster_data = bclient.describe_cluster(name=cluster_name)['cluster']\nmy_cafile = _write_cafile(cluster_data['certificateAuthority']['data'])\n```\n\n### Put it all together\n\nArmed with a bearer token and transport layer security (TLS), we now have the\ntools we need to succeed!!\n\n\n```python\napi_client = k8s_api_client(\n    endpoint=cluster_data['endpoint'],\n    token=my_token['status']['token'],\n    cafile=my_cafile.name\n)\n\napi_client.list_namespace()\n```\n\n## Go wild\n\nThe first time `list_namespace()` returned data, I was ecstatic.\nAuthentication successful! Now the world is your oyster.\n
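\nAs a quick sanity check, the response is a typed object rather than raw JSON; for example, you can iterate over the items and print each namespace name:\n\n```python\nfor ns in api_client.list_namespace().items:\n    print(ns.metadata.name)\n```\n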
\nAll that's left is perusing the [API client\ndocs](https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md)\nand refreshing AWS creds every so often!\n\nFor example, creating a ConfigMap:\n\n\n```python\nmy_configmap = kubernetes.client.V1ConfigMap(\n    api_version='v1',\n    metadata={'name': 'my-configmap'},\n    kind='ConfigMap',\n    data={'my_file.txt': 'mycontent'}\n)\n\napi_client.create_namespaced_config_map(namespace='default', body=my_configmap)\n\n# NOTE: to update a ConfigMap, you need to\n# use api_client.replace_namespaced_config_map\n#\n# If it already exists, create will give a 409 Conflict\n```\n\nHave fun!!\n\n[The source for this article is available here (along with a few extra\nexamples)](https://github.com/colearendt/example-python-kubernetes)\n\n## Outtakes\n\nYes, there were some outtakes:\n\n- Standing up [`mitmproxy`](https://mitmproxy.org/) in a docker container and\n setting `kconfig.proxy='http://localhost:8080'` and `kconfig.verify_ssl=False`\n before initializing `api_client` so I could see the requests being made to the\n Kubernetes cluster and verify whether authentication was being sent\n- Flailing on temp files (today, I learned that Python disposes of temp files\n rather quickly by default)\n- Trolling the internet and finding an unfortunate absence of docs on this\n topic. \"Is this obvious to everyone else? Am I doing something wrong?\" Classic\n questions of a lonely explorer.\n- Accidentally installing the `aws` package into my `pyenv` and breaking my\n terminal's ability to assume roles when in certain directories.\n\nProgramming can be hard! It is so nice to know we are not alone!\n", "url": "/blog/using-the-kubernetes-python-client-with-aws", "title": "Using the Kubernetes Python Client with AWS", "summary": "\nAs someone who normally just uses `kubectl` and `helm` to talk to my Kubernetes\nclusters, the idea of scripting modifications to my Kubernetes cluster was\nexciting!! I cracked open the\n`kubernetes-python` client and\nstarted playing. TL;DR; We use the `boto3`, `eks-token`, and `kubernetes` Python packages to\ntalk to an EKS ...", "date_modified": "2021-01-27T00:00:00.000Z", "author": {}, "tags": [ "DevOps", "Python", "Programming", "HowTo", "AWS", "Kubernetes" ] }, { "id": "sssd-without-tls", "content_html": "\n[`sssd`](https://sssd.io/) has established itself as the most common way to provision system\naccounts via LDAP or Active Directory on Linux servers across all Linux\ndistributions. However, working with it can be tricky!\n\n## TL;DR;\n\nWe show an example of using `sssd` to contact an LDAP server that is\nlistening on port 389 (in plaintext / no TLS). This is _NOT_ a good\nidea in any production environment. However, it can be important\nand helpful in playgrounds, learning environments, or other experiments. The magic\nconfiguration is `ldap_auth_disable_tls_never_use_in_production = true`.\n\n## Why\n\nIt is quite straightforward to stand up an LDAP server listening in plaintext. My\nfavorite mechanism is the [`openldap`\ncontainer](https://github.com/osixia/docker-openldap),\n[although there are other options](https://github.com/nitnelave/lldap).\n\n```bash\ndocker run -it --rm -p 389:389 osixia/openldap:latest\n```\n\nHowever, if you have a toy Linux container running `sssd`, connecting to this server is unfortunately not an obvious option! Why, you ask?\nThis is all just a dev playground!? Right. Well, the `sssd` maintainers want to be very careful about not creating\nsecurity vulnerabilities or letting their users get hacked. This means you have to work hard to open yourself up to this\ntype of vulnerability, even in your playground.\n
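\nBefore working around that, it is worth confirming the toy server really is answering in plaintext. Assuming `ldap-utils` is installed on your host, an anonymous query against the root DSE should succeed with no TLS negotiation at all:\n\n```bash\n# -x uses simple (plaintext) auth; no -ZZ means no StartTLS is attempted\nldapsearch -H ldap://localhost:389 -x -s base -b '' namingContexts\n```\n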
\nSpecifically, we will use the `ldap_auth_disable_tls_never_use_in_production` setting.\n\n> NOTE: Do not use this setting in any \"real\" environment with \"real\" users, passwords, sensitive data, etc.\n\n## Give it a Shot\n\n### Create Users\n\nFirst, we need to populate our LDAP server with users. It is easiest if we define\nthe users in a file first. For a more advanced LDIF file, check\nout [the repository associated with this post](https://github.com/colearendt/container-playground):\n\n_users.ldif_\n```ldif\nversion: 1\n\n## Entry 1: dc=angl,dc=dev\n#dn: dc=angl,dc=dev\n#dc: angl\n#o: Angl Dev\n#objectclass: top\n#objectclass: dcObject\n#objectclass: organization\n#\n## Entry 2: cn=admin,dc=angl,dc=dev\n#dn: cn=admin,dc=angl,dc=dev\n#cn: admin\n#description: LDAP administrator\n#objectclass: simpleSecurityObject\n#objectclass: organizationalRole\n#userpassword: {SSHA}+FquX8RcwTtBPo7mu2pgSvjaQYX9HpCL\n#\n#\n# Entry 3: cn=engineering_group,dc=angl,dc=dev\ndn: cn=engineering_group,dc=angl,dc=dev\ncn: engineering_group\ngidnumber: 500\nmemberuid: joe\nmemberuid: julie\nobjectclass: posixGroup\nobjectclass: top\n\n# Entry 4: dc=engineering,dc=angl,dc=dev\ndn: dc=engineering,dc=angl,dc=dev\ndc: engineering\ndescription: The Engineering Department\no: Engineering\nobjectclass: dcObject\nobjectclass: organization\nobjectclass: top\n\n\n# Entry 5: cn=joe,dc=engineering,dc=angl,dc=dev\ndn: cn=joe,dc=engineering,dc=angl,dc=dev\ncn: joe\ngidnumber: 500\ngivenname: Joe\nhomedirectory: /home/joe\nloginshell: /bin/sh\nmail: joe@angl.dev\nobjectclass: inetOrgPerson\nobjectclass: posixAccount\nobjectclass: top\nsn: Golly\nuid: joe\nuidnumber: 1000\nuserpassword: {MD5}j/MkifkvM0FmlL6P3C1MIg==\n\n# Entry 6: cn=julie,dc=engineering,dc=angl,dc=dev\ndn: cn=julie,dc=engineering,dc=angl,dc=dev\ncn: julie\ngidnumber: 500\ngivenname: Julie\nhomedirectory: /home/julie\nloginshell: /bin/sh\nmail: julie@angl.dev\nobjectclass: inetOrgPerson\nobjectclass: posixAccount\nobjectclass: top\nsn: Jolly\nuid: julie\nuidnumber: 1001\nuserpassword: {MD5}FvEvXoN54ivpleUF6/wbhA==\n```\n\nYou will notice that the first two entries are commented out. They are included to represent a _complete_ LDIF file.\nHowever, the `osixia/docker-openldap` container helps us by provisioning these automatically.\n\nFurther, you will notice that passwords are included. This makes things easier for our playground, but is _definitely_ a\nbad idea in real life / production applications.\n\n### Create LDAP Server\n\nNow let's create the server itself!\n\n```bash\ndocker network create playground-network\ndocker run \\\n -d --name openldap --rm \\\n -p 389:389 \\\n --network playground-network \\\n -v $(pwd)/users.ldif:/container/service/slapd/assets/config/bootstrap/ldif/50-bootstrap.ldif \\\n -e LDAP_TLS=false \\\n -e LDAP_DOMAIN=\"angl.dev\" \\\n -e LDAP_ADMIN_PASSWORD=\"admin\" \\\n osixia/openldap:1.5.0 \\\n --copy-service --loglevel debug\n```\n\nAnd check that it is working:\n\n```bash\ndocker exec -it openldap ldapsearch -x -D cn=admin,dc=angl,dc=dev -b dc=angl,dc=dev -w admin cn\ndocker exec -it openldap ldapsearch -x -D cn=admin,dc=angl,dc=dev -b dc=angl,dc=dev -w admin cn=julie \\*\n```\n\nIf you look carefully, you will notice that:\n\n1. We created a persistent network for our containers to share\n2. We provisioned users from our `ldif` file\n3. We disabled TLS on the service\n4. We bumped up the logging verbosity for debugging purposes\n\nThese are all useful tidbits to dig into if you are not familiar!\n
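\nOne more check pays off later: `sssd` will bind as these users during authentication, so it is worth proving that the password hashes in the LDIF actually work. A sketch using `ldapwhoami` (assuming the image ships it alongside `ldapsearch`; substitute whatever plaintext passwords you hashed into _users.ldif_ for `-w`):\n\n```bash\n# a successful simple bind prints the DN we bound as\ndocker exec -it openldap ldapwhoami -x \\\n    -D cn=joe,dc=engineering,dc=angl,dc=dev \\\n    -w <joe-password>\n```\n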
\n### Configure sssd Server\n\nIt is possible to run `sssd` in a fairly vanilla `ubuntu:jammy` container.\n\n```bash\ndocker run -it --name sssd --rm --network playground-network ubuntu:jammy bash\n\napt update && apt install -y sssd ldap-utils vim\n```\n\nThen you need to create your `sssd.conf` file. Notice the\n`ldap_auth_disable_tls_never_use_in_production = true` option: this is the magic that makes things work for us!\n\n```bash\ncat << EOF > /etc/sssd/sssd.conf\n[sssd]\nconfig_file_version = 2\nservices = nss, pam\ndomains = LDAP\n\n[nss]\nfilter_users = root,named,avahi,haldaemon,dbus,radiusd,news,nscd\nfilter_groups =\n\n[pam]\n\n[domain/LDAP]\nid_provider = ldap\nauth_provider = ldap\nchpass_provider = ldap\nsudo_provider = ldap\nenumerate = true\n# ignore_group_members = true\ncache_credentials = false\nldap_schema = rfc2307\nldap_uri = ldap://openldap:389\nldap_search_base = dc=angl,dc=dev\nldap_user_search_base = dc=angl,dc=dev\nldap_user_object_class = posixAccount\nldap_user_name = uid\n\nldap_group_search_base = dc=angl,dc=dev\nldap_group_object_class = posixGroup\nldap_group_name = cn\nldap_id_use_start_tls = false\nldap_tls_reqcert = never\nldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt\nldap_default_bind_dn = cn=admin,dc=angl,dc=dev\nldap_default_authtok = admin\naccess_provider = ldap\nldap_access_filter = (objectClass=posixAccount)\nmin_id = 1\nmax_id = 0\nldap_user_uuid = entryUUID\nldap_user_shell = loginShell\nldap_user_home_directory = homeDirectory\nldap_user_uid_number = uidNumber\nldap_user_gid_number = gidNumber\nldap_group_gid_number = gidNumber\nldap_group_uuid = entryUUID\nldap_group_member = memberUid\nldap_auth_disable_tls_never_use_in_production = true\nuse_fully_qualified_names = false\nldap_access_order = filter\ndebug_level = 6\nEOF\nchmod 600 /etc/sssd/sssd.conf\n```\n\nNow let's start the `sssd` service:\n\n```bash\nsssd -i\n# you should see some log messages that suggest things are happening!\n```\n\n### Be sure it works!\n\nNow let's make sure this works by starting another shell in our `jammy` container.\n\n```bash\ndocker exec -it sssd bash\n\nid joe\n# uid=1000(joe) gid=500(engineering_group) groups=500(engineering_group)\nid julie\n# uid=1001(julie) gid=500(engineering_group) groups=500(engineering_group)\n```\n\n## Using `docker-compose`\n\nFor playground environments like this, `docker-compose` makes this setup much easier to architect and reuse. You can\nuse [my example compose setup](https://github.com/colearendt/container-playground) if you prefer.\n\n```bash\ncd compose/\ndocker network create playground-network\nNETWORK=playground-network docker-compose -f ldap.yml -f sssd.yml -f network.yml up -d\ndocker exec -it compose_sssd_1 bash\n\nsssd -i >/tmp/sssd.log 2>&1 &\nid joe\n```\n
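\nBefore wrapping up, note that `id` only exercises identity lookups through `nss`. The `ldap_auth_disable_tls_never_use_in_production` option earns its keep during _authentication_, when `sssd` performs a plaintext password bind. Here is a sketch of testing that path from a shell inside the `sssd` container; it assumes you know the plaintext passwords behind the hashes in _users.ldif_, and that the `sssd` package registered itself with PAM during installation (on Ubuntu it normally does):\n\n```bash\n# su asks PAM (and therefore sssd) to authenticate joe,\n# which triggers a plaintext LDAP bind against openldap\nsu - joe\n# enter joe's password at the prompt; su may warn that /home/joe\n# does not exist unless something like pam_mkhomedir is configured\nwhoami\n# joe\n```\n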
\n## Review\n\nWell done! You have successfully started your own `sssd` container. Although this is very much a toy, it is a\ngreat \"jumping off point\" to learn and understand how `sssd` works in more detail!\n\nAny time you need a toy LDAP server for `sssd`, just remember: `ldap_auth_disable_tls_never_use_in_production = true`.\n", "url": "/blog/sssd-without-tls", "title": "Using sssd in a Playground Without TLS", "summary": "\n`sssd` has established itself as the most common way to provision system\naccounts via LDAP or Active Directory on Linux servers across all Linux\ndistributions. However, working with it can be tricky! TL;DR; We show an example of using `sssd` to contact an LDAP server that is\nlistening ...", "date_modified": "2022-10-01T00:00:00.000Z", "author": {}, "tags": [ "Docker", "Container", "LDAP", "sssd", "DevOps", "SysAdmin", "HowTo", "openldap" ] }, { "id": "helm-cheatsheet", "content_html": "\nBelow is an introduction to Helm! If you want to [skip to the\ncheatsheet](#cheat-sheet), you can [download it\nhere](https://www.analogous.dev/download/cheatsheet/helm.pdf).\n\n## What is Helm?\n\nAccording to [its own docs](https://helm.sh/docs/), Helm is \"the\" package\nmanager for Kubernetes. What does this mean?\n\nIt's a way of keeping track of all your Kubernetes stuff!\n\nHelm, as I describe it, is a mechanism for packaging and parameterizing standard\nKubernetes YAML files. It uses [Go\nTemplating](https://blog.gopheracademy.com/advent-2017/using-go-templates/) for\nmost of this mechanism, and adds a layer of version / metadata tracking as\nwell. All of this is packaged up into tarballs used by a client-side-only (as of\n`helm` v3) CLI.\n\nSo basically: Helm = YAML + Go Templating + Versioning + Tarballs.\n\n## Why use it?\n\nThere are lots of alternatives out there, and many purported \"Helm replacements,\"\nbut Helm has yet to give up its throne, and I have not found anything better\nfor my own use cases... yet. So what are Helm's strengths?\n\nI will do my best not to wax poetic. I am biased and a big fan of Helm. As a layer of\nabstraction between an application and Kubernetes, I think it is a fantastic asset.\n\nIn particular, I think this is because:\n\n- No runtime dependency\n- Client-side only utility\n- Data stored server side for collaboration\n- Output represents native Kubernetes objects (i.e. interoperable with other tools)\n- `helm template` gives rapid feedback on iterating and testing\n- Plain text file output / diffs are very easy to parse (see the sketch after these lists)\n\nAs a system administrator, it is nice because it offers:\n\n- Version pinning for reproducibility\n- Everything is open source tarballs, so dependencies are easy to track and introspect\n- Application vendors will ideally maintain their own chart and good NEWS files\n
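\nTo make those last points concrete, here is a sketch of the workflow that `helm template` and plain-text output enable (the release, chart, and values file names are placeholders): render a release twice and diff the results, entirely client-side, before anything touches the cluster.\n\n```bash\n# render the same chart with old and new values\nhelm template my-release chartrepo/chartname -f values-old.yaml > old.yaml\nhelm template my-release chartrepo/chartname -f values-new.yaml > new.yaml\n\n# a plain-text diff shows exactly which Kubernetes objects would change\ndiff -u old.yaml new.yaml\n```\n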
\n## When to use it?\n\nSo that's _what_ it is, and _why_ it is desirable. But _when_ is it useful?\n\nI find that helm particularly shines in a handful of situations:\n\n- Managing an array of applications deployed on Kubernetes\n- Packaging your own application for use by customers\n- Encoding complex knowledge about \"how to run an application\" (to an extent;\n beyond that, you get to\n [operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/))\n- Setting up easy \"roll-back\" policies for applications that support the\n behavior\n\nOccasionally a wrapper like [ArgoCD](https://argoproj.github.io/cd/), [Flux](https://fluxcd.io/),\n[helmfile](https://github.com/helmfile/helmfile), or [pulumi](https://www.pulumi.com/docs/get-started/kubernetes/)\nwill be useful to manage your helm deployments too, so\nthat you don't have to keep track of a bunch of CLI commands.\n\n## When not to use it?\n\nHelm can definitely be overkill in some \"hello world\" or very simple deployment\nsituations. Unfortunately, it also **does not have a great answer for\n[CRDs](https://helm.sh/docs/topics/charts/#custom-resource-definitions-crds)\nyet**. Moreover, it is **only useful for Kubernetes**, so if you are unfamiliar\nwith Kubernetes, it will have limited utility for you.\n\nThe other case where it may not be useful is in some **internal applications**.\nMaintaining a helm chart for an application can end up being a sizable amount\nof work, and charts do not allow arbitrary inputs, so if you miss some key (e.g.\n\"imagePullSecrets\") you can end up spending a lot of time key-chasing across\nyour charts. I have heard of folks using [Kustomize](https://kustomize.io/) in\nsuch a situation, although another option is to use a meta chart (one chart for\nmany apps) or a Functions-as-a-Service (FaaS) framework like\n[Serverless](https://www.serverless.com/),\n[OpenFaas](https://www.openfaas.com/), [Knative](https://knative.dev/docs/),\netc.\n\nAlso, helm charts do have a **complexity ceiling**. Go Templating provides\nlots of flexibility, but being DRY is hard, and there are many parts of the\nprocess that are not optimal from a software development point of view. As\ncharts become more complex, an\n[operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)\nbecomes increasingly beneficial as a mechanism to provide better software\nsemantics to the application management process. However, the learning curve\nfor operators can also be a bit steep.\n\nFinally, helm charts unfortunately **do not have hard-and-fast standards about how values are used** across the\necosystem. As a result, you will often encounter wild variations in chart quality, value naming, and value behavior.\n\n## Hello World\n\nLet's get started on a hello world example! First, you need to [install\nkubectl](https://kubernetes.io/docs/tasks/tools/#kubectl), [install\nhelm](https://helm.sh/docs/intro/install/), and have a Kubernetes cluster\navailable. Once those things are taken care of, a hello world example of a\nhelm deployment is pretty straightforward!\n
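\nIf you want to confirm those prerequisites are wired up before starting, a quick sanity check:\n\n```bash\nkubectl version --client\nhelm version\n\n# should list at least one node in the Ready state\nkubectl get nodes\n```\n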
\nFor this example, we will use my [generic\nchart](https://github.com/colearendt/helm/tree/main/charts/generic), useful for\ndeploying simple services with standard configuration or helm needs.\n\nWe are also going to use [this hello-world container](https://hub.docker.com/r/paulbouwer/hello-kubernetes).\n\nFirst, add the repository that houses our example chart:\n```\nhelm repo add colearendt https://colearendt.github.io/helm/\n```\n\nYou can look at the values available for the chart:\n```\nhelm show values colearendt/generic\n\n# I like to pipe it to a pager for search and such\nhelm show values colearendt/generic | less\n```\n\nThen create a YAML file called _my-values.yaml_ to hold values:\n\n_my-values.yaml_\n```\nimage:\n  repository: paulbouwer/hello-kubernetes\n  tag: \"1.10\"\npod:\n  port: 8080\n```\n\nThen template the output:\n```\nhelm template hello-world colearendt/generic -f my-values.yaml\n```\n\nAnd install it into the Kubernetes cluster!\n\n```\nhelm upgrade --install hello-world colearendt/generic -f my-values.yaml\n```\n\nThen you should be able to see the app deployed:\n```\nhelm list\nkubectl get pods\n```\n\nAnd forward a local port so you can view the service in your web browser at http://localhost:8080:\n```\nkubectl port-forward svc/hello-world-generic 8080:80\n```\n\n### Clean Up\n\nIf you want to clean up after yourself:\n\n```bash\n# delete the helm release\nhelm delete hello-world\n\n# delete the repository reference\nhelm repo remove colearendt\n```\n\nUnfortunately, I have not taken much time to dive into troubleshooting here! If you are hitting issues,\nplease [shoot me an email](mailto:info@analogous.dev) - I would love to have feedback on what to improve! Maybe\nsomeday I will take the time to set up comments 😅\n\n## Best Practices\n\nSo now you have a \"Hello World\" deployment under your belt. However, it also helps to keep in mind some best practices\nas you keep improving. Below is a handful of helm chart conventions that may be unfamiliar if you are new to the\ncommunity:\n\n- Make sure to pin helm chart versions with the `--version` flag\n- Maintain a `NEWS.md` file (or read the `NEWS.md` file) to keep track of\n changes between versions\n- Keep an eye out for \"upgrading directions\" in the `README.md` or elsewhere\n- Use `helm show values` to see the default values (and the comments\n associated with them). Ideally these are presented or discussed in a `README` as well.\n- Avoid [`sub-charts`](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/) if you can. It is tempting as a\n DRY software principle, but turns out to be a pretty advanced topic with lots of tricky edge cases. In particular,\n namespaces can be painful.\n\n## Cheat Sheet\n\nI took the time to arrange a \"cheat sheet\" of my favorite helm commands and the\ncontexts in which they are useful. It was inspired by [RStudio's array of excellent\ncheat sheets for the R community](https://www.rstudio.com/resources/cheatsheets/).\n\nA hit-list of some of the most useful commands:\n\n- `helm show values chartrepo/chartname`\n- `helm template releasename chartrepo/chartname`\n- `helm upgrade --install releasename chartrepo/chartname`\n- `helm repo add chartrepo https://repourl`\n- `helm repo list`\n- `helm search repo`\n- `helm status releasename`\n- `helm list`\n\nAnd the cheat-sheet itself can be downloaded [here](https://www.analogous.dev/download/cheatsheet/helm.pdf).\n", "url": "/blog/helm-cheatsheet" } ] }