{ "version": "https://jsonfeed.org/version/1", "title": "Analogous Dev Blog", "home_page_url": "https://www.analogous.dev", "description": "The analogous developer. Technology topics for newcomers and experts alike.", "icon": "analogous-dev.png", "items": [ { "id": "analogies-help-us-learn", "content_html": "\nI love analogies. Perhaps that is easy to guess based on the name of the\nwebsite. Why? Well, I'll tell you.\n\n### What is an analogy?\n\nPerhaps you recall something like this from primary or secondary school:\n\n```\negg:chicken::acorn:?\n```\n\nIf that use of analogies brings back painful memories (like the SAT), then rest\nassured, that is not how I like to think of analogies. The definition of an\nanalogy is:\n\n> **analogy:** _noun_. a comparison between two things, typically for the\n> purpose of explanation or clarification\n\nNote that some definitions explicitly note that analogies are a comparison of\ntwo \"unalike\" things, thereby highlighting their similarities in a certain sense.\n\n### Why analogies are great\n\nLearning is defined as \"the acquisition of knowledge or skills through\nexperience, study, or by being taught.\" We have all learned _many_ things\nin our lives, typically in the areas where we have spent the most\ntime or gained the most hands-on experience.\n\nHowever, learning is sometimes hard to \"transfer\" into new areas. _Related_\nareas come easily (e.g. good soccer players tend to be good at futsal, because\nthe two are very closely related), but _unrelated_ areas, not so much. Being good\nat basketball does not necessarily transfer well to rocket science, for instance.\n\nEnter analogies! 
Analogies help us take experience or learning from one _unrelated_\narea to another, and thereby allow us to jump from basketball to rocket science!\nWe cannot expect to become experts purely by way of analogies, but they are a great\nway to soften learning curves, simplify topics, and transfer our experience!\n\nPlus, they're fun!\n\n### An example\n\nI love math and technology, and am a bit of a nerd. Many of the people I care\nabout are either not nerds or are not nerds about math and technology. As a\nresult, I use analogies to help explain my world to them without requiring them\nto become math/programming nerds.\n\nFor instance, I might explain that \"Calculus is like Driving a Car.\" This is a\nuseful analogy I used when I was teaching calculus. To help give newcomers an\nintuition about calculus, I would draw upon their experience driving cars! Need\nhelp understanding derivatives? Think about the speedometer! Need help\nunderstanding integrals? Think about the odometer!\n\nNone of us is capable of _really_ understanding things for which we have no\ncontext, background, or logical foundation. Analogies help us bridge that\ngap and provide an easier entry-point based on similarities of otherwise unlike\nthings.\n\n### How we use them here\n\nOn this site, we love to use analogies to explain tech topics and things that\nare otherwise complex or tricky to understand. As a result, when sharing an\nanalogy, we will generally shoot for a few highlights:\n\n- The analogy\n- The high points\n- The breakdown\n\n#### The Analogy\n\nWe begin with an analogy! Most analogies will require a bit of\nelaboration and qualification. At first glance, perhaps it seems strange to\nrelate calculus to driving a car.\n\n#### The High Points\n\nIf the analogy is good, there are some very clear points of similarity that will\nhelp explain the \"less accessible\" topic. 
The high points of an analogy are its\nfocus and primary value, so this is where you should spend the bulk of your time\nwhen learning.\n\n#### The Breakdown\n\nEvery analogy \"breaks down\" somewhere. We should always acknowledge the limits\nof our tools, and every analogy has its limits. Generally, it is best to focus\non the high points of an analogy, lest we confuse newcomers with our\nqualifications. However, in this context we will briefly mention the analogy's\nshortcomings as a highlight of where you can dig deeper. Further, these\nshortcomings can help us understand the desired topic by establishing\nguard-rails.\n\nTo take our example, while driving a car gives you an _entry point_ into\ncalculus, it will not make you an expert. At some point, you _do_ have to master\nthe definitions and mechanics of derivatives and integrals. Also, I can't think of\nany way to relate an oil change to calculus, so let's not get too literal.\n\n### Closing\n\nIt is worth noting that this site is named for analogies. Naturally, I\nwill often focus on them. However, I'm not very creative. As a result, many\nposts on the site will not be about analogies at all. However, my aim is that\nthe theme of explaining topics in an accessible way will remain.\n\nNext time you are trying to explain something complex, do some thinking to see\nif an analogy will help you out! A well-thought-out analogy can be very useful\nand a lot of fun! Of course, others fall completely flat, but that's fun in its\nown way.\n\nFinally, if analogies are ever actively hindering your understanding or just making\nyou more confused, _please_ drop them. If an analogy relates to something you have\nno context for, it may be better to try another or to just ignore the analogy altogether.\nAfter all, the focus is learning!\n", "url": "/blog/analogies-help-us-learn", "title": "Analogies Help Us Learn!", "summary": "\nI love analogies. Perhaps that is easy to guess based on the name of the\nwebsite. Why? 
Well, I'll tell you. What is an analogy? Perhaps you recall something like this from primary or secondary school: egg:chicken::acorn:? If that use of analogies brings back painful memories ...", "date_modified": "2020-10-26T00:00:00.000Z", "author": {}, "tags": [ "Analogy", "Meta", "General", "Tech" ] }, { "id": "debugging-kerberos-is-like-hades", "content_html": " \nWe've all been there. You start debugging Kerberos, and it starts getting warmer\nin the room. Ok, maybe not all of us. Debugging Kerberos certainly can feel an\nawful lot like Hades, though.\n\n### The Analogy\n\n> Debugging Kerberos is like Hades\n\nHades is the Greek god of the underworld, or sometimes a reference to the place\nof the dead generally. I promise, this analogy is not _completely_ out of left\nfield.\n\nKerberos is a computer protocol developed by MIT and used in many enterprise\nsettings for authentication and authorization. However, it gets its name from\nCerberus - the guard dog of Hades. As a result, feeling close to Hades when you are close\nto Kerberos is reasonable at some level.\n\nAlthough there are many aspects of Hades we could highlight for the purposes of\nour analogy, we will focus on a few: hot, tormenting, eternal, inexplicable,\naimless, and generally terrible.\n\n### The High Points\n\nWhy is debugging Kerberos so awful? A few reasons below - particularly,\nreasons that can teach us how _not_ to make software:\n\n- **No documentation** - Ok, there _is_ documentation, but much of it is just too\n theoretical to be useful. Further, much of programming in the 21st century is\n search-engine-fu. Unfortunately, searching Kerberos issues generally leads to\n Microsoft documentation from Windows XP. Despair usually ensues. If you have been\n there, I promise you are not alone.\n- **No logging** - The next best problem for software: no logging. 
Kerberos has\n a knack for giving useless error messages or error messages that need to be run\n through a universal translation engine (which does not exist). Good luck finding\n out what a `Generic preauthentication failure` is.\n - _TIP_: Set the environment variable `KRB5_TRACE=/tmp/somefile.log` when\n debugging a Kerberos client. The\n [client-spec](https://web.mit.edu/kerberos/krb5-devel/doc/admin/troubleshoot.html)\n sends client debug logs to that location!\n- **Little direction** - This goes alongside the previous items. If you have\n theoretical docs and useless logging, the usual sources of direction and \"next\n steps\" are lost. Internet tutorials and a general sense for what I want to\n accomplish are about the only places I have found direction for Kerberos issues.\n Keep at it! Perseverance _usually_ wins out in the end.\n - _TIP_: As with most debugging, try to simplify your case as much as possible \n first. Then slowly add pieces until you get to a working state.\n- **Cryptic configuration** - To properly configure Kerberos, you typically need\n to master a bunch of unusual terminology (realm, kdc, ticket, cache, keytab),\n wade through a myriad of brackets, and pay particular attention to case. Not to\n mention the importance of tricky UDP networking rules and the [directionless\n failure](https://social.technet.microsoft.com/Forums/windowsserver/en-US/7fbece1a-9e72-4ed1-b8d6-1a08f633f0bd/trouble-joining-linux-server-to-ad-domain-in-aws-failed-to-find-dc-for-domain?forum=winserverDS)\n you get if you misstep. Tread carefully! A slew of confounding error messages\n are watching your every move.\n- **Little tolerance for humans** - What is the difference between\n \"analogous.dev\" and \"ANALOGOUS.DEV\"? As far as your browser is concerned,\n nothing. As far as humans are generally concerned, maybe one reads more like\n shouting? As far as Kerberos is concerned, `Realm not local to KDC while getting\n initial credentials`. 
We humans need help doing software. Please help us,\n Kerberos. Make your logs better.\n - _TIP_: Yes, case matters in domain names to Kerberos. It feels like\n flailing, but in some cases it can actually help!\n\nPut all of this together, and what do you get? It may not be Hades, but it is\npretty close to eternal torment.\n\n### The Breakdown\n\nNow that I have completed that rant, I should admit that Kerberos has some really nice things about\nit. It is also worth noting that it has achieved success by being a pretty\nbrilliant piece of software.\n\n- Pass-through authentication to backend services\n- Granular access controls\n- Minimal password usage\n- Integration into Windows\n- Integration with security keys\n- Quickly expiring sessions\n\nThis makes the protocol highly desirable for security, particularly in the enterprise. However,\nwith the internet age upon us, browser-based paradigms like OAuth2 and JWT (JSON web tokens)\nare shaping up to replace Kerberos for most of these purposes.\n\nIn many large enterprises and backend systems (like databases), Kerberos still rules the day.\n\n### Closing\n\nAlthough Kerberos is a necessary fixture in some people's lives (like mine) and\nnonexistent in others, I am hopeful that it can teach us important lessons about\nwhat makes good software and the interesting interplay of obligation versus\nenjoyment.\n\nAt the very least, community, empathy, and a bit of laughter can get us through our\nstruggles with this beast.\n", "url": "/blog/debugging-kerberos-is-like-hades", "title": "Debugging Kerberos is like Hades", "summary": " We've all been there. You start debugging Kerberos, and it starts getting warmer\nin the room. Ok, maybe not all of us. Debugging Kerberos certainly can feel an\nawful lot like Hades, though. 
The Analogy Debugging Kerberos is like Hades Hades is the Greek god of ...", "date_modified": "2020-12-19T00:00:00.000Z", "author": {}, "tags": [ "DevOps", "Analogy", "Tech", "Kerberos", "Auth", "Tricks" ] }, { "id": "devops-is-like-building-sand-castles", "content_html": "\nHow can less-technical people understand what DevOps (short for Development and\nOperations) people do? Welcome to my childish attempt to explain.\n\n### The Analogy\n\n> Doing DevOps work is like building sand castles!\n\nFirst, I'm not a _real_ DevOps person, but bear with me. When you build sand castles\nat the beach, you are aiming to:\n\n1. build something architecturally sound\n1. build it on a decently realistic timeline (or else the tides will eat it for lunch)\n1. build something that is close to destructive forces (waves), or, for extra\n points, involve them (e.g. fill an epic moat)\n1. do something productive (after all, you could just lie down in the sun instead)\n1. defend yourself from an imaginary invader\n\nDevOps is similar to these things! Of course, the primary component in DevOps\nis enabling others (Developers) and making their work more productive.\n_That_ part of the job is more like holding a ladder (more on that another day).\nI am mostly concerned here with what a DevOps person _does_ or _builds_.\n\n### The High Points\n\n1. DevOps involves building and maintaining digital architecture (i.e. sand castles). Whether using\n virtual machines, containers, or \"infrastructure-as-code\", the end goal is a castle.\n1. DevOps engineers often work hard and work quickly, as their work is in high\n demand by the teams that depend on them. Timeliness is important.\n1. Although a system without any users would be easier to maintain, the destructive\n force that is \"users\" will assault the DevOps engineers' well-architected fortress.\n1. Not just any sand castle will do. DevOps engineers often care about building\n things correctly. 
Like mathematicians, they also do not want to do the same\n thing over and over again (\"toil\").\n1. Oftentimes DevOps engineers are concerned about _actual_ invaders... but to\n prepare, they will load test their architecture with imaginary invaders (to be\n sure it will work) or get their coworkers to test it out.\n\nSome of these connections are very helpful. For instance, a load test becomes\n\"simultaneously building a sand-castle and trying to throw waves at it, to make\nsure it is strong.\" Or deploying an update to a cluster is \"automatically\nreconfiguring 40 sand-castles and hoping all of them keep standing.\" Perhaps in\na moment of deep thought, \"I can't figure out why my sand-castles keep crashing.\nThe waves aren't even that big today, and they did just fine yesterday.\"\n\nA Docker container is a sand castle, let's say, and a Docker _image_ is a\ntemplate that allows me to build new sand castles super fast when I need them!\nThe explanatory power is wonderful!\n\n### The Breakdown\n\nThe most dangerous thing about this analogy is that it is quaint and accessible,\nand could therefore easily be used to over-simplify or demean the work of DevOps\nengineers. However, these engineers are building incredibly intricate\narchitectures that run some of the most lucrative businesses on the face of the\nplanet. It is work that should not be taken lightly! Just because sand-castles\nrarely have a lucrative or serious purpose does not mean that DevOps work shares\nthat trait.\n\nA related breakdown: the imaginative childishness of sandcastles coupled with\n\"digital\" architecture that cannot be objectively seen can lead down a similar\npath suggesting that DevOps work is not productive. It is worth remembering that\nthese engineers are actively building, maintaining, and improving the foundation\nthat the _entirety_ of most technology careers is standing upon. 
The\ntwenty-first century would not look the same without these contributors!\n\nAs such, I think this analogy is most productive when an engineer describes their\n_own_ work this way. An observer proposing the analogy could come across as demeaning.\n\nPerhaps a better way to think of it as an external observer is:\n\n> Wow. This person is building and maintaining multi-dimensional _real_ castles\n in another universe that I cannot see with my eyes, or even understand, but\n which I am dependent on for my twenty-first century lifestyle. Thank you!\n\n### Closing\n\nI have tremendous respect for the Operations team that I work with. They do\namazing work and really keep the rest of the teams rolling smoothly! As someone\nwho occasionally dabbles in their area of expertise, I have seen that it can be\nhard to explain the topic to others. This is my feeble attempt at explaining the\nidea to those outside the field.\n\nI have to say, it has been a recurring analogy in discussions with my wife, who\ndoes an amazing job being interested in my digital playground. If you need me,\nI'll be building sandcastles with the kiddos.\n", "url": "/blog/devops-is-like-building-sand-castles", "title": "DevOps is like Building Sand Castles", "summary": "\nHow can less-technical people understand what DevOps (short for Development and\nOperations) people do? Welcome to my childish attempt to explain. The Analogy Doing DevOps work is like building sand castles! First, I'm not a real DevOps person, but bear with me. When you build sand castles\nat ...", "date_modified": "2020-11-05T00:00:00.000Z", "author": {}, "tags": [ "DevOps", "Analogy", "Tech" ] }, { "id": "using-the-kubernetes-python-client-with-aws", "content_html": "\nAs someone who normally just uses `kubectl` and `helm` to talk to my Kubernetes\nclusters, the idea of scripting modifications to my Kubernetes cluster was\nexciting!! 
I cracked open the\n[`kubernetes-python`](https://github.com/kubernetes-client/python) client and\nstarted playing.\n\n## TL;DR;\n\nWe use the `boto3`, `eks-token`, and `kubernetes` Python packages to\ntalk to an EKS cluster without depending on `kubeconfig`.\n\n## Why\n\nFor those of us interactively building and maintaining Kubernetes resources,\n`helm` or `kubectl` become our bread and butter. They provide very nice CLI\ninterfaces and have all the bells and whistles one could ask for!\n\nMoreover, when it comes to AWS (Amazon Web Services) and their EKS (Elastic\nKubernetes Service) clusters, they handle authentication smoothly and easily via\na handy CLI command and the `kubeconfig` file (usually stored at `~/.kube/config`):\n\n```bash\naws eks update-kubeconfig --name mycluster --alias mycluster\n```\n\n(Tip: Don't forget the `--alias` flag!)\n\nMy assume-role settings and environment variables integrate nicely from my shell: everything is dandy!\n\nHowever, in this case, I want to build an _app_ / software program that _itself_\naccesses, modifies, and maintains resources on my Kubernetes cluster. Of course,\none can script a solution around these CLIs, but that is not very portable (requires\nthe CLIs installed) and is prone to failure because `kubeconfig` is user-defined,\nuser-maintained, and hinges entirely on `contexts`, which are arbitrary text strings.\n\nSo I headed for the `kubernetes-python` API client, which simplifies making API requests\nagainst a Kubernetes cluster. With any luck, I will finish the day with a Python program\nthat creates my Kubernetes resources.\n\n**NOTE**: Before we get started, it is worth noting that I am presuming you have\na way to authenticate to AWS. In my case, I have already provided environment\nvariables, an instance profile, etc. that give this program access to the AWS\nAPI as the necessary IAM role.\n\n## The Story is Auth\n\nAs with many of my stories, this story will become one largely consisting of\nauthentication. 
Unraveling the \"magic\" of the `kubeconfig` file and AWS's IAM\nauthentication was not an easy task - as I soon learned, the `kubernetes-python`\nAPI client _also_ depends heavily on `kubeconfig` by default.\n\nMost of [the\nexamples](https://github.com/kubernetes-client/python/tree/master/examples) look\nsomething like this, and if you want to use `kubeconfig`, this works very well.\n\n```python\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\ncore_v1 = client.CoreV1Api()\n```\n\n(Although there is also a cool example of [in-cluster config](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py))\n\nHowever, for portability, I want to bypass `kubeconfig` but will be running outside of a cluster. So let's see how this\nclass works.\n\n```python\nconfig.kube_config.Configuration\n```\n\nAfter a bit of flailing, fighting, and digging, I learned that you can\ninitialize an API connection with the API endpoint and a bearer token.\nBasically, you initialize a configuration object directly and then use that to\ninitialize an API client.\n\n\n```python\nimport kubernetes\n\n\ndef k8s_api_client(endpoint: str, token: str, cafile: str) -> kubernetes.client.CoreV1Api:\n kconfig = kubernetes.config.kube_config.Configuration(\n host=endpoint,\n api_key={'authorization': 'Bearer ' + token}\n )\n kconfig.ssl_ca_cert = cafile\n kclient = kubernetes.client.ApiClient(configuration=kconfig)\n return kubernetes.client.CoreV1Api(api_client=kclient)\n```\n\n\nThere are other useful options in `kconfig`, like `kconfig.proxy` and\n`kconfig.verify_ssl`. Have a look at the class for more details!\n\n### Authenticate to AWS / EKS\n\nNow we need to figure out how to get a bearer token to talk to EKS. I cannot use\nthe magic in `kubeconfig`, which led me to the [AWS CLI's `aws eks get-token`\ncommand](https://docs.aws.amazon.com/cli/latest/reference/eks/get-token.html). 
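Under the hood, the token that command returns is just a presigned STS `GetCallerIdentity` URL, base64url-encoded and prefixed with `k8s-aws-v1.`. A minimal sketch of that final encoding step, using a placeholder URL rather than a real presigned one:

```python
import base64


def to_eks_token(presigned_url: str) -> str:
    # base64url-encode the presigned STS URL, strip the padding,
    # and add the prefix that marks this as an EKS bearer token
    encoded = base64.urlsafe_b64encode(presigned_url.encode('utf-8')).decode('utf-8')
    return 'k8s-aws-v1.' + encoded.rstrip('=')


# placeholder URL for illustration only; a real token wraps a signed,
# short-lived URL that includes credentials in its query parameters
print(to_eks_token('https://sts.amazonaws.com/?Action=GetCallerIdentity'))
```

This is only an illustration of the token format, not a working credential.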
I\nchose to opt for a pure Python solution in the\n[`eks-token`](https://pypi.org/project/eks-token/) package. You can evaluate the\nsource for yourself [here](https://github.com/peak-ai/eks-token) and [more\nspecifically,\nhere](https://github.com/peak-ai/eks-token/blob/master/eks_token/logics.py).\n\n\nWith this module, we can acquire an EKS token easily:\n\n\n```python\nimport eks_token\n\ncluster_name = 'my-eks-cluster'\nmy_token = eks_token.get_token(cluster_name)\n```\n\n\n**NOTE:** It is worth checking whether the `boto3` or `aws` libraries provide\naccess to this functionality directly from Python as things improve!\n\n### Now TLS\n\nUnfortunately, the `kubernetes-python` client does not allow for inlining\nthe TLS CA Certificate. As a result, we have to write it to a temp file\n(which by inspection is exactly what the `kubeconfig` approach is doing).\n\n\n```python\nimport boto3\nimport tempfile\nimport base64\n\n\ndef _write_cafile(data: str) -> tempfile.NamedTemporaryFile:\n # delete=False protects the file from automatic deletion\n cafile = tempfile.NamedTemporaryFile(delete=False)\n # the CA certificate data arrives base64-encoded\n cadata = base64.b64decode(data)\n cafile.write(cadata)\n cafile.flush()\n return cafile\n\n\nbclient = boto3.client('eks')\ncluster_data = bclient.describe_cluster(name=cluster_name)['cluster']\nmy_cafile = _write_cafile(cluster_data['certificateAuthority']['data'])\n```\n\n### Put it all together\n\nArmed with a bearer token and transport layer security (TLS), we now have the\ntools we need to succeed!!\n\n\n```python\napi_client = k8s_api_client(\n endpoint=cluster_data['endpoint'],\n token=my_token['status']['token'],\n cafile=my_cafile.name\n)\n\napi_client.list_namespace()\n```\n\n## Go wild\n\nThe first time `list_namespace()` returned data, I was ecstatic.\nAuthentication successful! 
Now the world is your oyster.\n\nAll that's left is perusing the [API client\ndocs](https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md)\nand refreshing AWS creds every so often!\n\nFor example, creating a configmap:\n\n\n```python\nmy_configmap = kubernetes.client.V1ConfigMap(\n api_version='v1',\n metadata={'name': 'my-configmap'},\n kind='ConfigMap',\n data={'my_file.txt': 'mycontent'}\n)\n\napi_client.create_namespaced_config_map(namespace='default', body=my_configmap)\n\n# NOTE: to update a configmap, you need to\n# use api_client.replace_namespaced_config_map\n#\n# If it already exists, create will give a 409 conflict\n```\n\nHave fun!!\n\n[The source for this article is available here (along with a few extra\nexamples)](https://github.com/colearendt/example-python-kubernetes)\n\n## Outtakes\n\nYes, there were some outtakes:\n\n- Standing up [`mitmproxy`](https://mitmproxy.org/) in a Docker container and\n setting `kconfig.proxy='http://localhost:8080'` and `kconfig.verify_ssl=False`\n before initializing `api_client` so I could see the requests being made to the\n Kubernetes cluster and verify whether authentication was being sent\n- Flailing on temp files (today, I learned that Python disposes of temp files\n rather quickly by default)\n- Trolling the internet and finding an unfortunate absence of docs on this\n topic. \"Is this obvious to everyone else? Am I doing something wrong?\" Classic\n questions of a lonely explorer.\n- Accidentally installing the `aws` package into my `pyenv` and breaking my\n terminal's ability to assume roles when in certain directories.\n\nProgramming can be hard! It is so nice to know we are not alone!\n", "url": "/blog/using-the-kubernetes-python-client-with-aws", "title": "Using the Kubernetes Python Client with AWS", "summary": "\nAs someone who normally just uses `kubectl` and `helm` to talk to my Kubernetes\nclusters, the idea of scripting modifications to my Kubernetes cluster was\nexciting!! 
I cracked open the\n`kubernetes-python` client and\nstarted playing. TL;DR; We use the `boto3`, `eks-token`, and `kubernetes` Python packages to\ntalk to an EKS ...", "date_modified": "2021-01-27T00:00:00.000Z", "author": {}, "tags": [ "DevOps", "Python", "Programming", "HowTo", "AWS", "Kubernetes" ] }, { "id": "sssd-without-tls", "content_html": "\n[`sssd`](https://sssd.io/) has established itself as the most common way to provision system\naccounts via LDAP or Active Directory on Linux servers across all Linux\ndistributions. However, working with it can be tricky!\n\n## TL;DR;\n\nWe show an example of using `sssd` to contact an LDAP server that is\nlistening on port 389 (in plaintext / no TLS). This is _NOT_ a good\nidea in any production environment. However, it can be important\nand helpful in playgrounds, learning, or other experiments. The magic\nconfiguration is `ldap_auth_disable_tls_never_use_in_production = true`.\n\n## Why\n\nIt is quite straightforward to stand up an LDAP server listening in plaintext. My\nfavorite mechanism is using the [`openldap`\ncontainer](https://github.com/osixia/docker-openldap),\n[although there are other options](https://github.com/nitnelave/lldap).\n\n```bash\ndocker run -it --rm -p 389:389 osixia/openldap:latest\n```\n\nHowever, if you have a toy Linux container running `sssd`, this is unfortunately not an obvious option! Why, you ask?\nThis is all just a dev playground!? Right. Well, the `sssd` maintainers want to be very careful about not creating\nsecurity vulnerabilities or letting their users get hacked. This means you have to work hard to open yourself up to this\ntype of vulnerability in your playground.\n\nSpecifically, we will use the `ldap_auth_disable_tls_never_use_in_production` setting.\n\n> NOTE: Do not use this setting in any \"real\" environment with \"real\" users, passwords, sensitive data, etc.\n\n## Give it a Shot\n\n### Create Users\n\nFirst, we need to create and populate our LDAP server. 
Let's go ahead and do that. It is easiest if we create a file\nwith users first. For a more advanced LDIF file, check\nout [the repository associated with this post](https://github.com/colearendt/container-playground):\n\n_users.ldif_\n```ldif\nversion: 1\n\n## Entry 1: dc=angl,dc=dev\n#dn: dc=angl,dc=dev\n#dc: angl\n#o: Angl Dev\n#objectclass: top\n#objectclass: dcObject\n#objectclass: organization\n#\n## Entry 2: cn=admin,dc=angl,dc=dev\n#dn: cn=admin,dc=angl,dc=dev\n#cn: admin\n#description: LDAP administrator\n#objectclass: simpleSecurityObject\n#objectclass: organizationalRole\n#userpassword: {SSHA}+FquX8RcwTtBPo7mu2pgSvjaQYX9HpCL\n#\n#\n# Entry 3: cn=engineering_group,dc=angl,dc=dev\ndn: cn=engineering_group,dc=angl,dc=dev\ncn: engineering_group\ngidnumber: 500\nmemberuid: joe\nmemberuid: julie\nobjectclass: posixGroup\nobjectclass: top\n\n# Entry 4: dc=engineering,dc=angl,dc=dev\ndn: dc=engineering,dc=angl,dc=dev\ndc: engineering\ndescription: The Engineering Department\no: Engineering\nobjectclass: dcObject\nobjectclass: organization\nobjectclass: top\n\n\n# Entry 5: cn=joe,dc=engineering,dc=angl,dc=dev\ndn: cn=joe,dc=engineering,dc=angl,dc=dev\ncn: joe\ngidnumber: 500\ngivenname: Joe\nhomedirectory: /home/joe\nloginshell: /bin/sh\nmail: joe@angl.dev\nobjectclass: inetOrgPerson\nobjectclass: posixAccount\nobjectclass: top\nsn: Golly\nuid: joe\nuidnumber: 1000\nuserpassword: {MD5}j/MkifkvM0FmlL6P3C1MIg==\n\n# Entry 9: cn=julie,dc=engineering,dc=angl,dc=dev\ndn: cn=julie,dc=engineering,dc=angl,dc=dev\ncn: julie\ngidnumber: 500\ngivenname: Julie\nhomedirectory: /home/julie\nloginshell: /bin/sh\nmail: julie@angl.dev\nobjectclass: inetOrgPerson\nobjectclass: posixAccount\nobjectclass: top\nsn: Jolly\nuid: julie\nuidnumber: 1001\nuserpassword: {MD5}FvEvXoN54ivpleUF6/wbhA==\n```\n\nYou will notice that the first two entries are commented out. 
They are included to represent a _complete_ LDIF file.\nHowever, the `osixia/docker-openldap` container helps us by provisioning these automatically.\n\nFurther, you will notice that passwords are included. This makes things easier for our playground, but is _definitely_ a\nbad idea in real life / production applications.\n\n### Create LDAP Server\n\nNow let's create the server itself!\n\n```bash\ndocker network create playground-network\ndocker run \\\n -d --name openldap --rm \\\n -p 389:389 \\\n --network playground-network \\\n -v $(pwd)/users.ldif:/container/service/slapd/assets/config/bootstrap/ldif/50-bootstrap.ldif \\\n -e LDAP_TLS=false \\\n -e LDAP_DOMAIN=\"angl.dev\" \\\n -e LDAP_ADMIN_PASSWORD=\"admin\" \\\n osixia/openldap:1.5.0 \\\n --copy-service --loglevel debug\n```\n\nAnd check that it is working:\n\n```bash\ndocker exec -it openldap ldapsearch -D cn=admin,dc=angl,dc=dev -b dc=angl,dc=dev -w admin cn\ndocker exec -it openldap ldapsearch -D cn=admin,dc=angl,dc=dev -b dc=angl,dc=dev -w admin cn=julie \\*\n```\n\nIf you look carefully, you will notice that:\n\n1. We created a persistent network for our containers to share\n2. We provisioned users from our `ldif` file\n3. We disabled TLS on the service\n4. We bumped up the logging verbosity for debugging purposes\n\nThese are all useful tidbits to dig into if you are not familiar!\n\n### Configure sssd Server\n\nIt is possible to run `sssd` in a fairly vanilla `ubuntu:jammy` container.\n\n```bash\ndocker run -it --name sssd --rm --network playground-network ubuntu:jammy bash\n\napt update && apt install -y sssd ldap-utils vim\n```\n\nThen you need to create your `sssd.conf` file. Notice our magic\noption `ldap_auth_disable_tls_never_use_in_production=true`. 
This will be the magic that makes things work for us!\n```bash\ncat << EOF > /etc/sssd/sssd.conf\n[sssd]\nconfig_file_version = 2\nservices = nss, pam\ndomains = LDAP\n\n[nss]\nfilter_users = root,named,avahi,haldaemon,dbus,radiusd,news,nscd\nfilter_groups =\n\n[pam]\n\n[domain/LDAP]\nid_provider = ldap\nauth_provider = ldap\nchpass_provider = ldap\nsudo_provider = ldap\nenumerate = true\n# ignore_group_members = true\ncache_credentials = false\nldap_schema = rfc2307\nldap_uri = ldap://openldap:389\nldap_search_base = dc=angl,dc=dev\nldap_user_search_base = dc=angl,dc=dev\nldap_user_object_class = posixAccount\nldap_user_name = uid\n\nldap_group_search_base = dc=angl,dc=dev\nldap_group_object_class = posixGroup\nldap_group_name = cn\nldap_id_use_start_tls = false\nldap_tls_reqcert = never\nldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt\nldap_default_bind_dn = cn=admin,dc=angl,dc=dev\nldap_default_authtok = admin\naccess_provider = ldap\nldap_access_filter = (objectClass=posixAccount)\nmin_id = 1\nmax_id = 0\nldap_user_uuid = entryUUID\nldap_user_shell = loginShell\nldap_user_home_directory = homeDirectory\nldap_user_uid_number = uidNumber\nldap_user_gid_number = gidNumber\nldap_group_gid_number = gidNumber\nldap_group_uuid = entryUUID\nldap_group_member = memberUid\nldap_auth_disable_tls_never_use_in_production = true\nuse_fully_qualified_names = false\nldap_access_order = filter\ndebug_level=6\nEOF\nchmod 600 /etc/sssd/sssd.conf\n```\n\nNow let's start the `sssd` service\n```bash\nsssd -i\n# should see some log messages that suggest things are happening!\n```\n\n### Be sure it works!\n\nNow let's make sure that this works by starting another shell in our `jammy` container.\n\n```bash\ndocker exec -it sssd bash\n\nid joe\n# uid=1000(joe) gid=500(engineering_group) groups=500(engineering_group)\nid julie\n# uid=1001(julie) gid=500(engineering_group) groups=500(engineering_group)\n```\n\n## Using `docker-compose`\n\nFor playground environments like this, 
`docker-compose` makes the setup much easier to architect and reuse. You can\nuse [my example compose setup](https://github.com/colearendt/container-playground) if you prefer.\n\n```bash\ncd compose/\ndocker network create playground-network\nNETWORK=playground-network docker-compose -f ldap.yml -f sssd.yml -f network.yml up -d\ndocker exec -it compose_sssd_1 bash\n\nsssd -i >/tmp/sssd.log 2>&1 &\nid joe\n```\n\n## Review\n\nWell done! You have successfully started your own `sssd` container. Although this is very much a toy, it is a\ngreat \"jumping-off point\" to learn and understand how `sssd` works in more detail!\n\nAny time you need a toy LDAP server for `sssd`, just remember: `ldap_auth_disable_tls_never_use_in_production = true`.\n", "url": "/blog/sssd-without-tls", "title": "Using sssd in a Playground Without TLS", "summary": "\n`sssd` has established itself as the most common way to provision system\naccounts via LDAP or Active Directory on Linux servers across all Linux\ndistributions. However, working with it can be tricky! TL;DR: We show an example of using `sssd` to contact an LDAP server that is\nlistening ...", "date_modified": "2022-10-01T00:00:00.000Z", "author": {}, "tags": [ "Docker", "Container", "LDAP", "sssd", "DevOps", "SysAdmin", "HowTo", "openldap" ] }, { "id": "helm-cheatsheet", "content_html": "\nBelow is an introduction to Helm! If you want to [skip to the\ncheatsheet](#cheat-sheet), you can [download it\nhere](https://www.analogous.dev/download/cheatsheet/helm.pdf).\n\n## What is Helm\n\nAccording to [its own docs](https://helm.sh/docs/), Helm is \"the\" package\nmanager for Kubernetes. What does this mean?\n\nIt's a way of keeping track of all your Kubernetes stuff!\n\nHelm, as I describe it, is a mechanism for packaging and parameterizing standard\nKubernetes YAML files. 
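To make that concrete, here is a tiny sketch of what a parameterized Kubernetes YAML fragment can look like inside a chart (a hypothetical snippet, not taken from any particular chart):

```yaml
# templates/deployment.yaml (excerpt) -- hypothetical chart template
# the {{ .Values.* }} placeholders are filled in from a user-supplied values file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Given a values file that defines `replicaCount` and an `image` block, Helm renders this into plain, `kubectl`-applyable YAML.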
It uses [Go\nTemplating](https://blog.gopheracademy.com/advent-2017/using-go-templates/) for\nmost of this mechanism, and adds a layer of version / metadata tracking as\nwell. All of this is packaged up into tarballs used by a client-side-only (as of\n`helm` v3) CLI.\n\nSo basically: Helm = YAML + Go Templating + Versioning + Tarballs.\n\n## Why use it?\n\nThere are lots of alternatives out there, and many purported \"Helm replacements,\"\nbut Helm has yet to give up its throne, and I have not found anything better\nfor my own use cases... yet. So what are Helm's strengths?\n\nI will do my best not to wax poetic. I am biased and a big fan of Helm. As a layer of\nabstraction between an application and Kubernetes, I think it is a fantastic asset.\n\nIn particular, I think this is because:\n\n- No runtime dependency\n- Client-side-only utility\n- Data stored server-side for collaboration\n- Output represents native Kubernetes objects (i.e. interoperable with other tools)\n- `helm template` gives rapid feedback for iterating and testing\n- Plain-text file output / diffs are very easy to parse\n\nAs a system administrator, it is nice because it offers:\n\n- Version pinning for reproducibility\n- Everything is open-source tarballs, so dependencies are easy to track and introspect\n- Application vendors will ideally maintain their own chart and good NEWS files\n\n## When to use it?\n\nSo that's _what_ it is, and _why_ it is desirable. 
But _when_ is it useful?\n\nI find that Helm particularly shines in a handful of situations:\n\n- Managing an array of applications deployed on Kubernetes\n- Packaging your own application for use by customers\n- Encoding complex knowledge about \"how to run an application\" (to an extent,\n then you get to\n [operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/))\n- Setting up easy \"roll-back\" policies for applications that support the\n behavior\n\nOccasionally a wrapper like [ArgoCD](https://argoproj.github.io/cd/), [Flux](https://fluxcd.io/),\n[helmfile](https://github.com/helmfile/helmfile), or [pulumi](https://www.pulumi.com/docs/get-started/kubernetes/)\nwill be useful to manage your Helm deployments too, so\nthat you don't have to keep track of a bunch of CLI commands.\n\n## When not to use it?\n\nHelm can definitely be overkill in some \"hello world\" or very simple deployment\nsituations. Unfortunately, it also **does not have a great answer for\n[CRDs](https://helm.sh/docs/topics/charts/#custom-resource-definitions-crds)\nyet**. Moreover, it is **only useful for Kubernetes**, so if you are unfamiliar\nwith Kubernetes, it will have limited utility for you.\n\nThe other case where it may not be useful is for some **internal applications**.\nMaintaining a Helm chart for an application can end up being a sizable amount\nof work, and charts do not allow arbitrary inputs, so if you miss some key (e.g.\n`imagePullSecrets`), you can end up spending a lot of time key-chasing across\nyour charts. I have heard of folks using [Kustomize](https://kustomize.io/) in\nsuch a situation, although another option is to use a meta chart (one chart for\nmany apps) or a Functions-as-a-Service (FaaS) framework like\n[Serverless](https://www.serverless.com/),\n[OpenFaas](https://www.openfaas.com/), [Knative](https://knative.dev/docs/),\netc.\n\nAlso, Helm charts do have a **complexity ceiling**. 
Go Templating provides\nlots of flexibility, but staying DRY is hard, and there are many parts of the\nprocess that are not optimal from a software development point of view. As\ncharts become more complex, an\n[operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)\nbecomes increasingly beneficial as a mechanism to provide better software\nsemantics to the application management process. However, the learning curve\nfor operators can also be a bit steep.\n\nFinally, Helm charts unfortunately **do not have hard-and-fast standards about how values are used** across the\necosystem. As a result, you will often encounter wild variations in chart quality, value naming, and value behavior.\n\n## Hello World\n\nLet's get started on a hello world example! First, you need to [install\nkubectl](https://kubernetes.io/docs/tasks/tools/#kubectl), [install\nhelm](https://helm.sh/docs/intro/install/), and have a Kubernetes cluster\navailable. Once those things are taken care of, a hello world example of a\nHelm deployment is pretty straightforward!\n\nFor this example, we will use my [generic\nchart](https://github.com/colearendt/helm/tree/main/charts/generic), useful for\ndeploying simple services with standard configuration needs.\n\nWe are also going to use [this hello-world container](https://hub.docker.com/r/paulbouwer/hello-kubernetes).\n\nFirst, add the repository that houses our example chart:\n\n```bash\nhelm repo add colearendt https://colearendt.github.io/helm/\n```\n\nYou can look at the values available for the chart:\n\n```bash\nhelm show values colearendt/generic\n\n# I like to pipe it to a pager for search and such\nhelm show values colearendt/generic | less\n```\n\nThen create a YAML file called _my-values.yaml_ to hold values:\n\n_my-values.yaml_\n```yaml\nimage:\n repository: paulbouwer/hello-kubernetes\n tag: \"1.10\"\npod:\n port: 8080\n```\n\nThen template the output:\n\n```bash\nhelm template hello-world colearendt/generic -f my-values.yaml\n```\n\nAnd 
install it into the Kubernetes cluster!\n\n```bash\nhelm upgrade --install hello-world colearendt/generic -f my-values.yaml\n```\n\nThen you should be able to see the app deployed:\n\n```bash\nhelm list\nkubectl get pods\n```\n\nAnd port-forward the service to view it in your web browser at http://localhost:8080:\n\n```bash\nkubectl port-forward svc/hello-world-generic 8080:80\n```\n\n### Clean Up\n\nIf you want to clean up after yourself:\n\n```bash\n# delete the helm release\nhelm delete hello-world\n\n# delete the repository reference\nhelm repo remove colearendt\n```\n\nUnfortunately, I have not taken much time to dive into troubleshooting here! If you are hitting issues,\nplease [shoot me an email](mailto:info@analogous.dev) - I would love to have feedback on what to improve! Maybe\nsomeday I will take the time to set up comments 😅\n\n## Best Practices\n\nSo now you have a \"Hello World\" deployment under your belt. However, it also helps to keep in mind some best practices\nas you keep improving. Below is a handful of Helm chart conventions that may be unfamiliar if you are new to the\ncommunity:\n\n- Make sure to pin Helm chart versions with the `--version` flag\n- Maintain a `NEWS.md` file (or read the `NEWS.md` file) to keep track of\n changes between versions\n- Keep an eye out for \"upgrading directions\" in the `README.md` or elsewhere\n- Use `helm show values` to see the default values (and the comment strings\n associated with them). Ideally these are presented or discussed in a `README` as well.\n- Avoid [`sub-charts`](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/) if you can. It is tempting as a\n DRY software principle, but it turns out to be a pretty advanced topic with lots of tricky edge cases. In particular,\n namespaces can be painful.\n\n## Cheat Sheet\n\nI took the time to arrange a \"cheat sheet\" of my favorite helm commands and the\ncontexts in which they are useful. 
It was inspired by [RStudio's array of excellent\ncheat sheets for the R community](https://www.rstudio.com/resources/cheatsheets/).\n\nA hit-list of some of the most useful commands:\n\n- `helm show values chartrepo/chartname`\n- `helm template releasename chartrepo/chartname`\n- `helm upgrade --install releasename chartrepo/chartname`\n- `helm repo add chartrepo https://repourl`\n- `helm repo list`\n- `helm search repo`\n- `helm status releasename`\n- `helm list`\n\nAnd the cheat-sheet itself can be downloaded [here](https://www.analogous.dev/download/cheatsheet/helm.pdf).\n
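As a parting example, the "pin your chart versions" advice above carries over to wrappers like [helmfile](https://github.com/helmfile/helmfile), mentioned earlier. A hypothetical `helmfile.yaml` for our hello-world release might look like this (the chart version below is made up for illustration):

```yaml
# helmfile.yaml -- hypothetical sketch; the version number is made up
repositories:
  - name: colearendt
    url: https://colearendt.github.io/helm/

releases:
  - name: hello-world
    chart: colearendt/generic
    version: 0.1.0        # pinned for reproducibility
    values:
      - my-values.yaml
```

With this in place, `helmfile apply` stands in for the `helm repo add` and `helm upgrade --install ... --version ...` invocations.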