21 April, 2021

Contextual hazards

I'd like to describe a few types of communication breakdowns in terms of the three basic data hazards, because I feel like it.

Has someone ever assumed you know something just because they know it?

In non-tech-related situations, we might use the word "context" like this:


What I'm talking about is not the above definition exactly, but the technical word context, which I would consider a synonym for shared understanding within a group. I wouldn't consider context limited to a single situation (e.g., a funeral is meant to be solemn in some cultures). Rather, it's a collection of in-group ideas which may or may not be written down anywhere, but which apply across many kinds of situations.

When designing a data pipeline, we must be careful not to affect an operand at the wrong time, else we run into one of the classic data hazards: read after write, write after read, and write after write. In each of these scenarios, there's some missing state (context) not quite carried through.

RAW (Read After Write)

Even though instruction B gets processed after instruction A, part of the pipeline is still working on instruction A. B tries to act on registers A hasn't yet written. Oops.
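The same ordering problem can be sketched in a few lines of toy Python (a simulation of the idea, not of real pipeline hardware - the names and values here are mine):

```python
# Toy register file: A writes r1, B reads r1. If B executes before A's
# write has landed, B computes from a stale value - a RAW hazard.
regs = {"r1": 0}

def instr_a():
    # A: r1 <- 5
    regs["r1"] = 5

def instr_b():
    # B: return r1 + 1 (depends on A's write)
    return regs["r1"] + 1

stale = instr_b()    # hazard: B runs before A's write lands, sees r1 == 0
instr_a()
fresh = instr_b()    # correct order: B reads A's result, r1 == 5
print(stale, fresh)  # 1 6
```

Either the pipeline has to stall B until A's write commits, or (in real hardware) forward A's result to B early.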

People example: you're trying to explain something technical to someone, but you skip steps or otherwise fail to justify your work. They don't get it, and you don't know why, since it's completely obvious to you. You feel like you've explained everything necessary to understand the situation - but some key piece of data is still missing for them. They may get upset, and they can't ask you to explain the missing piece because they don't know what they're missing. It might be easier not to explain the whole thing up front, but rather to start with a high-level overview and then fill in details based on the questions they ask. Alternately: provide background, explain a few steps, give them time to ask questions, repeat.

WAW (Write After Write)

Instructions A and B both write to the operand op. A's write is still being processed when B's write lands, so the writes can complete out of order and op ends up holding A's result instead of B's. We want the state that results from A, then B; not B, then A.
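A minimal toy sketch of the WAW case in Python (again a simulation of the ordering problem, not hardware): both instructions write op, and the final value must come from the later write.

```python
# A and B both write op; in program order A's write lands first and B's
# last, so op must end up holding B's result. Out-of-order completion
# leaves it holding A's instead - a WAW hazard.
op = {"value": None}

def instr_a():
    op["value"] = "A's result"

def instr_b():
    op["value"] = "B's result"

instr_a(); instr_b()   # correct completion order
correct = op["value"]  # "B's result"

instr_b(); instr_a()   # A's slow write lands after B's
hazard = op["value"]   # "A's result" - wrong final state
print(correct, hazard)
```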

People example: you go to the grocery store intending to get things you and your roommate want, but you didn't realise your roommate wanted additional stuff! You go ahead and get them things, and they're mad because they don't have what they need for the week. Sometimes it's hard to do things based on what other people want. Setting expectations (I will go to the grocery store at 8 pm on Wednesdays) could be a way around this.

WAR (Write After Read)

Instruction B writes a value to op before Instruction A can read op's value. This means that A gets the wrong value from op.
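One more toy Python sketch (a simulation of the ordering problem, not hardware): A is supposed to read op's old value, but B's write jumps ahead of it.

```python
# A should read op's old value before B overwrites it. If B's write
# completes first, A reads a value from the "future" - a WAR hazard.
op = {"value": "original"}

def instr_a():
    # A: read op (program order: before B's write)
    return op["value"]

def instr_b():
    # B: op <- "updated" (program order: after A's read)
    op["value"] = "updated"

instr_b()          # B's write completes early...
seen = instr_a()   # ...so A sees "updated" instead of "original"
print(seen)
```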


People example: someone tells you half the story (and gets it wrong, either purposefully or accidentally) before you can get the main details from the person who experienced the event. You have no idea what is going on. When the person who experienced the event tells you their story, you're sceptical, since you're already going off the first, incorrect version you got - from the person who wasn't even there. Or worse, you tell someone the first version, and it's a serious matter, and there are unintended consequences. Eventually you learn the real facts. You feel embarrassed. You have to apologise to people, or you just look plain silly. It's sometimes hard to tell whether you're being wound up. The person telling you the story could be abusing your trust in them. Verify your info before you pass on hearsay.

04 April, 2021

Adventures in Systemd-Linux-Dockerland

I am just recently starting to use Docker again after a roughly six-year hiatus (I used it regularly ~2014-2015). I do not recommend following the actions I document here in any way, shape, or form, but I wanted to chronicle them in case they're educational for others, or in case I don't use Docker again until 2030 and want to know what worked in 2021.

Constructive suggestions or corrections (so I know for the future) are welcome! There are likely better or more efficient ways to do all of the following.


The proper, real, Docker setup documentation for Arch may be found here. If you actually want to use Docker on Arch, I would recommend trying that. You might also want to get a peep at the systemctl and systemd manpages.

Disclaimer: some of the following may be universal across Linux distributions, but as I run Arch, some of it is likely specific to Arch and perhaps also my own setup.

How I got Docker running as described here, from zero, in Arch, differs quite a bit from how people ought to do it. I was going too fast and assumed I still knew stuff. I also barely tolerate systemd, though I've grown more accustomed to it in recent years.

I hope this illustrates some potential pitfalls of not properly following official documentation for the thing you want to run or the documentation for running said thing on your own distro when you actually want to get something done fast.

What I did, with explanations:

Took a wild guess here and:

$ pacman -S docker

According to this, you need a loop module and then you'll install Docker from the Snap store. Needing loop tracked with my previous experience with Docker so I didn't question it. I don't use Snap (as recommended in the link) given I have Pacman already in Arch, so I proceeded to skip that part of the tutorial and went on to the next bits. 

We need Docker running in order to create and interact with containers. One can run Docker a couple of different ways: with systemd (systemctl), or just by calling dockerd directly. I went the systemd route this time.

$ sudo tee /etc/modules-load.d/loop.conf <<< "loop"
$ sudo modprobe loop  # load the loop module now; the conf file above loads it on future boots
$ sudo systemctl start docker.service

The above call to the docker.service unit with sudo is how this recommended starting Docker, but after trying it I wasn't sure it made sense for my objective. Either way, systemctl is managing the system-wide docker.service unit here; with sudo we simply run systemctl as root. Note that start is one-off - it will not cause docker.service to run automatically during future sessions. Running `sudo systemctl enable docker.service` instead would do that.

According to the systemctl manpage, `systemctl start docker` and `systemctl start docker.service` are equivalent: if not specified, systemctl adds the appropriate unit suffix for us (.service by default). The sudo is what's actually optional here. Without it, systemctl asks polkit to authenticate me before starting the very same system-wide unit - which is why `systemctl start docker` without sudo prompts for my password, as seen below. What I had wrong at the time was believing that starting the daemon with sudo meant I would also have to use sudo to initialise, run, etc. any future Docker containers. That actually depends on whether your user can talk to Docker's socket, i.e., on membership in the docker group, which comes up later.

Back to Docker. At this point, Docker Was Not Up.

`systemctl status docker.service` showed the service had failed to start. I got cranky, so I went looking for docker.service. The following is what I got by default when I installed the docker package on Arch. As shown here under Requires, docker.socket is a dependency of docker.service:

$ cat /usr/lib/systemd/system/docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target docker.socket firewalld.service
    Wants=network-online.target
    Requires=docker.socket

    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/bin/dockerd -H fd://
    ExecReload=/bin/kill -s HUP $MAINPID
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    # Uncomment TasksMax if your systemd version supports it.
    # Only systemd 226 and above support this version.
    #TasksMax=infinity
    # set delegate yes so that systemd does not reset the cgroups of docker containers
    Delegate=yes
    # kill only the docker process, not all processes in the cgroup
    KillMode=process
    # restart the docker process if it exits prematurely
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target


So I tried:

$ systemctl start docker.socket

Which worked, in that docker.socket came up successfully - but this system state change did not help with the command from the tutorial I was still trying to follow, `sudo systemctl start docker.service`. Starting Docker either with `systemctl start docker` or with dockerd should, I believe, start the socket automatically for you. Note, though, that if you `systemctl stop docker` while docker.socket is still active, the socket unit can start docker again on its own.

Since docker is a socket-activated daemon as installed by default on Arch, I could have enabled the docker.socket unit and just used that as the base for my containers in the future. In this mode, systemd listens on the socket in question and starts docker when I start containers. This style of usage is meant to be less resource-intensive, since the docker daemon then only runs as needed. We could also go full socketception and make socket-activated, on-demand container units, if we have containers we want to reuse, and then use systemctl to control them.
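To illustrate the idea, here is a toy sketch of socket activation in Python - a stand-in for the concept, not systemd's actual mechanism: something cheap holds the listening socket, and the expensive service is only started when a client actually connects.

```python
import socket

service_started = False  # stands in for "dockerd is running"

def start_service():
    # In real systemd this is the expensive part: launching the daemon
    # and handing it the already-open socket.
    global service_started
    service_started = True

# The "systemd" side: hold the listening socket, start the service lazily.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen()

# A client connects - this stands in for something like `docker run`.
client = socket.socket()
client.connect(listener.getsockname())

conn, _ = listener.accept()
if not service_started:
    start_service()              # first connection wakes the daemon

conn.close(); client.close(); listener.close()
print(service_started)           # True
```

Until that first connection arrives, only the tiny listener exists, which is why socket activation saves resources.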

But still, all I really wanted to do was `systemctl start docker` and just use docker by itself (no sudo, no extra systemctl units) after that so I tried to fix my environment up again with:

$ systemctl disable --now docker
$ systemctl disable --now docker.socket

So that we can have networking in our containers, it appears Docker will automatically create a docker0 bridge for us (in the DOWN state, until a container attaches) when we start it via systemctl:

$ sudo ip link
[sudo] password for user:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback ...
2: enp0s25: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ...
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
    link/ether ...

$ systemctl status docker
○ docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: inactive (dead)
TriggeredBy: ○ docker.socket
       Docs: https://docs.docker.com

$ systemctl start docker
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to start 'docker.service'.
Authenticating as: user

$ systemctl status docker            
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: active (running) since ...; 1h 36min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 2881 (dockerd)
      Tasks: 20 (limit: 19044)
     Memory: 155.5M
        CPU: 3.348s
     CGroup: /system.slice/docker.service
             ├─2881 /usr/bin/dockerd -H fd://
             └─2890 containerd --config /var/run/docker/containerd/containerd.toml --log-level info

$ sudo ip link           
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback ...
2: enp0s25: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ...
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
    link/ether ...
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether ...

At this point, I had Docker itself working as my own user account. I thought I was ready to pull an image down and try to make a container. 

$ docker pull sickcodes/docker-osx:auto

I was wrong! 

Trying to build a headless container from the docker-osx:auto image as specified in the doc I was following did not fully work:

$ docker run -it \
    --device /dev/kvm \
    -p 50922:10022 \
    sickcodes/docker-osx:auto

I kept getting LibGTK errors, which it turned out were not due to anything being wrong with the container, but rather to an assortment of packages I still needed to install and a few missing group memberships for my user. I got stuck here for a while trying to figure out everything I didn't have yet, from a combination of the Arch documentation, the docker-osx documentation, and the rest of the internet. It's plausible you might encounter a similar error running docker-osx if you don't have xhost available; this stumped me for quite a while, since at first I figured the issue was just xhost, as described in the docker-osx troubleshooting docs.

Mini disclaimer: I typically use yay, not pacman, but provided pacman in the earlier example since it's more commonly known. I don't recall which of the following packages are in the AUR versus the standard repositories, but here is what I think I ended up needing:

$ yay -S qemu libvirt dnsmasq virt-manager bridge-utils flex bison iptables-nft edk2-ovmf

I then set up the additional daemons I needed (actually enabling them this time so they'd be available on future boots) and added my user to the libvirt, kvm, and docker groups.

$ sudo systemctl enable --now libvirtd
$ sudo systemctl enable --now virtlogd
$ echo 1 | sudo tee /sys/module/kvm/parameters/ignore_msrs
$ sudo modprobe kvm
$ lsmod | grep kvm
$ sudo usermod -aG libvirt user
$ sudo usermod -aG kvm user
$ sudo usermod -aG docker user

I figured just in case the LibGTK error *was* really an xhost problem after all that, I'd follow the troubleshooting documentation I was using as well.

$ yay -S
$ xhost +

Finally, I was able to create a container and boot into it for the first time using:

$ docker run -it \
    --device /dev/kvm \
    -p 50922:10022 \
    sickcodes/docker-osx:auto

Do note this run command makes you a fresh, brand new container every time you run it. You'll be able to see all of yours with `docker ps --all`.

So that I can install things in my container and reuse it, from here on I'll use `docker start -ai <CONTAINER ID>` or `docker start -ai <NAME>` instead of another round of `docker run -it`, but you may wish to stick with the run command if you want a new container each time you start up.

I also ran into a small snag when I updated my system and then suddenly couldn't start containers anymore with either `docker run -it` or `docker start` after a kernel upgrade ("docker: Error response from daemon: failed to create endpoint dazzling_ptolemy on network bridge: failed to add the host (veth1e8eb9b) <=> sandbox (veth73c911f) pair interfaces: operation not supported"), which was exciting, but fixable with just a reboot.

The cause of this issue is a mismatch between the running kernel - still the pre-upgrade kernel - and the on-disk kernel modules, which now match the new, upgraded kernel instead. On reboot, we boot into the freshly updated kernel, which matches the modules on disk, so Docker can once again load the kernel modules it needs.
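That mismatch can be sketched as a check like the following. The helper function is hypothetical (mine, not anything Docker or pacman runs), and it assumes the usual /lib/modules/<release> layout:

```python
import os
import platform

def modules_present(release=None, moddir="/lib/modules"):
    """True if a module tree for the given kernel release exists on disk.

    Defaults to the currently running kernel (platform.release()).
    """
    release = release or platform.release()
    return os.path.isdir(os.path.join(moddir, release))

# After a kernel upgrade but before a reboot, the directory matching the
# *running* release may have been replaced by the new kernel's, so this
# can return False until you boot into the new kernel.
```

If it returns False, module loads (like the veth/bridge modules Docker needs) will fail until you reboot.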

Thus: Docker is either exactly as terrible as I recall, or, worse, I had forgotten just about everything useful I used to know about it. But I think from here on out I'll be mostly okay.

23 March, 2021

Resume antipatterns

I get to look at a lot of resumes as part of my job. Each one represents a thinking, feeling human being, but after a while it is very hard not to get picky about what makes your document easier to digest for people who don't have a lot of time to spend reading it and need to get information out of it quickly.

This blog post is not about the CV. That is a different document. Though there may be commonalities, like publications and work experience, the two should be presented differently.

It is also specific to United States tech culture, and particularly the intersection of infosec and software development. What is perfectly acceptable in Germany or Vietnam (or CS academia, or claims adjusting, or ...) doesn't come across the same way here, and vice versa.

Looking at your resume, I'm trying to see a small number of things:

  • How well you communicate
  • Your experience in (languages|technologies|methodologies) as listed in the job description
  • Where you got that experience (startup? bootcamp? big tech companies?) so that I have a better idea about your familiarity with various styles of infrastructure and process and what will be different for you with our infrastructure and process
  • The sizes and types of teams you worked on previously
  • Whether you have technical leadership experience if you applied to a more senior role

You might consider having a peer in your field with more experience look at your resume before you send it in, to see whether they can grok your experience and aren't put off by the formatting.

Specific peeves

None of these are from any particular resume or meant to call anyone out; I just want to talk about patterns I've noticed over the last couple of years.

Progress bars or star ratings

I don't know what "90% Javascript" means. Putting that stat right next to "50% threat modelling" makes me wonder whether you feel you are better overall at Javascript than at threat modelling, what such a comparison even means, and how to compare these two things at all.

It's better to just list Javascript and threat modelling as skills since then the reader can judge for themselves. If you've contributed to a bunch of core Javascript libraries or something, list a few of the most popular ones.

The following doesn't tell people who don't know John anything useful about what he can do. (John is also trolling me.)

For approximately the same reason one should not plot disparate data on the same chart unless it can be graphed on the same axes at the same scale, putting progress bars or star ratings next to each other is confusing to the reader.

Overly personal details

Some flavour is good. If you are able, get a second opinion from someone outside your social group(s) before including something not super directly related to your field.

Including one or two minor, quasi-professional things at the very end of the document, especially something somewhat common in your field (CTF participation, ham radio licence, etc.), is great. It makes it easier to remember who you are when my team and I have been looking at resumes for an hour and yours is somewhere in the middle of the stack. If you choose to include something from this category, place it below items like publications/talks and certifications in terms of position from the top of the document. Do not include factoids like your taste in TV or music unless you were a session musician on the album in question or something.

Do not include your spouse's occupation or your parents' occupations. This is customary in some places (and please do include it where you need to), but not in this field or country. The same goes for religion, children's ages, and street address. That kind of thing is uncomfortable to learn about someone one doesn't know, without any other context, and wouldn't come up in the workplace anyway unless you're close to your coworkers.


Charts, graphics, and fancy typography

This comes with the caveat that I don't have experience working in or hiring for design/UX/UI/marketing/growth-hacking/etc roles, but for software engineering and infosec, please keep the funky tables, graphics, pie charts, and flourishes for other documents.

 Source: https://commons.wikimedia.org/wiki/File:Server-side_websites_programming_languages.PNG

I just want to read this thing quickly and get a sense for the work you've done and what you know and whether you'd be a helpful pair of hands to have about for the projects my team needs to work on. Graphics are usually just something I have to puzzle through and interpret.

More than one or maybe two fonts will make it hard to read continuously across the various areas of your resume. Fancy typography has got to be first and foremost readable to earn a place in a resume.

As a very hyperbolic example, consider the following typography:

Source: https://www.metalsucks.net/2019/09/25/completely-unreadable-band-logo-of-the-week-win-a-grab-bag-of-metal-goodies-85/

Logos and other pictures

Source: https://www.cvplaza.com/cv-basics/logo-picture-on-a-cv/

Especially don't include logos if they don't quite render where you wanted in your PDF, or would get mangled when your Word document comes through the recruiting system.

It is not customary in the United States tech industry to include a picture of yourself. Adding a headshot is customary in a lot of other places, but I want to be able to consider your written accomplishments without bias toward or against the way you look.


Certifications

Some infosec people really don't like certifications. Some do. Some jobs you can't get without a CISSP. If you have enough work experience to show you can do the things I am looking for, it pretty much doesn't matter whether you have certs for plain old tech jobs (government jobs and so forth are not the same). Bug bounty contributions, open source contributions, or CVEs are a lot more interesting than whether you are good at taking tests.

If you're not applying for an entry-level role, don't include entry-level certificates like CEH if you don't also have something like OSCP to counterbalance them. 

However, if you are applying for an entry-level role, literally anything you can put on your resume to show you're at least interested in the work, even if you don't have experience, will help. CEH and similar are great in this case.

Extremely long resumes

I've seen folks with 20+ years' experience doing a wide variety of things fit all that in two pages or less just fine. Most resumes are one to two pages. You can do it. I believe in you.

03 February, 2021

Oncall dysfunctions

The main idea of having an oncall, as I understand it, is to share the burden of interruptish operations work so no one human on the team is a single point of failure. I have so far encountered several kinds of dysfunction in implementations of "oncall rotation" and would like to provide some ideas for other folks in similar situations on how to get to a better place.

What ownership of code or services means to a team seems to be a general contributing issue to oncall dysfunction. Everyone doesn't need to have 100% of the context for everything, but enough needs to be written down that someone with low context on the problem can quickly get pointed in the right direction to fix it. 

This post is a) not perfect, b) limited by my own experience, and c) loosely inspired by this. Some or all of this may not be new information to the reader; this is not a new topic but I wanted to take a whack at writing about it.

Assumptions in this post:  

  • if you carry the pager for it, you should be able to fix it
  • nobody on a team is exempt from oncall if the team has an oncall rotation
  • the team is larger than one person 
  • the team owns things which have existing customers 

I felt like I had a lot of thoughts on this topic after tweeting about it, so here ya go. 

Everything is on Fire

Possible symptoms: Time cannot be allocated for necessary upkeep, perhaps due to management expectations or a tight release schedule for other work, leading to a situation where multiple aspects of a service or product are broken at once, or at least one feature is continually broken. The team is building on quicksand and not meeting its previous agreements with customers.

Ideas: A relatively reasonable way out of this situation is to push back on product, or whoever in management sets the schedule, and make time during normal working hours to clean up the service. Even if the engineers can only negotiate a minimal amount of time for cleaning up technical debt and fixing what's on fire, some is better than none.

While spending less than 100% of the team's time on net new work could result in delays to the release schedule, if the codebase is a collapsing house of cards, every new feature just worsens the situation. Adding more code to code that isn't well hardened / fails in mysterious ways just means more possibility for weirdness in production.  

The right amount of upkeep work enables the team to meet customer service agreements while still leaving plenty of time for net new work. The balance of upkeep work to new work will vary by team, but getting to a place where it's, say, 85% new to 15% upkeep if everything is really on fire across multiple products may require multiple weeks' worth of 75% or even 100% upkeep and no net new additions during that time to get to a better place. 

On the other hand, even if there's some backlog of stuff to fix, that is okay (perhaps even unavoidable) as long as agreed upon objectives are met and it's possible to prioritize fixes for critical bugs. If the team does not have SLAs but customers are Not Pleased, consider setting SLAs as a team and socializing them among your customers (perhaps delivered with context to adjust expectations like "to deliver <feature> by <date>, we need to reduce <product> service for <time period> to <specifics of availability/capacity/latency/etc> in order to focus on improving the foundation so we can deliver <feature> on time and at an <availability/capacity/latency/etc> of <metric>").

When everything is on fire long enough, it becomes normal for everything to be on fire, and so folks don't necessarily fix things until they get paged for whatever it is. One possibility, for a team of at least three, is to run a secondary oncall rotation for a period until things have quietened down. The person "up" on the secondary rotation dedicates at least part of their oncall period to working on the next tech-debt backlog item(s) and serves as a reserve set of helping hands / second pair of eyes if the primary oncall is overwhelmed.

The next step could be to write runbooks or FAQs to avoid the situation turning into Permanent Oncall over time as individuals with context leave.

Permanent Oncall (Siloing)

Possible symptoms: The team has an oncall rotation, but it doesn't reflect the team's reality. The answer to asking whoever is oncall "what's wrong with <thing on fire>" most of the time is "I need <single point of failure person's name> to take a look", not "I will check the metrics and the docs and let you know by <date and time> if I need help". When the SPOF is out sick or on vacation, the oncall might attempt to fix the issue, but there isn't enough documentation for the issue to be fixed quickly. The team would add to the documentation if much existed, but to start from scratch for all the things is too much a burden for any one oncall. When the SPOF comes back, they may return to everything on fire. 

This can happen when a team has a few experienced folks, hires a lot of people suddenly, and is not intentional about sharing context and onboarding. It can also happen when context and documentation haven't been kept up over the life cycle of the team, and suddenly the folks who don't know some domain of the team's work outnumber those who do.

Ideas: Having a working agreement that includes a direct set of expectations about who owns what, and about how things are to be documented and shared, can help here. (If everyone on the team shares in oncall, the entire team should own everything the team owns; and for each thing the team is responsible for, at least half the team should have context on it, assuming the team size is > 2.)

If the team is to have a working agreement that actually reflects reality, the EM/PM/tech lead cannot dictate it, though they can help shape it to meet their expectations and customer expectations. Even one person on the team feeling their voice wasn't heard in creating the working agreement could lead to that person growing dissatisfied with the team and eventually leaving. Working agreements can help even out load between team members, or at least serve as the place where assumptions get codified so everyone is mostly on the same page about what should happen.

Some teams only work on one project at a time to try to prevent this (though that may be unrealistic for teams with many projects or many customers). It can be hard to build context as an engineer on a project whose codebase you have never been in, if you are not the kind of person who learns best from reading things. If the entire team is not going to work on the exact same workstream(s) all the time, good documentation is crucial for getting the oncall some information on the service's common failure modes and the common places to look for what's contributing to those failures.

Mixing and matching tasks across team members is hard at first, but if everyone is oncall for all the things, it is going to happen anyway, and it's better to do it earlier so that people have context when it's crucial. Going the other way and limiting the oncall rotation for any service to just the segment of the team who wrote it or know it best is just another, more formal variant of the Permanent Oncall problem, and is also best avoided.

Lots of folks do not enjoy writing documentation, or may feel it takes them too long to document things for the result to be useful, but oncall is not a shared burden if there is not enough sharing of context. When the SPOF leaves because they're burnt out, the team will have to hustle to catch up, or will transition into No Experts Here.

An alternative to having extensive runbooks, if they aren't something your team is willing or able to dedicate time to keeping up, might be regularly scheduled informal discussions about how the items the team owns work, combined with the oncall documenting what the issue was and how they fixed it whenever something new to them breaks in FAQs. An FAQ will serve most of the purpose of a runbook, but may not cover the intended functionality of the system.


The Firefighter

Sometimes, teams have an oncall rotation, but one (often more senior) person swoops in and tries to fix every problem within a domain. This paralyzes the other team members in situations where the swooping firefighter is not available, and encourages learned helplessness and siloing. This is an edge case of Permanent Oncall where the SPOF actually likes, and may actively encourage, the situation.

Sometimes, the firefighter doesn't trust the oncall, or doesn't feel they are capable of fixing the problem. This can be incredibly frustrating as the oncall, especially when there is adequate documentation for the system and its failure modes and the oncall believes they can get to a working fix in an acceptable amount of time. It is also likely that such a firefighter hoards knowledge and is not writing down enough of what they know.

Sometimes the firefighter is just a little overenthusiastic about a good bug, and management doesn't care how the problem gets solved. As I become more experienced in software engineering, I am finding areas within teams I have been part of where, despite having confidence the oncall will eventually solve the problem, I am susceptible to offering a little too much advice when something I was primarily responsible for building, or know a lot about, breaks and I happen to notice around the same time the oncall does. In cases like this I am working to write down more of what I know for the team.

Actively seeking out a teammate's advice as the oncall is likely not an example of this pattern, nor of Permanent Oncall in general, unless every single bug in a system requires the same person or couple of folks to fix it.

No Experts Here

Possible symptoms: The team inherited something (a service, a set of flaky cron jobs...) secondhand, and it is unreliable. Maybe it's a proof of concept that needs to be hardened; maybe it's an old app with no existing documentation that is critical to some aspect of the business, and someone has to be woken up at 3 am to at the very least kick the system back into a slightly less broken state. The backlog for this project is just bug reports and technical debt. Nobody takes responsibility, and ownership of this secondhand service is not prioritized. Management is prioritizing work on other projects. This is an edge case of Everything is On Fire.

Ideas: perhaps one or two individuals do a deep dive into the problem service, start a runbook or FAQs based on their observations, and then present their findings to the team. As oncalls encounter issues, they will then have a place to add what broke and how they fixed it. The deep dive doesn't have to result in a perfect understanding of the service, just a starting point that the oncall can use to inform themselves and build on.

The team as a whole needs to be on board with the idea that as owners of <thing>, they must know it well enough to fix it, or it will always be broken. If only part of the team is on board, this turns into Permanent Oncall for those folks, which is also not ideal. If nobody has the time and mental space for <thing>, it needs to be transferred to a group who does, or it needs to be deprecated and spun down.

No Oncall

Possible symptoms: Everything that breaks is someone else's problem. The team does not carry a pager, but has agreements with customers about the level of production service that will be provided.

Someone else without good docs or context (perhaps an SRE or a production engineer?) is in charge of fixing things in production in order to keep agreements with customers. Perhaps this person is oncall for many different teams and does not have the time to gain enough context on each of them.

Ideas: if having people who don't carry the pager is unavoidable for some deeply ingrained cultural reason, some things that may help this situation (and help the SRE out): the team and the SRE come to a working agreement specifying when and how much context to share, when the SRE will pull someone from the team in to help, who they will pull in at what points, and so on. If you must have someone not on the team oncall for the product, it may be useful for the team to have a "shadow" oncall so that the situation does not turn into Permanent Oncall for any one or two individuals.


When and how folks get added to the rotation are also critical to making oncall a healthier and more equitable experience. Expecting someone to just magically know how to fix something they don't understand and have never worked on is a great way to make that person want to leave the team. It helps to have a newer person shadow a few different folks on the existing rotation before expecting them to respond to production issues ("shadowing" == both the newer person and the experienced person get paged if something breaks, the experienced person explains their process for understanding and fixing the issue to the newer person as they go, and the newer person is able to ask questions before, during, and after the issue).


It is hopefully not news that software engineering is a collaborative profession that requires communication; being intentional about when and how agreements with customers happen, and how customer expectations get adjusted so they can be met, is crucial. It may be that the service(s) the team owns are noncritical enough that an oncall rotation outside business hours isn't needed at all, and fixing issues as they can be prioritized is fine, though taking care to prioritize bugs and production problems at least some of the time is still necessary. It may also be that the team is distributed enough for oncall to "follow the sun", arranged so that it falls within business hours for all team members. There are lots of ways to learn of and fix bugs in a timely way that meets customer expectations, and no one way is perfect.