Web Log of Ross Chapman

A better term for unintentional technical debt

The other day I got into a small argument with coworkers during the Sailboat exercise about the meaning of “technical debt.” We were hunched over stickies and milling about thoughtfully. Suddenly I leaped from my chair and smacked a note on the whiteboard: application too big. The rent is too damn high!

This damn slice of monorepo in which we had been toiling over the course of the last sprint had blind-sided us as a super bloom of contributions from different teams. The code was getting hard to reason about; we were hostage to the out-of-bounds-ness. It was getting too complicated:

It has many interconnected parts, each full of details. Conditionals abound, forming intricate rules that may surprise us when we read them.

Now, I was reticent to describe this anchor I threw on the board as any kind of “debt” since I’d been thinking more about debt as a future IOU on quality to increase speed in the present. By that narrow definition, if this nuanced latticework were debt, we’d be able to rustle up the ledger and observe the amortization plan, hopefully incremental and steady. Complicated code emerging from a combination of variegated knowledge across teams, plus the teleologic weight of entropy (bit rot, cruft, and so forth), didn’t situate in Cunningham’s metaphor for me because it showed up as a stunning surprise. Are we ever really surprised by our credit card bill?

There are other sources of debt people talk about that relate to negligence: procrastination, not revisiting architecture, postponing test-writing. I truly believe the teams working in this code were doing it well, so that flavor wouldn’t help situate this present monorepo madness for me.

Well, there’s the whole category of “unintentional” debt. But that felt wrong, too. There is an aspect of stuff out of our control, but the intent/unintent dichotomy feels unsatisfying if software is a rich system of people, code, automations, feedbacks…I don’t know. Language is hard. I guess I’m gliding at the edges of a linguistic constraint with the lexicon that was tossed around the table by my colleagues. Some sort of weak Sapir–Whorfian confusion that I can’t peek around.

Nonetheless, I conceded to my fellow engineers of wares that are soft: ok, this debt that so cunningly crept in with bold surprise was, in fact, an (indefinite) form of technical debt. In fact again, the frame I particularly appreciated was this: while it may not have been a result of a conscious decision, now that we observed this mucky muck we could choose to deal with it as debt. Ok, debt is debt whether it arrives by forethought or entropy.

Luckily, as I was catching up on my daily doses of software newsletters this morning, jessitron (who I can’t stop quoting) hyperlinked her latest blog to another post from the Atomist site that gives us what I think is a more accurate language for this very type of technical debt. Frankly, any optimistic (re)casting of language in our field is welcome; we’re so prone to feeling unfinished, incomplete, or overwhelmed. Everything is fine (eh, nope).

Technology drift is a form of technical debt that comes from progress. As our target architecture advances, it leaves a trail of projects in all its momentary incarnations. As repositories proliferate, the accumulation holds us back. We have many technologies and many versions of each. We have the risks of all of them.

TECHNOLOGY DRIFT

Fuck yeah, let’s use that!

Two tales of Binary Search

I still have lingering rage from two years ago when an interviewer said to me: “I could probably implement this in about 20 minutes.” Seriously crushing words to utter offhand during a facetime code screen for someone who has been programming and building web apps professionally for 3+ years.

The problem was something like find the nearest value to x in the array. I’m so bad at toy algorithm questions since I basically spent those first 3-ish years smashing a ton of Rails and Ember into my brain and worked to be productive building more typical business app/e-commerce style UIs. I wasn’t a whiz at data traversal. More like a deep component interface expert. What else would you call it? Suffice to say a simple filter function at O(n) time would satisfy the requirements. That is lived experience.
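For what it’s worth, the O(n) pass I had in mind looks something like this sketch – a reduce rather than a literal filter, but the same single-scan idea (names are mine):

```javascript
// One linear pass over the array, tracking the closest value seen so far.
const nearestTo = (x, values) =>
  values.reduce((best, v) =>
    Math.abs(v - x) < Math.abs(best - x) ? v : best
  );

nearestTo(7, [1, 5, 10, 20]); // → 5
```

No sorting assumed, no cleverness required; it just satisfies the requirement.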

But my interviewer pushed me to consider a big fat array. I was stumped, and failing hard in the spotlight. Only 20 minutes. Bro even let me spend some time after the interview coming up with a solution to email later, but that was futile. I didn’t even know how to properly phrase the problem for Google. I got rejected.

What I learned later is that bro was looking for a solution implementing binary search – something any CS grad would know. I didn’t have a CS background. I learned to program on the job and never had to care about that level of performance in the apps I worked on.

To this day, around 6 years into my career, I haven’t had to implement a binary search algorithm to scan over a big data set.
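For the record, here’s a hedged sketch of the binary-search version the interviewer was presumably after. It assumes the array is sorted ascending, which binary search requires:

```javascript
// O(log n) nearest-value search; assumes `values` is sorted ascending.
const nearestToSorted = (x, values) => {
  let lo = 0;
  let hi = values.length - 1;
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    if (values[mid] < x) {
      lo = mid; // x is in the upper half
    } else {
      hi = mid; // x is in the lower half
    }
  }
  // Narrowed to two neighboring candidates; pick the closer one.
  return Math.abs(values[lo] - x) <= Math.abs(values[hi] - x)
    ? values[lo]
    : values[hi];
};

nearestToSorted(7, [1, 5, 10, 20]); // → 5
```

The loop halves the search window until two neighbors remain, then picks the closer. O(log n) versus the linear scan’s O(n) – which only matters for that big fat array.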


Julia Evans put out a short series of zine pages recently that describe best practices for debugging. And guess who showed up? Binary search! Leaning back in my desk chair I realized that all along I’ve been using binary search in my everyday work.

When fixing a bug without a useful stack trace, triangulating the problematic code requires what I used to think of as a kind of kludgy technique: step by step, commenting out large parts of the code at the top and/or bottom of suspect files. Like, literally just comment out half the file where you think the buggy code lives. Does the error still throw? If yes, the problem is not in that half. Try commenting out the other half. If no, then keep halving that part of the file. Recursively repeat. This is binary search. Also, did you notice we’re now also talking about recursion, a topic that even senior programmers have trouble with?

“Advanced” CS concepts can show up in our work all the time. Whether it’s UI, backend, databases, ops – throughout all layers of this mushy cake stack. (“Mushy” as in blended, bleeding, fluid, transitional. Not as in gross, unfit, unstable.) We need to begin reckoning with the danger of narrowly defining their employ and weaponizing them in toy code examples for gatekeeping (cough cough, interviewing).

When we learn the meaning behind these CS concepts, we may actually discover more creative ways to use them. Or how they are already part of our toolkit. For example, git-bisect!
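git-bisect is the same halving applied to commit history. A typical session might look like this (the tag name is hypothetical):

```shell
# Binary search through history for the first bad commit.
git bisect start
git bisect bad                 # current HEAD exhibits the bug
git bisect good v1.2.0         # a ref you know was fine
# git checks out the midpoint commit; test it, then report:
#   git bisect good    (bug absent -> search the newer half)
#   git bisect bad     (bug present -> search the older half)
# ...repeat until git prints "<sha> is the first bad commit"...
git bisect reset               # return to where you started
```

With n commits between good and bad, you test roughly log2(n) of them instead of all n – the same payoff as halving a suspect file.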

Deeper software concepts showing up in UI problems

I’ve got three posts in my brain backlog now about more complex software concepts showing up in UI work. Here’s the first!

I’ve been waiting a long time to use a bitwise operator in “real world” JavaScript – like 5-6 years – and the opportunity finally presented itself the other day.

In UI, near 100% of the time the basic comparison and logical operators of the JavaScript language give us the power and control we need to express product requirements in code. Equal, not equal, AND, OR, NOT, etc. But then there’s that arcane set of bitwise operators. These special operators give us expressiveness for more complex comparison scenarios. They’re also just kinda weird because they do comparisons at the binary integer level – that is, they coerce the values to bits first.

Take XOR, which in layman’s terms is “exclusive OR.” In JavaScript, XOR returns 1 when the two bits being compared are different and 0 otherwise. Since the result is 1 or 0, we can easily pass it around as a predicate, similar to a boolean return.

As it turns out, this happens to be the very logic we need for testing existence between two dependent form fields.

At Eventbrite our UI library has graphical pickers for both date and time form fields. Our designs typically place these individual components next to each other and make them required. The user is free to change their values, though we do provide sensible defaults. One not so surprising possibility is that a user can leave one field blank by accident. Of course, not having an exact date and time for ticket sales dates doesn’t really make sense. Therefore, since we want to give the user some immediate feedback if they put the form in this state, we run a validation on blur using XOR logic!

However, for checking existence, we don’t want to bitwise compare the two sides of the expression directly, since they could be many kinds of strings. To make the comparison reliable, we cast each side to a boolean with a bang. Then we wrap up the expression in a composable function. The result is a very concise one-liner:

const isOneTruthyAndTheOtherNot = (a, b) => !a ^ !b;

Which might be passed around in my hypothetical React event handler like:

dateTimeValidator: (dateValue, timeValue) => {
    const hasEmptyField = isOneTruthyAndTheOtherNot(dateValue, timeValue);
    const error = hasEmptyField ? FORM_ERRORS('dateTimeField') : null;

    this.setState({
        dateTimeFieldError: error
    });

    this.props.onError(error);
}

This could be written in a couple ways without the more arcane XOR:

... = ( foo && !bar ) || ( !foo && bar );
... = foo ? !bar : bar

I’m generally against using overly clever code in codebases that are worked on by less experienced engineers, but I think the bitwise operators are a great tool for anyone to know. And the MDN docs are very clear about how XOR works:

“Performs the XOR operation on each pair of bits. a XOR b yields 1 if a and b are different”

The docs will also introduce you to the algorithmic decision table for the XOR logic, which is another useful tool to expose new developers to.

a  b  a XOR b
0  0     0
0  1     1
1  0     1
1  1     0
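The table maps straight onto the earlier helper. A few spot checks (the field values are hypothetical; the helper is repeated here for reference):

```javascript
const isOneTruthyAndTheOtherNot = (a, b) => !a ^ !b;

isOneTruthyAndTheOtherNot('2019-10-01', '');     // → 1: date set, time missing
isOneTruthyAndTheOtherNot('', '');               // → 0: both empty
isOneTruthyAndTheOtherNot('2019-10-01', '7pm');  // → 0: both set
```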

What always makes this sort of exposé interesting is that the early-web understanding of UI still colors our perception of UI work; like, UI is just a sprinkle of scripting and layout and browser wrangling that gently rests on top of the real software where the computer science happens. Or maybe it’s changing. But I feel like there’s still too much emotional labor educating the web dev community about complexity throughout all layers of this mushy cake stack. “Mushy” as in blended, bleeding, fluid, transitional. Not as in gross, unfit, unstable.

White theft/entrepreneurship


I’ve been reading two texts this week. Side-by-side they offer another reveal of the tragic double standard of black and white life in our America. That truth descends like an ashy film by 90 pages into Shoe Dog, Phil Knight’s memoir about creating Nike. On the one hand: a story about a white Christian rich kid manifesting a new destiny for himself away from mediocrity, using military connections to build a business with a recently conquered nation; 1964, the year Knight starts Blue Ribbon Sports. On the other hand: I’m half-way through the drudge of Vann R. Newkirk II’s long-form in the Atlantic, The Great Land Robbery, about the vast land theft and wealth transfer during the civil rights era from blacks to whites in Mississippi. 1964: by this year almost 800,000 acres of land had transferred from blacks to whites as a result of legal discriminatory (racist) federal farm loan programs and private lenders.

Knight’s pop fantasy of himself and the pursuit of a vision to make life about “play” through footwear and lifestyle branding reads as even more willful cultural forgetting next to the drivers of globalization that were making cheap shoes possible – the return to exploitative capitalism (slavery), maintenance of a permanent underclass, etc… Land theft shifted majority voting power before blacks could vote, a calculated suppression. And the legacy grinds on as this robbing and stealing continues to enrich white investors, hedge fund managers, and agri-business, who now own these farm lands that black families once worked and suffered for.

Knight’s story is pop fun; maybe best for toilet reading. But it’s another insert into the canon of white neo-liberal colonialism. The American dream is still available for a white ruling class only – and those they selectively permit. Holding that heavy.

React inline function gotcha, but in a non-obvious way

I think a lot about Hillel Wayne’s blog post INSTRUCTIVE AND PERSUASIVE EXAMPLES: an interpolative critique of a best practice article on unit testing.

Wayne argues that “instructive” examples don’t make a reader care. In contrast, we should labor harder to craft “persuasive” examples that attempt to satisfy simian desiring machines:

  1. If your example is too simple, people brush it off.
  2. If your example is too complicated, people won’t follow. They’re also less likely to trust it, because your complicated example might be concealing a simpler way.
  3. If your example is too abstract, people won’t see how it’s useful in practice.
  4. If your example is too concrete, people won’t see how it generalizes to their own problems.

We see examples of #1 and #4 all of the time in technical writing – maybe because a lot of it is written for marketing purposes.

Extrapolating Wayne’s plea: let’s be careful about how we present anti-patterns and best practices when we’re trying to thought leader. Often this writing ends up being spec or framework documentation plagiarized and adorned with gifs. But without contextual complexity we may be doing a disservice to our fellow humans who dev – especially less experienced devs.

Persuasive examples are harder but the payoff is bigger. By demonstrating the why and why not there’s a better chance of putting knowledge into your reader’s brain and making it stick. This is certainly my experience. If you:

  1. Share the reasoning behind all of the small decisions getting from A to B, or determining why A or B is important
  2. Give me code that looks more like code “in the wild”
  3. Contextualize your example as either a novel approach or re-visitation

To wit. (An attempt at a persuasive example.)

Just this week I toiled on a bug with a colleague that ultimately turned out to be a case of a classic React anti-pattern where an inline function declaration caused undesirable re-renders of a Pure Component.

Now, I’ve definitely read at least 3–4 instructive articles on the pitfalls of declaring functions inside of render(). Perf implications and unnecessary function creation, etc. But that insight didn’t help me – or my similarly schooled colleague – come to resolution with any alacrity. We failed spectacularly to bring our academic rigor to bear because we weren’t working with sandboxed toy code from instructive examples. The root cause became obvious only after hours of thought-work finally revealed where the problem code even was.

Again, finding problematic inline functions in something like this triviality of Hillel’s lament would, of course, be much easier:

render() {
    return (
        <Button onClick={() => this.handleClick()} />
    )
}

But in our case, the problem function was buried in a nested component wrapped in a separate constructor function and abstracted into a separate helper in a separate file and blah dee blah…you get the point. Our journey was starting miles from render().

Our code better resembled:

// Template.js
export default (props) => {
    // ...
    const ConnectedSidebar = connectSidebar(Sidebar);
    const ConnectedMain = connectMain(Main);

    return <Layout sidebar={<ConnectedSidebar />} main={<ConnectedMain />} {...props} />;
}

// Page.js
import Template from './Template';

export class Page extends Component {
    // ...
    render() {
        return (
            <Template {...this.props} />
        )
    }
}

The bug was observed when a user clicks the “Buy on Map” button in the Sidebar. The click was getting swallowed and not transitioning the view.

So what was going on?

When we dug deeper we noticed that at the moment of click an unwanted blur event would be triggered on the form field in the <Main /> component, which would update its state. This redux-form state update would then ripple through the component tree and cause the sidebar to be re-rendered, even though none of the props passed to <Sidebar /> had changed. The result was the sidebar button getting re-rendered in the middle of the click event, which meant the click handlers of the newly rendered button were not capturing the click and couldn’t transition to the next view! Client concurrency issues.

Inline gotcha funtimes

One hack we considered was changing the onClick handler of the sidebar button to onMouseUp – the newly rendered button could seemingly receive that event just fine in the thrash sequence (browsers are weird). But we knew our bandage likely wouldn’t last long, so we decided to troubleshoot the real issue: the unnecessary re-renders of the sidebar upon field blur.

After binary searching the code up and down <Page />, deleting chunk by chunk until the re-renders stopped, we discovered the fix to be fairly straightforward: just move the invocation of connectSidebar and connectMain outside of the Template function context into the module context.

// Template.js
const ConnectedSidebar = connectSidebar(Sidebar);
const ConnectedMain = connectMain(Main);

export default (props) => {
  // ...
  return <Layout sidebar={<ConnectedSidebar />} main={<ConnectedMain />} {...props} />;
};

After this mod, when <Layout /> was rendered, the child component passed as the sidebar prop wouldn’t be re-created – it had already been created once, at module load, and its reference assigned. In other words, we are not creating a new function in memory and executing it fresh every time, just like all those instructive examples say!
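The identity problem is easy to demonstrate outside React with plain functions (a toy sketch; makeConnected stands in for the real connect helpers):

```javascript
// Stand-in for connectSidebar/connectMain: wraps a component in a new function.
const makeConnected = (Component) => (props) => Component(props);

const Sidebar = ({ id }) => `sidebar(${id})`;

// Declared inside a render-like function: a NEW wrapper on every call.
const renderInline = () => makeConnected(Sidebar);
renderInline() === renderInline(); // → false: React would see a brand-new component type each render

// Hoisted to module level: one wrapper, stable identity.
const ConnectedSidebar = makeConnected(Sidebar);
const renderStable = () => ConnectedSidebar;
renderStable() === renderStable(); // → true: reconciliation can reuse the subtree
```

A new function reference each render means React can’t tell it’s the “same” component, so it unmounts and remounts the subtree – exactly the thrash we were chasing.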

Sigh. We likely ended up in this place by not being careful with nested connectors and their subscription boundaries. Sometimes you forget that it’s just functions all the way down. Like, declaring and assigning a component constructor inside another component constructor is not how you’d typically compose your functions. I will humbly accept this not uncommon (read: forgivable) symptom of “bottom-up” programming. (See David Khourshid’s talk about finite state machines and the sad stories of “bottom-up”.)

In sum, unlike the pristine nirvanic fields of instructive examples, we make our bed in large projects born from large organizations – cue Conway’s law. The requirements for the application accrete in fantastic ways over time. Cue McConnell’s oyster farms. The primitives you start with to satisfy embryonic requirements, like a root-level <Page /> component, may just become one large prop-drilled well. Graph hell.

This means you suddenly find yourself debugging why a click event on a sidebar button is being swallowed. You notice the divs are flashing in the Elements tab of Chrome dev tools, which means the browser is repainting the sidebar on click – re-renders!

Ok, so instructive examples won’t necessarily help you. But I’ll add an asterisk and deign that they aren’t worthless either. Because I’ve read these instructives, I am able to identify and classify this bug post facto as a commonly known React problem – that is, a problem with React composition, not a problem inherent to React – because the instructive examples are out there contouring a problem space and a shared vocabulary. I can then use this shared language when communicating the what or what caused this… during a retrospective or incident report. Notwithstanding, we should endeavor to be better.

499 closed connections

Bugs reveal. I look, observe. I learn things. I just experienced another one.

The customer can’t publish. Ensue existential how come???

After poking around I noticed our client code was deleting a parent entity too eagerly during a fail case while create operations were in flight for hierarchically bound entities – too sanguine, our home-baked front-end ROLLBACK. If the parent save call failed, subsequent saves of child data would nevertheless proceed, leaving unhooked child data stranded in the db. When the user would later hit “Publish”, our system would crash, unable to reconcile the ill-begotten state.

Take a look at the code (simplified for example):
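Roughly, the shape was this (all names are hypothetical stand-ins for our real thunk and API calls):

```javascript
// Hypothetical sketch of the thunk's shape; saveParentEntity, saveChildEntity,
// and deleteParentEntity stand in for real API calls.
const makeSaveStuffThunk = ({ saveParentEntity, saveChildEntity, deleteParentEntity }) =>
  async (parent, child) => {
    let savedParent;
    try {
      savedParent = await saveParentEntity(parent);
      // ...other synchronous things (normalizing, dispatching)...
    } catch (err) {
      return { ok: false, reason: 'parent-save-failed' };
    }

    try {
      await saveChildEntity(savedParent.id, child);
      // ...other synchronous things...
      return { ok: true };
    } catch (err) {
      // The "catch-delete" block: any rejection is treated as a failed child
      // save, so the parent is rolled back eagerly – even when the POST
      // actually landed and only the connection died.
      await deleteParentEntity(savedParent.id);
      return { ok: false, reason: 'rolled-back' };
    }
  };
```

The fragility lives in that second catch: a rejected promise is not proof the server didn’t persist the data.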

Can you see how this code was written a bit too simplistically? From what I can tell there are at least two latent problems that make this code prone to fail in a way we don’t want.

  1. First, a parse error may be thrown during “other synchronous things” after the saveChildEntity promise is fulfilled. See a contrived example of that: async/await with synchronous error

  2. Second, it’s possible that the POST request initiated by saveChildEntity may succeed on the backend and persist the child data, but the connection between browser and server may be severed before the browser receives the 200 and the promise becomes fulfilled! When that happens, the promise is actually rejected and the runtime goes into the catch block.

In the end, it was the latter.

It seems obvious in retrospect. We allow the user to hit Publish anytime, which kicks off a heavy network sequence that finishes with a full page reload. Yet, while the publish sequence is in flight the user can still interact with the page. Meaning they could click another visible button – “Save” – that triggers saveStuffThunk. Based on the server logs, it seems that fairly often the Publish sequence would complete and then start to reload the page right in the middle of the second try/catch block of saveStuffThunk. When that happens, nginx logs a special 499 status code meaning the client closed the connection before the server responded to the request. The client code then interprets this as an error and sends the runtime into the catch-delete block.

The server logs (simplified):

  • POST /save/
  • DELETE / 499

It still blows my mind this happened consistently enough to affect hundreds of records. The browser deterministically queues/coordinates? It was a very strange UX-driven race condition.

In addition to realizing that our thunk code was written too optimistically, another aspect of this bug that fascinates me was the discovery that we had missed the really, really important requirement of locking the page for the user when they click the “Publish” button. This was actually implemented for other similar interfaces, but when my team implemented a new screen with similar access to the “Publish” button, we didn’t fully understand the potentiality of allowing this race condition. Or how to prevent it.

Organizational debt becoming bug. A big complex system with fast-shifting pubertal code and fugitive ownership creating blind spots.

“Every existing feature, and even past bugs, makes every new feature harder. Every user with expectations is a drag on change.” - jessitron

It was a weird one but we observed some new things and thereby pushed our learning edge farther out.

Like, tangential learning came from investigating potential sources of 499s. While digging, following a hunch about the load balancers sitting in front of our API servers, I discovered that connections could be cancelled eagerly by such a load balancer upon a timeout. Because Publishing was a long-ish operation, at one point in the debugging adventure we surmised a heavy query might be exceeding a timeout interval. See: Nginx 499 error codes. Nevertheless, that was a false start; our ops folks were able to confirm we didn’t have load balancers managing these particular requests.

Demystifying architecture is an important part of this process for the perspicacious dev.

I’m just hard reflecting on how signals of “broken” – like bad data – can reveal many interesting things about the system. Just think about how much our client promise handling hid national treasures.

Debugging a test that does nothing

This weekend I spotted Julia Evans posting tips about debugging – of course a zine quickly followed. This resonated deeeply because it touches on one aspect of debugging that I often struggle with. It’s comforting to know this is a common kind of struggle!

What Evans articulates so well is how we are always standing in a muddy, vast ecology. When we sit down and begin to debug a single piece of a program we start by gathering all the things we (think we) know to be true. Hence our starting point is a kind of simplified fact set about how the program should work. Already our field of reasoning is narrowed in advance by this incomplete information presently at our disposal. Our powers to reason are bounded and co-constitutively formed by our collection of initial assumptions.

Assumption is what makes the debugging process laborious. Each entrée becomes a heated, lasting tango between your assumptive limit and your proofs against the current reality. Herbert Simon, who coined the term “bounded rationality”, describes humans in this mode (quoting from Donella Meadows):

blundering “satisficers,” attempting to meet (satisfy) our needs well enough (sufficiently) before moving on to the next decision. (p106, Thinking in Systems)

While this trial-and-error process ensues you begin to receive feedback, often surprising, which may cause your steps to travel toward different hypotheses. Or it may require a pause in the action when you cancel one idea and foreclose part of the set of possibilities; though bounded, you are not entirely helpless. You unlearn, go back, change course. The tango lasts.

Also…

It’s impossible to throw out all we know at the beginning and re-interrogate every line of code.

…you just have to pick one and start checking.

Patience is what we need, as we methodically unpack our worldview from the inside out.


This past week I spent a good part of two days wrestling with a broken acceptance test in a somewhat unfamiliar part of code. My initial assumptions misled me from the start.

The test was written to observe a state change by simulating a click on the first menu item in a dropdown, which would flip the disabled state of a Purchase Button elsewhere in the container component. That assertion was no longer passing. Because my new code had changed the source of the initial values for the dropdown, I had set my sights on determining if that source data in the test was wrong.

Sigh, that was an incorrect assumption. I spent a bunch of time interrogating these source values in the test setup for the dropdown menu but in the end they were correct – my initial presumption and assumption (worldview) steering me into a void. Sucked.

After repeatedly playing with the test instructions and comparing to the outcome in the browser, I discovered the click action was doing nothing because the initial state for the Purchase Button was already set through a test setup step which equaled the first value of the dropdown menu; so the click action didn’t actually change the value in the test (I was finally able to repro this in the browser). There was no state change to observe. In fact, and this is the best part, the test wasn’t needed at all. My code change inadvertently exposed a test that was applying false assumptions to the code.

Lol, how much of our software is layer cakes of fallacious worldviews???

Pre-crude development

I caught this tweet by Ruth Malan yesterday. It’s a wonderful reminder about the tension between continuous evolution and product instability in software development.

But like putting “product stability” in tension with “continuous evolution” is a very (relevant) “today” recognition. For example, if we keep changing a complex dashboard UI, we frustrate users (perpetual learning curve/adaptation pushed onto them) and it may even be unsafe…

Are you having a Val Garland DING DONG moment? It truly is a daily battle attempting to sit comfortably in this oversized chair, right? Like, certainly not a quotidian gracelessness I was prepared for at the start of my career. Back then I imagined long-lived codebases more manicured and predictable than the embryonic Rails app I would find myself working on in my first software job. Presently I’m working in a legacy codebase (like 10+ years running, ancient in software years), and I’m not really sure if it’s much more stable than that 3-year-old startup experiment. Sure, there are ignorable fossilized bits in corner tar pits that have run largely unattended for years; and quite elegant code abounds; it’s a mixed bag. Nonetheless, I often find myself in labored negotiation with the code in-between those ends, in that muddy mix; code that’s branched and maundered, complected semi-recently, as well as pubescent new growth. We might call this code pre-crude: pressurized compounds with some new raw material shoveled in – does code storm, form, norm?[1] These are tension points where I struggle the most. There are just enough layers of abstraction, code reuse, shared responsibility, etc… – tensile strength – that changing the code is never easy. Any change could destabilize things, like expose bugs, i.e. break existing functionality.

The day-to-day discipline of working with legacy code is wiggling in that tension while also supporting all existing behavior. jessitron writes in a recent blog:

Every new feature comes with the invisible requirement: “and everything else still works.”

This is a wild task in pre-crude places!

[1]: en.wikipedia.org/wiki/Tuckman's_stages_of_group_development

Too many imports, eyes tired

There are too many imports in this React component file. I’m staring down like 50 lines of imports. External libraries (React, Lodash, etc…), our own internal libraries, components files, helpers, constants. The last bothers me the most because they feel like implementation details I want hidden away in these other components or helpers. So I’m looking at this statement inside a render() block:

const isSoldOut = this.props.statusType == SOLD_OUT;

And suddenly I’m reminded of what Kyle Simpson told me on twitter a couple weeks ago:

“in ‘functional programming,’ we love and embrace functions! functions everywhere!” – @getify

That’s it, that’s my out. We can refactor the equality expression into a function that represents data:

const isSoldOut = ({ status }) => status === SOLD_OUT;

We might find our code getting repetitive for computing different statuses:

const isSoldOut = ({ status }) => status === SOLD_OUT;
const isAvailable = ({ status }) => status !== UNAVAILABLE && status !== SOLD_OUT;
const isUnavailable = ({ status }) => status === UNAVAILABLE;
// etc...

In which case we might find an indirection at a higher level that hides the implementation details of our equality expressions:

const getTicketStatusType = (ticketProps) => STATUS_TYPES_MAP[get(ticketProps, "status")];

STATUS_TYPES_MAP could implement an object hash where the equality expressions are values or our data functions.
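A hedged sketch of what that map-based indirection could look like – the status constants and the shape of the values are made up, and plain property access stands in for lodash’s get:

```javascript
const SOLD_OUT = 'SOLD_OUT';
const UNAVAILABLE = 'UNAVAILABLE';
const ON_SALE = 'ON_SALE';

// Each status maps to a small descriptor instead of scattering
// equality expressions through render().
const STATUS_TYPES_MAP = {
  [SOLD_OUT]: { label: 'Sold out', canPurchase: false },
  [UNAVAILABLE]: { label: 'Unavailable', canPurchase: false },
  [ON_SALE]: { label: 'On sale', canPurchase: true },
};

const getTicketStatusType = (ticketProps) =>
  STATUS_TYPES_MAP[(ticketProps || {}).status];

getTicketStatusType({ status: SOLD_OUT }).canPurchase; // → false
```

The lookup table centralizes the status logic in one importable place, and each descriptor can be unit tested without rendering anything.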

While in the end I may not be decreasing the total number of imports by much, I’m perhaps doing a few other useful things:

  • Cutting down the number of imports from different files
  • Creating a reusable abstraction – which I know can be employed elsewhere
  • De-cluttering render() which makes integration tests simpler, possibly unnecessary for some cases since logic can be independently unit tested

That last point is important. At least to me. Untidy render() blocks are hard to scan, debug, and test. The machinations for constructing and composing the things to render can happen in other places: above the component class definition, in class methods, or in a separate file. The first option is one I quite like because abstractions don’t have to be all that general. It’s great if they are localized to the component at hand.

What's in a name?

There’s always heated babbling (err…babeling) in cyberspace about assigning metaphor to our embryonic field of building digital things: writing or engineering or accretion. One thing’s for sure: humans have a religious proclivity to conquistador in the bikeshed when faced with terra incognita. Perhaps, ironically, because it’s something of a science and therefore seems to ask for a pinning down. But these guys, some of us, just seem unable to leave it alone to variance; to let it lie under a broad, mercurial atmospheric plane of something like creating.

I remember when I attended my first RailsConf in 2014. In the opening keynote DHH made a to-do about software writing (and more famously declared that TDD was dead).

Hello, World!

Fucking comma in there. You got an author-face bro.

At that time, 2014 in Chi-city (my place of birth), I was too green building product to really care what you called what I was doing. (Maybe I’ll never actually care that much.) It wouldn’t have helped me get a quicker handle on the revealing module pattern.

And then, a couple days later in the closing keynote, Aaron Patterson rebukes the writing metaphor and talks about the advanced engineering of improving Rails 4 query performance. They totally did that on purpose. Those bros.

What’s in a name? Sigh, we don’t have the luxury of poetic soliloquy when getting business done and keeping this company afloat. Shitty code smells like shit no matter what you call it. I’m not gonna care too much for now what the pontificators put on the RHS.

- <-{@