A lesson for the backend from a front-end technology

Posted under Developer Productivity, Software Development by Eric Shull

In R52, our team launched a new licensed feature for users to create custom asset hierarchies. The UI is a familiar spreadsheet-like interface, and we kept the backend logic simple by borrowing a concept from the front-end library React. In the process, we gained some valuable insight into writing maintainable code. In this post, I’ll outline the problem we faced, how we adapted, and the lessons we learned from it.

Our front-end UI allowed users to make changes to a table, such as adding and deleting rows and columns. On the backend, we needed to persist the current table to the database. The absolute simplest way to do that would have been to delete all pre-existing information about the table and re-create it from scratch, but that's a lot of I/O-bound work for small, isolated changes to a table that mostly remains unchanged. We preferred to make the fewest database updates possible, which didn't seem so difficult. We have a custom ORM tailored to Seeq's specific use cases. All we needed to do was walk some JSON that described the new state of the table, identify when a node had changed, and write the appropriate update to the database.

Unfortunately, it didn't work well to make changes node by node. We had to track a lot of state, from where the algorithm was in the JSON representation of the table to what had changed since the database was last updated. All that state was rolled together in one big ball.
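To make the interleaving concrete, here is a hypothetical caricature of the node-by-node approach. A plain dict stands in for the database, and every name is illustrative rather than Seeq's actual ORM; the point is that traversal, change detection, and writes are tangled together in a single pass, so no line can be understood without the full context of all the others.

```python
# Hypothetical sketch of the original node-by-node approach: traversal,
# change detection, and database writes are interleaved in one pass.
# The dict-backed "database" and all names are illustrative assumptions.

def update_table_node_by_node(db, incoming):
    """Walk the incoming JSON and write each change the moment it is found."""
    for row_id, row in incoming["rows"].items():
        stored_row = db.get(row_id)
        if stored_row is None:
            db[row_id] = dict(row)           # insert, mid-traversal
            continue
        for column, value in row.items():
            if stored_row.get(column) != value:
                stored_row[column] = value   # update, mid-traversal
    for row_id in list(db):
        if row_id not in incoming["rows"]:
            del db[row_id]                   # delete, mid-traversal
```

Every write happens while the walk is still in progress, so reasoning about any one write means holding the whole traversal, plus whatever the database has already seen, in your head at once.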

It didn’t matter whether we broke bits of logic into smaller methods; we still had to load the entire context into our minds to understand how any code change might affect the end behavior of the algorithm. We had to know the explicit state, such as where we were in loops nested within loops and methods called from other methods. We had to know the implicit state, such as what database locks we might be creating and how they could affect other operations on the same database rows. We began to groan under the weight of it, and we foresaw that our lovely feature was in danger of becoming a maintenance nightmare. Interleaving all that disparate state was going to create a lot of cracks for unhandled edge cases and other bugs to sneak in. We needed a new approach, and we found one on the front-end half of the team.

Seeq’s web client was originally written in AngularJS, but we’ve slowly been converting components to React, a conversion our team has been mostly responsible for implementing. React’s approach to updating the browser’s DOM is to create a separate, virtual DOM, diff it against what’s already in the browser, and then apply a minimal set of updates in a single animation frame. We needed to do something very similar on the backend, only instead of a DOM we had a database. We took a day, stepped back, and tried a new update algorithm, one that worked in phases like React, rather than processing changes node by node.

First, we loaded the current state of the asset groups table from the database. That would stand in for the DOM. Our virtual DOM was the incoming JSON description of the way the table needed to look. Once we had the current state and the desired state, we executed a second step: diffing the two to generate a list of changes. We still walked the nodes of the JSON input, but we tried to do as little as possible at this phase, nothing more than characterizing what had changed, not actually changing anything. The third phase was to apply those changes to the database. The conceptual map now looked quite different: three shallow phases laid out side by side, more horizontal than vertical.
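The three phases can be sketched roughly like this. A plain dict again stands in for the database, and all names here are illustrative assumptions, not Seeq's actual code; what matters is that each phase handles exactly one kind of state.

```python
# A minimal sketch of the three-phase, React-style update.
# Phase boundaries are the point: load touches only reads, diff is pure
# data processing, and apply is the only place writes happen.

def load(db):
    """Phase 1: read the current state (here, just a snapshot copy)."""
    return {row_id: dict(row) for row_id, row in db.items()}

def diff(current, desired):
    """Phase 2: characterize what changed; touch nothing."""
    changes = []
    for row_id, row in desired.items():
        if row_id not in current:
            changes.append(("insert", row_id, row))
        elif current[row_id] != row:
            changes.append(("update", row_id, row))
    for row_id in current:
        if row_id not in desired:
            changes.append(("delete", row_id, None))
    return changes

def apply_changes(db, changes):
    """Phase 3: perform the minimal set of writes."""
    for op, row_id, row in changes:
        if op == "delete":
            del db[row_id]
        else:
            db[row_id] = dict(row)
```

Because `diff` is a pure function from two snapshots to a list of changes, it can be unit-tested exhaustively without a database in sight.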

Restructuring our algorithm like this disentangled that big ball of state. In the first phase, loading, we only had to worry about ORM state. In the second, diffing, we only had to worry about the state of where we were in the input structure. And in the third phase, applying changes, we only had to worry about the state of what we had written to the database. Overnight, the whole process became manageable for the long term, and through the evolution of asset groups over the last two releases, the core control flow has remained pleasingly stable.

Comparing our initial approach with what we finally landed on, I now better understand some code-maintenance principles that I had previously only intuited. The ideas below are all related, really just different perspectives on the same mindset, so it’s hard to tell where one principle ends and another begins, but here are some of the lessons this experience gave me for future use:

  1. Break up algorithms. Often, two or more algorithms masquerade as one giant algorithm. If you can learn to see how different algorithms are woven together, you can separate them and reason about them more in isolation, rather than as a whole. It’s easier to maintain five small algorithms than one combined algorithm.
  2. Bubble control flow back up to the top more often. One thing that made our initial algorithm hard to reason about was the depth of the control flow. We were diving deeper and deeper into nested loops and method invocations, burying ourselves in added mental context the further we descended into the call stack. It didn’t matter if we broke the logic up into nice, small methods; it was the depth that got to us. By splitting the algorithm into large, top-level phases, control flow returned to the main entry method more often. The call stack got shallower. Bubbling control flow back up more often gave the top-level method more opportunities to decide what to do next, such as how to handle errors and whether to return early or apply partial updates.
  3. Separate actions from data processing. The React-inspired algorithm is only three phases, the first and last of which are database read and write actions, but it made a huge difference to break the data diffing out into its own middle step. It still contained the bulk of the complex logic, but without database reads and writes to worry about, testing became drastically easier.
  4. Turn actions into data. In our original algorithm, walking the input JSON produced actions (changes to the database). In our revised algorithm, walking the JSON produced a list of what actions should be performed. That not only made testing easier, but it made manual debugging easier as well. If we had a bug, we could run the diff and see what it would do, rather than stepping through the code expression by expression. It also let us modify the actions before they were executed. If we wanted to, we could group all deletions together and execute them as one database operation.
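Lesson 4 can be sketched as follows. This is an illustrative assumption of how actions-as-data might look, not our actual implementation: the diff emits objects sharing a common interface, and because the list is just data, it can be inspected in a debugger, asserted on in tests, or reordered (for example, grouping deletions together) before anything touches the database.

```python
# Hedged sketch of "turn actions into data": actions are plain objects
# with a shared interface, executed only when explicitly applied.
# All class and function names here are illustrative assumptions.

class Action:
    def apply(self, db):
        raise NotImplementedError

class Upsert(Action):
    def __init__(self, row_id, row):
        self.row_id, self.row = row_id, row
    def apply(self, db):
        db[self.row_id] = dict(self.row)

class Delete(Action):
    def __init__(self, row_id):
        self.row_id = row_id
    def apply(self, db):
        db.pop(self.row_id, None)

def group_deletions_last(actions):
    """Reorder the pending actions so all deletions run together at the end."""
    return ([a for a in actions if not isinstance(a, Delete)]
            + [a for a in actions if isinstance(a, Delete)])

def run(db, actions):
    """Execute the action list against the dict-backed 'database'."""
    for action in group_deletions_last(actions):
        action.apply(db)
```

Because the actions are inert until `run` is called, a bug hunt can stop at "print the list and look at it" rather than stepping through the mutation code expression by expression.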

One big lesson I learned from this is that object-oriented programming and functional programming are not as diametrically opposed as typically characterized. The new algorithm is just as object-oriented as the old one, but after we thought about React, functional programming, and separating side-effects from data handling, we changed how we used objects. We learned not to rely on them for managing tricky state changes, and used them instead to organize our code, putting bits of data together with their relevant methods. The list of actions to apply to the database is a list of OOP objects sharing a common interface. We’re still object-oriented, but we’re also functional.

That’s a lesson I’ll take into many future projects.
