Evaluating New Tools
I was reading about Phoenix today, looking at guides and documentation. It’s great to see a guide where setting up a project is quick and easy; that makes it fun to get started and explore. The most exciting thing about Phoenix is that “reactivity”, or live updates, is a core part of the system, not an add-on. Nowadays I find any software without reactive updates frustrating and annoying to use.
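As a rough sketch of what “core” looks like, here’s a minimal LiveView counter; the module name and markup are my own invention, but the shape follows the guides: state lives on the server, and clicks round-trip over a socket with no hand-written client JavaScript.

```elixir
defmodule DemoWeb.CounterLive do
  use Phoenix.LiveView

  # State lives in the socket on the server.
  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  # The button click arrives as an event over the socket; LiveView
  # re-renders and pushes the diff back to the browser automatically.
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">Clicked <%= @count %> times</button>
    """
  end
end
```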
But even a feature as great as reactive updates is not enough to decide whether to use a tool for building web software. Now that I’m 12 years into a software engineering career, I have some nearly-subconscious evaluations I make when looking into new tools, critical things that can kill a project for me. The most prominent are:
How Painful Will Working With a Database Be?
Working with databases (or almost any data storage) is painful. Reliable data storage is hard, but there’s a sweet spot between “looks easy” and “handle every little part of it.” If the by-default approach is a wrapper that says “just give us your types and we’ll store them!” I’m instantly suspicious. I know that in a few months I’ll be digging through debug output and SQL server logs trying to understand why something is slow or broken, instead of being able to inspect the code I wrote and see what’s actually happening in the database as a first step.
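For contrast, here’s the kind of explicitness I want, sketched with Ecto (Phoenix’s usual data layer). The `MyApp.Repo` module and `users` table are assumptions, but the point stands: the SQL is right there to read before anything is slow or broken.

```elixir
import Ecto.Query

# An explicit query over a plain table name; the selected fields
# are spelled out rather than inferred from a schema.
query =
  from u in "users",
    where: u.inserted_at > ago(30, "day"),
    select: %{id: u.id, email: u.email}

# SQL-backed repos expose to_sql/2, so you can inspect the exact
# SQL (and its bound parameters) before it ever hits the server.
{sql, params} = MyApp.Repo.to_sql(:all, query)
IO.puts(sql)
IO.inspect(params)
```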
How Painful Are Authentication and Authorization Tools?
Contra database tooling, auth systems should be extremely standardized in a tech stack. People mess up authorization all the time, and it’s really hard to fix after the fact. If you screw up badly enough, you either have to keep horrible practices around or break all your clients and force them to upgrade. Don’t write your own cryptography; don’t write your own auth.
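In Phoenix land, that standardization looks like `mix phx.gen.auth` generating the whole login flow, with password handling delegated to a vetted library. A small sketch of the library end, assuming the `bcrypt_elixir` package:

```elixir
# Hashing and verification come from bcrypt_elixir, not hand-rolled code.
hash = Bcrypt.hash_pwd_salt("correct horse battery staple")

true = Bcrypt.verify_pass("correct horse battery staple", hash)
false = Bcrypt.verify_pass("wrong password", hash)

# For unknown users, run a dummy check anyway so response timing
# doesn't leak whether the account exists.
false = Bcrypt.no_user_verify()
```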
How About Passing and Handling Encrypted Data?
Like I said, don’t write your own cryptography. It should be straightforward to move data back and forth between encrypted and decrypted domains. Key retrieval and storage should be a first-class part of the data access layer, and the scope of access that each key grants should be easy to determine.
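A sketch of what first-class key handling might look like, using OTP’s built-in `:crypto` module (so nothing is hand-rolled): key retrieval is one explicit function whose scope you can audit. The `MyApp.Sealed` module and `fetch_key/0` are hypothetical; a real `fetch_key/0` would pull from a KMS or config rather than deriving a demo key.

```elixir
defmodule MyApp.Sealed do
  # Moving data across the encrypted/decrypted boundary with
  # AES-256-GCM via OTP's :crypto module.

  @aad "myapp-sealed-v1"

  # Hypothetical: in production, fetch from a KMS or app config,
  # e.g. Application.fetch_env!(:my_app, :data_key).
  defp fetch_key do
    :crypto.hash(:sha256, "demo-only-key")
  end

  def seal(plaintext) do
    iv = :crypto.strong_rand_bytes(12)

    {ciphertext, tag} =
      :crypto.crypto_one_time_aead(:aes_256_gcm, fetch_key(), iv, plaintext, @aad, true)

    iv <> tag <> ciphertext
  end

  # Returns the plaintext, or :error if the authentication tag
  # doesn't verify (i.e. the data was tampered with).
  def unseal(<<iv::binary-size(12), tag::binary-size(16), ciphertext::binary>>) do
    :crypto.crypto_one_time_aead(:aes_256_gcm, fetch_key(), iv, ciphertext, @aad, tag, false)
  end
end
```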
How Are Errors Handled?
Can you easily ignore errors? (Boo! No!) Are there multiple error types that have to be handled differently? (Exceptions vs. error values? Boo! No!) Is it easy, at runtime, to discover the underlying type and cause of an error? Are there standards about what will be thrown where and when?
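Elixir at least makes the split explicit: tuple-returning functions and their bang (`!`) variants follow a known convention, so you can tell up front which world you’re in. A small sketch (the `config.json` file and `parse` function are stand-ins):

```elixir
# Hypothetical parser stub, just so the sketch is self-contained.
parse = fn contents -> {:ok, contents} end

# Tuple convention: errors are values you must pattern-match on.
case File.read("config.json") do
  {:ok, contents} -> parse.(contents)
  {:error, :enoent} -> {:error, "config.json is missing"}
  {:error, reason} -> {:error, "read failed: #{inspect(reason)}"}
end

# Bang convention: File.read!/1 raises a File.Error instead.
# Mixing the two without a standard is exactly the trap above.
contents = File.read!("config.json")
```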
How Much “Magic” Is There? How Easy Is It to Get Around?
Too much magic means difficult debugging. My friend Arya always uses the physics term “spooky action at a distance” to explain the issue. In my experience, more magic also makes code “rot” faster: if you or your company lose familiarity with a codebase, a magical one takes longer to brush back up on.