Trust but Verify – Measuring Value with Agile
In my last article, I talked about how trust is the secret ingredient to high-performing Agile teams.
These words seem particularly haunting when you’re neither the coder nor the maintainer, but the purchaser.
There are many reasons you might end up acquiring code, but one thing is certain: any pre-acquisition code audit happened in a lot less time than it will take you to digest, clean up and incorporate it into your business. Reviewing screens full of snarled code from that hot new startup you just acquired, calculating the months of remediation and integration ahead, you realize that ‘due diligence’ was anything but. Why didn’t anyone spot these problems? How did we get into this mess?
A better question might be, “Why did I expect anything different?” After all, startups don’t spend much time worrying about functionality, security, scalability or maintainability. They’re good at two things: cranking out sexy new features and piling up technical debt. That debt is now your problem.
Shouldn’t you have caught those issues in that first code audit?
Well yes, you should. But think about how the audit went. Once the deal heated up, the business guys came over and asked the technical team to take a look. A senior-level developer or architect flipped through the structure and code classes, and after a few hours, sent back a general assessment. You’d spend more time researching a 4K TV. Truth be told, it's no wonder you wound up in this fix.
The good news is, you can avoid this pain by following two, maybe three, simple principles during your next due diligence audit.
The biggest blunder businesses make in an audit is basing their assessment on gut feel rather than tools that provide a concrete measurement. As a result, they underestimate the deficiencies and technical debt in the code, and set themselves up for months of extra work.
The problem is most serious when it comes to security. Vulnerabilities in acquired code can be so severe that the business doesn’t dare deploy it. Almost as bad, the business doesn’t know how long it will take to fix the code, how much that will cost, or how the additional time and labor should have been factored into the acquisition price. While you don’t have to hack the code in a live environment, you shouldn’t settle for anything less than a full static security scan using a tool like Veracode. There’s too much at stake to cut corners.
You need to get a clear picture of how well the code was designed and how much test coverage it has. I always insist on building the code in my environment, where I can do a full static code analysis. It’s amazing how often companies neglect this step. I’ve seen clients buy codebases with no unit test coverage whatsoever. It would take a miracle to get that code up and running in a reasonable amount of time. You have no idea what the real cost in effort and person hours will be. Every time you change something, you’ll run the risk of it not working anymore.
That’s why static code analysis is essential. If I see the new code has 80% unit-test coverage, it’s probably low risk. Whatever changes I make in my environment will be fine. If I’m looking at 20% unit test coverage, that’s a red flag. I know we’ll need to take care of that before doing any substantial work on the code. Either way, you’ll have a good idea how much integration will cost and how long it will take.
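That triage can be sketched as a simple coverage gate. The function name and thresholds below are my own illustration of the 80%/20% rule of thumb, not part of any standard tool; in practice you would feed in the line-coverage figure your analysis tool reports.

```python
def coverage_risk(line_coverage_pct: float) -> str:
    """Classify integration risk from unit-test line coverage.

    Thresholds are illustrative; tune them to your own codebase
    and risk tolerance.
    """
    if line_coverage_pct >= 80:
        return "low"     # changes are likely to be caught by the test suite
    if line_coverage_pct <= 20:
        return "high"    # plan test remediation before substantial work
    return "medium"      # proceed, but budget time for adding tests

print(coverage_risk(85))  # low
print(coverage_risk(15))  # high
```

Wiring a gate like this into the audit turns a gut-feel judgment into a number you can put in the integration estimate.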
Another benefit of static code analysis is that it gives you a good idea of how well the code was written. The more code packed into each method or class, the harder it will probably be to modify. On the flip side, when the analysis produces strong numbers and grades for maintainability, you can modify the code confidently and with comparatively little effort.
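To make the lines-per-method idea concrete, here is a minimal sketch of that metric using Python’s standard `ast` module. It is a crude stand-in for what a real static analysis tool reports (complexity scores, maintainability grades), and the sample source is invented for illustration.

```python
import ast

def avg_function_length(source: str) -> float:
    """Mean line count of function definitions in a Python source string.

    A rough proxy for the 'code per method' metric discussed above;
    real tools compute richer measures such as cyclomatic complexity.
    """
    tree = ast.parse(source)
    lengths = [
        node.end_lineno - node.lineno + 1
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]
    return sum(lengths) / len(lengths) if lengths else 0.0

sample = """
def short(x):
    return x + 1

def longer(x):
    y = x * 2
    y += 3
    return y
"""
print(avg_function_length(sample))  # (2 + 4) / 2 = 3.0
```

Run over an entire acquired codebase, a number like this (and its outliers) is an early warning of how painful modifications will be.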
This is the trickiest step. A professional performance analysis that sets up the software in a live environment and tests it under load can easily cost $250K. With open source tools and a little elbow grease, your team can test performance for significantly less, maybe even for free. But the investment can still be substantial, so you have to decide how important it is. If you’re buying back-office software that only needs to support a few dozen concurrent users, scalability is probably not a big concern. If you’re building a consumer-facing product, or one in financial services, then testing is absolutely critical, and money well spent.
If you do decide that scalability is important, be sure to validate it yourself. I’ve seen cases where the acquired company made claims about SLAs its code couldn’t begin to support. To make matters worse, most acquiring companies have only a vague idea of the load they need to handle. Decide on concrete numbers (e.g. 500 concurrent users) and confirm them with established load-testing tools.
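The shape of such a check can be sketched in a few lines. This is not a substitute for an established load-testing tool (JMeter, Gatling and Locust are common choices) run against the actual deployed service; the `fake_request` stub below simply stands in for a real HTTP call so the concurrency-and-latency structure is visible.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request() -> float:
    """Stand-in for a real service call; returns observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate service work
    return time.perf_counter() - start

def run_load(workers: int, requests: int) -> float:
    """Fire `requests` calls across `workers` threads; return ~p95 latency."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(lambda _: fake_request(), range(requests)))
    return latencies[int(0.95 * len(latencies)) - 1]

p95 = run_load(workers=50, requests=200)
print(f"p95 latency: {p95 * 1000:.1f} ms")
assert p95 < 1.0  # an example SLA check: sub-second at this concurrency
```

The point is the final assertion: you pin the vendor’s SLA claim to a concrete concurrency figure and make the code prove it.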
In sum, take the audit seriously. The implicit deal in M&A is that startups develop features, and acquirers clean up the mess. Even if you’re OK with this arrangement, you need to know what you’re signing up for. Acquiring companies rarely do. They’re like the proverbial goldfish, constantly forgetting the lesson they just learned.
Remember: you get what you tolerate. When startups write sloppy code, they're making a rational economic decision. If you purchase code for insane amounts of money without real due diligence, you’re only promoting more of the same. If everyone demanded that startups demonstrate good, secure, performant code, we might finally avoid the morning-after headache.