The thing is, for every cutting-edge, modern React-based web site that fails miserably, there is a successful project still running on one of these aging technologies and languages. Somewhere, a mainframe is still running COBOL for a mission-critical, transaction-based financial application, and your operating system's device driver may well still be written in FORTH.
I’ve even experienced this personally – one of my very first projects was in a long-forgotten database language called FoxPro. I had written some code to interface with one of the earliest predictive dialing systems, many more years ago than I’m going to admit in this article. FoxPro happened to have an interesting “macro substitution” feature – the ability to execute code contained in a memory variable – that allowed me to write a flexible, user-configurable data processing and analytics system. Years later, after we had lost touch, I got a call from the company’s CEO, who asked if I would come to his office. It was like walking through a time machine – that same code was still running his business, in DOS compatibility windows, and he needed help enhancing it. It never broke, so it didn’t need to get fixed; it was still the right tool for the job.
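For readers who never met FoxPro, the idea translates to modern languages fairly directly: a processing rule lives as data that users can edit, and the program executes it at runtime. Here is a minimal Python sketch of that pattern – the rule text, field names, and records are hypothetical, for illustration only, and this is not the original FoxPro code.

```python
# A user-configurable rule stored as plain text. In FoxPro this lived in a
# memory variable and was executed via the & macro-substitution operator;
# in Python, eval() plays a similar role for simple expressions.
records = [
    {"name": "Acme Corp", "calls": 42, "connects": 17},
    {"name": "Globex", "calls": 30, "connects": 3},
]

# An end user could change this rule without redeploying the program.
rule = "record['connects'] / record['calls'] > 0.25"

def apply_rule(rule_text, record):
    # Evaluate the stored expression against the current record, with
    # builtins disabled to narrow what the rule text can reach.
    return eval(rule_text, {"__builtins__": {}}, {"record": record})

hot_leads = [r["name"] for r in records if apply_rule(rule, r)]
print(hot_leads)  # only Acme Corp clears the 25% connect-rate threshold
```

The flexibility is also the hazard: executing data as code means trusting whoever writes the rules, which is exactly why the technique suits an internal, user-configured tool better than anything exposed to the public.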
The truth of the matter is that the specific toolset you choose, while important, is far less critical to your project’s success than making sure that the foundation you lay down, and your approach to picking technologies, processes, and solutions, is optimized for the problems you are setting out to solve. Sure, designing a virtual agent for your call center is going to require AI – but is it really necessary for your client’s new marketing web site? In this article, we’re going to share what we believe are the most important principles underpinning a software initiative’s success, and our approach to selecting technologies and teams.
When you’re a hammer, everything looks like a nail
Software engineers love learning new technology – it’s part of what makes us engineers. And having engineers who are always looking at the latest bleeding-edge approaches to software construction can certainly have its benefits – especially when the moment arises that some new technology or methodology truly allows something to be accomplished in an elegant way that couldn’t be before. A great example is the advance of cloud computing and DevOps over the last decade or so, from Amazon Web Services to Heroku to WP-Engine: the folks who embraced these deployment models early surely benefited from that early adoption as the models became mainstream. And just wait until quantum computing is nearly ready for prime time.
That said, it is important that the decision about which approach or technology to select for a project not be driven by the core competencies of your chosen development team. Sure, it is absolutely possible to build any web site requiring back-end data on a fully custom .NET content management system, or to write custom server-side code in Java or Golang, rather than rely on packaged solutions like WordPress or headless CMS services like Prismic. And when you have a team of skilled .NET developers, they are certainly likely to propose a workable, even seemingly rational custom .NET solution to the business unit. There are definitely times when a custom .NET content management system absolutely IS going to be the most optimized approach – but not always.
Looking at this from the agency model, we see one of the most important parts of an agency’s job as helping clients strategically understand which technology approaches best serve their business problems, and helping them make choices that address those needs both short and long term. On the agency side, approaches to this can include cross-training, professional development, education credits, and keeping a wide set of core competencies available – in order to be more technology- and platform-agnostic going into any new initiative.
Discovery: to boldly go where no one has gone before?
Which leads to the importance of that strategic work: project discovery. If you practice Agile development, this is sometimes referred to as “Sprint 0” – the work you need to do before writing code. The thinking goes all the way back to the early 2000s and IBM’s Rational Unified Process, which held that prior to Construction, projects should start with two discovery phases: Inception and Elaboration.
The primary goal of Inception is to define an approximate vision of the system, make the business case, define the scope, and produce rough estimates for cost and schedule. This includes establishing feasibility and making “build or buy” decisions with regard to the system’s foundations and technology choices. If you start with Inception – rather than leading with a technology choice – we find at Pastilla that, especially nowadays with so many proven services, frameworks, and platforms, there is more and more often a rationale for “buy” and configuration over “build” and custom code. We engineers always love “build” as an answer – especially when there’s a chance to solve a novel problem – but the client’s needs come first. And “build” is almost always more expensive and higher risk, and we believe it should be done only when necessary.
What’s more, those battle-hardened platforms have established track records. No matter how good your engineering team, the farther you stray from those proven foundations – unless you are authoring a platform yourself – the more brittle and risky your solution is likely to be. And if you ARE intending to author a platform, make sure to ask yourself whether that’s in your clients’ best interest. Many clients over the years have felt taken hostage by a development partner’s proprietary implementation, and those engagements rarely end well once the client comes to truly understand that level of dependence.
Tackle the hardest problems first
This then leads to Elaboration, whose goal is to get down in the weeds and define the solution more completely. But one part of Elaboration that is sometimes overlooked is risk management – and on the tech side, risk management means making sure the hard problems are solved before going full speed ahead into Construction. For example, solving for massive scale, or CMS flexibility, or whatever specific needs the business case exposes.
A current real-world example is implementing cross-platform mobile authoring solutions like Xamarin or React Native. Your client wants their mobile app on both iPhone and Android – who doesn’t? Xamarin and React Native both seem like attractive propositions: one code base that can be deployed to both operating systems, the same way that Java AWT and Swing were supposed to be the “write once, run anywhere” solutions for the desktop. And if you’ve never heard of those, there’s a reason – because the bottom line is that abstractions are always leaky, the more abstract the leakier, and the farther away you move from the hardware with your code, the more inherent risk you are absorbing.
It’s easy to look at the simpler requirements of a mobile application, and a technology like React Native, and say: sure, I can make this entry form work for iOS and Android, done deal. But if you want to shake out your risk, look at the hardest parts of your proposed mobile application’s specifications, and map those to your cross-platform authoring tool of choice first instead. If you can mitigate that risk with a working proof of concept, you’re a lot less likely to have to go back to your client and tell them you’ve backed yourself into a corner. And if you can’t make that proof of concept work, then maybe stepping back to native code, despite the requirement to maintain separate code bases, really is the right approach – even if it’s not nearly as “sexy” or cutting edge.
Someone’s going to have to support this thing
Finally, it’s important to consider what comes after Construction: the system is going to go into production. (At least, we hope so!) And no matter how well-written your code, no matter how much time went into specifications, no matter how extensive your QA process, there are always going to be bugs.
Gone are the days of having to pore through technical manuals and error codes – today’s debugging tools are industrial strength. But even more so, what’s truly awe-inspiring are the development communities and knowledge bases that exist out there nowadays. It seems a key development skill now is knowing how best to use Google to find the person who already ran into the problem you just hit yourself. And it stands to reason that the larger the development community, the more likely that person is out there – that someone asked the question you just asked, and it was answered. Before selecting a platform or technology, we strongly advise looking at the size and accessibility of the developer community and the support resources available.
The broader the technology’s adoption – even if the technology itself isn’t the “latest and greatest” – the more likely the support you need is out there and readily accessible. There are as many approaches to software development as there are grains of sand on a beach; while software development is a science, there’s also an art to it, creating something new that never before existed from building blocks. That said, where good art is wholly subjective, good software can be evaluated objectively by how well it meets its users’ needs.
At Pastilla, we believe that following the strategic principles outlined above will help ensure that the art and science you create sees the light of day, and stands the test of time. And should you need help, we’re here at the ready – from discussing these principles further to sharing our experience, to rolling up our sleeves and helping you develop and implement a strategic technology plan.