Why you should never use a proof of concept in development projects
From my archive - originally published on 3 July 2011
A proof of concept is often suggested as a way to quickly demonstrate viability. Unless you manage expectations carefully, you may find that it does nothing of the sort and merely delivers a quick hack that can undermine good system design.
A disposable piece of code produced to prove a particular point can be a pretty dangerous thing. The throw-away nature of the code undermines its very purpose by implying that functionality can be created without really demonstrating how it can be delivered.
Code that is designed to be thrown away is unlikely to take into account many of the more generic problems of software development, such as exception handling, scalability or security. Can notions of correctness or completeness really be set aside? If you’re not making allowance for these important aspects of code then there is a limit to how much you can really “prove”.
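To make that concrete, here is a minimal, hypothetical sketch in Python (the endpoint and names are invented for illustration, not taken from any real system). The first function is the happy-path code a proof of concept tends to produce; the second shows the validation, timeout and error handling that a production version cannot skip.

```python
import json
import urllib.error
import urllib.request

# Proof-of-concept style: "it works" on the happy path and nowhere else.
# (Hypothetical example; the endpoint and names are invented for illustration.)
def get_customer_balance_poc(customer_id):
    data = urllib.request.urlopen(f"http://example.com/customers/{customer_id}").read()
    return json.loads(data)["balance"]

# The same operation once the concerns throwaway code skips are taken seriously:
# input validation, a timeout, error handling and a deliberate failure mode.
def get_customer_balance(customer_id: int, timeout: float = 5.0) -> float | None:
    if customer_id <= 0:
        raise ValueError("customer_id must be a positive integer")
    try:
        with urllib.request.urlopen(
            f"http://example.com/customers/{customer_id}", timeout=timeout
        ) as response:
            payload = json.loads(response.read())
    except (urllib.error.URLError, json.JSONDecodeError):
        return None  # the caller decides how to treat an unavailable or malformed service
    return float(payload.get("balance", 0.0))
```

The gap between the two functions is exactly the work that a “successful” proof of concept leaves undone.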
The ability to quickly rig up some code that works may provide reassurance to stakeholders, but it won’t do much to progress a system. Will a quickly executed proof produce loosely-coupled components with clearly-defined responsibilities and collaborations? It's more likely to produce a tightly-coupled mess that isn't fit for use in a production system. A development team will have to do the work again, essentially “proving the concept” a second time, except this time with commercially viable code.
The bottom line is that proving functionality alone is not enough. You need to prove that it will work in a viable, commercially-developed application. After all, almost anything can be made to “work” by hacking together a bunch of scripts.
The biggest risk, of course, is that a proof of concept intended for demonstration purposes gets incorporated into the production system. A development team can come under a lot of pressure to incorporate incomplete code into a system on the basis that “it works”. It can be difficult to make sure that everybody understands that a proof of concept is essentially disposable code that can never be completed. Most stakeholders are unable to distinguish between an application that does something and an application that can be developed into production-ready code.
Tracer bullets and prototypes
A more realistic alternative to a proof of concept is described by Andrew Hunt and David Thomas in The Pragmatic Programmer: the notion of “tracer bullets”. This is still a proof of concept, but one that recognises the reality that the code is being used to demonstrate commercial viability. It won’t be thrown away once a point is proven and is likely to be used as a foundation for a larger system.
With tracer code you establish the main components that will be used in the live system. This gives you an opportunity to test a basic architecture and check that it is fit for purpose in terms of delivering the required functionality. It also makes it easier to envisage the effort and complications involved in building a production system.
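As an illustration only (the component names and layering below are assumptions, not a prescribed design), a tracer slice might wire a production-shaped service to a deliberately thin repository implementation. The boundaries survive into the live system even as the implementations behind them are replaced.

```python
from dataclasses import dataclass
from typing import Protocol

# A tracer-bullet slice: the real component boundaries are in place,
# even though the implementations behind them start out thin.
# (Hypothetical names; a sketch of the idea rather than a prescribed design.)

@dataclass
class Order:
    order_id: str
    total: float

class OrderRepository(Protocol):
    def find(self, order_id: str) -> Order: ...

class InMemoryOrderRepository:
    """Thin starting implementation; later swapped for a real data store
    behind the same interface, without disturbing the rest of the slice."""
    def __init__(self) -> None:
        self._orders = {"A-1": Order("A-1", 42.0)}

    def find(self, order_id: str) -> Order:
        return self._orders[order_id]

class OrderService:
    """Production-shaped service layer that the live system keeps."""
    def __init__(self, repository: OrderRepository) -> None:
        self._repository = repository

    def order_total(self, order_id: str) -> float:
        return self._repository.find(order_id).total

# End-to-end check that the slice hangs together.
service = OrderService(InMemoryOrderRepository())
print(service.order_total("A-1"))  # 42.0
```

The point of the slice is not the stub data but the shape: the service, the interface and the wiring are the parts that carry forward.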
This fits in with the notion of agile development, where architectures can be evolved while demonstrating iterative value. It also sits better with the commercial reality of software development where there is a natural reluctance to throw away any code, particularly if it “works”.
This tracer code approach assumes that you already have some well-developed user stories in place. It does not allow for the notion of exploration. Sometimes you need to do a bit of work to figure out what work is required, which is where the prototype comes in.
A prototype is more of an investigation where there is an assumption that you will throw away anything that is created. The focus is on discovery, where lightweight prototyping tools can be used to create quick visualisations that won’t cut it in a production context. You're trying to work out what the requirements are, so you are free to ignore much of the discipline required for production-ready code. The emphasis is on generating ideas and mocking up features.
The tracer application solves a different problem as it’s showing how specific functionality can be delivered in a commercially viable system. It shows how the application will hang together by producing a skeletal architecture that delivers some production-quality functionality. Note that this requires well-defined requirements and a strong sense of where the application is heading - i.e. the kind of direction that can be provided by a good prototype.
Why there's no room for a proof of concept
Both approaches can be used in the same development, but at very different stages. A prototype is more appropriate for the initial discovery phase where you are trying to work out what you are doing. A tracer is a better approach for demonstrating how to deliver the functionality set out in the requirements.
This leaves little room for a “proof of concept” which serves neither purpose. It does not provide sufficient flexibility to explore functionality and will not contribute to future development by demonstrating how a production system should be organised.