The idea that living cells and molecular complexes can be viewed as potential machine components dates back to the late 1950s, when Richard Feynman delivered his famous paper describing sub-microscopic computers. Recently, several papers have advocated the realisation of massively parallel computation using the techniques and chemistry of molecular biology. Algorithms are not executed on a traditional, silicon-based computer, but instead employ the test-tube technology of genetic engineering. By representing information as sequences of bases in DNA molecules, existing DNA-manipulation techniques may be used to quickly detect and amplify desirable solutions to a given problem. We review the recent spate of papers in this field and take a critical view of their implications for laboratory experimentation. We note that extant models of DNA computation are flawed in that they rely upon certain error-prone biological operations. The one laboratory experiment that is seminal for current interest, and which claims to provide an efficient solution to the Hamiltonian path problem, has proved unrepeatable by other researchers. We introduce a new model of DNA computation whose implementation is likely to be far more error-resistant than extant proposals. We describe an abstraction of the model which lends itself to natural algorithmic description, particularly for problems in the complexity class NP. In addition, we describe a number of linear-time parallel algorithms within our model, with emphasis on NP-complete problems. We describe an “in vitro” realisation of the model and conclude with a discussion of future work and outstanding problems.
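The generate-and-sieve strategy underlying such DNA algorithms can be illustrated with a short serial simulation (a sketch of our own: the function name and the small example graph are illustrative, not taken from the experiments discussed above). Candidate paths play the role of randomly ligated DNA strands; the filtering step plays the role of laboratory extraction operations.

```python
from itertools import permutations

def hamiltonian_paths(n, edges, start, end):
    """Enumerate Hamiltonian paths from start to end in a directed graph.

    A serial mimic of the massively parallel generate-and-sieve
    approach of DNA computation: first generate every candidate
    path (in the lab, random ligation of oligonucleotides), then
    sieve out those violating the constraints (in the lab,
    error-prone extraction steps).
    """
    edge_set = set(edges)
    solutions = []
    # "Generate": every ordering of the interior vertices is a candidate.
    interior = [v for v in range(n) if v not in (start, end)]
    for perm in permutations(interior):
        path = (start,) + perm + (end,)
        # "Sieve": keep only candidates whose consecutive vertices
        # are joined by a directed edge.
        if all((a, b) in edge_set for a, b in zip(path, path[1:])):
            solutions.append(path)
    return solutions

# A small illustrative 4-vertex directed graph (not Adleman's instance).
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
print(hamiltonian_paths(4, edges, start=0, end=3))  # [(0, 1, 2, 3)]
```

The exhaustive generation step is what makes the serial simulation exponential, whereas in the DNA setting all candidate strands form in parallel in a single ligation reaction; the sieving steps are where the biological error rates criticised above enter.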