The world’s most expensive cloning programme is complete: the J Craig Venter Institute has rebooted a cell with a genome generated by DNA synthesis rather than by natural means. The question remains: why do it, other than for the capacity to generate headlines? (Because, if you squint a bit at what happened in the experiment, you can put “first synthetic cell” on a press release.)
A good proportion of a bacterial genome, particularly a small one like that of Mycoplasma genitalium, which provided the template for the original “synthetic genome”, is made up of essential genes: DNA that codes for the proteins and segments of RNA needed to keep the cellular machinery running. Components such as ribosomes have stayed more or less unchanged for billions of years because they are central to all of Earth’s lifeforms.
There is a chance that, sooner or later, someone will have a go at redesigning this machinery to see if there are better alternatives. But, given the current state of understanding in biology, it’s not going to happen tomorrow. So, for any practical synthetic biology, you are better off letting the existing biological infrastructure get on with doing its thing and working on ways to insert and delete genes at will.
That’s not to say it’s not worth finding out whether it’s possible to create a functional genome from a code. Creating an entire genome from scratch sounds like a big project, and it is. But its eventual value is debatable: techniques that rely on the existing cellular machinery are probably going to have more use.
A joke cracked at the SB4.0 conference, about DNA synthesis and whether more should be spent on it to get much longer sequences cheaply, was that “the only customer is Ham”, referring to Nobel laureate Hamilton Smith, a leading member of the JCVI project. Everyone else was, more or less, happy to work with shorter, novel sequences bolted into an existing genetic chassis, or to develop conceptually more laborious but cheaper, automatable ways to manipulate a genome.
Even a cursory look at the Science Express paper reveals the amount of work that went into the process of assembling the genome. DNA synthesis was comparatively straightforward and the team had already demonstrated that it was possible to reboot a cell with a foreign genome using existing, natural DNA, even though the process by which this rebooting happens still isn’t clear.
Even with all the care that went into synthesising and checking the DNA segments, errors crept in, some of them errors that natural biology has long since evolved ways to weed out. One killer was a tiny change in one of the 11 segments of DNA that were ultimately stitched together to form a 1.1 million base-pair genome. (The team had decided to move up from the smaller M genitalium because M mycoides was known to transplant successfully.) A single base pair went missing, and it happened to sit in the dnaA gene, which is about as essential as genes get: it is responsible for initiating chromosome replication. No more generations of M jcvi without that.
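Why a single missing base is so lethal can be sketched in a few lines of code. The snippet below is a toy illustration only, using a made-up sequence and a drastically simplified codon table, not the real dnaA gene: deleting one base shifts the reading frame, so every codon downstream of the error, and hence every amino acid, comes out wrong.

```python
# Simplified codon table covering just the codons used in this toy example.
CODON_TABLE = {
    "ATG": "M", "GAA": "E", "GAT": "D", "AAA": "K", "CTG": "L",
    "GTA": "V", "AAC": "N", "TGT": "C", "TGA": "*",  # "*" = stop
}

def translate(dna):
    """Read the sequence three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

original = "ATGGAAGATAAACTGTGA"       # hypothetical gene: M-E-D-K-L, then stop
mutant = original[:7] + original[8:]  # delete a single base at position 7

print(translate(original))  # → MEDKL
print(translate(mutant))    # → MEVNC  (everything after the deletion scrambled)
```

Real genomes add a further twist: a frameshift usually produces a premature stop codon somewhere downstream, truncating the protein entirely, which is why a one-base deletion in an essential gene is enough to kill the whole rebooting experiment.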
That was fixed, but other errors crept in, including a chunk of E coli DNA that turned up in the middle of the new genome. That one evidently sat in a far less essential section of the genome, as it was found in the successfully rebooted cells. It may seem churlish to point this out, but it’s worth recalling that many of the processes used to generate the complete synthetic genome relied on cloning techniques that have been around for a while and which themselves rely on natural processes. So, it’s not too surprising to find bits of the natural world turning up. However, in an experiment that is meant to demonstrate how to take a code and write it into a cell, it’s far from desirable. Yet, in other techniques for genome editing, these natural processes would be far less destructive because they can be made to work in concert with, rather than against, the approach.