May 2010 Archives

Prompted by the rebooting of a cell using a synthetically generated genome, US president Barack Obama has asked for a study of its implications from the Presidential Commission for the Study of Bioethical Issues.

Obama wants the report within six months; the commission is to “study the implications of this scientific milestone, as well as other advances that may lie ahead in this field of research”.

I wonder if they are going to point out that this kind of thing has been going on for a while and that, although a total reboot is now possible, it’s unlikely to be the way synthetic biology is done for a good while.

The world’s most expensive cloning programme has concluded with news that the J Craig Venter Institute has rebooted a cell with a genome generated by DNA synthesis rather than by natural means. The question remains: why do it, other than for the capacity to generate headlines? (Because, if you squint a bit at what happened in the experiment, you can put “first synthetic cell” on a press release.)

A good proportion of a bacterial genome, particularly a small one like that of Mycoplasma genitalium, which provided the template for the original “synthetic genome”, is made up of essential genes - DNA that codes for the proteins and segments of RNA needed to keep the cellular machinery running. Things like ribosomes have stayed more or less unchanged for millions of years because they are central to Earth’s lifeforms.

There is a chance that sooner or later, someone will have a go at redesigning these enzymes to see if there are better alternatives. But, given the current state of understanding of biology, it’s not going to be tomorrow. So, for any practical synthetic biology, you are better off letting the existing biological infrastructure get on with doing its thing and work on ways to insert and delete genes at will.

That’s not to say it’s not worth finding out whether it’s possible to create a functional genome from a code. Creating entire genomes from scratch sounds like a big project, and it is. But techniques that rely on the existing cellular machinery are probably going to prove more useful than wholesale synthesis.

A joke cracked at the SB4.0 conference about DNA synthesis and whether more should be spent on it to get much longer sequences cheaply was that “the only customer is Ham”, referring to Nobel laureate Hamilton Smith, a leading member of the JCVI project. Everyone else was, more or less, happy to work with shorter, novel sequences bolted into an existing genetic chassis or to develop conceptually more laborious but cheaper, automatable ways to manipulate a genome.

Even a cursory look at the Science Express paper reveals the amount of work that went into the process of assembling the genome. DNA synthesis was comparatively straightforward and the team had already demonstrated that it was possible to reboot a cell with a foreign genome using existing, natural DNA, even though the process by which this rebooting happens still isn’t clear.

Even with all the care that went into synthesising and checking the DNA segments, errors crept in - some of them the kind that biology itself has evolved mechanisms to weed out. One killer error was a tiny change in one of the 11 segments of DNA that were ultimately stitched together to form a 1.1 million base-pair genome - the team had decided to move up from the smaller M genitalium because M mycoides was known to transplant successfully. One base pair went missing, and it happened to be in the DnaA gene, which is about as essential as it gets: it’s responsible for initiating chromosome replication. No more generations of M jcvi without that.

That was fixed, but other errors crept in, including a chunk of E coli DNA that turned up in the middle of the new genome. This sat in a far less essential section of the genome, since it was found in the successfully rebooted cells. It may seem churlish to point this out, but it’s worth recalling that many of the processes used to generate the complete synthetic genome relied on cloning techniques that have been around for a while and which themselves rely on natural processes. So it’s not too surprising to find bits of the natural world turning up. In an experiment meant to demonstrate how to take a code and write it into a cell, however, that’s far from desirable. In other genome-editing techniques, these natural processes would be far less destructive, because they can be made to work in concert with, rather than against, the approach.

DNA has formed the backbone of self-assembling logic circuits designed by a team at Duke University. To communicate, the circuits employ light-emitting molecules already widely used by biologists in their own experiments.

Chris Dwyer, assistant professor of electrical and computer engineering at Duke’s Pratt School of Engineering, said the technique could be used to build intelligent but tiny biosensors as well as nanoscale encryption keys.

Dwyer said the logic is a form of diode-diode logic, one of the earliest approaches to digital computation used in electronics. Although it cannot form all the possible Boolean logic gates - there is no way to build an inverter - it can be used to build simple computers from AND and OR gates. In the Duke University scheme, the diodes of an electronic circuit are replaced with chromophores - light-absorbing elements - attached to segments of DNA.
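As a rough illustration (a sketch of classic electronic diode logic, not anything from the Duke paper), an OR gate in this family is a set of diodes feeding a pull-down resistor, so the output goes high if any input is high, while an AND gate is diodes feeding a pull-up resistor, so any low input drags the output low. With no inverter available, AND and OR are all you get:

```python
# Sketch of diode-diode logic behaviour (illustrative only).
# OR: diodes into a pull-down node - any high input pulls the node high.
# AND: diodes into a pulled-up node - any low input pulls the node low.
# No inversion is possible, so NOT/NAND/NOR cannot be built this way.

def diode_or(*inputs):
    # any forward-biased input diode raises the output node
    return any(inputs)

def diode_and(*inputs):
    # any low input conducts through its diode and drags the
    # pulled-up output node low
    return all(inputs)

def circuit(a, b, c):
    # a small two-level example: (a AND b) OR c
    return diode_or(diode_and(a, b), c)

print(circuit(True, True, False))   # True
print(circuit(False, True, False))  # False
print(circuit(False, False, True))  # True
```

In a real diode implementation each level also drops voltage across the diodes, which is one reason the scheme was superseded; the Boolean behaviour, though, is just the `any`/`all` above.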

DNA-linked chromophores, particularly those that fluoresce, are used widely in biological experiments as they make it easy to identify the locations of genetic elements within a cell. Theodor Förster found in 1948 that chromophores can pass energy to other, different chromophores close by through a coupling process. Biologists often use this in Förster or fluorescence resonance energy transfer (FRET) to show when molecules such as proteins are coupled together in complexes.
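What makes FRET useful as a proximity probe is how sharply the transfer efficiency falls with distance: it goes as the inverse sixth power of the donor-acceptor separation. A minimal sketch, using the standard efficiency formula and an illustrative (not pair-specific) Förster radius of 5 nm:

```python
# FRET efficiency sketch. R0, the Forster radius, is the separation at
# which transfer is 50% efficient; it depends on the chromophore pair,
# and 5 nm here is just an illustrative value.

def fret_efficiency(r_nm, r0_nm=5.0):
    """Transfer efficiency E = 1 / (1 + (r/R0)**6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

print(fret_efficiency(5.0))   # 0.5 exactly at the Forster radius
print(fret_efficiency(2.5))   # close pairs transfer nearly all the energy
print(fret_efficiency(10.0))  # at twice R0, efficiency has collapsed
```

That steep sixth-power dependence is why a FRET signal is good evidence that two labelled molecules really are bound in a complex, typically within a few nanometres of one another.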

About this Archive

This page is an archive of entries from May 2010 listed from newest to oldest.
