March 2009 Archives

As the BioSysBio conference in Cambridge, UK was winding down, the Woodrow Wilson Center was publishing its latest report on synthetic biology, written by Michael Rodemeyer of the University of Virginia.

In Cambridge, people spent a fair amount of time in discussion sessions and talks on the ethical and regulatory problems that synthetic biology is going to face. And Rodemeyer's report is all about what the US could do to regulate synthetic biology. But the same problem keeps cropping up: just what is synthetic biology?

I took part in a discussion session organised by Jane Calvert and colleagues from the Innogen Centre in which everyone in the room - 30-odd people - split up into groups and worked through a bunch of future scenarios and their likely causes and consequences. One was the introduction of a synthetic-biology pet. We quickly ran into the problem of definition. If you use the widest possible definition then those pets already exist - the GloFish sold in the US by Yorktown Technologies. These are zebrafish with a gene added that produces a protein that makes them glow under a fluorescent light.

The genetic change to the fish is pretty minor, so most scientists, I think, would lump this into the genetically modified (GM) category. Synthetic biology needs to be a bit more ambitious, such as the lava-lamp mouse that Imperial College's Chris Hirst suggested - its fur would ripple with colours, the perfect hypnotic aid for bleary-eyed students. But anyone would need to know how that mouse was altered to decide whether it was a GM or a synthetic-biology project. And that might affect how governments regulate, allow or ban the sale of the Psychmouse.

Cambridge University's Jim Haseloff argued that it's the ability to design a pet to a certain specification that would mark out a synpet. But I think it's possible to argue that dog breeders are able to design properties into their animals. They just don't use syringes (at least not as much) to do it and don't have the precision that we presume future biotechnologists will possess. But, thanks to the way that small genetic modifications, such as single nucleotide polymorphisms (SNPs), result in big changes in dogs, as Adam Arkin of the University of California, Berkeley later underlined, you can do a lot just with breeding. It didn't take long, it seems, for some wolves to become domesticated and give birth to the first dogs.

Phil Locascio of the University of Oxford argued that the big difference between traditional bioengineering - cross-breeding compatible plants and animals - and the new stuff is timescale. GM is a lot quicker and synthetic biology, because it's meant to involve less trial and error, should be faster still. GM often involves taking genes across species but natural viruses do that too. It just takes a lot longer for complete genes to cross over using the natural process and, very often, they don't function.

Rodemeyer faced the same problem in defining synthetic biology: "At this point, synthetic biology is more of a collection of tools and technologies than it is a specific discipline with a unified purpose...To some extent, synthetic biology is an extension of biotechnology; there is a certain amount of overlap, and no clear defining line between the two areas."

Most people around the synbio business are likely to follow Justice Potter Stewart: "I know it when I see it."

Regulators have no such luck, but the decision they make will have a dramatic effect on how a nascent synbio industry evolves, or whether it evolves at all. One of the other situations we pondered was what might lead to the European Union imposing a moratorium on the commercialisation of synthetic biology. It happened for GM in Europe, so you could argue we have already had one in place for synbio. And you can't buy GloFish in Europe.

Show your work


The models of biological processes that appear in scientific papers often contain serious errors that make it impossible to use them as is. And it's the system that is to blame.

Catherine Lloyd, who works in Peter Hunter's group at the University of Auckland, arrived at the BioSysBio conference in Cambridge today to argue that scientists should publish not just their papers but also the executable models they used to create or explain their results.

The Auckland group has been working closely with Denis Noble's team at the University of Oxford for many years. Noble pioneered the use of computer models in biology with his work on the electrical signals that move around the heart. Recently that work has been assembled into animated models that can guide surgeons on where to operate on a diseased heart.

Although models are central to systems biology, the system for publishing research is not really set up to deal with them. Lloyd, who curates the models held by the Auckland team, said the current publishing process introduces problems. "To publish their research, [scientists] have to translate their model into text and equations for publication," she said.

One answer is to submit the model itself, or at least one that works the same way. Right now, researchers use Matlab, Mathematica and a grab bag of other tools to write theirs. The Auckland proposal is to use an XML-based format, called, not surprisingly, CellML, to hold the guts of the model.
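For readers who haven't seen CellML, a schematic fragment in the CellML 1.0 style looks roughly like this. The model here - exponential decay of a variable y at rate k - is an invented placeholder, not one of the Auckland repository models, and a real CellML file would also define its units rather than just name them:

```xml
<model name="decay_example" xmlns="http://www.cellml.org/cellml/1.0#">
  <component name="main">
    <variable name="t" units="second"/>
    <variable name="y" units="dimensionless" initial_value="1.0"/>
    <variable name="k" units="per_second" initial_value="0.5"/>
    <!-- dy/dt = -k * y, written as MathML content markup -->
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <apply><eq/>
        <apply><diff/>
          <bvar><ci>t</ci></bvar>
          <ci>y</ci>
        </apply>
        <apply><times/>
          <apply><minus/><ci>k</ci></apply>
          <ci>y</ci>
        </apply>
      </apply>
    </math>
  </component>
</model>
```

The point is that the file carries only the declarative description - variables, initial values and the equation - leaving any particular solver out of it.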

Lloyd said one problem with using something like Matlab is that there is a lot of procedural code in the typical model needed just to get it going. What researchers really want is just the core of the model: the differential equations that replicate a biological system's behaviour.
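To see the split Lloyd is describing, here is a minimal sketch (mine, not hers, using a generic exponential-decay equation rather than a real biological model): the scientifically meaningful part is one small function, while everything else is the kind of solver boilerplate a declarative format wouldn't need to carry.

```python
# Sketch: separating a model's core ODE from solver boilerplate.
# The model (dy/dt = -k*y, exponential decay) is a placeholder example.

def dydt(t, y, k=0.5):
    """The 'core of the model': the differential equation itself."""
    return -k * y

# Everything below is procedural scaffolding - a fixed-step Euler
# integrator - of the sort that clutters typical hand-written models.
def integrate(f, y0, t0, t1, steps=1000):
    dt = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += dt * f(t, y)
        t += dt
    return y

print(integrate(dydt, y0=1.0, t0=0.0, t1=10.0))  # ~exp(-5), about 0.0067
```

Publish only the second half and the model is buried in plumbing; publish only `dydt` (or its CellML equivalent) and anyone can rerun it with the solver of their choice.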

It would streamline a lot of the work for the team at Auckland. According to Lloyd, of the almost 400 models in the repository, only four made it from textual description to executable model in one go. The sources for most of the others – the majority, though not all, appeared in journal papers – contained typos and other mistakes that meant the model did not behave as expected. Albert Goldbeter gets the award for providing two of the error-free models.

"Sometimes we get errors where we have to contact the model author and for some models we will never be able to access the code," said Lloyd.

In some cases, how universities license IP can cause problems with access to the actual models, even if they are only used for testing a CellML derivative. And sometimes, the model just isn't available, possibly because the original paper and model don't quite agree.

"It is surprising how many researchers 'lose' their code. They just can't find it despite all the years they have worked on it," said Lloyd.

According to Lloyd, some journals are interested in the idea of publishing CellML models alongside papers. One possible incentive for scientists is that the published model gets its own reference, so teams wind up with two citations for the price of one. Or journals could simply refuse to publish model-based papers unless the model itself turns up with the submission.

Although publishing a model along with a paper means extra work, it could streamline things as running the CellML version acts as a kind of proofreading process for the underlying equations. Getting it into CellML is another matter, but work is underway on a Matlab to CellML converter and there are already tools such as COR and PCEnv for writing and running CellML models.

About this Archive

This page is an archive of entries from March 2009 listed from newest to oldest.
