Thursday, January 24, 2013

The Gallistel-King Conjecture; part deux


A while ago I wrote about an idea that I dubbed the Gallistel-King conjecture (here).  The nub of their conjecture is that (i) cognition requires something like a Turing-von Neumann architecture to support it (i.e. connectionist-style systems won’t serve) and (ii) the physical platform for the kinds of computational mechanisms needed exploits the same molecular structure used to pass information across generations. In other words, DNA, RNA and proteins constitute (part of) the cognitive code in addition to being the chemical realization of the genetic code.  Today, Science Daily reports (here) that researchers at the European Bioinformatics Institute “have created a way to store data in the form of DNA.” And not just a little, but tons. And not just for an hour, but for decades and centuries if not longer.  As Nick Goldman, the lead on the project, says:

We already know that DNA is a robust way to store information because we can extract it from bones of woolly mammoths, which date back tens of thousands of years, and make sense of it. It is also incredibly small and does not need any power for storage, so shipping and keeping it is easy.

G&K observe that these same virtues (viz. stability, energy efficiency and longevity) would be very useful for storing information in brains.  Add to this that DNA has the kind of discrete/digital structure that makes information stored this way easy to retrieve and appropriate for computation (cf. G&K 167ff. for a computational demo), and it would seem that storing information this way is just what we would expect from a reasonably well-designed biological thinking machine.

Let me shout from the rooftops so that it is clear: I AM NO EXPERT IN THESE MATTERS. However, if G&K are right then we need a system with read-write memory to support the kinds of cognition we find in animals.  As the Science Daily article reports, “Reading DNA is fairly straightforward”; the problem is that “writing it has until now been a major hurdle to making DNA storage a reality.” There are two problems that Goldman and his colleague Ewan Birney had to solve: (i) “using current methods, it is only possible to manufacture DNA in short strings” and (ii) “both writing and reading DNA are prone to errors, particularly when the same DNA letter is repeated.” What Goldman and Birney did was devise a code “using only short strings of DNA, and do it in such a way that creating a run of the same letter would be impossible.”
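
To make the no-repeats trick concrete, here is a toy sketch in Python. It is not the actual Goldman-Birney encoding (their scheme also involves a compression step and other machinery I am ignoring); it just illustrates the basic move: rewrite the data in base 3, and at each position choose the next DNA letter from the three bases that differ from the letter just written, so a run of the same letter cannot arise.

# Toy sketch, NOT the published Goldman-Birney scheme: encode bytes as
# base-3 digits (trits), then map each trit to one of the three DNA bases
# that differ from the previous base. Repeats are impossible by construction.

BASES = "ACGT"

def bytes_to_trits(data: bytes):
    """Turn each byte into six base-3 digits (3**6 = 729 > 256)."""
    trits = []
    for b in data:
        for _ in range(6):
            trits.append(b % 3)
            b //= 3
    return trits

def trits_to_dna(trits, prev="A"):
    """Each trit selects one of the three bases unequal to the previous one."""
    out = []
    for t in trits:
        choices = [b for b in BASES if b != prev]  # always exactly three options
        prev = choices[t]
        out.append(prev)
    return "".join(out)

dna = trits_to_dna(bytes_to_trits(b"hi"))
assert all(a != b for a, b in zip(dna, dna[1:]))  # no two adjacent letters match

Decoding just runs the mapping in reverse, since the starting letter is fixed and the previous letter always tells you which three choices were available at each step.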

Before proceeding, note the similarity between this and the kinds of considerations about optimal coding that we were talking about in earlier posts (here).  This is a good example of the kind of thing I was thinking of concerning efficient coding and computation.  Note how sensitive the relevant considerations are to the problem that needs solving and to the physical context within which it needs solving.  This is a good example, I would argue, of the kind of efficiency concerns minimalists should be interested in as well.  Ok, so what did they do?

Birney describes it as follows:

So we figured, let’s break up the code into lots of overlapping fragments going in both directions, with indexing information showing where each of the fragments belongs in the overall code, and make a coding scheme that doesn’t allow repeats. That way, you would have to have the same error on four different fragments for it to fail, and that would be very rare.

The upshot, Goldman says, is “a code that is error tolerant using a molecular form we know will last in the right conditions for 10,000 years, or possibly longer... As long as someone knows what the code is, you will be able to read it back if you have a machine that can read DNA.”
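
Again purely as illustration, not a reconstruction of the published method: the fragment-and-index idea amounts to cutting the long encoded string into fixed-length pieces that overlap by 75%, so every position sits in four different fragments, and tagging each piece with an index saying where it belongs. (The real scheme also alternates the direction of successive fragments, which I leave out, and the fragment length below is made up.)

# Toy sketch of fourfold-redundant, indexed fragments (assumed parameters).
def fragment(encoded: str, frag_len: int = 100):
    step = frag_len // 4  # 75% overlap, so each position is covered four times
    frags = []
    for i, start in enumerate(range(0, max(len(encoded) - frag_len, 0) + 1, step)):
        frags.append((i, encoded[start:start + frag_len]))  # (index, payload)
    return frags

def reassemble(frags, frag_len: int = 100):
    step = frag_len // 4
    letters = {}
    for i, chunk in sorted(frags):  # the index says where each piece goes
        for offset, ch in enumerate(chunk):
            letters[i * step + offset] = ch
    return "".join(letters[k] for k in sorted(letters))

original = "ACGT" * 100
assert reassemble(fragment(original)) == original  # round trip; tail handling ignored

With four copies of every position, a read error has to hit the same spot in four separate fragments before it can corrupt the reconstruction, which is Birney’s point about failure being very rare.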

Talk about long-term memory! And we certainly all embody machines that can read DNA!

Goldman and Birney see this as a great technological breakthrough; good-bye hard drives, hello DNA.  However, with a little mental squinting it is not that hard to imagine how this technological breakthrough would be just what G&K would have hoped for.

Science often follows the leading technology of the day, especially in the neuro/physio world. In Descartes’ day the brain was imagined as a series of interconnected pipes inspired by the intricate fountains on display (cf. Vaucanson’s duck). Earlier it was clockwork brains. In our day, it has been computers of various kinds. Now, biotech may be pointing to a new paradigm.  If humans can code to DNA and retrieve info from it, why shouldn’t brains?  Moreover, wouldn’t it be odd if brains had all this computational power at their disposal but nature never figured out how to use it?  A little like birds having wings but never learning to fly? As I said, I’m no expert, but you gotta wonder…

4 comments:

  1. The big problem I have with the G&K conjecture is that there are more computer architectures than we can dream of. Turing machines and von Neumann architecture (which are very different designs) are only two very well known examples which persist in large part due to momentum: the entire computer industry is built around small variations of the von Neumann architecture (much like how the entire complexity theory literature is built around small variations of the Turing machine), and changing architectures means incurring enormous costs in backwards compatibility. But that doesn't mean they're the best architectures, or the easiest to understand, or whatever.

    There's so much room for flexibility, in fact, that even if we are indeed running DNA computers inside our skulls, the connectionists could still be correct, in the sense that DNA computers can be connectionist. And conversely, neural computers can be non-connectionist (what we know about neurons admits of a very transistor/logic-gate like structure).

    Replies
    1. Not sure I agree. G&K specified the properties they thought a cognitively adequate architecture required. These include a read/write memory and the ability to bind a variable to a value. They argue that connectionist architectures (qua cognitive architectures, not implementations) don't make these distinctions. This reiterates points made by Fodor and Pylyshyn as well as Marcus. The connectionists not only agree that this is so but insist that this is what makes their systems distinctively different. Given this, G&K argue that connectionist architectures won't suffice. One can perhaps implement a TvN cognitive architecture in a net, but this is not what they are arguing about. Like I said, the differences don't seem to be in dispute. What's up for grabs is whether their argument and F&P's and Marcus' are correct or not.

    2. [I should've replied here so I've moved the reply]

      These are excellent candidate requirements which are not at all properties of Turing machines and von Neumann architecture in any real sense. Turing machines don't have variables or anything of the sort, and von Neumann architecture gives you, at best, addresses for memory locations, which isn't quite a variable. Both have read-write memory, but so do so many other very different architectures.


      I think we agree on the point about connectionism -- neurons could be made to do it, but not using the techniques the connectionists promote. I think for them, the counterpoint could be "the general computational architecture of the brain is genetically coded, but linguistic capacity is not". That's plausible, I suppose, but it's a hard stance to have, I think.

  2. This comment has been removed by the author.
