The models occupy a limbo that is biologically flavored but still quite remote from neuroanatomy; no neuroscientist is going to find any easy clues here about how to discover the analogy-making process in the activities of neural networks. There are dynamically connected nodes and spreading activation from node to node, but these are not neurons or simple assemblies thereof. There are broadcast effects that damp or enhance various activities in parallel, but these are not plausibly the discovered roles of neuromodulators diffusing through the interstices of networks. There are functionally defined places where different sorts of things happen, but these are not tied to anatomical locations by any data or even speculations.
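To give a concrete, if deliberately toy-sized, picture of the kind of mechanism gestured at here (nodes with spreading activation, plus a single global quantity that damps or enhances activity in parallel), the following Python sketch may help. It is not Hofstadter's architecture; Copycat's slipnet, workspace, and temperature are far richer. Every name, weight, and update rule below is a hypothetical illustration.

```python
import math

# Illustrative toy only: a handful of "concept" nodes joined by weighted
# links, a parallel spreading-activation step, and a single global quantity
# that could be broadcast to damp or enhance activity everywhere at once.
# All names, weights, and update rules here are hypothetical.

class Node:
    def __init__(self, name, activation=0.0):
        self.name = name
        self.activation = activation
        self.neighbors = []                  # list of (node, link_weight)

    def link(self, other, weight):
        self.neighbors.append((other, weight))
        other.neighbors.append((self, weight))

def spread(nodes, decay=0.1, rate=0.2):
    """One parallel step of spreading activation with decay."""
    incoming = {n: 0.0 for n in nodes}
    for n in nodes:
        for neighbor, weight in n.neighbors:
            incoming[neighbor] += rate * weight * n.activation
    for n in nodes:
        n.activation = min(1.0, (1 - decay) * n.activation + incoming[n])

def diffuseness(nodes):
    """A crude global measure (entropy-like): high when activation is spread
    thin across the network, low when it is concentrated on a few nodes."""
    total = sum(n.activation for n in nodes)
    if total == 0:
        return 1.0
    probs = [n.activation / total for n in nodes if n.activation > 0]
    return -sum(p * math.log(p) for p in probs) / math.log(max(len(nodes), 2))

# A tiny network: one concept gets activated from "outside", and activation
# leaks to its conceptual neighbors over a few parallel steps.
successor, predecessor, opposite = Node("successor"), Node("predecessor"), Node("opposite")
successor.link(opposite, 0.8)
predecessor.link(opposite, 0.8)
successor.activation = 1.0

net = [successor, predecessor, opposite]
for step in range(5):
    spread(net)
    print(step, [round(n.activation, 2) for n in net],
          "diffuseness =", round(diffuseness(net), 2))
```

Even this crude toy shows the flavor: activation concentrated on one concept leaks to its neighbors, and a single broadcast number summarizes how focused or diffuse the whole network is at any moment. Nothing in it is a neuron, a neuromodulator, or an anatomical location, which is just the point made above.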
The models are thus at once mechanical and abstract. By being mechanical, they amount to a "proof of concept" of sorts: yes, it is possible, by some such mechanism, to get very warm, juicy phenomenology to emerge from the joint activities of lots of very clockworky parts. Good news. We always knew in our hearts it was possible, but it is very reassuring to have models that actually deliver such goods. We can even look at the details of the models for very indirect hints about how the phenomena might occur in a whole live brain. Compared to the GOFAI models of yore, Hofstadter's models are much more than halfway to the brain, but there is still a chasm between them and computational neuroscience.
Hofstadter provides useful comparisons along the way with many other projects in the history of AI, from GPS to ACT* and Soar, from Hearsay II and BACON to such current enterprises as Gentner and Forbus's Structure Mapping Engine, and Holyoak and Thagard's ACME. He is devastating in his criticisms, but also generous in his praise—for instance, in his homage to Walter Reitman's Argus. What emerges from these comparisons is, to this reviewer, a tempting but tentative conclusion: Hofstadter may or may not have nailed it, but he has come much closer than any other enterprise in AI to finding the right level at which to conduct further inquiry.
Alan Turing got the idea for the stored-program computer from his own systematic introspection into his mental states when dutifully executing an algorithm, the sort of mental activity that is farthest removed from artistic, creative thought. He saw that the restriction to rigid problem-solving was an inessential feature of the fundamental architecture, and predicted, correctly, the field of AI. We've been loosening the straps, widening the von Neumann bottleneck, ever since. Doug Hofstadter has pioneered the systematic introspection of the other end of the spectrum of mental activities, in a way that could not be done by anyone who lacked his deep understanding of the computational possibilities inherent in Turing's fabulous machine. It is not just that while others have concentrated on problem solving and Scientific Thinking, he has concentrated on daydreaming and Artistic Thinking. He has recognized that even Scientific Thinking is, at its base, analogical. All cognition is analogical. That is where we all must start, with the kid stuff.