Hi Alain
This is quite an interesting concept. I have been working on something similar using Zipfian distributions, which involves rank-ordering signals and so obviously destroys a lot of information. But if you "play out" this process across multiple scales, it is perfectly possible to put a sine wave in and get a bell curve out, so in principle you can use the technique to reconstruct any signal. The connection here is that the pink-noise profile associated with Zipf's law is strongly associated with band-limited signals. In fact, the ubiquity of that kind of distribution likely has something to do with the band-limiting that necessarily occurs with any sampled signal.

Here one can consider what happens inside the brain as a sort of corollary to the Nyquist theorem: even though you only need a finite number of samples to describe a signal with a given maximum frequency component, the number of perspectives on that signal is effectively infinite (short of a universal theory of everything in physics). So the perceptual mechanism plus the signal system, taken as a whole, is necessarily band-limited in that sense. Zipf's law is a kind of projective optimum for that kind of system, and it makes sense that biological systems are under strong evolutionary pressure to arrange themselves in the most efficient way. Obviously it's more complicated than that, but that's a simple way to understand it.
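To make the multi-scale point concrete, here is a minimal numerical sketch (numpy only; the scale spacing, the 1/f amplitude profile and the sample counts are all arbitrary choices of mine, and this is plain superposition rather than my rank-ordering procedure): summing sines across many scales with random phases drives the sample distribution towards a bell curve, which is just the central limit theorem doing the work.

```python
import numpy as np

# Minimal sketch: superpose sinusoids across many scales with random
# phases and a pink-noise-like (1/f power) amplitude profile. The
# pointwise sum tends toward a Gaussian ("bell curve") even though
# every individual input is a pure sine.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 65536, endpoint=False)

n_scales = 64                        # arbitrary; more scales -> closer to Gaussian
signal = np.zeros_like(t)
for k in range(n_scales):
    freq = 2.0 ** (k / 8.0)          # geometrically spaced scales
    amp = freq ** -0.5               # power ~ 1/f, i.e. pink-noise profile
    phase = rng.uniform(0.0, 2.0 * np.pi)
    signal += amp * np.sin(2.0 * np.pi * freq * t + phase)

# Gaussian signature: skewness and excess kurtosis both near zero.
z = (signal - signal.mean()) / signal.std()
print("skewness:", np.mean(z ** 3))
print("excess kurtosis:", np.mean(z ** 4) - 3.0)
```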
The kicker of thinking about minds in this way is that we can dispense with problematic concepts such as the real existence of objects and things as the basic building blocks of cognition. A simple neural network such as that of C. elegans doesn't need to build a representation of the world; that representation builds itself in the gradients across the signals the worm is receiving, and in how those gradients change as it moves through its surroundings. In evolutionary terms it then becomes a comparatively simple matter of selecting for genotypes that optimise sensitivity to the things in the environment that allow the worm to succeed. It gives a nice model of cognition that lets human thought fit into a continuum, because such a system inherently scales: more neurons = more opportunities for discrimination. It also helps to explain other things, such as the plasticity of the brain in contrast to the specialisation of brain regions for specific tasks.
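As a toy illustration of that "the representation builds itself" point, here is a sketch of a gradient-taxis agent in the spirit of C. elegans chemotaxis. The pirouette-style turning rule is a standard simplification from the chemotaxis literature, and the field shape, gains and step sizes are all made up; the point is just that the agent climbs towards the source without ever representing an "object", using only the temporal change of a single sensed signal.

```python
import numpy as np

# Toy gradient-taxis agent: no world model, just a turning bias driven
# by whether a single sensed signal is rising or falling over time.
rng = np.random.default_rng(1)
source = np.array([5.0, 5.0])

def concentration(pos):
    # Illustrative Gaussian "odour" field centred on the source.
    return np.exp(-np.sum((pos - source) ** 2) / 10.0)

pos = np.zeros(2)
heading = 0.0
prev_c = concentration(pos)
for _ in range(2000):
    c = concentration(pos)
    # Pirouette rule: turn sharply when the signal is falling,
    # hold course when it is rising.
    turn_scale = 1.5 if c < prev_c else 0.1
    heading += rng.normal(0.0, turn_scale)
    pos += 0.05 * np.array([np.cos(heading), np.sin(heading)])
    prev_c = c

print("final distance to source:", np.linalg.norm(pos - source))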
It also goes a long way towards making the difference between current-generation AI and human cognition obvious. AI training starts with "pre-coded" targets as a sort of "victory condition". That's not a good analogue for real brains, where there is no real victory condition. One could say that minds are not teleologically constructed in that way: they are not built to see one thing or another, but to discriminate optimally between things. So it's not that language, for example, obeys Zipf's law; it's that our understanding of things tends towards Zipf's law, because that is the optimal way for us to arrange objects in our understanding (or items in our perception).
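If you want to poke at the language side of that, a rank-frequency check takes a few lines; the corpus path below is just a placeholder for any large plain-text file, and Zipf's law predicts a log-log slope of roughly -1.

```python
import re
from collections import Counter
import numpy as np

# Rank-frequency check: word counts in a large text should fall off
# roughly as 1/rank. "corpus.txt" is a placeholder; point it at any
# sizeable plain-text file.
with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = np.array(sorted(Counter(words).values(), reverse=True), float)
ranks = np.arange(1, len(counts) + 1)

# Slope of the log-log rank-frequency line; Zipf predicts about -1.
slope = np.polyfit(np.log(ranks), np.log(counts), 1)[0]
print("log-log slope:", slope)
```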
One thing that I have found in trying to code this sort of thing is that, while it seems quite straightforward in an analogue "mind", it scales poorly in a digital setting. That's simply because a wet and squishy mind has far more states it can respond to than a simple 1/0 electrical impulse. There are chemical, mechanical, thermodynamic and even quantum effects at work in a single neuron, and all of that can be roped into making discriminations about the internal and external states you need to navigate an environment.
In that sense, to bring it back to Logan's theorem, one way of thinking about it may be that each element of a cognitive system could be considered perfectly band-limited within a narrowly defined domain, and it would be possible to follow elements deeper into the system to exactly that extent. Learning can then be understood as refining the domains to which these elements are applied.
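A sketch of what that might look like computationally (the filter bank, band edges and test signal are all arbitrary choices): each "element" is a one-octave bandpass filter, which is the regime where Logan's theorem says the zero crossings essentially determine the signal up to a scale factor (modulo some technical conditions), and each element's output is reduced to just those crossings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Split a test signal into one-octave bands (the regime Logan's theorem
# addresses) and keep only each band's zero crossings as its "output".
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

for low in (250.0, 500.0, 1000.0):
    high = 2.0 * low                 # exactly one octave wide
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, x)
    # Indices where the band-limited signal changes sign.
    crossings = np.flatnonzero(np.signbit(band[:-1]) != np.signbit(band[1:]))
    print(f"{low:.0f}-{high:.0f} Hz band: {crossings.size} zero crossings")
```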
Anyway, long rant, but hopefully you can find something useful there.
Doug