The first 100 pages set the foundation for the language of thinking about how repeating patterns (triangles, squares, n-gons, etc.) work, and then you start getting into brain-bending tessellations. Not a lot of fluff; it's just straight into: ok, if you reflect, then rotate, then reflect again, all around a center point, you get a pattern that looks like this: and a couple of very clean example images.
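If it helps to see that in symbols: below is a tiny sketch (mine, not the book's, and assuming everything happens about the origin) of how "reflect, then rotate, then reflect" composes as plain 2x2 matrices. The composite is itself a symmetry of the pattern, in this case a rotation the other way.

```python
import numpy as np

def rotation(theta):
    """2x2 matrix rotating the plane by theta radians about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def reflection(phi):
    """2x2 matrix reflecting across the line through the origin at angle phi."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[c, s], [s, -c]])

# Reflect, then rotate, then reflect again, all about the same center:
# the composition is another symmetry operation -- here a rotation by
# -60 degrees, i.e. the inverse of the original rotation.
F = reflection(0.0)            # reflect across the x-axis
R = rotation(np.deg2rad(60))   # rotate by 60 degrees
composed = F @ R @ F           # matrices compose right-to-left

assert np.allclose(composed, rotation(np.deg2rad(-60)))
```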
Great for designers, artists, tile-layers, and those into #MCEscher & #OpticalIllusions & #fractals but with a logic/systems/mathematics bent.
AI models fed AI-generated data quickly spew nonsense
Researchers gave successive versions of a large language model information produced by previous generations of the AI — and observed rapid collapse.
Not to dunk on this research, which I think is interesting and important, but if you've ever explored iterated function systems, discrete dynamical systems, fractals, or the like, this is a wholly unsurprising observation. The general pattern is that repeatedly iterating a function on a given input makes the result drift away from that input and start taking on qualities of the function itself.
For instance, watch some of the videos on this page: https://www.algorithm-archive.org/contents/barnsley/barnsley.html . In one set, you'll see a square of randomly placed dots being squished down into various shapes. In another, you'll see the Barnsley fern itself run through the same functions and squished down to roughly the same shapes. This is a general fixed-point result for this (and any contractive affine) system: every non-empty input set of points gets squished into the same shapes, and precisely the same fern image emerges no matter what input you start with, provided you iterate the process often enough (by iterate I mean feeding the output of the functions back in as input, as in the linked paper). It's an instance of the Banach fixed-point theorem applied to the Hausdorff metric on images: any contractive self-map of a complete metric space has a unique fixed point, and iterating the map from any starting point converges to it. In this case the unique fixed point is the fern image; the map being iterated is a bit involved but is detailed on that linked page about the fern. Crucially, the theorem says the fixed point depends only on the self-map, not on the input you feed it.
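To make the "independent of the input" point concrete, here's a rough Python sketch of that squishing iteration. It's my own illustration, not code from the article or the linked page; the map coefficients are the standard published Barnsley fern ones, and the subsampling budget is just a practical hack to keep the point count from quadrupling forever.

```python
import random

# The four affine maps of the Barnsley fern (standard published
# coefficients): each map sends (x, y) -> (a*x + b*y + e, c*x + d*y + f).
FERN_MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60),  # successively smaller fronds
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60),  # largest left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44),  # largest right leaflet
]

def hutchinson(points):
    """One 'squish' step: apply every map to every point and pool the
    results. Iterating this set-to-set map is what the videos show."""
    out = []
    for x, y in points:
        for a, b, c, d, e, f in FERN_MAPS:
            out.append((a * x + b * y + e, c * x + d * y + f))
    return out

def iterate(points, steps=10, budget=50_000):
    """Feed the output back in as input `steps` times, subsampling to
    keep the point count manageable (the set quadruples each step)."""
    for _ in range(steps):
        points = hutchinson(points)
        if len(points) > budget:
            points = random.sample(points, budget)
    return points

# Start from two very different inputs: a square of random dots, and a
# single arbitrary point far from the fern. After enough iterations both
# clouds trace out (approximately) the same fern-shaped attractor;
# scatter-plot either one to see it.
square = [(random.uniform(-3, 3), random.uniform(0, 10)) for _ in range(200)]
lone_point = [(250.0, -99.0)]
fern_from_square = iterate(square)
fern_from_point = iterate(lone_point)
```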
Naturally, #GenerativeAI training and input-output procedures are considerably more complicated than affine functions, but the same class of fixed-point phenomena is almost surely at play, especially for the image-generating models. Personally, I'd find it surprising and interesting if there weren't fixed-point theorems like this for #GenerativeAI systems trained on their own outputs.
"Citrate synthase from S. elongatus has the peculiar capacity to self-assemble into a type of #fractal shape known as a #Sierpiński triangle. This is not a universal feature of citrate synthases, there's something unique about the one from this #cyanobacterium."