
commented on Context Changes Everything by Alicia Juarrero

Alicia Juarrero: Context Changes Everything (2023, The MIT Press), 1 star

#JuarreroBook Ch. 7 “Catalysts, Loops, and Closure” Part I

This chapter (finally) moves the book to more interesting examples: systems and processes exhibiting “self-organizing self-cause” and richer mereological (part-whole) relationships.

The example is catalysts and the way they function as context-dependent constraints (reminder: the constraints that create dependencies).

“Catalysts speed up chemical reactions by lowering barriers to energy flow and thereby facilitating irreversible interactions without being consumed themselves.”

As such, they illustrate the general property of context-dependent constraints, whereby they “weave together interlocking dependencies without directly injecting energy”

nb “Folding-back-on-themselves processes such as feedforward and feedback loops are also catalysts. Iteration and recursion are two such examples”

“In recursive iteration, full sequences are fed back on themselves. This looping causes processes and sequences to become self-referential; recursive iteration blurs the distinction between parts and wholes.”

“Iteration and recursion feed information from the context back into the next sequence as newly initialized conditions and constraints. Such looped and contextually constraining and constrained interactions effectively import spatial and temporal information about the world into those processes and their properties. As a result, the processes become interdependent and covary with events in the world.”

One example J. lists is backpropagation and the way weight modification in neural networks leads the system to attune to meaningful real-world distinctions
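To make that concrete, here is a toy sketch (my own, not from the book): a single linear neuron trained by gradient descent. The weight-update rule below is the core step of backpropagation, and repeated exposure to data makes the weight attune to a regularity in the inputs (here the assumed regularity is y = 2x).

```python
# Toy backprop-style learning: one linear neuron, squared-error loss.
# Repeated weight updates make w converge on the environmental
# regularity hidden in the data (y = 2 * x).
def train(pairs, lr=0.1, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x
            # gradient of (pred - y)**2 with respect to w
            w -= lr * 2 * (pred - y) * x
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 3))  # converges toward 2.0
```

The point of the sketch is only the loop structure: the system's parameters come to covary with a statistical regularity in the world, which is the sense of "attunement" at issue.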

“It is important to note that recursion and iteration are possible only after temporal dependencies (straightforward sequences) have already formed in response to enabling, context-dependent constraints. That is, recursion and iteration are not possible without previously constrained ordinal relations. That said, however, when the last step of a sequence feeds back to become the first in the next iteration, the looping creates self-referential configurations and nonlinearities. Nonlinearities generate multiscale and multidimensional interdependencies.”
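A minimal illustration of that last point (my example, not Juarrero's): in the logistic map, the output of each step is fed back as the input of the next — exactly the "last step becomes the first of the next iteration" structure — and the nonlinear loop generates qualitatively different regimes (fixed points, cycles, chaos) depending on a single parameter.

```python
# The logistic map: each output is looped back as the next input.
# The update rule is nonlinear, so the feedback loop can settle,
# oscillate, or go chaotic depending on r.
def iterate(r, x0, n):
    x = x0
    traj = []
    for _ in range(n):
        x = r * x * (1 - x)  # output fed back as next input
        traj.append(x)
    return traj

# At r = 2.5 the loop settles to a stable fixed point, x = 1 - 1/r:
print(round(iterate(2.5, 0.2, 200)[-1], 3))  # 0.6
```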

Iteration and recursion are “hybrid constraints”: “Both take systems farther from equilibrium … so … qualify as context-independent constraints. But by feeding real world information back into the process, iteration and recursion also weave context, history, and the subject’s own actions into a more encompassing coordination dynamic—the spatiotemporally more extended interdependencies of a new context. In this role, … they … function as context-dependent constraints. Once recursive or iterative loops close thanks to integration by enabling constraints, real-world spatiotemporal information becomes embodied in a qualitatively distinct set of interlocking relations with novel properties”

one example discussed later in the chapter is the Plaut & Shallice (1991), Hinton & Shallice (1991) connectionist model of deep dyslexia

replied to uh's status

@uh @dcm @SylviaFysica

one thing that struck me is that recursion potentially seems like one answer to the question we struggled with in GoL, namely 'how do you get higher-level regularities or patterns to be genuinely causally efficacious?' given there is 'no space', no gap, in the low-level deterministic rules. Recursion implemented through something like a context vector in a connectionist network could easily be added to agents in an ABM, and it makes behaviour non-Markovian, opening up that space (??)
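A minimal sketch of what I mean (hypothetical, hand-rolled, assuming a simple exponential-decay context rather than a trained network): an ABM-style agent that carries a 'context vector' summarising its past observations. Two such agents given the identical current observation can act differently because their histories differ, so behaviour is non-Markovian with respect to the observable input.

```python
# An ABM-style agent with a recurrent "context vector": an
# exponentially decayed trace of past observations. Its action
# depends on history, not just the current input.
class ContextAgent:
    def __init__(self, decay=0.5):
        self.decay = decay
        self.context = 0.0  # state carried between steps

    def step(self, obs):
        # fold the new observation into the running context
        self.context = self.decay * self.context + (1 - self.decay) * obs
        # the action depends on context, i.e. on the whole input history
        return 1 if self.context > 0.5 else 0

a, b = ContextAgent(), ContextAgent()
for obs in [1, 1, 1]:
    a.step(obs)
for obs in [0, 0, 0]:
    b.step(obs)
# Identical current observation, different histories -> different actions:
print(a.step(1), b.step(1))  # prints: 1 0
```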

@UlrikeHahn @uh @SylviaFysica wouldn't the context vector itself then be what is causally efficacious?

Just finished reading the chapter, and for a book on ontology, there's very little discussion of ontology. At some point, J. claims that possibility spaces are real. But how? What about the possibilities that never happen? In what sense are they real? No detailed argument (in fact, no argument at all) is provided, so these remain undefended statements.

@UlrikeHahn @uh @SylviaFysica At other points, discussing closure, J. says that homeostasis, etc., provide epistemic simplifications that capture something real in the world. Later she says that the embedding context is nothing other than the reactions that make it up. These seem like rather uncontroversial, mainstream views in current philosophy of science, but they don't quite fit the stated aims of the book.

@dcm @uh @SylviaFysica

Dimitri, my thinking on recursion and the context vector was this: yes, it's the context vector input that is causally efficacious in the first instance. But I think just focussing on that might miss what the context vector does: in the context of artificial neural networks, it makes the system sensitive to a whole new dimension, time, and to the statistical properties of the environment inherent in the time dimension.

1/2

replied to Ulrike Hahn's status

@dcm @uh @SylviaFysica

2/2 this also means that "patterns" have a whole new route to potential efficacy/realism.

Again, it's not particularly well-articulated in the book, but I think there might genuinely be something there.

Imagine a NN that, due to context vectors, becomes sensitive to the possibility of long range dependencies in language.

That is starting to feel like expanding the 'rule set' in some way, and making patterns more real.
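To sketch the long-range-dependency point (a toy illustration of Elman-style context units, mine and untrained, not code from the book): a hidden state carried across timesteps lets the response to the current token depend on a token seen arbitrarily far back, which no memoryless mapping of the current token alone can reproduce.

```python
# A context unit carried across timesteps. Here it simply latches
# whether a "trigger" token has ever appeared, so the output at the
# final token depends on the distant past of the sequence.
def run(tokens):
    h = 0.0  # context unit, fed back into itself each step
    outputs = []
    for t in tokens:
        x = 1.0 if t == "trigger" else 0.0
        h = max(h, x)      # recurrent update
        outputs.append(h)  # output depends on the whole history
    return outputs

# The final "probe" token is identical in both sequences; only the
# distant past differs, yet the outputs at that token differ:
print(run(["trigger", "a", "a", "probe"])[-1])  # 1.0
print(run(["b", "a", "a", "probe"])[-1])        # 0.0
```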

@dcm @UlrikeHahn @uh @SylviaFysica this was actually one of the points in my slide - the quantum level gives us probability, but human understanding contains potentiality and actuality (the universe sees this much more simply, at a dualistic level).

In that sense, yes, it's a real space where thought energy converts to something physical, even if that's electrons stored on an SSD in the cloud - the other potentials still exist, but only as probabilities (either gaining or diminishing)

@dcm @UlrikeHahn @uh @SylviaFysica A dream is a dream until you write it down - then it's the foundation of an idea that may one day be an actionable plan that might make a causal difference. If you don't write it down, it's still just potential - but as we know from the past, sometimes potentials appear in multiple places (e.g. 2 or 3 people inventing something at the same time - but the one who made it a plan got the glory)