Systematicity refers to the apparent structured nature of cognitive abilities: if a cognizer can entertain one thought, it can typically entertain others that are structurally related.
For example, if it can infer P from P&Q, it can also infer P from more complex conjunctions like P&R&Q. Likewise, if it understands that a loves b, it can typically also understand that b loves a, reflecting a sensitivity to structural transformations.
If a cognizer understands John loves Mary, it also understands Mary loves John. Similarly, if a cognizer can perceive a red triangle on top of a blue square, it can also perceive a blue square on top of a red triangle.
Systematicity is an important concept in the history of the philosophy of cognitive science. Most famously, Fodor and Pylyshyn used it in the 1980s to challenge connectionism (a strand in cognitive science).
But how do questions of systematicity relate to AI?
The original discussion in the philosophy of cognitive science was about which cognitive architecture best fit the data: classicism or connectionism. In the context of AI, where architectures are not put forward as explanans, it can be harder to see why systematicity matters. Yet I think questions of systematicity remain highly relevant, and it will become apparent why shortly.
The Explanandum
What exactly were classicist or connectionist architectures supposed to explain? Or, more generally, what is systematicity?
Systematicity appears to be a core cognitive capacity.
Performance vs Competence
Systematicity is about competence, not performance. The question is not whether a cognizer has actually entertained a given pair of structurally related thoughts, but whether it has the capacity to do so.
The Explanans
One explanation of the systematicity of cognition is the classical one: mental representations have a combinatorial syntax and semantics, so any system that can construct one complex representation thereby has the resources to construct its structural variants.
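The classicist idea can be sketched in code. This is a hypothetical illustration of my own (the `think`, `RELATIONS`, and `NAMES` names are not from the text): because one combinatorial rule composes thoughts from reusable constituents, a system that can build one thought automatically has the capacity to build its structural variants.

```python
# A minimal sketch of a compositional "language of thought" (illustrative
# assumption, not a claim about any real architecture).
from itertools import permutations

RELATIONS = {"loves"}
NAMES = {"John", "Mary"}

def think(relation, agent, patient):
    """Compose a structured thought from atomic constituents."""
    assert relation in RELATIONS and {agent, patient} <= NAMES
    return (relation, agent, patient)

# If the system can entertain one thought...
t1 = think("loves", "John", "Mary")

# ...the same compositional rule yields its structural variants "for free":
# systematicity falls out of the architecture rather than being stipulated
# thought by thought.
variants = {think("loves", a, b) for a, b in permutations(NAMES, 2)}
assert ("loves", "Mary", "John") in variants
```

The point of the sketch is that nothing extra had to be added to get the variant thought; the capacity for it is a side effect of the combinatorial rule, which is exactly what the classicist explanation predicts.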
🔸