The Philosophy of Computing group is implementing the NCN (National Science Centre) OPUS 19 grant, ref. 2020/37/B/HS1/01809, awarded in November 2020. The project’s PI is Hajo Greif, the co-investigator Paweł Stacewicz, and the post-doc (hired in October 2021) Adam Kubiak. The project is funded by NCN with PLN 767,130 (approx. EUR 170,000) over three years (2021–2023), including a three-year post-doc position. The abstract can be downloaded from NCN here. More details on the project here.
Paweł Jarnicki and Hajo Greif (2022). The ›Aristotle Experience‹ Revisited. Thomas Kuhn meets Ludwik Fleck on the road to Structure. Archiv für Geschichte der Philosophie (online first). DOI: 10.1515/agph-2020-0160
Abstract: This article takes issue with Kuhn’s description of the ‘Aristotle experience,’ an event that took place in 1947 and that he retrospectively characterized as a revelation that instantly delivered to him the key concepts of The Structure of Scientific Revolutions (1962). We trace a certain transformation of this narrative over time: whereas it commenced from a description of his impression of disparity between the textbook image of science and the study of historical sources, Kuhn started to characterize it as a revelation after learning of the English translation of Fleck’s 1935 Entstehung und Entwicklung einer wissenschaftlichen Tatsache. This book anticipates many central Kuhnian claims. Kuhn read it as early as 1949, but never fully acknowledged it as a source of inspiration. We discuss four hypotheses concerning the possible influence of Fleck’s theory on Kuhn’s in light of the available evidence. We conclude that the degree of similarity between them is too great to be coincidental.
No Open Access version is available; please contact the authors for the full text.
Hajo Greif (2022). Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Minds & Machines 32: 111-133. Special Issue “Machine Learning: Prediction Without Explanation?”, edited by Florian Boge, Paul Grünke and Rafaela Hillerbrand. Open Access. DOI: 10.1007/s11023-022-09596-9
Abstract: The problem of epistemic opacity in Artificial Intelligence (AI) is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first contrast computer models and their claims to algorithm-based universality with cybernetics-style analogue models and their claims to structural isomorphism between elements of model and target system (Black 1962). While analogue models aim at perceptually or conceptually accessible model-target relations, computer models give rise to a specific kind of underdetermination in these relations that needs to be addressed in specific ways. I then undertake a comparison between two contemporary AI approaches that, although related, distinctly align with the above modelling paradigms and represent distinct strategies towards model intelligibility: Deep Neural Networks and Predictive Processing. I conclude that their respective degrees of epistemic transparency primarily depend on the underlying purposes of modelling, not on their computational properties.
Acknowledgements: The origin of this paper was a symposium on ‘Deep Learning and the Philosophy of Artificial Intelligence’ co-organised by the author at the GWP 2019 conference of the German Society for Philosophy of Science. I thank Cameron Buckner, Holger Lyre and Carlos Zednik for motivating and moving this project. I also thank the guest editors of this Special Issue for their encouragement and patience, Alessandro Facchini and Alberto Termine for their input in the process of preparing the manuscript, and two anonymous reviewers for their competent and constructive criticisms.
Hajo Greif (2022). Likeness-Making and the Evolution of Cognition. Biology & Philosophy 37 (2022): Article 1 (online). Open Access. https://doi.org/10.1007/s10539-021-09830-1.
Abstract: Paleontological evidence suggests that human artefacts with intentional markings might have originated already in the Lower Paleolithic, up to 500,000 years ago and well before the advent of ‘behavioural modernity’. These markings apparently did not serve instrumental, tool-like functions, nor do they appear to be forms of figurative art. Instead, they display abstract geometric patterns that potentially testify to an emerging ability of symbol use. In a variation on Ian Hacking’s speculative account of the possible role of “likeness-making” in the evolution of human cognition and language, this essay explores the central role that the embodied processes of making and the collective practices of using such artefacts might have played in early human cognitive evolution. Two paradigmatic findings of Lower Paleolithic artefacts are discussed as tentative evidence of likenesses acting as material scaffolds in the emergence of symbolic reference-making. They might provide the link between basic abilities of mimesis and imitation and the development of modern language and thought.
Acknowledgements: The (already temporally distant) foundations for this work were laid in my Austrian Science Fund (FWF) grant J3448-G15. A first version of this paper was presented at the 3rd International Avant-Conference 2017 in Lublin, Poland. I thank the organiser, Marcin Trybulec, for encouragement back then, and Peter Gärdenfors, Anton Killin, Kim Sterelny and Jörg Wernecke for helpful comments and practical suggestions later on.
Hajo Greif (2021). Adaptation and its Analogues: Biological Categories for Biosemantics. Studies in History and Philosophy of Science 90 (2021): 298–307. Open Access. https://doi.org/10.1016/j.shpsa.2021.
Abstract: “Teleosemantic” or “biosemantic” theories form a strong naturalistic programme in the philosophy of mind and language. They seek to explain the nature of mind and language by recourse to a natural history of “proper functions” as selected-for effects of language- and thought-producing mechanisms. However, they remain vague with respect to the nature of the proposed analogy between selected-for effects on the biological level and phenomena that are not strictly biological, such as reproducible linguistic and cultural forms. This essay critically explores various interpretations of this analogy. It suggests that these interpretations can be explicated by contrasting adaptationist with pluralist readings of the evolutionary concept of adaptation. Among the possible interpretations of the relations between biological adaptations and their analogues in language and culture, the two most relevant are a linear, hierarchical, signalling-based model that takes its cues from the evolution of co-operation and joint intentionality and a mutualistic, pluralist model that takes its cues from mimesis and symbolism in the evolution of human communication. Arguing for the merits of the mutualistic model, the present analysis indicates a path towards an evolutionary pluralist version of biosemantics that will align with theories of cognition as being environmentally “scaffolded”. Language and other cultural forms are partly independent reproducible structures that acquire proper functions of their own while being integrated with organism-based cognitive traits in co-evolutionary fashion.
Acknowledgements: This article started as an improvised contribution to the ‘The Future of Teleosemantics’ conference in Bielefeld, Germany, in 2018. I thank the organisers, especially Peter Schulte, for providing the opportunity to present my first ideas – which matured with the help of the critical comments from the participants, especially Ruth Millikan, and two anonymous reviewers.
Paweł Stacewicz and Hajo Greif (2021). “Concepts as Decision Functions. The Issue of Epistemic Opacity of Conceptual Representations in Artificial Computing Systems”. Procedia Computer Science 192, 4120–4127. doi: 10.1016/j.procs.2021.09.187
This is a conference paper at the intersection between computer science and philosophy, with Paweł as lead author, presented at the 25th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, Szczecin, Poland, 8-10 September 2021.
Hajo Greif (2021): “Exploring minds: Modes of modelling and simulation in Artificial Intelligence”. Perspectives on Science 29, 4: 409–435. Special Issue on Exploratory Models and Exploratory Modelling in Science, Guest Editors: Axel Gelfert, Grant Fisher, Friedrich Steinle. doi:10.1162/posc_a_00377.
A self-archived manuscript version (not identical with the published version) is available free of charge at PhilSci Archive.
Abstract: The aim of this paper is to grasp the relevant distinctions between various ways in which models and simulations in Artificial Intelligence (AI) relate to cognitive phenomena. In order to get a systematic picture, a taxonomy is developed that is based on the coordinates of formal versus material analogies and theory-guided versus pre-theoretic models in science. These distinctions have parallels in the computational versus mimetic aspects and in analytic versus exploratory types of computer simulation. The proposed taxonomy cuts across the traditional dichotomies between symbolic and embodied AI, general intelligence and cognitive simulation, and human/non-human-like AI.
According to the taxonomy proposed here, one can distinguish between four distinct general approaches that figured prominently in early and classical AI, and that have partly developed into distinct research programs: first, phenomenal simulations (e.g., Turing’s “imitation game”); second, simulations that explore general-level formal isomorphisms in pursuit of a general theory of intelligence (e.g., logic-based AI); third, simulations as exploratory material models that serve to develop theoretical accounts of cognitive processes (e.g., Marr’s stages of visual processing and classical connectionism); and fourth, simulations as strictly formal models of a theory of computation that postulates cognitive processes to be isomorphic with computational processes (strong symbolic AI).
In continuation of pragmatic views of the modes of modeling and simulating world affairs, this taxonomy of approaches to modeling in AI helps to elucidate how available computational concepts and simulational resources contribute to the modes of representation and theory development in AI research—and what made that research program uniquely dependent on them.
The international and interdisciplinary online lecture series »Thinking Machines: History, Present and Future of Artificial Intelligence« in Summer Term 2021 was jointly hosted by the Research Institute for the History of Science and Technology, Deutsches Museum, the European New School of Digital Studies, and the Philosophy of Computing group, ICFO. Speakers included: Pamela McCorduck, Stephanie Dick, Shannon Vallor, Harry Collins, Wolfgang Bibel, Vincent Müller, Virginia Dignum, Kristian Kersting.
More details and full programme on the official website of the series.