NCN Grant »Turing, Ashby, and ›the Action of the Brain‹«

The Philosophy of Computing group is implementing NCN (National Science Centre) OPUS 19 grant ref. 2020/37/B/HS1/01809, awarded in November 2020. The project’s PI is Hajo Greif, the Co-investigator is Paweł Stacewicz, and the post-doc (hired October 2021) is Adam Kubiak. The project is funded by NCN with PLN 767,130.– (approx. EUR 170,000.–) for three years (2021-2023) and includes a three-year postdoc position. The abstract can be downloaded from NCN here. More details on the project can be found here.

HaPoC-7 in Warsaw – submissions open

Submissions are now open for the 7th edition of the biennial History and Philosophy of Computing conference (HaPoC-7), to be hosted by the Faculty of Administration and Social Sciences at Warsaw University of Technology from 18 to 20 October 2023.

Important dates:
– Submission deadline: April 30, 2023
– Notification of acceptance/rejection: June 30, 2023
– Conference: October 18-20, 2023

Conference website:
Submission website:
HaPoC website (with links to past conferences):

Out now: »Turing’s Biological Philosophy« in Philosophies SI on »Turing the Philosopher«

Hajo Greif, Adam Kubiak, and Paweł Stacewicz (2023). Turing’s Biological Philosophy. Morphogenesis, Mechanisms and Organicism. Philosophies 8 (2023), Article 8. Special Issue “Turing the Philosopher: Established Debates and New Developments”, edited by Diane Proudfoot and Zhao Fan. Open Access. DOI: 10.3390/philosophies8010008

Abstract: Alan M. Turing’s last published work and some posthumously published manuscripts were dedicated to the development of his theory of morphogenesis. In The Chemical Basis of Morphogenesis (1952), he provided an elaborated mathematical formulation of the theory of the origins of biological form that had been first proposed by Sir D’Arcy Wentworth Thompson in On Growth and Form (1917/1942). While being his mathematically most detailed and systematically most ambitious effort, Turing’s morphogenetic writings also form the thematically most self-contained and philosophically least explored part of his work. We dedicate our inquiry to the reasons and the implications of Turing’s choice of biological topic and viewpoint. Thompson’s pioneering work in biological ‘structuralism’ was organicist in outlook and explicitly critical of the Darwinian approaches that were popular with Turing’s cyberneticist contemporaries – and partly used by Turing himself in his proto-connectionist models of learning. In particular, we will probe for possible factors in Turing’s choice that go beyond availability and acquaintance with Thompson’s approach, especially his quest for mechanistic, non-teleological explanations of how organisation emerges in nature that nonetheless leave room for a non-mechanistic view of nature.

Acknowledgement: The research presented in this publication was supported by NCN (National Science Centre) OPUS 19 grant ref. 2020/37/B/HS1/01809.

Out now: »Models, Algorithms, and the Subjects of Transparency« in PT-AI 2021

Hajo Greif (2022). Models, algorithms, and the subjects of transparency. In Philosophy and Theory of Artificial Intelligence 2021, V. C. Müller, ed., Springer, Cham, 2022, pp. 27–37. DOI: 10.1007/978-3-031-09153-7_3

Abstract: Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms ‘transparency’ and ‘opacity’ are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic situation of human agents with respect to these systems. While these diagnoses are independently discussed in the literature, juxtaposing them and exploring possible interrelations will help to get a view of the relevant distinctions between conceptions of opacity and their empirical bearing. In pursuit of this aim, two pertinent conditions affecting computer models in general and contemporary AI in particular are outlined and discussed: opacity as a problem of computational tractability and opacity as a problem of the universality of the computational method.

Acknowledgement: The research presented in this publication was supported by NCN (National Science Centre) OPUS 19 grant ref. 2020/37/B/HS1/01809.

IACAP board

In autumn 2022, Hajo Greif was elected as a member of the new board of IACAP, the International Association for Computing and Philosophy. The association’s board now includes Steve McKinlay (Wellington Institute of Technology, President), Björn Lundgren (Utrecht University, Vice President), Ramón Alvarado (University of Oregon, Treasurer), Ahmed Amer (Santa Clara University), Brian Ballsun-Stanton (Macquarie University), Arzu Formánek (University of Vienna) and Thomas M. Powers (University of Delaware).

Out now: »The ›Aristotle Experience‹ Revisited« in Archiv für Geschichte der Philosophie

Paweł Jarnicki and Hajo Greif (2022). The ›Aristotle Experience‹ Revisited. Thomas Kuhn meets Ludwik Fleck on the road to Structure. Archiv für Geschichte der Philosophie (online first). DOI: 10.1515/agph-2020-0160

Abstract: This article takes issue with Kuhn’s description of the ‘Aristotle experience,’ an event that took place in 1947 and that he retrospectively characterized as a revelation that instantly delivered to him the key concepts of The Structure of Scientific Revolutions (1962). We trace a certain transformation of this narrative over time: whereas it commenced from a description of his impression of disparity between the textbook image of science and the study of historical sources, Kuhn started to characterize it as a revelation after learning of the English translation of Fleck’s 1935 Entstehung und Entwicklung einer wissenschaftlichen Tatsache. This book anticipates many central Kuhnian claims. Kuhn read it as early as 1949, but never fully acknowledged it as a source of inspiration. We discuss four hypotheses concerning the possible influence of Fleck’s theory on Kuhn’s in light of the available evidence. We conclude that the degree of similarity between them is too great to be coincidental.

No Open Access available, please contact authors for full text.

Out now: »Analogue Models and Universal Machines« in Minds & Machines

Hajo Greif (2022). Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Minds & Machines 32: 111–133. Special Issue “Machine Learning: Prediction Without Explanation?”, edited by Florian Boge, Paul Grünke and Rafaela Hillerbrand. Open Access. DOI: 10.1007/s11023-022-09596-9

Abstract: The problem of epistemic opacity in Artificial Intelligence (AI) is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first contrast computer models and their claims to algorithm-based universality with cybernetics-style analogue models and their claims to structural isomorphism between elements of model and target system (Black 1962). While analogue models aim at perceptually or conceptually accessible model-target relations, computer models give rise to a specific kind of underdetermination in these relations that needs to be addressed in specific ways. I then undertake a comparison between two contemporary AI approaches that, although related, distinctly align with the above modelling paradigms and represent distinct strategies towards model intelligibility: Deep Neural Networks and Predictive Processing. I conclude that their respective degrees of epistemic transparency primarily depend on the underlying purposes of modelling, not on their computational properties.

Acknowledgements: The research presented in this publication was supported by NCN (National Science Centre) OPUS 19 grant ref. 2020/37/B/HS1/01809. The origin of this paper was a symposium on ‘Deep Learning and the Philosophy of Artificial Intelligence’ co-organised by the author at the GWP 2019 conference of the German Society for Philosophy of Science. I thank Cameron Buckner, Holger Lyre and Carlos Zednik for motivating and moving this project. I also thank the guest editors of this Special Issue for their encouragement and patience, Alessandro Facchini and Alberto Termine for their input in the process of preparing the manuscript, and two anonymous reviewers for their competent and constructive criticisms.

Out now: »Likeness-Making and the Evolution of Cognition« in Biology & Philosophy

Hajo Greif (2022). Likeness-Making and the Evolution of Cognition. Biology & Philosophy 37 (2022): Article 1 (online). Open Access.

Abstract: Paleontological evidence suggests that human artefacts with intentional markings might have originated already in the Lower Paleolithic, up to 500,000 years ago and well before the advent of ‘behavioural modernity’. These markings apparently did not serve instrumental, tool-like functions, nor do they appear to be forms of figurative art. Instead, they display abstract geometric patterns that potentially testify to an emerging ability of symbol use. In a variation on Ian Hacking’s speculative account of the possible role of “likeness-making” in the evolution of human cognition and language, this essay explores the central role that the embodied processes of making and the collective practices of using such artefacts might have played in early human cognitive evolution. Two paradigmatic findings of Lower Paleolithic artefacts are discussed as tentative evidence of likenesses acting as material scaffolds in the emergence of symbolic reference-making. They might provide the link between basic abilities of mimesis and imitation and the development of modern language and thought.

Acknowledgements: The (already temporally distant) foundations for this work were laid in my Austrian Science Fund (FWF) grant J3448-G15. A first version of this paper was presented at the 3rd International Avant-Conference 2017 in Lublin, Poland. I thank the organiser, Marcin Trybulec, for encouragement back then, and Peter Gärdenfors, Anton Killin, Kim Sterelny and Jörg Wernecke for helpful comments and practical suggestions later on.

Out now: »Adaptation and its Analogues« in SHPS

Hajo Greif (2021). Adaptation and its Analogues: Biological Categories for Biosemantics. Studies in History and Philosophy of Science 90 (2021): 298–307. Open Access.

Abstract: “Teleosemantic” or “biosemantic” theories form a strong naturalistic programme in the philosophy of mind and language. They seek to explain the nature of mind and language by recourse to a natural history of “proper functions” as selected-for effects of language- and thought-producing mechanisms. However, they remain vague with respect to the nature of the proposed analogy between selected-for effects on the biological level and phenomena that are not strictly biological, such as reproducible linguistic and cultural forms. This essay critically explores various interpretations of this analogy. It suggests that these interpretations can be explicated by contrasting adaptationist with pluralist readings of the evolutionary concept of adaptation. Among the possible interpretations of the relations between biological adaptations and their analogues in language and culture, the two most relevant are a linear, hierarchical, signalling-based model that takes its cues from the evolution of co-operation and joint intentionality and a mutualistic, pluralist model that takes its cues from mimesis and symbolism in the evolution of human communication. Arguing for the merits of the mutualistic model, the present analysis indicates a path towards an evolutionary pluralist version of biosemantics that will align with theories of cognition as being environmentally “scaffolded”. Language and other cultural forms are partly independent reproducible structures that acquire proper functions of their own while being integrated with organism-based cognitive traits in co-evolutionary fashion.

Acknowledgements: This article started as an improvised contribution to the ‘The Future of Teleosemantics’ conference in Bielefeld, Germany, in 2018. I thank the organisers, especially Peter Schulte, for providing the opportunity to present my first ideas – which matured with the help of the critical comments from the participants, especially Ruth Millikan, and two anonymous reviewers.

Out now: »Concepts as Decision Functions« in Procedia Computer Science

Paweł Stacewicz and Hajo Greif (2021). “Concepts as Decision Functions. The Issue of Epistemic Opacity of Conceptual Representations in Artificial Computing Systems”. Procedia Computer Science 192, 4120–4127. DOI: 10.1016/j.procs.2021.09.187

This is a conference paper at the intersection between computer science and philosophy, with Paweł as lead author, presented at the 25th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, Szczecin, Poland, 8-10 September 2021.

Acknowledgement: The research presented in this publication was supported by NCN (National Science Centre) OPUS 19 grant ref. 2020/37/B/HS1/01809.

Out now: »Exploring Minds« in Perspectives on Science

Hajo Greif (2021): “Exploring minds: Modes of modelling and simulation in Artificial Intelligence”. Perspectives on Science 29, 4: 409–435. Special Issue on Exploratory Models and Exploratory Modelling in Science, Guest Editors: Axel Gelfert, Grant Fisher, Friedrich Steinle. DOI: 10.1162/posc_a_00377

A self-archived manuscript version (not identical with the published version) is available free of charge at PhilSci Archive.

Abstract: The aim of this paper is to grasp the relevant distinctions between various ways in which models and simulations in Artificial Intelligence (AI) relate to cognitive phenomena. In order to get a systematic picture, a taxonomy is developed that is based on the coordinates of formal versus material analogies and theory-guided versus pre-theoretic models in science. These distinctions have parallels in the computational versus mimetic aspects and in analytic versus exploratory types of computer simulation. The proposed taxonomy cuts across the traditional dichotomies between symbolic and embodied AI, general intelligence and cognitive simulation, and human-like and non-human-like AI.
According to the taxonomy proposed here, one can distinguish between four distinct general approaches that figured prominently in early and classical AI, and that have partly developed into distinct research programs: first, phenomenal simulations (e.g., Turing’s “imitation game”); second, simulations that explore general-level formal isomorphisms in pursuit of a general theory of intelligence (e.g., logic-based AI); third, simulations as exploratory material models that serve to develop theoretical accounts of cognitive processes (e.g., Marr’s stages of visual processing and classical connectionism); and fourth, simulations as strictly formal models of a theory of computation that postulates cognitive processes to be isomorphic with computational processes (strong symbolic AI).
In continuation of pragmatic views of the modes of modeling and simulating world affairs, this taxonomy of approaches to modeling in AI helps to elucidate how available computational concepts and simulational resources contribute to the modes of representation and theory development in AI research—and what made that research program uniquely dependent on them.