Category Theorists in AI

Azimuth 2025-02-17

Applied category theorists are flocking to AI, because that’s where the money is. I avoid working on it, both because I have an instinctive dislike of ‘hot topics’, and because at present AI is mainly being used to make rich and powerful people richer and more powerful.

However, I like to pay some attention to how category theorists are getting jobs connected to AI, and what they’re doing. Many of these people are my friends, so I wonder what they will do for AI, and the world at large—and what working on AI will do to them.

Let me list a bit of what’s going on. I’ll start with a cautionary tale, and then turn to the most important program linking AI and category theory today.

Symbolica

When Musk and his AI head Andrej Karpathy didn’t listen to their engineer George Morgan’s worry that current techniques in deep learning couldn’t “scale to infinity and solve all problems,” Morgan left Tesla and started a company called Symbolica to work on symbolic reasoning. The billionaire Vinod Khosla gave him $2 million in seed money. Morgan began with an approach based on hypergraphs, but then these researchers wrote a position paper that pushed the company in a different direction:

Bruno Gavranović, Paul Lessard, Andrew Dudzik, Tamara von Glehn, João G. M. Araújo and Petar Veličković, Position: Categorical Deep Learning is an Algebraic Theory of All Architectures

Khosla liked the new direction and invested $30 million more in Symbolica. At Gavranović and Lessard’s suggestion, Morgan hired category theorists including Dominic Verity and Neil Ghani.

But Morgan was never fully sold on category theory: he still wanted to pursue his hypergraph approach. After a while, continued disagreements between Morgan and the category theorists took their toll. He fired some, even having one summarily removed from his office. Another resigned voluntarily. Due to nondisclosure agreements, these people no longer talk publicly about what went down.

So one moral for category theorists, or indeed anyone with a good idea: after your idea helps someone get a lot of money, they may be able to fire you.

ARIA

David Dalrymple is running a £59 million program on Mathematics for Safeguarded AI at the UK agency ARIA (the Advanced Research + Invention Agency). He is very interested in using category theory for this purpose. Here are some category theory projects getting money from this program. I’m just quoting the website here.

Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes

Fabio Zanasi, University College London

This project aims to provide complete axiomatic theories of string diagrams for significant categories of probabilistic processes. Fabio will then use these theories to develop compositional methods of analysis for different kinds of probabilistic graphical models.
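
To give a rough feel for what this means, here is a toy Haskell sketch of my own, not Fabio’s axiomatisation: string diagrams for probabilistic processes record how Markov kernels compose in sequence and in parallel, and for finite distributions both compositions can be written down directly. All the names here (seqK, parK, coin) are just illustrative.

```haskell
-- A finite probability distribution, and a Markov kernel between finite types.
type Dist a = [(a, Double)]
type Kernel a b = a -> Dist b

-- Sequential composition: plug the output wire of f into the input of g.
seqK :: Kernel a b -> Kernel b c -> Kernel a c
seqK f g x = [ (z, p * q) | (y, p) <- f x, (z, q) <- g y ]

-- Parallel (monoidal) composition: run two kernels side by side on a pair.
parK :: Kernel a b -> Kernel c d -> Kernel (a, c) (b, d)
parK f g (x, y) = [ ((v, w), p * q) | (v, p) <- f x, (w, q) <- g y ]

-- A fair coin, viewed as a kernel out of the one-point type.
coin :: Kernel () Bool
coin () = [(True, 0.5), (False, 0.5)]

-- "Toss two coins in parallel, then AND the results."
main :: IO ()
main = print (seqK (parK coin coin) (\(a, b) -> [(a && b, 1.0)]) ((), ()))
```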

Safety: Core Representation Underlying Safeguarded AI

Ohad Kammar, University of Edinburgh
Team: Justus Matthiesen; Jesse Sigal, University of Edinburgh

This project looks to design a calculus that utilises the semantic structure of quasi-Borel spaces, introduced in ‘A convenient category for higher-order probability theory’. Ohad and his team will develop the internal language of quasi-Borel spaces as a “semantic universe” for stochastic processes, define syntax that is amenable to type-checking and versioning, and interface with other teams in the programme—either to embed other formalisms as sub-calculi in quasi-Borel spaces, or vice versa (e.g. for imprecise probability).
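
Again, just to convey the flavour, here is a toy illustration of my own, not the quasi-Borel construction: a calculus of the sort described would give a measure-theoretically sound meaning to typed, higher-order probabilistic programs like the one below, which I’ve written with a naive finite-distribution monad.

```haskell
-- A naive weighted-distribution monad, standing in for a real semantics.
newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1.0)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

-- A biased coin as a primitive distribution.
bernoulli :: Double -> Dist Bool
bernoulli p = Dist [(True, p), (False, 1 - p)]

-- A higher-order probabilistic program: it takes a distribution as input.
addTwo :: Dist Bool -> Dist Int
addTwo coin = do
  a <- coin
  b <- coin
  pure (fromEnum a + fromEnum b)

main :: IO ()
main = print (runDist (addTwo (bernoulli 0.5)))
```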

Philosophical Applied Category Theory

David Corfield, Independent Researcher

This project plans to overcome the limitations of traditional philosophical formalisms by integrating interdisciplinary knowledge through applied category theory. In collaboration with other TA1.1 Creators, David will explore graded modal logic, type theory, and causality, and look to develop the conceptual tools to support the broader Safeguarded AI programme.

Double Categorical Systems Theory for Safeguarded AI

David Jaz Myers, Topos Research UK
Team: Owen Lynch; Sophie Libkind; David Spivak, Topos Research UK; James Fairbanks, University of Florida

This project aims to utilise Double Categorical Systems Theory (DCST) as a mathematical framework to facilitate collaboration between stakeholders, domain experts, and computer aides in co-designing an explainable and auditable model of an autonomous AI system’s deployment environment. David + team will expand this modelling framework to incorporate formal verification of the system’s safety and reliability, study the verification of model-surrogacy of hybrid discrete-continuous systems, and develop serialisable data formats for representing and operating on models, all with the goal of preparing the DCST framework for broader adoption across the Safeguarded AI Programme.

Monoidal Coalgebraic Metrics

Filippo Bonchi, University of Pisa

Filippo and team intend to establish a robust mathematical framework that extends beyond the metrics expressible in quantitative algebraic theories and coalgebras over metric spaces. By shifting from a Cartesian to a monoidal setting, this group will examine these metrics using algebraic contexts (to enhance syntax foundations) and coalgebraic contexts (to provide robust quantitative semantics and effective techniques for establishing quantitative bounds on black-box behaviours), ultimately advancing the scope of quantitative reasoning in these domains.
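
For a tiny taste of what a ‘coalgebraic metric’ is, here is my own sketch, not Filippo’s framework: a coalgebra of the functor F(X) = Double × X describes a system that outputs a number and steps to a new state, and a discounted behavioural distance between two states can be approximated by unfolding the coalgebra finitely many steps.

```haskell
-- A coalgebra of F X = Double x X: output a number, step to a next state.
type Coalg s = s -> (Double, s)

-- Discounted behavioural distance, approximated by unfolding n steps:
--   d(s, t) ~ |out s - out t| + c * d(next s, next t)
behDist :: Double -> Int -> Coalg s -> s -> s -> Double
behDist _ 0 _ _ _ = 0
behDist c n step s t =
  let (o1, s') = step s
      (o2, t') = step t
  in abs (o1 - o2) + c * behDist c (n - 1) step s' t'

-- A simple system: output the current value, then halve it.
halving :: Coalg Double
halving x = (x, x / 2)

main :: IO ()
main = print (behDist 0.5 20 halving 1.0 1.1)
```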

Doubly Categorical Systems Logic: A Theory of Specification Languages

Matteo Capucci, Independent Researcher

This project aims to develop a logical framework to classify and interoperate various logical systems created to reason about complex systems and their behaviours, guided by Doubly Categorical Systems Theory (DCST). Matteo’s goal is to study the link between the compositional and morphological structure of systems and their behaviour, specifically in the way the logic pertaining to these two components works, accounting for both dynamic and temporal features. Such a path will combine categorical approaches to both logic and systems theory.

True Categorical Programming for Composable Systems

Jade Master and Zans Mihejevs, Glasgow Lab for AI Verification (GLAIVE)
Team: André Videla; Dylan Braithwaite, GLAIVE

This project intends to develop a type theory for categorical programming that enables encoding of key mathematical structures not currently supported by existing languages. These structures include functors, universal properties, Kan extensions, lax (co)limits, and Grothendieck constructions. Jade + team are aiming to create a type theory that accurately translates categorical concepts into code without compromise, and then deploy this framework to develop critical theorems related to the mathematical foundations of Safeguarded AI.
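
To see the gap they are pointing at, here is a small sketch of my own, not GLAIVE’s type theory: in Haskell one can encode a right Kan extension with a rank-2 type, but the encoding carries no proof of its universal property, nor of the monad laws of the codensity monad built from it. A dedicated categorical type theory would make such structures and their laws first-class.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The right Kan extension of h along g, in the standard rank-2 encoding.
newtype Ran g h a = Ran { runRan :: forall b. (a -> g b) -> h b }

instance Functor (Ran g h) where
  fmap f (Ran k) = Ran (\c -> k (c . f))

-- The codensity monad of g is Ran g g; its unit is easy to write, but the
-- type system certifies neither the monad laws nor the universal property.
type Codensity g = Ran g g

unit :: a -> Codensity g a
unit a = Ran (\k -> k a)

main :: IO ()
main = print (runRan (fmap (+ 1) (unit (41 :: Int))) (: []))
```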

Employing Categorical Probability Towards Safe AI

Sam Staton, University of Oxford
Team: Pedro Amorim; Elena Di Lavore; Paolo Perrone; Mario Román; Ruben Van Belle; Younesse Kaddar; Jack Liell-Cock; Owen Lynch, University of Oxford

Sam + team will look to employ categorical probability toward key elements essential for world modelling in the Safeguarded AI Programme. They will investigate imprecise probability (which provides bounds on probabilities of unsafe behaviour) and stochastic dynamical systems for world modelling, and then look to create a robust foundation of semantic version control to support the above elements.
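
As a toy illustration of imprecise probability, my own and not Sam’s team’s formalism: replace each probability by an interval of bounds and propagate the bounds through a model.

```haskell
-- A probability known only up to bounds.
data PInterval = PInterval { lower :: Double, upper :: Double }
  deriving Show

-- Bounds on P(A and B) for independent events A and B.
both :: PInterval -> PInterval -> PInterval
both (PInterval l1 u1) (PInterval l2 u2) = PInterval (l1 * l2) (u1 * u2)

-- Bounds on P(not A).
complement :: PInterval -> PInterval
complement (PInterval l u) = PInterval (1 - u) (1 - l)

-- Two components with imprecisely known failure probabilities: bound the
-- probability that they do not both fail.
main :: IO ()
main =
  let failA = PInterval 0.01 0.03
      failB = PInterval 0.02 0.05
  in print (complement (both failA failB))
```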

Syntax and Semantics for Multimodal Petri Nets

Amar Hadzihasanovic, Tallinn University of Technology
Team: Diana Kessler, Tallinn University of Technology

Amar + team will develop a combinatorial and diagrammatic syntax, along with categorical semantics, for multimodal Petri Nets. These Nets will model dynamical systems that undergo mode or phase transitions, altering their possible places, events, and interactions. Their goal is to create a category-theoretic framework for mathematical modelling and safe-by-design specification of dynamical systems and process theories which exhibit multiple modes of operation.
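
For a rough idea of what ‘multimodal’ means here, this is my own sketch rather than Amar and Diana’s syntax: a Petri net whose enabled transitions depend on a mode, where firing a transition may also switch the mode.

```haskell
import qualified Data.Map.Strict as M

type Place   = String
type Mode    = String
type Marking = M.Map Place Int

data Transition = Transition
  { inputs    :: [Place]        -- places a token is consumed from
  , outputs   :: [Place]        -- places a token is produced in
  , allowedIn :: [Mode]         -- modes in which this transition exists
  , newMode   :: Maybe Mode     -- optionally switch mode when firing
  }

enabled :: Mode -> Marking -> Transition -> Bool
enabled mode m t =
  mode `elem` allowedIn t && all (\p -> M.findWithDefault 0 p m > 0) (inputs t)

fire :: Mode -> Marking -> Transition -> Maybe (Mode, Marking)
fire mode m t
  | enabled mode m t =
      let m'  = foldr (M.adjust (subtract 1)) m (inputs t)
          m'' = foldr (\p -> M.insertWith (+) p 1) m' (outputs t)
      in Just (maybe mode id (newMode t), m'')
  | otherwise = Nothing

main :: IO ()
main =
  let start = M.fromList [("idle", 1)]
      boot  = Transition ["idle"] ["running"] ["off"] (Just "on")
  in print (fire "off" start boot)
```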

Topos UK

As you can see above, ARIA is funding a new branch of the Topos Institute at Oxford, called Topos UK. The original Topos Institute is a math research institute in Berkeley with a focus on category theory. Three young category theorists there (Sophie Libkind, David Jaz Myers and Owen Lynch) wrote a proposal called “Double Categorical Systems Theory for Safeguarded AI”, which ARIA has now funded.

It looks like David Jaz Myers, Owen Lynch, Tim Hosgood, José Siqueira, Xiaoyan Li and maybe others will now be working at Topos UK.

Glaive

The Glasgow Lab for AI Verification, or ‘Glaive’, is a non-profit limited company focusing on applying category theory to AI verification. The people working for it include Dylan Braithwaite, Jade Master, Zans Mihejevs, André Videla, Neil Ghani and Jules Hedges.

Like Topos UK, Glaive is supported by ARIA.

Conexus AI

Conexus AI, led by Ryan Wisnesky, and the associated open-source categorical data project apply functorial data migration and related categorical and formal methods to problems in data integration (where the tech is known as generative symbolic AI) and so-called ‘safe AI’ (where the tech is used to validate LLM outputs).
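
To give a concrete feel for functorial data migration, here is a toy sketch of mine, not Conexus’s tools themselves: a schema mapping F : S → T induces a migration Δ_F sending T-instances back to S-instances by precomposition, which in the simplest case is just looking up each source attribute’s image under F.

```haskell
import qualified Data.Map.Strict as M

-- Attributes of a source schema S and a target schema T (toy examples).
data Src = SrcName | SrcCity                  deriving (Eq, Ord, Show)
data Tgt = TgtFullName | TgtCity | TgtCountry deriving (Eq, Ord, Show)

-- The schema mapping F : S -> T.
mapping :: Src -> Tgt
mapping SrcName = TgtFullName
mapping SrcCity = TgtCity

-- Delta_F: pull a target row back to the source schema by precomposing
-- with the mapping, i.e. look up each source attribute's image under F.
delta :: Ord s => (s -> t) -> [s] -> (t -> Maybe String) -> M.Map s String
delta f srcAttrs lookupT =
  M.fromList [ (a, v) | a <- srcAttrs, Just v <- [lookupT (f a)] ]

main :: IO ()
main =
  let targetRow = M.fromList
        [ (TgtFullName, "Ada Lovelace"), (TgtCity, "London"), (TgtCountry, "UK") ]
  in print (delta mapping [SrcName, SrcCity] (`M.lookup` targetRow))
```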