Session 2

Blog post by Mengyi Mei.

Following Session 1’s focus on the technical foundations of AI, Session 2 of the Innovation and Ownership series turned to philosophy and intellectual property (IP) theory. The central question was deceptively simple yet normatively complex: Who owns knowledge created by AI?

The session was organised by Dr Van Anh Le (Assistant Professor in IP Law) and featured Dr Travis LaCroix (Assistant Professor, Department of Philosophy) as the invited speaker, with Dr Mengyi Mei (Career Development Fellow in IP Law) as collaborator and author of this blog post. Approximately 20 participants attended, including students and researchers from law, philosophy, business, and modern languages, reflecting the interdisciplinary nature of contemporary AI governance debates.

From Legal Question to Philosophical Clarification

While Session 1 examined authorship and ownership from a technical perspective, Session 2 moved to a more foundational level. Before asking who owns AI-generated knowledge, we must ask: Does AI create knowledge at all?

This shift matters because concepts such as authorship, ownership, and responsibility in IP law are grounded in deeper philosophical assumptions about agency and knowledge.

Dr LaCroix reframed the debate accordingly. Rather than beginning with ownership, he posed a prior question: Does AI generate knowledge, or merely output?

AI as Optimisation, Not Epistemic Agency

Dr LaCroix explained that machine learning systems are mathematical models trained to optimise parameters over data. They approximate objective functions and generate outputs through statistical pattern recognition. What they do not possess are beliefs, understanding, or epistemic commitments.

Turning to epistemology, he distinguished two separate questions:

  1. Does an AI system itself have knowledge?
  2. Does an AI system produce knowledge for the user?

Evaluating AI against major theories of knowledge, including Justified True Belief accounts, modal theories, reliabilism, and contextualism, he argued that AI systems fail to meet core conditions for knowledge. Although their outputs may sometimes be accurate or truth-conducive, they lack belief, justification, and epistemic agency. They do not stand in a norm-governed relationship to truth.

The conclusion was provocative but clear: AI does not create knowledge.

Implications for IP Law

This philosophical clarification has important consequences for IP law.

As emphasised during the discussion, IP law does not protect knowledge as such. Instead, it protects:

  • original expression (copyright),
  • novel inventions (patents),
  • confidential information (trade secrets),
  • certain forms of investment or selection (databases).

The more precise legal question, therefore, is not whether AI “owns knowledge,” but rather: Who, if anyone, owns the legally cognisable outputs generated by AI systems?

Across jurisdictions, ownership, where it is recognised, attaches to human intellectual achievement. If AI outputs are products of optimisation rather than intellectual agency, they do not qualify as intellectual achievements in their own right.

This reframing highlights a deeper tension: the difficulty in protecting AI-generated outputs is not merely doctrinal. It reflects a conceptual mismatch between IP law’s human-centred foundations and the nature of contemporary AI systems.

Beyond Doctrine: Accountability and the Future of Knowledge

The discussion extended beyond formal doctrine to broader concerns about authorship, accountability, and trust in an AI-driven intellectual ecosystem. If AI systems are not epistemic agents, then they cannot bear responsibility, nor can they meaningfully claim authorship. Yet their outputs increasingly shape research, commerce, and cultural production.

Session 2 demonstrated that debates about AI and ownership require conceptual clarity before regulatory solutions can be meaningfully assessed. Whether or not IP law ultimately adapts to AI-generated outputs, philosophical analysis reveals why these debates feel unsettled: they rest on assumptions about knowledge, agency, and responsibility that AI systems challenge at a fundamental level.

We thank all participants for their thoughtful contributions and look forward to continuing the conversation in Session 3.

This series forms part of the IP and Innovation Initiative at Durham Law School, an emerging interdisciplinary platform exploring IP, AI, innovation and research impact.