Resources

A curated guide to understanding AI

The following is a list of materials that I’ve found particularly informative and useful for better understanding artificial intelligence.

This list is a work in progress.

I firmly believe that a solid grounding in the sciences of the mind, and particularly in the philosophy surrounding them, is important for accurately assessing many of the claims that circulate in the AI space, generating hype and spurring investor interest.1

Deep Learning & Neural Networks

  • Schmidhuber, Jürgen (2015): Deep Learning in Neural Networks: An Overview
  • Bengio, Yoshua (2009): Learning Deep Architectures for AI
  • Bengio, Yoshua; Courville, Aaron & Vincent, Pascal (2013): Representation Learning: A Review and New Perspectives
  • LeCun, Yann; Bengio, Yoshua & Hinton, Geoffrey (2015): Deep Learning
  • Krizhevsky, Alex; Sutskever, Ilya & Hinton, Geoffrey (2012): ImageNet Classification with Deep Convolutional Neural Networks
  • Silver, David et al. (2016): Mastering the Game of Go with Deep Neural Networks and Tree Search
  • Srivastava, Nitish et al. (2014): Dropout: A Simple Way to Prevent Neural Networks from Overfitting
  • Sainath, Tara et al. (2013): Deep Convolutional Neural Networks for LVCSR
  • Heaton, Jeff (2015): Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks
  • Rajagopal, Jagannath (n.d.): Convolutional Neural Networks - Ep. 8 (Deep Learning SIMPLIFIED)

Media, Ethics & Popular Sources on AI

  • Clark, Stuart (2014): AI Could Spell End of Human Race
  • Hardy, Quentin (2016): The AI Boom Is Real
  • Gershgorn, Dave (n.d.): Google’s AI Translation Tool Nearly Human-Level
  • Millar, Jason (2016): AI and Ethics
  • Parloff, Roger (2016): Why Deep Learning Is Suddenly Changing Your Life
  • Titcomb, James (2017): AI Is the Biggest Risk to Civilization – Musk
  • Lewis-Kraus, Gideon (2016): The Great A.I. Awakening

Philosophy of AI

  • Searle, John (1980): Minds, Brains, and Programs
  • Searle, John (1982a): The Chinese Room Revisited
  • Searle, John (1982b): The Myth of the Computer
  • Searle, John (1990): Is the Brain’s Mind a Computer Program?
  • Searle, John (2002): Consciousness and Language
  • Copeland, B. Jack (1993): Artificial Intelligence: A Philosophical Introduction
  • Copeland, B. Jack (2000): The Turing Test
  • Piccinini, Gualtiero (2000): Turing’s Rules for the Imitation Game
  • Piccinini, Gualtiero (2015): Physical Computation
  • Conitzer, Vincent (2016a): The Philosophy of AI
  • Conitzer, Vincent (2016b): Philosophical Engagement with AI
  • Block, Ned (1981): Psychologism and Behaviorism

Philosophy of Cognitive Science

  • Bermúdez, José Luis (2014): Cognitive Science: An Introduction
  • Pylyshyn, Zenon (1980): Computation and Cognition
  • Clark, Andy (1989): Microcognition
  • Ramsey, William (2007): Representation Reconsidered
  • Churchland, Patricia & Sejnowski, Terrence (1994): The Computational Brain
  • Marcus, Gary (2004): The Birth of the Mind
  • Clark, Andy (2014): Mindware

Philosophy of Mind

  • Block, Ned (1978): Troubles with Functionalism
  • Dennett, Daniel C. (1984): Elbow Room
  • Dennett, Daniel C. (1991): Consciousness Explained
  • van Inwagen, Peter (1993): Metaphysics

Symbolic AI & Cognitive Science Foundations

  • Russell, Stuart & Norvig, Peter (2009): Artificial Intelligence: A Modern Approach
  • Newell, Allen & Simon, Herbert (1976): Computer Science as Empirical Inquiry
  • Newell, Allen (1980): Physical Symbol Systems

Systematicity, Connectionism & Cognitive Architecture

  • Fodor, Jerry & Pylyshyn, Zenon (1988): Connectionism and Cognitive Architecture: A Critical Analysis
  • Smolensky, Paul (1988): On the Proper Treatment of Connectionism
  • Smolensky, Paul (1993): The Constituent Structure of Connectionist Mental States
  • Aizawa, Kenneth (2012): The Systematicity Arguments
  • Cummins, Robert (2010): Systematicity
  • Matthews, Robert J. (1997): Can Connectionists Explain Systematicity?
  • Calvo, Paco & Symons, John (2014): The Architecture of Cognition: Rethinking Fodor and Pylyshyn’s Systematicity Challenge
  • Marcus, Gary (2004): The Birth of the Mind
  • Marcus, Gary (2018): Deep Learning: A Critical Appraisal
  • Rumelhart, David E.; McClelland, James L. & Hinton, Geoffrey (1987): Distributed Representations
  • Pinker, Steven & Prince, Alan (1988): On Language and Connectionism

Systems Thinking

  • Kudina, Olga & van de Poel, Ibo (2024): A Sociotechnical System Perspective on AI

Turing Test & AI Evaluation

  • Turing, Alan (1950): Computing Machinery and Intelligence
  • Moor, James H. (1976): An Analysis of the Turing Test
  • Moor, James H. (2001): The Status and Future of the Turing Test
  • Harnad, Stevan (1992): The Turing Test Is Not a Trick
  • Bringsjord, Selmer; Bello, Paul & Ferrucci, David (2001): Creativity, the Turing Test, and the (Better) Lovelace Test
  • Hernández-Orallo, José & Dowe, David L. (2010): Measuring Universal Intelligence
  • Genova, Judith (1994): Turing’s Sexual Guessing Game
  • McDermott, Drew (2014): On the Claim That a Table-Lookup Program Could Pass the Turing Test
  • Oppy, Graham & Dowe, David (2016): The Turing Test

From Theory to Practice

  • Generative AI with Large Language Models by AWS
  • Hugging Face’s NLP Course

I started with Generative AI with Large Language Models, but having completed both courses, I think most people are probably better off starting with Hugging Face’s NLP course, as it feels gentler at the outset. Also, and importantly, the Generative AI with Large Language Models course leverages Hugging Face’s transformers library, which is introduced in the latter’s NLP course.

Perhaps the best order is: chapter 1 of Hugging Face’s LLM course, followed by the DeepLearning.AI course.

The two courses nevertheless work well together for building a foundational understanding, because core concepts relating to transformers, and to LLMs more generally, are treated in both courses in slightly different but complementary ways.
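Both courses lean on Hugging Face’s transformers library. As a rough illustration of the kind of high-level API the Hugging Face course opens with, here is a minimal sketch using the pipeline helper (the task and example text are my own illustrative choices):

```python
# Minimal sketch of the Hugging Face transformers "pipeline" API.
# The task ("sentiment-analysis") and the example sentence are illustrative only.
from transformers import pipeline

# On first use this downloads a default pretrained model for the chosen task.
classifier = pipeline("sentiment-analysis")

result = classifier("This curated reading list is genuinely helpful.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The point of starting there is that a single high-level call hides tokenization, model inference, and post-processing; both courses then unpack those pieces in later chapters.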

Notes


  1. Without this foundation, it becomes difficult to distinguish between genuine technological advances and overblown marketing narratives. Concepts like consciousness, understanding, and intelligence have been explored for centuries by philosophers and cognitive scientists, providing crucial context for evaluating today’s AI capabilities and limitations. This knowledge helps us maintain a realistic perspective on what AI systems are actually doing versus what they appear to be doing, ultimately leading to more informed discussions about their development, deployment, and regulation. ↩︎