This page contains notes, further readings, and general references for the associated lecture.

The lecture slides for the talk are also available (use your arrow keys to advance the slides), or you can download a PDF of the slides.

If you’re looking for the other lecture notes, head here.

AI Scaling Laws

  • A great visualization of current AI investment is available from Reuters. Be sure to keep scrolling through the page to get a real sense of the current scale of AI investment.

  • The Stanford Institute for Human-Centered Artificial Intelligence (HAI) put together an excellent graphic detailing how AI spending breaks down by category.

  • The big paper to read is Scaling Laws for Neural Language Models by Kaplan et al. The authors were all OpenAI researchers and would go on (with significant investor support) to produce some of the best AI models ever created. One of the authors was Dario Amodei, who later left OpenAI and cofounded Anthropic, another AI lab, which currently produces the Claude AI models. The paper is very much a blend of hard machine learning research (note that it does at times get quite technical) and a pitch to financial investors that their money wouldn't be wasted.

  • The Kaplan paper above found very specific scaling laws for AI improvement; however, later work by Google DeepMind (the AI side of Google) corrected these scaling laws, finding that doubling the model size requires a doubling of the training data. Unlike the Kaplan paper, this paper is quite technical and a little hard to read without some background knowledge in the subject.
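To make the DeepMind correction concrete, here is a rough sketch of the resulting rule of thumb. It leans on two common approximations from the scaling-laws literature: training compute C ≈ 6·N·D (N parameters, D tokens), and compute-optimal training uses roughly D ≈ 20·N tokens. The function name and numbers are illustrative approximations, not code from either paper.

```python
import math

def compute_optimal_split(compute_flops):
    """Rough compute-optimal model/data split under the DeepMind findings.

    Assumes C ~= 6 * N * D and D ~= 20 * N, so C = 120 * N^2,
    which gives the closed-form split below.
    """
    n_params = math.sqrt(compute_flops / 120)
    n_tokens = 20 * n_params
    return n_params, n_tokens

# Under this rule, a 4x compute budget buys 2x the parameters and
# 2x the tokens: model size and data size grow together.
n1, d1 = compute_optimal_split(1e23)
n2, d2 = compute_optimal_split(4e23)
```

The key contrast with the Kaplan result is visible here: extra compute is split evenly between model size and data size, rather than spent mostly on a bigger model.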

Some History

  • Thomas Hobbes is perhaps best known for his political and social contract theories, much discussed in his magnum opus Leviathan, but fewer people are aware of his idea that “reasoning is just computation”. See the Stanford Encyclopedia of Philosophy entry on it for more depth.

  • The full Chapter V of Leviathan by Hobbes, referenced in the talk, is available here and is well worth a read if you haven’t read it in a while.

  • Further reading on Leibniz’s ideas on language and the mind.

  • Further readings and analysis on Cartesian dualism are available here.

  • Further readings on the history, implementation, and critiques of the Turing Test are available here. It’s interesting to note that the original Imitation Game imagined by Turing was gendered: instead of a human and a machine, the two hidden participants were a man and a woman, with the man attempting to pass as the woman, and the machine later taking the man’s place. This version was discussed more as a “parlor game” to be played among friends than as a rigorous test for artificial intelligence; however, in the same article, Turing also describes the form of the imitation game that has become standard over the last 70 years.

  • Further reading about Eugene Goostman, the chatbot claimed in 2014 to be the first to beat the “Turing Test”, specifically the 30% threshold first imagined by Alan Turing.

  • The Turing Test was arguably truly defeated in 2025, as discussed in the paper by Jones and Bergen titled Large Language Models Pass the Turing Test.

  • The full article by Dijkstra containing his quote on thinking machines is here.

The Perceptron

  • An excellent article detailing the history and workings of a basic Perceptron. Definitely something fun to play with and read when you have a chance.

  • Welch Labs makes excellent and accurate videos and recently even produced a new textbook. This video goes into quite a bit of detail on the Perceptron itself. It does sometimes get a little technical, but the history and production value are just great.
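If you’d like a concrete picture of what the readings above describe, here is a minimal sketch of a perceptron: a weighted sum plus a bias, thresholded at zero, trained with the classic error-driven update rule. The function names and the toy AND dataset are illustrative, not taken from the articles above.

```python
def predict(weights, bias, x):
    # Fire (output 1) when the weighted sum crosses the threshold, else 0.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s + bias > 0 else 0

def train(samples, lr=0.1, epochs=20):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Nudge each weight toward inputs that produced an error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Logical AND is linearly separable, so a single perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

The famous limitation, discussed in the Perceptron's history, is that this single-layer learner can only separate classes with a straight line, which is why it can learn AND but not XOR.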