
Learning AI And SEO With NotebookLM: Key Terms & Techniques

  • Writer: Sean Barber
  • Jun 29
  • 2 min read

Updated: Sep 3


This is a living blog (last updated: 03/09/2025) where I explore key concepts and terminology at the intersection of AI, SEO, and language models. I’m using Google’s NotebookLM to learn by building and refining conversations around each topic.


From query fan-out to transformers, these entries are designed to simplify complex ideas through real-world context. Scroll down to explore each topic - and check back as I update with new insights.


This episode primarily covers Google's AI Mode and its "query fan-out" technique, detailing how they differ from traditional search and their implications for search engine optimization (SEO).
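To make the idea concrete, here's a minimal Python sketch of fan-out: one broad query is decomposed into several narrower sub-queries, each is searched independently, and the results are gathered for synthesis. The sub-query templates and the `search()` stub are my own placeholders, not Google's actual implementation.

```python
def fan_out(query: str) -> list[str]:
    """Hypothetical decomposition of a broad query into sub-queries."""
    templates = [
        "what is {q}",
        "{q} examples",
        "{q} vs alternatives",
        "how does {q} affect SEO",
    ]
    return [t.format(q=query) for t in templates]

def search(sub_query: str) -> list[str]:
    """Stub standing in for a call to a search index."""
    return [f"result for '{sub_query}'"]

def answer(query: str) -> list[str]:
    results = []
    for sub_query in fan_out(query):       # issue many narrower searches
        results.extend(search(sub_query))  # gather evidence from each one
    return results                         # an LLM would then synthesize these

print(answer("query fan-out"))
```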



This conversation comprehensively covers tokenization, primarily focusing on its definition, importance, types, algorithms, practical implementations, challenges, benefits, and applications in Large Language Models (LLMs) and AI.
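To see one flavour of this in code, here's a small Python sketch of greedy longest-match subword tokenization, the style used by WordPiece-like tokenizers. The toy vocabulary is invented for the example; real tokenizers learn theirs from large corpora.

```python
# Toy vocabulary: a few subwords plus single letters as a fallback,
# so any lowercase word can always be broken down.
VOCAB = {"token", "iz", "ation", "un", "happi", "ness",
         *"abcdefghijklmnopqrstuvwxyz"}

def tokenize(word: str) -> list[str]:
    """Greedy longest-match: repeatedly take the longest vocabulary
    entry that prefixes the remaining text."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest candidate first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word[i:]!r}")
    return tokens

print(tokenize("tokenization"))  # ['token', 'iz', 'ation']
print(tokenize("unhappiness"))   # ['un', 'happi', 'ness']
```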


 This chat mainly covers what embeddings are, how they work, their importance in AI and machine learning, and their diverse applications across various data types like text, images, and audio.
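A toy Python example of the core idea: embeddings map items to vectors so that similar items sit close together in the space. The three-dimensional vectors below are hand-made for illustration; real embeddings are learned by a model and typically have hundreds or thousands of dimensions.

```python
import numpy as np

embeddings = {
    "cat":    np.array([0.9, 0.8, 0.1]),
    "kitten": np.array([0.85, 0.75, 0.15]),
    "car":    np.array([0.1, 0.2, 0.95]),
}

def nearest(word: str) -> str:
    """Return the other item whose vector is closest (Euclidean distance)."""
    target = embeddings[word]
    others = {w: v for w, v in embeddings.items() if w != word}
    return min(others, key=lambda w: np.linalg.norm(others[w] - target))

print(nearest("cat"))  # 'kitten': closer in the vector space than 'car'
```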


This conversation covers attention mechanisms in AI, detailing their fundamental workings, historical development, impact on various AI applications (especially NLP), and advantages over traditional models, while also touching upon their computational challenges and future directions.
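Here's a minimal NumPy sketch of scaled dot-product attention, the core operation: each query scores every key, the scores are softmax-normalized into weights, and the output is a weighted sum of the values.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how strongly each query matches each key
    weights = softmax(scores)         # rows sum to 1
    return weights @ V                # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 query positions, d_k = 4
K = rng.normal(size=(5, 4))  # 5 key/value positions
V = rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```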


This conversation fully explains the definition, types, properties, and mathematical operations of vectors.
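For reference, the basic operations look like this in NumPy:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

print(a + b)              # addition: [4. 6.]
print(2 * a)              # scalar multiplication: [6. 8.]
print(np.dot(a, b))       # dot product: 3*1 + 4*2 = 11.0
print(np.linalg.norm(a))  # magnitude: sqrt(3^2 + 4^2) = 5.0
```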



This conversation covers the definition, architecture, function, training, types, and applications of neural networks within the broader fields of artificial intelligence and deep learning.
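To make the architecture tangible, here's a minimal forward pass through a one-hidden-layer network in NumPy. The weights are random for illustration; training would adjust them via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # the nonlinearity between layers

# One hidden layer: 3 inputs -> 4 hidden units -> 2 outputs
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

def forward(x):
    h = relu(x @ W1 + b1)  # hidden layer: linear transform + nonlinearity
    return h @ W2 + b2     # output layer (e.g., raw scores/logits)

print(forward(np.array([1.0, 0.5, -0.2])))
```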



This episode covers the various aspects of Transformer models and their significant impact on the field of Artificial Intelligence (AI), particularly in Natural Language Processing (NLP).
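As a rough sketch of what sits inside one, the NumPy snippet below implements a single-headed Transformer encoder block: self-attention, then a position-wise feedforward network, each wrapped in a residual connection and layer normalization. The dimensions and weights are toy values; real models add multiple heads, masking, and learned normalization parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
W1, W2 = rng.normal(size=(d, 4 * d)) * 0.1, rng.normal(size=(4 * d, d)) * 0.1

def encoder_block(x):                 # x: (sequence_length, d)
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d)) @ V @ Wo
    x = layer_norm(x + attn)          # residual + norm around attention
    ff = np.maximum(0, x @ W1) @ W2   # ReLU feedforward
    return layer_norm(x + ff)         # residual + norm around feedforward

tokens = rng.normal(size=(5, d))      # 5 token embeddings
print(encoder_block(tokens).shape)    # (5, 8)
```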



This episode explains how pre-training establishes a general understanding of language using vast, unlabelled datasets, while fine-tuning adapts this foundational knowledge for specific tasks with smaller, labelled datasets, making models specialized and efficient.
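Here's a toy sketch of that two-phase pattern: a tiny linear model is first trained on a large, generic dataset, and fine-tuning then continues from those learned weights (rather than from scratch) on a small, task-specific dataset. Real pre-training and fine-tuning follow the same shape at vastly larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

# Phase 1: "pre-training" on a large, generic dataset.
X_big = rng.normal(size=(1000, 5))
y_big = X_big @ np.array([1.0, -2.0, 0.5, 3.0, 0.0])
w_pretrained = train(np.zeros(5), X_big, y_big)

# Phase 2: "fine-tuning" starts from the pre-trained weights and adapts
# them to a slightly different task with far less data and a lower
# learning rate.
X_small = rng.normal(size=(20, 5))
y_small = X_small @ np.array([1.0, -2.0, 0.5, 3.0, 1.5])  # shifted task
w_finetuned = train(w_pretrained, X_small, y_small, lr=0.01, steps=100)
print(w_finetuned.round(2))
```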



This episode covers Information Foraging Theory, which proposes that humans seek and consume information (particularly on the web) much as animals search for food: they weigh a source's information scent (its perceived value) against the interaction cost (the effort needed to obtain it), striving to maximize their rate of gain.
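A toy sketch of that core decision: a forager picks whichever information "patch" (a link, a page) offers the highest expected rate of gain, modelled here simply as perceived value (scent) divided by interaction cost. The numbers are made up for illustration.

```python
# Hypothetical patches with invented scent (perceived value) and
# cost (effort to obtain) scores.
patches = {
    "detailed tutorial": {"scent": 9.0, "cost": 6.0},  # high value, slow read
    "quick cheat sheet": {"scent": 5.0, "cost": 1.0},  # modest value, nearly free
    "marketing page":    {"scent": 2.0, "cost": 2.0},  # weak scent, some effort
}

def rate_of_gain(patch):
    return patch["scent"] / patch["cost"]

for name, patch in patches.items():
    print(f"{name:18s} rate of gain = {rate_of_gain(patch):.2f}")

best = max(patches, key=lambda name: rate_of_gain(patches[name]))
print("forager follows:", best)  # the cheat sheet maximizes value per effort
```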



This episode explains that Retrieval Augmented Generation (RAG) is a powerful technique that enhances large language models (LLMs) by integrating external knowledge bases and real-time data retrieval, producing more accurate, up-to-date, and contextually relevant responses. It effectively addresses issues like hallucinations and knowledge cut-offs, while offering a cost-efficient alternative or complement to fine-tuning.
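In pipeline form, RAG is retrieve-then-generate. The sketch below uses a deliberately crude keyword-overlap retriever and a stubbed `llm()` call as placeholders; a real system would use embedding search over a vector store and a hosted language model.

```python
documents = [
    "RAG retrieves external documents to ground LLM answers in current facts",
    "Fine-tuning bakes knowledge into a model's weights at training time",
    "Cosine similarity compares the angle between two vectors",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return sorted(documents,
                  key=lambda d: -len(q_words & set(d.lower().split())))[:k]

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"(answer generated from a {len(prompt)}-character grounded prompt)"

question = "how does rag ground llm answers"
context = "\n".join(retrieve(question))                     # retrieval step
print(llm(f"Context:\n{context}\n\nQuestion: {question}"))  # augmented generation
```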



In this episode, we cover cosine similarity, a widely used metric that quantifies the similarity between two non-zero vectors by calculating the cosine of the angle between them. It is especially effective in high-dimensional spaces where traditional distance-based metrics can struggle.
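In NumPy the whole metric is a one-liner: cos(θ) = (a · b) / (‖a‖ ‖b‖). Because it depends only on the angle between vectors, not their lengths, it stays informative where raw distances become less meaningful.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """cos(theta) = (a . b) / (||a|| * ||b||)"""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(a, 2 * a))                         # 1.0: same direction
print(cosine_similarity(a, np.array([-1.0, -2.0, -3.0])))  # -1.0: opposite
print(cosine_similarity(a, np.array([3.0, 0.0, -1.0])))    # 0.0: orthogonal
```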



These MIT lectures cover the fundamental concepts, mechanisms, applications, and ethical considerations surrounding Foundation Models and Generative AI.

 
 
 
