Learning AI And SEO With NotebookLM: Key Terms & Techniques
- Sean Barber
- Jun 29
- 3 min read
Updated: Oct 17
This is a living blog (last updated: 17/10/2025) where I explore key concepts and terminology at the intersection of AI, SEO, and language models. I’m using Google’s NotebookLM to learn by building and refining conversations around each topic.
From query fan-out to transformers, these entries are designed to simplify complex ideas through real-world context. Scroll down to explore each topic - and check back as I update with new insights.
Google's AI Mode & Query Fan-Out
This episode primarily covers Google's AI Mode and its "query fan-out" technique, detailing how both differ from traditional search and what they mean for search engine optimization (SEO).
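To make the idea concrete, here's a toy sketch of query fan-out. The expansion rules and the tiny "index" below are hand-written stand-ins of my own; Google's AI Mode generates its sub-queries with a language model against a real index.

```python
# Toy "index": query -> list of matching documents (illustrative only).
index = {
    "best running shoes": ["Shoe review A", "Shoe review B"],
    "best running shoes for flat feet": ["Podiatrist guide"],
    "running shoe sizing": ["Sizing chart"],
}

def fan_out(query):
    """Expand one query into itself plus related sub-queries.
    Hand-written expansions here; AI Mode generates these with an LLM."""
    return [query,
            f"{query} for flat feet",
            "running shoe sizing"]

def search(query):
    """Run every sub-query against the toy index and merge unique hits."""
    merged = []
    for sub in fan_out(query):
        for hit in index.get(sub, []):
            if hit not in merged:
                merged.append(hit)
    return merged

results = search("best running shoes")
print(results)
```

The point is the shape of the technique: one query becomes several, each is searched independently, and the results are merged — which is why content can rank for sub-queries the user never typed.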
Tokenization
This conversation covers tokenization: its definition, importance, types, algorithms, practical implementations, challenges, benefits, and applications in Large Language Models (LLMs) and AI.
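A minimal sketch of two simple tokenization strategies. Real LLM tokenizers (such as BPE) learn subword merges from data; these hand-written versions only illustrate the idea of splitting text into units.

```python
def word_tokenize(text):
    """Word-level: split on whitespace, strip basic punctuation, lowercase."""
    return [w.strip(".,!?") for w in text.lower().split()]

def char_tokenize(text):
    """Character-level: every character becomes its own token."""
    return list(text)

sentence = "Tokenization splits text into units."
print(word_tokenize(sentence))
# ['tokenization', 'splits', 'text', 'into', 'units']
print(len(char_tokenize(sentence)))  # one token per character
```

The trade-off the episode discusses is visible even here: word-level gives few, meaningful tokens but a huge vocabulary; character-level gives a tiny vocabulary but long sequences. Subword methods sit between the two.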
Embeddings
This chat mainly covers what embeddings are, how they work, their importance in AI and machine learning, and their diverse applications across various data types like text, images, and audio.
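Here's a toy illustration of the core idea: words mapped to vectors, where "closeness" in vector space reflects relatedness. The 3-d vectors below are hand-picked for the example; real embeddings are learned and have hundreds or thousands of dimensions.

```python
import numpy as np

# Hand-picked toy embeddings (illustrative only; real ones are learned).
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.9, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(word):
    """Return the other word whose embedding is most similar."""
    v = embeddings[word]
    return max((w for w in embeddings if w != word),
               key=lambda w: cos(v, embeddings[w]))

print(nearest("king"))  # 'queen' — related words sit close together
```

This nearest-neighbour lookup is the seed of semantic search: compare the embedding of a query against the embeddings of documents and return the closest matches.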
Attention Mechanisms
This conversation covers attention mechanisms in AI, detailing their fundamental workings, historical development, impact on various AI applications (especially NLP), and advantages over traditional models, while also touching upon their computational challenges and future directions.
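The "fundamental workings" boil down to one formula — scaled dot-product attention, softmax(QKᵀ/√d)·V — which can be sketched in a few lines of NumPy. The tiny Q, K, V matrices below are made-up values purely to show the mechanics.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how well each query matches each key
    weights = softmax(scores, axis=-1)  # rows sum to 1
    return weights @ V, weights         # weighted mix of the values

# One query attending over two key/value pairs (made-up numbers).
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0]])
out, w = attention(Q, K, V)
```

Because the query aligns with the first key, the first value gets the larger weight — "attending" to the most relevant input is exactly this weighted averaging.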
Vectors
This conversation explains the definition, types, properties, and mathematical operations of vectors.
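The basic operations covered in the conversation can be written out directly — addition, scalar multiplication, the dot product, and magnitude:

```python
import math

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

add = [x + y for x, y in zip(a, b)]           # element-wise sum: [5.0, 7.0, 9.0]
scaled = [2 * x for x in a]                   # scalar multiple:  [2.0, 4.0, 6.0]
dot = sum(x * y for x, y in zip(a, b))        # 1*4 + 2*5 + 3*6 = 32.0
magnitude = math.sqrt(sum(x * x for x in a))  # ||a|| = sqrt(14)
```

These four operations are the building blocks behind embeddings and cosine similarity later in this list.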
Neural Networks
This conversation covers the definition, architecture, function, training, types, and applications of neural networks within the broader fields of artificial intelligence and deep learning.
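A minimal sketch of the architecture side: a single forward pass through one hidden layer. The weights below are random (the network is untrained), so this shows structure only — training would adjust the weights to reduce a loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Common hidden-layer activation: max(0, x)."""
    return np.maximum(0, x)

# Untrained toy network: 3 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    h = relu(x @ W1 + b1)  # hidden layer: linear transform + nonlinearity
    return h @ W2 + b2     # output layer (no activation: regression head)

y = forward(np.array([1.0, 2.0, 3.0]))
print(y.shape)  # (1,) — one output value
```

Every deep network, however large, repeats this pattern: matrix multiply, add bias, apply a nonlinearity, layer after layer.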
Transformer Models
This episode covers the various aspects of Transformer models and their significant impact on the field of Artificial Intelligence (AI), particularly in Natural Language Processing (NLP).
Pre-Training & Fine-Tuning
This episode explains how pre-training establishes a general understanding of language using vast, unlabelled datasets, while fine-tuning adapts that foundational knowledge to specific tasks with smaller, labelled datasets, making models specialised and efficient.
Information Foraging Theory
This episode covers Information Foraging Theory, which proposes that humans, like animals searching for food, seek and consume information (particularly on the web) by evaluating its information scent (perceived value) against the interaction cost (effort) to obtain it, striving to maximize their rate of gain.
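The "rate of gain" idea is quantitative enough to sketch: a forager picks the information patch with the highest value-to-cost ratio. The patches and numbers below are invented for illustration.

```python
# Hypothetical patches with made-up value/cost scores (illustrative only).
patches = {
    "search results page": {"value": 8.0, "cost": 2.0},
    "dense PDF report":    {"value": 9.0, "cost": 6.0},
    "FAQ page":            {"value": 5.0, "cost": 1.0},
}

def rate_of_gain(p):
    """Information Foraging heuristic: perceived value / interaction cost."""
    return p["value"] / p["cost"]

best = max(patches, key=lambda name: rate_of_gain(patches[name]))
print(best)
```

Note that the highest-value patch doesn't win: a cheap, decent answer beats an expensive, excellent one — which is why users often settle for "good enough" pages over authoritative-but-dense ones.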
Retrieval-Augmented Generation (RAG)
This episode explains Retrieval-Augmented Generation (RAG), a powerful technique that enhances large language models (LLMs) by integrating external knowledge bases and real-time data retrieval, so they generate more accurate, up-to-date, and contextually relevant responses. RAG effectively addresses issues like hallucinations and knowledge cut-offs, while offering a cost-efficient alternative or complement to fine-tuning.
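A minimal RAG sketch, assuming a naive keyword-overlap retriever in place of a real vector store: fetch the most relevant snippet, then prepend it to the prompt that would be sent to the LLM. The documents and wording below are invented for the example.

```python
# Toy knowledge base (illustrative snippets).
documents = [
    "The 2024 model supports a 128k-token context window.",
    "Cosine similarity measures the angle between two vectors.",
    "Pre-training uses large unlabelled corpora.",
]

def retrieve(query, docs):
    """Naive retriever: rank documents by shared lowercase words.
    Real RAG systems use embeddings + vector search instead."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, docs):
    """Augment the prompt with retrieved context before generation."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer using the context."

prompt = build_prompt("What context window does the 2024 model support?", documents)
print(prompt)
```

Because the answer is grounded in retrieved text rather than the model's frozen training data, the generation step can stay current and cite its source — the two problems (hallucination, knowledge cut-off) RAG is built to address.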
Cosine Similarity
In this episode, we cover cosine similarity, a widely used metric that quantifies the similarity between two non-zero vectors by calculating the cosine of the angle between them. It is especially effective in high-dimensional spaces where traditional distance-based metrics can struggle.
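The metric itself fits in a few lines — cos(θ) = (a · b) / (‖a‖ · ‖b‖):

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (||a|| * ||b||), for non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 0], [0, 1]))  # 0.0 — orthogonal, unrelated
print(cosine_similarity([1, 2], [2, 4]))  # ≈ 1.0 — same direction
```

Note the second pair has different magnitudes but points the same way — cosine similarity ignores length and compares direction only, which is exactly why it suits high-dimensional embeddings.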
All About Agentic AI
In this episode, the subject of discussion is Agentic AI: artificial intelligence that learns on the job, can perform multiple tasks and processes, and is set to revolutionize the way we work. Agentic AI systems are defined as those capable of acting autonomously, making their own decisions without much human supervision, which allows them to pursue complex goals.
Unlike generative AI, which is AI that says something, Agentic AI is AI that does something, employing a proactive approach rather than merely reacting to user input. This system operates with a degree of independence, using multi-step reasoning and planning to achieve a goal without being told exactly what to do at every step, which facilitates the true automation of end-to-end processes.
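The "plan the next step yourself" loop can be sketched in miniature. Here a rule table stands in for the LLM planner, and the tool calls are stubs — everything below is invented for illustration, but the shape (observe state, choose an action, act, repeat until the goal is met) is the agentic pattern described above.

```python
def plan_next_action(state):
    """Stand-in for an LLM planner: decide the next step from current state."""
    if "data" not in state:
        return "fetch_data"
    if "report" not in state:
        return "write_report"
    return "done"

def run_agent(goal):
    """Loop until the planner says the goal is met; no step is pre-scripted."""
    state = {"goal": goal}
    trace = []
    while (action := plan_next_action(state)) != "done":
        trace.append(action)
        if action == "fetch_data":
            state["data"] = "stub"      # pretend tool call
        elif action == "write_report":
            state["report"] = "draft"   # pretend tool call
    return trace

print(run_agent("summarise sales"))  # ['fetch_data', 'write_report']
```

Contrast this with generative AI: a generator answers one prompt and stops, while the loop above keeps choosing and executing actions until the end-to-end process is complete.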
Foundation Models & Generative AI
These MIT lectures cover the fundamental concepts, mechanisms, applications, and ethical considerations surrounding Foundation Models and Generative AI.