Edited, memorised or added to reading queue on 28-Jan-2026 (Wed)


#deep-learning #keras #lstm #python #sequence
Standardization assumes that your observations fit a Gaussian distribution (bell curve) with a well-behaved mean and standard deviation. You can still standardize your time series data if this expectation is not met, but you may not get reliable results.
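A minimal sketch of what standardization looks like in practice, assuming scikit-learn and NumPy are available; the series values are invented for illustration. The scaler learns the mean and standard deviation from the data and can invert the transform, which is useful when a model's predictions need to be mapped back to the original scale.

import numpy as np
from sklearn.preprocessing import StandardScaler

# Invented toy series; in practice this would be your observed time series.
series = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0]).reshape(-1, 1)

scaler = StandardScaler()
standardized = scaler.fit_transform(series)        # z = (x - mean) / std
restored = scaler.inverse_transform(standardized)  # back to the original scale

print(scaler.mean_, scaler.scale_)  # the fitted mean and standard deviation

Before standardizing, it is worth eyeballing a histogram of the series to judge how closely it matches the Gaussian assumption.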


Parent (intermediate) annotation

…mean value or centering the data. Like normalization, standardization can be useful, and even required in some machine learning algorithms, when your data has input values with differing scales. Standardization assumes that your observations fit a Gaussian distribution (bell curve) with a well-behaved mean and standard deviation. You can still standardize your time series data if this expectation is not met, but you may not get reliable results.
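Since the annotation contrasts normalization with standardization, here is a hedged side-by-side sketch, again assuming scikit-learn is available; the feature values are invented. Normalization rescales values into a fixed range from the observed minimum and maximum, while standardization centers them on the mean with unit variance.

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Invented values for a single input feature with its own scale.
values = np.array([[20.0], [21.5], [23.0], [19.0], [25.0]])

normalized = MinMaxScaler(feature_range=(0, 1)).fit_transform(values)  # rescaled to [0, 1]
standardized = StandardScaler().fit_transform(values)                  # zero mean, unit variance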

Original toplevel document (pdf)





Our stance isn't that LLMs will stop being useful in the context of agents. Instead, we point to the rise of heterogeneous ecosystems where SLMs play a central operational role while LLMs are reserved for situations where their generalist capabilities are indispensable.
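One way to picture such a heterogeneous ecosystem is a simple router that sends narrow, repetitive subtasks to an SLM and escalates only open-ended work to an LLM. The sketch below is purely illustrative: the function names, the subtask labels, and the routing heuristic are hypothetical and are not taken from the paper or from any NVIDIA API.

def call_slm(prompt: str) -> str:
    # Placeholder: in practice this would invoke a small, specialised model,
    # e.g. one fine-tuned for the subtask and served locally.
    return f"[SLM handled] {prompt}"

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would invoke a large generalist model.
    return f"[LLM handled] {prompt}"

# Hypothetical set of routine subtasks the SLM is trusted to handle.
ROUTINE_SUBTASKS = {"extract_fields", "classify_ticket", "format_report"}

def run_agent_step(subtask: str, prompt: str) -> str:
    # Default to the SLM; reserve the LLM for work that needs generalist reasoning.
    if subtask in ROUTINE_SUBTASKS:
        return call_slm(prompt)
    return call_llm(prompt)

print(run_agent_step("classify_ticket", "Customer cannot log in"))
print(run_agent_step("plan_migration", "Draft a rollout plan for the new billing system"))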


Parent (intermediate) annotation

…titled Small Language Models are the Future of Agentic AI, we highlight the growing opportunities for integrating SLMs in place of LLMs in agentic applications, decreasing costs, and increasing operational flexibility. Our stance isn't that LLMs will stop being useful in the context of agents. Instead, we point to the rise of heterogeneous ecosystems where SLMs play a central operational role while LLMs are reserved for situations where their generalist capabilities are indispensable.

Original toplevel document

How Small Language Models Are Key to Scalable Agentic AI | NVIDIA Technical Blog
…agentic AI has reshaped how enterprises, developers, and entire industries think about automation and digital productivity. From software development workflows to enterprise process orchestration, AI agents are increasingly helping to power enterprises' core operations, especially in areas that have previously been plagued by repetitive tasks. Most of these agents depend heavily on large language models (LLMs). LLMs are often recognized for their general reasoning, fluency, and capacity to support open-ended dialogue. But when they're embedded inside agents, they may not always be the most efficient or economical choice.

In our recent position paper, titled Small Language Models are the Future of Agentic AI, we outline our observations about the role small language models (SLMs) play in agentic AI and highlight the growing opportunities for integrating SLMs in place of LLMs in agentic applications, decreasing costs, and increasing operational flexibility. Our stance isn't that LLMs will stop being useful in the context of agents. Instead, we point to the rise of heterogeneous ecosystems where SLMs play a central operational role while LLMs are reserved for situations where their generalist capabilities are indispensable. This future path isn't speculative: NVIDIA already offers a suite of products, from open NVIDIA Nemotron reasoning models to the NVIDIA NeMo software suite for managing the entire AI age…