Edited, memorised or added to reading queue

on 05-Sep-2025 (Fri)


The rapid rise of agentic AI has reshaped how enterprises, developers, and entire industries think about automation and digital productivity. From software development workflows to enterprise process orchestration, AI agents increasingly help power enterprises’ core operations, especially in areas long plagued by repetitive tasks.

How Small Language Models Are Key to Scalable Agentic AI | NVIDIA Technical Blog





Most of these agents depend heavily on large language models (LLMs). LLMs are often recognized for their general reasoning, fluency, and capacity to support open-ended dialogue. But when they’re embedded inside agents, they may not always be the most efficient or economical choice. In our recent position paper, Small Language Models are the Future of Agentic AI, we outline our observations about the role small language models (SLMs) play in agentic AI, highlighting the growing opportunities for using SLMs in place of LLMs in agentic applications to decrease costs and increase operational flexibility.

Our stance isn’t that LLMs will stop being useful in the context of agents. Instead, we point to the rise of heterogeneous ecosystems where SLMs play a central operational role while LLMs are reserved for situations where their generalist capabilities are indispensable.





Flashcard 7750775606540

Tags
#tensorflow #tensorflow-certificate
Question

another_matrix = tf.constant([[10., 66.],
                              [5., 9.],
                              [13., 4.]], dtype=tf.float16)
another_matrix

Out:
<tf.Tensor: shape=(3, 2), dtype=float16, [...]=
array([[10., 66.],
       [ 5.,  9.],
       [13.,  4.]], dtype=float16)>

Answer
numpy
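The elided field in the repr is `numpy=`: a `tf.Tensor` exposes its backing NumPy array, which is what the answer points at. As a minimal sketch that runs without TensorFlow, the same array can be built directly with NumPy to confirm the shape and dtype shown in the output:

```python
import numpy as np

# Equivalent of the tf.Tensor's backing buffer shown in the flashcard's repr:
# shape (3, 2), dtype float16.
another_matrix = np.array([[10., 66.],
                           [5., 9.],
                           [13., 4.]], dtype=np.float16)

print(another_matrix.shape)  # (3, 2)
print(another_matrix.dtype)  # float16
```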









Flashcard 7750777179404

Tags
#abm #agent-based #machine-learning #model #priority
Question

Universal Framework for Agent based Models - 4 phases

(1) Initialization (2) Experience (3) [...] (4) Application

Answer
Training


Parent (intermediate) annotation

Universal Framework for Agent based Models - 4 phases: (1) Initialization (2) Experience (3) Training (4) Application
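The four phases named above can be sketched as a toy loop. This is an illustrative skeleton only; the phase contents below are invented placeholders, not taken from the cited paper:

```python
# Illustrative skeleton of the four-phase framework for agent-based models:
# Initialization -> Experience -> Training -> Application.

def initialization():
    # Phase 1: set up agents with starting state.
    return [{"id": i, "policy": 0.0} for i in range(3)]

def experience(agents):
    # Phase 2: agents interact with the environment and collect observations.
    return [{"agent": a["id"], "reward": a["id"] * 0.1} for a in agents]

def training(agents, data):
    # Phase 3: update each agent's policy from its collected experience.
    for a, d in zip(agents, data):
        a["policy"] += d["reward"]
    return agents

def application(agents):
    # Phase 4: deploy the trained agents on the actual task.
    return [a["policy"] for a in agents]

agents = initialization()
data = experience(agents)
agents = training(agents, data)
result = application(agents)
print(result)  # [0.0, 0.1, 0.2]
```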








NVIDIA already offers a suite of products, from open NVIDIA Nemotron reasoning models to the NVIDIA NeMo software suite for managing the entire AI agent lifecycle. Enterprises equipped with these tools can build heterogeneous systems of AI models, deploying fine-tuned SLMs for core workloads while using LLMs for occasional multi-step strategic tasks. This approach improves results while substantially reducing power consumption and cost.
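The heterogeneous deployment described above can be sketched as a simple router: routine, narrow tasks go to a fine-tuned SLM endpoint, and anything else falls back to a generalist LLM. This is a minimal illustration, not NVIDIA's implementation; the model names, task list, and `call_model` helper are all hypothetical:

```python
# Hypothetical SLM/LLM router: routine tool calls go to a small fine-tuned model,
# rare multi-step planning falls back to a large generalist model.

ROUTINE_TASKS = {"extract_fields", "classify_ticket", "format_report"}

def route(task_type: str) -> str:
    """Pick a model tier: fine-tuned SLM for routine work, LLM otherwise."""
    return "slm-finetuned" if task_type in ROUTINE_TASKS else "llm-generalist"

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real inference call (e.g., a request to a serving endpoint).
    return f"[{model}] response to: {prompt}"

print(call_model(route("classify_ticket"), "Categorize this support email"))
print(call_model(route("plan_migration"), "Draft a multi-step migration plan"))
```

The routing criterion here is a static task whitelist for simplicity; in practice the dispatch decision could itself be learned or based on task complexity estimates.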





Flashcard 7750842715404

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question

The learning mechanism of the recurrent neural network thus involves:

(1) the forward propagation step where the [...] loss is calculated;

Answer
cross-entropy


Parent (intermediate) annotation

The learning mechanism of the recurrent neural network thus involves: (1) the forward propagation step where the cross-entropy loss is calculated;
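As a concrete illustration of step (1): at a single time step, the cross-entropy loss is the negative log of the probability the network assigned to the true next token. A minimal sketch with toy probabilities (not from the source):

```python
import math

def cross_entropy(predicted_probs, true_index):
    # Cross-entropy for one time step: -log of the probability
    # the model assigned to the true token.
    return -math.log(predicted_probs[true_index])

# Softmax output over a toy 4-token vocabulary; the true next token is index 2.
probs = [0.1, 0.2, 0.5, 0.2]
loss = cross_entropy(probs, 2)
print(round(loss, 4))  # 0.6931, i.e. -ln(0.5)
```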
