Edited, memorised or added to reading queue

on 03-Dec-2025 (Wed)


Flashcard 7772782333196

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
A response model relying exclusively on seniority, recency, and frequency would [...] to distinguish between customers who have similar features but different behavioral sequences
Answer
not be able

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0

Parent (intermediate) annotation

A response model relying exclusively on seniority, recency, and frequency would not be able to distinguish between customers who have similar features but different behavioral sequences

Original toplevel document (pdf)

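To make the card's point concrete, here is a minimal sketch with invented data (the customer values, event encoding, and layer sizes are illustrative assumptions, not from the source): two customers share identical seniority, recency, and frequency, so a model built only on those three numbers must score them identically, whereas a sequence model such as an LSTM reads the ordered purchase history and can tell them apart.

import numpy as np
import tensorflow as tf

# Two hypothetical customers: identical seniority (months), recency (days
# since last purchase) and frequency (purchases per year)...
rfm_a = rfm_b = np.array([24.0, 7.0, 12.0])

# ...but opposite orderings of the same monthly purchase amounts.
seq_a = np.array([5, 5, 5, 20, 40, 80], dtype=np.float32)   # spend ramping up
seq_b = np.array([80, 40, 20, 5, 5, 5], dtype=np.float32)   # spend tailing off

# An RFM-only model receives identical inputs, so it must produce identical
# scores. A sequence model consumes the ordered history and can differ.
seq_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6, 1)),             # 6 time steps, 1 feature
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1, activation="sigmoid")   # response probability
])

scores = seq_model.predict(np.stack([seq_a, seq_b])[..., np.newaxis], verbose=0)
print(scores)   # two (generally different) scores for the two customers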







Flashcard 7774132112652

Tags
#tensorflow #tensorflow-certificate
Question

# Get the patterns of a layer in our network

[...], biases = model_35.layers[1].get_weights()

Answer
weights

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0

Parent (intermediate) annotation

# Get the patterns of a layer in our network
weights, biases = model_35.layers[1].get_weights()

Original toplevel document

TfC_02_classification-PART_2
…tant: This time there is a problem with the loss function. In case of categorical_crossentropy the labels have to be one-hot encoded; in case of labels as integers use SparseCategoricalCrossentropy. # Get the patterns of a layer in our network weights, biases = model_35.layers[1].get_weights() …
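For context, get_weights() on a Keras layer returns the layer's parameter arrays as NumPy arrays; for a Dense layer that is the kernel (the "weights") and the bias vector. The course's model_35 isn't reproduced here, so the sketch below builds an arbitrary small stand-in network (layer sizes are assumptions) just to show what the call returns.

import tensorflow as tf

# Stand-in for model_35: a small dense classifier (sizes chosen arbitrarily).
model_35 = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model_35.build(input_shape=(None, 2))   # create the weight tensors

# Get the patterns of a layer in our network
weights, biases = model_35.layers[1].get_weights()

print(weights.shape)   # (4, 4): one weight per input unit per output unit
print(biases.shape)    # (4,): one bias per unit in layers[1]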







#causality #statistics
If the treatment specification is simply “get a dog” or “don’t get a dog,” this can be too coarse to yield consistency. It might be that if I were to get a puppy, I would observe 𝑌 = 1 (happiness) because I needed an energetic friend, but if I were to get an old, low-energy dog, I would observe 𝑌 = 0 (unhappiness). However, both of these treatments fall under the category of “get a dog,” so both correspond to 𝑇 = 1. This means that 𝑌(1) is not well defined, since it will be 1 or 0, depending on something that is not captured by the treatment specification.
status: not read


Parent (intermediate) annotation


Original toplevel document (pdf)

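A tiny made-up illustration of the annotation above (the outcome values are invented to match the dog example): under a fine-grained treatment specification every potential outcome is well defined, but once both dog variants are collapsed into 𝑇 = 1 the variants disagree about the outcome, so 𝑌(1) no longer has a single value.

# Potential outcomes under a fine-grained treatment specification.
potential_outcome = {
    "no dog":  0,   # Y(no dog)  = 0
    "puppy":   1,   # Y(puppy)   = 1  (happy: needed an energetic friend)
    "old dog": 0,   # Y(old dog) = 0  (unhappy: low-energy dog)
}

# The coarse specification maps both dog variants to T = 1.
coarse_treatment = {"no dog": 0, "puppy": 1, "old dog": 1}

# Collect the outcomes of every variant that the coarse coding calls T = 1.
outcomes_under_t1 = {v: y for v, y in potential_outcome.items()
                     if coarse_treatment[v] == 1}
print(outcomes_under_t1)                          # {'puppy': 1, 'old dog': 0}
print(len(set(outcomes_under_t1.values())) == 1)  # False: Y(1) is ill-defined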




Flashcard 7774135258380

Tags
#abm #agent-based #machine-learning #model #priority #synergistic-integration
Question
The ABM became popular two decades ago. Axelrod [5] contended that ABM is a [...] of carrying out science in addition to classical deductive and inductive reasoning.
Answer
third way

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0

Parent (intermediate) annotation

The ABM became popular two decades ago. Axelrod [5] contended that ABM is a third way of carrying out science in addition to classical deductive and inductive reasoning.

Original toplevel document (pdf)








#abm #agent-based #machine-learning #model #priority #synergistic-integration
Axelrod [5] contended that ABM is a third way of carrying out science in addition to classical deductive and inductive reasoning.
status: not read


Parent (intermediate) annotation

The ABM became popular two decades ago. Axelrod [5] contended that ABM is a third way of carrying out science in addition to classical deductive and inductive reasoning.

Original toplevel document (pdf)





Flashcard 7774138141964

Tags
#abm #agent-based #machine-learning #model #priority #synergistic-integration
Question
Axelrod [5] contended that [...] is a third way of carrying out science in addition to classical deductive and inductive reasoning.
Answer
ABM

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0

Parent (intermediate) annotation

Axelrod [5] contended that ABM is a third way of carrying out science in addition to classical deductive and inductive reasoning.

Original toplevel document (pdf)








Enterprises equipped with these tools can build heterogeneous systems of AI models deploying fine-tuned SLMs for core workloads while using LLMs for occasional multi-step strategic tasks. This approach will improve results with substantially reduced power and costs.
status: not read


Parent (intermediate) annotation

NVIDIA already offers a suite of products, from open NVIDIA Nemotron reasoning models to the NVIDIA NeMo software suite for managing the entire AI agent lifecycle. Enterprises equipped with these tools can build heterogeneous systems of AI models deploying fine-tuned SLMs for core workloads while using LLMs for occasional multi-step strategic tasks. This approach will improve results with substantially reduced power and costs.

Original toplevel document

How Small Language Models Are Key to Scalable Agentic AI | NVIDIA Technical Blog
…erogeneous ecosystems where SLMs play a central operational role while LLMs are reserved for situations where their generalist capabilities are indispensable. This future path isn’t speculative—NVIDIA already offers a suite of products, from open NVIDIA Nemotron reasoning models to the NVIDIA NeMo software suite for managing the entire AI agent lifecycle. Enterprises equipped with these tools can build heterogeneous systems of AI models deploying fine-tuned SLMs for core workloads while using LLMs for occasional multi-step strategic tasks. This approach will improve results with substantially reduced power and costs. Why are SLMs beneficial to agentic AI tasks? SLMs are well-positioned for the agentic era because they use a narrow slice of LLM functionality for any single language model errand. LLMs…
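As a purely hypothetical sketch of the heterogeneous pattern described above (the routing rule, task fields, and the call_slm / call_llm placeholders are invented for illustration and are not any specific NVIDIA API): routine, well-scoped requests go to a fine-tuned SLM, while occasional multi-step or open-ended strategic tasks are escalated to an LLM.

from typing import Callable

# Placeholder model backends -- stand-ins, not real SLM/LLM client calls.
def call_slm(prompt: str) -> str:
    return "[fine-tuned SLM] " + prompt

def call_llm(prompt: str) -> str:
    return "[general-purpose LLM] " + prompt

def route(task: dict,
          slm: Callable[[str], str] = call_slm,
          llm: Callable[[str], str] = call_llm) -> str:
    """Send routine, narrow tasks to the SLM; escalate multi-step or
    open-ended strategic work to the LLM (an invented routing rule)."""
    if task.get("multi_step") or task.get("open_ended"):
        return llm(task["prompt"])
    return slm(task["prompt"])

# Core workload: repetitive, well-scoped extraction -> SLM.
print(route({"prompt": "Extract the invoice total.", "multi_step": False}))
# Occasional strategic task: open-ended planning -> LLM.
print(route({"prompt": "Draft a migration plan.", "open_ended": True}))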




AI agents are increasingly helping to power enterprises’ core operations, especially in areas that have previously been deemed plagued by repetitive tasks.

Most of these agents depend heavily on large language models (LLMs). LLMs are often recognized for their general reasoning, fluency, and capacity to support open-ended dialogue. But when they’re embedded inside agents, they may not always be the most efficient or economical choice. In our recent position paper, we outline our observations about the role small language models (SLMs) play in agentic AI.

status: not read


Parent (intermediate) annotation

AI agents are increasingly helping to power enterprises’ core operations, especially in areas that have previously been deemed plagued by repetitive tasks. Most of these agents depend heavily on large language models (LLMs). LLMs are often recognized for their general reasoning, fluency, and capacity to support open-ended dialogue. But when they’re embedded inside agents, they may not always be the most efficient or economical choice. In our recent position paper, we outline our observations about the role small language models (SLMs) play in agentic AI. Titled Small Language Models are the Future of Agentic AI, we highlight the growing opportunities for integrating SLMs in place of LLMs in agentic applications, decreasing costs, and in

Original toplevel document

How Small Language Models Are Key to Scalable Agentic AI | NVIDIA Technical Blog
…ntic AI has reshaped how enterprises, developers, and entire industries think about automation and digital productivity. From software development workflows to enterprise process orchestration, AI agents are increasingly helping to power enterprises’ core operations, especially in areas that have previously been deemed plagued by repetitive tasks. Most of these agents depend heavily on large language models (LLMs). LLMs are often recognized for their general reasoning, fluency, and capacity to support open-ended dialogue. But when they’re embedded inside agents, they may not always be the most efficient or economical choice. In our recent position paper, we outline our observations about the role small language models (SLMs) play in agentic AI. Titled Small Language Models are the Future of Agentic AI, we highlight the growing opportunities for integrating SLMs in place of LLMs in agentic applications, decreasing costs, and increasing operational flexibility. Our stance isn’t that LLMs will stop being useful in the context of agents. Instead, we point to the rise of heterogeneous ecosystems where SLMs play a central operational role while LLMs are reserved for situations where their generalist capabilities are indispensable. This future path isn’t speculative—NVIDIA already offers a suite of products, from open NVIDIA Nemotron reasoning models to the NVIDIA NeMo software suite for managing the entire AI age…