Edited, memorised or added to reading queue

on 27-Jan-2026 (Tue)


Flashcard 7788721736972

Tags
#causal #inference
Question
Inverse probability matching (a.k.a. [...] matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach.
Answer
propensity score


Parent (intermediate) annotation

Inverse probability matching (a.k.a. propensity score matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach.

Original toplevel document

The G-methods family, as developed by James Robins and colleagues, specifically includes: inverse probability weighting (IPW), G-computation, and G-estimation of structural nested models. Inverse probability matching (propensity score matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach.
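The distinction between IPW (a G-method) and propensity score matching can be made concrete in code. The sketch below is illustrative only: the data and the pre-estimated propensity scores are hypothetical, and in practice the scores would be estimated (e.g. by logistic regression) rather than given.

```python
# Hedged sketch: contrast inverse probability weighting (a G-method) with
# propensity score matching. Rows are (treatment, outcome, propensity score);
# all values are made up for illustration.
data = [
    (1, 10.0, 0.8), (1, 8.0, 0.6),
    (0, 5.0, 0.7), (0, 4.0, 0.2),
]

def ipw_ate(rows):
    # IPW: weight each unit by the inverse probability of the treatment
    # it actually received, then contrast the weighted means.
    n = len(rows)
    treated = sum(y / p for t, y, p in rows if t == 1) / n
    control = sum(y / (1 - p) for t, y, p in rows if t == 0) / n
    return treated - control

def psm_att(rows):
    # Matching: pair each treated unit with the control unit whose
    # propensity score is closest (1-nearest-neighbour, with replacement),
    # then average the within-pair outcome differences.
    treated = [(y, p) for t, y, p in rows if t == 1]
    control = [(y, p) for t, y, p in rows if t == 0]
    diffs = []
    for y_t, p_t in treated:
        y_c, _ = min(control, key=lambda c: abs(c[1] - p_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)
```

Both estimators use the same propensity scores, but IPW reweights every unit while matching discards unmatched information, which is one reason the two can give different answers on the same data.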

Flashcard 7792381791500

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
The RNN modules in the sequence are sometimes referred to as [...] based on their position in the sequence.
Answer
timesteps


Parent (intermediate) annotation

The RNN modules in the sequence are sometimes referred to as timesteps based on their position in the sequence.
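The idea of timesteps can be shown by unrolling a recurrent cell over a sequence. This is a minimal, untrained sketch with made-up weights; each application of the same cell to one input position is one timestep.

```python
import math

def rnn_cell(x, h, w_x=0.5, w_h=0.3, b=0.0):
    # One timestep: combine the current input x with the previous
    # hidden state h through the same (shared) weights.
    return math.tanh(w_x * x + w_h * h + b)

def unroll(sequence, h0=0.0):
    # Apply the identical cell once per input position; the loop
    # index t is the timestep.
    h = h0
    states = []
    for t, x in enumerate(sequence):
        h = rnn_cell(x, h)
        states.append(h)
    return states
```

Note that the "modules" at different timesteps are not different networks: the same weights are reused at every position, which is what makes the model recurrent.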

Original toplevel document (pdf)


Flashcard 7792384150796

Question
Enterprises equipped with these tools can build heterogeneous systems of AI models, deploying fine-tuned [...]s for core workloads while using LLMs for occasional multi-step strategic tasks. This approach will improve results with substantially reduced power and costs.
Answer
SLM


Parent (intermediate) annotation

Enterprises equipped with these tools can build heterogeneous systems of AI models, deploying fine-tuned SLMs for core workloads while using LLMs for occasional multi-step strategic tasks. This approach will improve results with substantially reduced power and costs.

Original toplevel document

How Small Language Models Are Key to Scalable Agentic AI | NVIDIA Technical Blog
…heterogeneous ecosystems where SLMs play a central operational role while LLMs are reserved for situations where their generalist capabilities are indispensable. This future path isn't speculative: NVIDIA already offers a suite of products, from open NVIDIA Nemotron reasoning models to the NVIDIA NeMo software suite for managing the entire AI agent lifecycle. Enterprises equipped with these tools can build heterogeneous systems of AI models, deploying fine-tuned SLMs for core workloads while using LLMs for occasional multi-step strategic tasks. This approach will improve results with substantially reduced power and costs. Why are SLMs beneficial to agentic AI tasks? SLMs are well-positioned for the agentic era because they use a narrow slice of LLM functionality for any single language model errand. …
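The routing pattern the excerpt describes can be sketched as a simple dispatcher. Everything here is hypothetical: the task fields and the model names do not correspond to any NVIDIA API; the point is only that a heterogeneous system sends narrow core workloads to a small model and escalates multi-step strategic work to a large one.

```python
# Hedged sketch of heterogeneous SLM/LLM routing. The routing heuristic
# and task schema are assumptions for illustration, not a real API.
def route(task):
    # Occasional multi-step or strategic tasks go to a general-purpose LLM;
    # everything else is handled by a fine-tuned small language model.
    if task.get("multi_step") or task.get("strategic"):
        return "llm"
    return "slm"
```

In a real deployment the router itself might be a classifier, but even this fixed rule captures the cost argument: the cheap model handles the high-volume path.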