Edited, memorised or added to reading queue

on 04-Sep-2019 (Wed)


Flashcard 150890417

Tags
#m249 #mathematics #open-university #statistics #time-series
Question
Transformations of time series that are commonly used include the power transformations:
Y_t = X_t^a, where a = [...].
Answer
... 1/4, 1/3, 1/2, 2, 3, 4, ...
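A minimal sketch of applying these transformations (assumes NumPy; the series values are made up for illustration):

    import numpy as np

    # Hypothetical positive-valued time series; power transformations Y_t = X_t^a
    # are normally applied to positive data.
    x = np.array([1.0, 4.0, 9.0, 16.0, 25.0])

    # Apply Y_t = X_t**a for each commonly used exponent a.
    for a in (1/4, 1/3, 1/2, 2, 3, 4):
        print(f"a = {a:.2f}:", np.round(x ** a, 3))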


Parent (intermediate) annotation

Transformations of time series that are commonly used include the power transformations: Y_t = X_t^a, where a = ..., 1/4, 1/3, 1/2, 2, 3, 4, ...

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1332009897228

Tags
#quantitative-methods-basic-concepts #statistics
Question
A [...] is defined as a numerical quantity (such as the mean) calculated in a sample.
Answer
statistic


Parent (intermediate) annotation

A statistic is defined as a numerical quantity (such as the mean) calculated in a sample.

Original toplevel document

Subject 1. The Nature of Statistics
Estimates of these parameters taken from a sample are called statistics. Much of the field of statistics is devoted to drawing inferences from a sample concerning the value of a population parameter. A statistic is defined as a numerical quantity (such as the mean) calculated in a sample. It has two different meanings. Most commonly, statistics refers to numerical data such as a company's earnings per share or average returns over the past five years. Statistics can also refer to the process of collecting, organizing, presenting, analyzing, and interpreting numerical data for the purpose of making decisions. Note that we will always know the exact composition of our sample, and by definition, we will always know the values within our sample. Ascertaining this information is the purpose of samples. Sample statistics will always be known, and can be used to estimate unknown population parameters. Hint: One way to easily remember these terms is to recall that "population" and "parameter" both start with a "p," and "sample" and "statisti
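A minimal sketch of the parameter/statistic distinction in Python (standard library only; the population is simulated, so the numbers are purely illustrative):

    import random
    import statistics

    random.seed(0)

    # The population mean is a parameter; in practice it is usually unknown.
    population = [random.gauss(100, 15) for _ in range(10_000)]
    parameter = statistics.mean(population)

    # The mean of a drawn sample is a statistic; it is always computable
    # from the sample and can be used to estimate the parameter.
    sample = random.sample(population, 50)
    statistic = statistics.mean(sample)

    print(f"population mean (parameter): {parameter:.2f}")
    print(f"sample mean (statistic):     {statistic:.2f}")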







Flashcard 1332022742284

Tags
#quantitative-methods-basic-concepts #statistics
Question
A [...] is a numerical quantity measuring some aspect of a population of scores.
Answer
parameter


Parent (intermediate) annotation

A parameter is a numerical quantity measuring some aspect of a population of scores.

Original toplevel document

Subject 1. The Nature of Statistics
ve values associated with them, such as the average of all values in a sample and the average of all population values. Values from a population are called parameters, and values from a sample are called statistics. A parameter is a numerical quantity measuring some aspect of a population of scores. The mean, for example, is a measure of central tendency. Greek letters are used to designate parameters. Parameters are rarely known and are usually estimated by statistics computed in samples. Populations can have many parameters, but investment analysts are usually only concerned with a few, such as the mean return or the standard deviation of returns. Estimates of these parameters taken from a sample are called statistics. Much of the field of statistics is devoted to drawing inferences from a sample concerning the value of a population parameter. A statistic is defined as a numerical quantity (such as the mean) calculated in a sample. It has two different meanings. Most commonly, statistics refers to







Flashcard 4344775576844

Question
When you pass a tuple as the first argument in an [...] statement, the [...] always evaluates as true and therefore never fails.
Answer
assert, assertion


Parent (intermediate) annotation

When you pass a tuple as the first argument in an assert statement, the assertion always evaluates as true and therefore never fails.

Original toplevel document

Assert Statements in Python – dbader.org
valuate to true. I’ve been bitten by this myself in the past. I wrote a longer article about this specific issue you can check out by clicking here. Alternatively, here’s the executive summary: When you pass a tuple as the first argument in an assert statement, the assertion always evaluates as true and therefore never fails. For example, this assertion will never fail: assert(1 == 2, 'This should fail') This has to do with non-empty tuples always being truthy in Python. If you pass a tuple to an assert stat
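The pitfall from the excerpt is easy to reproduce (the first assert below is the buggy form shown in the source; recent CPython versions also emit a SyntaxWarning for it):

    # The parentheses create the tuple (False, 'This should fail'); any
    # non-empty tuple is truthy, so this assertion can never fail.
    assert (1 == 2, 'This should fail')  # silently passes

    # Intended form: the condition and the message are separate assert operands.
    try:
        assert 1 == 2, 'This should fail'
    except AssertionError as exc:
        print('AssertionError:', exc)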







Flashcard 4344779509004

Question
When you pass a tuple as the first argument in an [...] statement, the [...] always evaluates as true and therefore never fails.
Answer
assert, assertion


Parent (intermediate) annotation

When you pass a tuple as the first argument in an assert statement, the assertion always evaluates as true and therefore never fails.

Original toplevel document

Assert Statements in Python – dbader.org
valuate to true. I’ve been bitten by this myself in the past. I wrote a longer article about this specific issue you can check out by clicking here. Alternatively, here’s the executive summary: When you pass a tuple as the first argument in an assert statement, the assertion always evaluates as true and therefore never fails. For example, this assertion will never fail: assert(1 == 2, 'This should fail') This has to do with non-empty tuples always being truthy in Python. If you pass a tuple to an assert stat







Article 4364208835852

Machine Learning
#has-images #learning #machine #statistics

Machine learning
From Wikipedia, the free encyclopedia

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, rely



#learning #machine #statistics

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task.

Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.
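A minimal sketch of "building a model from training data to make predictions" (assumes scikit-learn is installed; the training data are made up):

    from sklearn.linear_model import LinearRegression

    # Training data: the model is fit to examples rather than given explicit rules.
    X_train = [[1], [2], [3], [4]]
    y_train = [2.1, 3.9, 6.2, 8.1]

    model = LinearRegression().fit(X_train, y_train)
    print(model.predict([[5]]))  # prediction for an unseen input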


Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.




Flashcard 4369499950348

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369504931084

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369509911820

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369515416844

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369520397580

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369525378316

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369530359052

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369535339788

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369540320524

Tags
#bones #bony #has-images #landmarks #master #pelvis
[image-based flashcard; images not shown]






Flashcard 4369546611980

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369551592716

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369561029900

Tags
#bones #bony #has-images #landmarks #pelvis
[image-based flashcard; images not shown]






Flashcard 4369569418508

Tags
#bones #bony #has-images #landmarks #master #pelvis
[image-based flashcard; images not shown]






#learning #machine #statistics
Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead


Parent (intermediate) annotation

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.




#learning #machine #statistics
Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.
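A minimal sketch of exploratory analysis through unsupervised learning (assumes scikit-learn; the points are made up): k-means groups the data without being given any labels.

    from sklearn.cluster import KMeans

    # Unlabelled 2-D points; no target values are supplied.
    X = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]]

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # e.g. [0 0 1 1]: two clusters recovered from the data alone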


Parent (intermediate) annotation

ional statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.




Augmenting Long-term Memory
#SRS #anki #has-images #incremental #memory #reading

Augmenting Long-term Memory

Michael Nielsen | Y Combinator Research | July 2018

Related Resources
Michael Nielsen on Twitter
Michael Nielsen's project announcement mailing list
cognitivemedium.com



By Michael Nielsen

One day in the mid-1920s, a Moscow newspaper reporter named Solomon Shereshevsky entered the laboratory of the psychologist Alexander Luria. Shereshevsky's boss at the newspaper had noticed that Shereshevsky never needed to take any notes, but somehow still remembered all he was told, and had suggested he get his memory checked by an expert.

Luria began testing Shereshevsky's memory. He began with simple tests, short strings of words and of numbers. Shereshevsky remembered these with ease, and so Luria gradually increased the length of the strings. But no matter how long they got, Shereshevsky could recite them back. Fascinated, Luria went on to study Shereshevsky's memory for the next 30 years. In a book summing up his research** Alexander Luria, “The Mind of a Mnemonist”, Harvard University Press (1968)., Luria reported that:

[I]t appeared that there was no limit either to the capacity of S.'s memory or to the durability of the traces he retained. Experiments indicated that he had no difficulty reproducing any lengthy series of words whatever, even though these had originally been presented to him a week, a month, a year, or even many years earlier. In fact, some of these experiments designed to test his retention were performed (without his being given any warning) fifteen or sixteen years after the session in which he had originally recalled the words. Yet invariably they were successful.

Such stories are fascinating. Memory is fundamental to our thinking, and the notion of having a perfect memory is seductive. At the same time, many people feel ambivalent about their own memory. I've often heard people say “I don't have a very good memory”, sometimes sheepishly, sometimes apologetically, sometimes even defiantly.

Given how central memory is to our thinking, it's natural to ask whether computers can be used as tools to help improve our memory. This question turns out to be highly generative of good ideas, and pursuing it has led to many of the most important vision documents in the history of computing. One early example was Vannevar Bush's 1945 proposal** Vannevar Bush, As We May Think, The Atlantic (1945). for a mechanical memory extender, the memex. Bush wrote:

A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

The memex vision inspired many later computer pioneers, including Douglas Engelbart's ideas about the augmentation of human intelligence, Ted Nelson's ideas about hypertext, and, indirectly, Tim Berners-Lee's conception of the world wide web** See, for example: Douglas Engelbart,

...




#SRS #anki #incremental #memory #reading

One day in the mid-1920s, a Moscow newspaper reporter named Solomon Shereshevsky entered the laboratory of the psychologist Alexander Luria. Shereshevsky's boss at the newspaper had noticed that Shereshevsky never needed to take any notes, but somehow still remembered all he was told, and had suggested he get his memory checked by an expert.

Fascinated, Luria went on to study Shereshevsky's memory for the next 30 years. In a book summing up his research** Alexander Luria, “The Mind of a Mnemonist”, Harvard University Press (1968)., Luria reported that:

[I]t appeared that there was no limit either to the capacity of S.'s memory or to the durability of the traces he retained. ... .

Augmenting Long-term Memory
One day in the mid-1920s, a Moscow newspaper reporter named Solomon Shereshevsky entered the laboratory of the psychologist Alexander Luria. Shereshevsky's boss at the newspaper had noticed that Shereshevsky never needed to take any notes, but somehow still remembered all he was told, and had suggested he get his memory checked by an expert. Luria began testing Shereshevsky's memory. He began with simple tests, short strings of words and of numbers. Shereshevsky remembered these with ease, and so Luria gradually increased the length of the strings. But no matter how long they got, Shereshevsky could recite them back. Fascinated, Luria went on to study Shereshevsky's memory for the next 30 years. In a book summing up his research** Alexander Luria, “The Mind of a Mnemonist”, Harvard University Press (1968)., Luria reported that: [I]t appeared that there was no limit either to the capacity of S.'s memory or to the durability of the traces he retained. Experiments indicated that he had no difficulty reproducing any lengthy series of words whatever, even though these had originally been presented to him a week, a month, a year, or even many years earlier. In fact, some of these experiments designed to test his retention were performed (without his being given any warning) fifteen or sixteen years after the session in which he had originally recalled the words. Yet invariably they were successful. Such stories are fascinating. Memory is fundamental to our thinking, and the notion of having a perfect memory is seductive. At the same time, many people feel ambivalent about their ow




#SRS #anki #incremental #memory #reading
To conclude this introduction, a few words on what the essay won't cover. I will only briefly discuss visualization techniques such as memory palaces and the method of loci. And the essay won't describe the use of pharmaceuticals to improve memory, nor possible future brain-computer interfaces to augment memory. Those all need a separate treatment.

Augmenting Long-term Memory
aimed at helping develop virtuoso skills with personal memory systems. But since writing such a guide wasn't my primary purpose, it may come across as a more-than-you-ever-wanted-to-know guide. To conclude this introduction, a few words on what the essay won't cover. I will only briefly discuss visualization techniques such as memory palaces and the method of loci. And the essay won't describe the use of pharmaceuticals to improve memory, nor possible future brain-computer interfaces to augment memory. Those all need a separate treatment. But, as we shall see, there are already powerful ideas about personal memory systems based solely on the structuring and presentation of information. Part I: How to remember almost anyt




#SRS #anki #incremental #memory #reading
SuperMemo runs only on Windows, and I haven't had an opportunity to use it, though I have been influenced by essays on the SuperMemo website.

Augmenting Long-term Memory
with the personal memory system Anki ** I've no affiliation at all with Anki. Other similar systems include Mnemosyne and SuperMemo. My limited use suggests Mnemosyne is very similar to Anki. SuperMemo runs only on Windows, and I haven't had an opportunity to use it, though I have been influenced by essays on the SuperMemo website. I won't try to hide my enthusiasm for Anki behind a respectable facade of impartiality: it's a significant part of my life. Still, it has many limitations, and I'll mention some of them




Flashcard 4369613458700

Tags
#learning #machine #statistics
Question
[...] learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead
Answer
Machine


Parent (intermediate) annotation

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369614507276

Tags
#learning #machine #statistics
Question
Machine [...] is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead
Answer
learning


Parent (intermediate) annotation

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and infer

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369618963724

Tags
#learning #machine #statistics
Question
Machine learning is the [...] study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead
Answer
scientific

Parent (intermediate) annotation

Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369620274444

Tags
#learning #machine #statistics
Question
Machine learning is the scientific study of [...] and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead
Answer
algorithms


Parent (intermediate) annotation

Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369621847308

Tags
#learning #machine #statistics
Question
Machine learning is the scientific study of algorithms and [...] models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead
Answer
statistical


Parent (intermediate) annotation

Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369623420172

Tags
#learning #machine #statistics
Question
Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit [...], relying on patterns and inference instead
Answer
instructions


Parent (intermediate) annotation

Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







#SRS #anki #incremental #memory #reading
That's a factor of more than 20 in savings over the more than 2 hours required with conventional flashcards.
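The arithmetic behind the claim, using the essay's own figures (taking "4 to 7 minutes" as roughly 6):

    # "more than 2 hours" with conventional flashcards vs ~6 minutes with Anki
    conventional_minutes = 2 * 60
    anki_minutes = 6
    print(conventional_minutes / anki_minutes)  # -> 20.0, a factor of about 20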

Augmenting Long-term Memory
estimate that for an average card, I'll only need 4 to 7 minutes of total review time over the entire 20 years. Those estimates allow for occasional failed reviews, resetting the time interval. That's a factor of more than 20 in savings over the more than 2 hours required with conventional flashcards. I therefore have two rules of thumb. First, if memorizing a fact seems worth 10 minutes of my time in the future, then I do it** I first saw an analysis along these lines in Gwern Branw




#SRS #anki #incremental #memory #reading
I therefore have two rules of thumb. First, if memorizing a fact seems worth 10 minutes of my time in the future, then I do it** I first saw an analysis along these lines in Gwern Branwen's review of spaced repetition: Gwern Branwen, Spaced-Repetition. His numbers are slightly more optimistic than mine – he arrives at a 5-minute rule of thumb, rather than 10 minutes – but broadly consistent. Branwen's analysis is based, in turn, on an analysis in: Piotr Wozniak, Theoretical aspects of spaced repetition in learning.. Second, and superseding the first, if a fact seems striking then into Anki it goes, regardless of whether it seems worth 10 minutes of my future time or not. The reason for the exception is that many of the most important things we know are things we're not sure are going to be important, but which our intuitions tell us matter. This doesn't mean we should memorize everything. But it's worth cultivating taste in what to memorize.

Augmenting Long-term Memory
s. Those estimates allow for occasional failed reviews, resetting the time interval. That's a factor of more than 20 in savings over the more than 2 hours required with conventional flashcards. I therefore have two rules of thumb. First, if memorizing a fact seems worth 10 minutes of my time in the future, then I do it** I first saw an analysis along these lines in Gwern Branwen's review of spaced repetition: Gwern Branwen, Spaced-Repetition. His numbers are slightly more optimistic than mine – he arrives at a 5-minute rule of thumb, rather than 10 minutes – but broadly consistent. Branwen's analysis is based, in turn, on an analysis in: Piotr Wozniak, Theoretical aspects of spaced repetition in learning.. Second, and superseding the first, if a fact seems striking then into Anki it goes, regardless of whether it seems worth 10 minutes of my future time or not. The reason for the exception is that many of the most important things we know are things we're not sure are going to be important, but which our intuitions tell us matter. This doesn't mean we should memorize everything. But it's worth cultivating taste in what to memorize. The single biggest change that Anki brings about is that it means memory is no longer a haphazard event, to be left to chance. Rather, it guarantees I will remember something, with mini




#SRS #anki #incremental #memory #reading
This is important: I find Anki works much better when used in service to some personal creative project.

Augmenting Long-term Memory
is a good basis on which to build deeper expertise. It's notable that I was reading the AlphaGo paper in support of a creative project of my own, namely, writing an article for Quanta Magazine. This is important: I find Anki works much better when used in service to some personal creative project. It's tempting instead to use Anki to stockpile knowledge against some future day, to think “Oh, I should learn about the geography of Africa, or learn about World War II, or […]”. These




#SRS #anki #incremental #memory #reading
It's tempting instead to use Anki to stockpile knowledge against some future day, to think “Oh, I should learn about the geography of Africa, or learn about World War II, or […]”. These are goals which, for me, are intellectually appealing, but which I'm not emotionally invested in. I've tried this a bunch of times. It tends to generate cold and lifeless Anki questions, questions which I find hard to connect to upon later review, and where it's difficult to really, deeply internalize the answers. The problem is somehow in that initial idea I “should” learn about these things: intellectually, it seems like a good idea, but I've little emotional commitment.

Augmenting Long-term Memory
support of a creative project of my own, namely, writing an article for Quanta Magazine. This is important: I find Anki works much better when used in service to some personal creative project. It's tempting instead to use Anki to stockpile knowledge against some future day, to think “Oh, I should learn about the geography of Africa, or learn about World War II, or […]”. These are goals which, for me, are intellectually appealing, but which I'm not emotionally invested in. I've tried this a bunch of times. It tends to generate cold and lifeless Anki questions, questions which I find hard to connect to upon later review, and where it's difficult to really, deeply internalize the answers. The problem is somehow in that initial idea I “should” learn about these things: intellectually, it seems like a good idea, but I've little emotional commitment. Study hard what interests you the most in the most undisciplined, irreverent and original manner possible. – Richard Feynman By contrast, when I'm reading in support of some creative pr




#SRS #anki #incremental #memory #reading
It is tempting to use Anki cards to study for some hypothetical future use, but it's better to use Anki as part of a real-world, active creative project.

Augmenting Long-term Memory
ort of some creative project, I ask much better Anki questions. I find it easier to connect to the questions and answers emotionally. I simply care more about them, and that makes a difference. So while it's tempting to use Anki cards to study in preparation for some (possibly hypothetical) future use, it's better to find a way to use Anki as part of some creative project. Using Anki to do shallow reads of papers Most of my Anki-based reading is much shallower than my read of the AlphaGo paper. Rather than spending days on a paper, I'll typically spend 10




That is why places with good accessibility conditions are more attractive for investment than those where interaction costs are high, both for firms and for the population.

pdf

cannot see any pdfs




Flashcard 4369709141260

Tags
#learning #machine #statistics
Question
Data mining is a field of study [...] machine learning, and focuses on exploratory data analysis through unsupervised learning.
Answer
within


Parent (intermediate) annotation

Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369710714124

Tags
#learning #machine #statistics
Question
Data mining is a field of study within [...] learning, and focuses on exploratory data analysis through unsupervised learning.
Answer
machine


Parent (intermediate) annotation

Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369712286988

Tags
#learning #machine #statistics
Question
Data mining is a field of study within machine [...], and focuses on exploratory data analysis through unsupervised learning.
Answer
learning


Parent (intermediate) annotation

Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369713859852

Tags
#learning #machine #statistics
Question
Data mining is a field of study within machine learning, and focuses on [...] data analysis through unsupervised learning.
Answer
exploratory


Parent (intermediate) annotation

Open it
Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning .

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2] Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369715432716

Tags
#learning #machine #statistics
Question
Data mining is a field of study within machine learning, and focuses on exploratory data analysis through [supervised or unsupervised] learning.
Answer
unsupervised

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning .

Original toplevel document

Machine Learning
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2] Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.







Flashcard 4369718840588

Tags
#SRS #anki #incremental #memory #reading
Question
SuperMemo runs only on [...],
Answer
Windows

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
SuperMemo runs only on Windows, and I haven't had an opportunity to use it, though I have been influenced by essays on the SuperMemo website .

Original toplevel document

Augmenting Long-term Memory
with the personal memory system Anki ** I've no affiliation at all with Anki. Other similar systems include Mnemosyne and SuperMemo . My limited use suggests Mnemosyne is very similar to Anki. <span>SuperMemo runs only on Windows, and I haven't had an opportunity to use it, though I have been influenced by essays on the SuperMemo website . I won't try to hide my enthusiasm for Anki behind a respectable facade of impartiality: it's a significant part of my life. Still, it has many limitations, and I'll mention some of them







#computer-science #machine-learning #reinforcement-learning
The idea of storing some estimates separately and then combining them with samples is a good one and is also used in Gradient-TD methods. Gradient-TD methods estimate and store the product of the second two factors in (11.27). These factors are a d × d matrix and a d-vector, so their product is just a d-vector, like w itself. We denote this second learned vector as v:

\[
\mathbf{v} \approx \mathbb{E}\!\left[\mathbf{x}_t \mathbf{x}_t^{\top}\right]^{-1} \mathbb{E}\!\left[\rho_t \delta_t \mathbf{x}_t\right] \tag{11.28}
\]
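The secondary vector v can itself be learned incrementally. A minimal sketch, assuming the standard Gradient-TD choice of a least-mean-square (LMS) rule with its own step size β (β is not defined in the excerpt above):

\[
\mathbf{v}_{t+1} \doteq \mathbf{v}_t + \beta\,\rho_t\!\left(\delta_t - \mathbf{v}_t^{\top}\mathbf{x}_t\right)\mathbf{x}_t
\]

This is an O(d) update that tracks the least-squares solution of (11.28) without ever forming the d × d matrix.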
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




Flashcard 4369723559180

Tags
#SRS #anki #incremental #memory #reading
Question
It is tempting to use Anki cards to study for a [...] future use but it's better to use Anki as part of a real world active creative project.
Answer
hypothetical

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
It is tempting to use Anki cards to study for a hypothetical future use but it's better to use Anki as part of a real world active creative project.

Original toplevel document

Augmenting Long-term Memory
ort of some creative project, I ask much better Anki questions. I find it easier to connect to the questions and answers emotionally. I simply care more about them, and that makes a difference. <span>So while it's tempting to use Anki cards to study in preparation for some (possibly hypothetical) future use, it's better to find a way to use Anki as part of some creative project. Using Anki to do shallow reads of papers Most of my Anki-based reading is much shallower than my read of the AlphaGo paper. Rather than spending days on a paper, I'll typically spend 10







Flashcard 4369725132044

Tags
#SRS #anki #incremental #memory #reading
Question
It is tempting to use Anki cards to study for a hypothetical future use but it's better to use Anki as part of a real world [...] creative project.
Answer
active

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
It is tempting to use Anki cards to study for a hypothetical future use but it's better to use Anki as part of a real world active creative project.

Original toplevel document

Augmenting Long-term Memory
ort of some creative project, I ask much better Anki questions. I find it easier to connect to the questions and answers emotionally. I simply care more about them, and that makes a difference. <span>So while it's tempting to use Anki cards to study in preparation for some (possibly hypothetical) future use, it's better to find a way to use Anki as part of some creative project. Using Anki to do shallow reads of papers Most of my Anki-based reading is much shallower than my read of the AlphaGo paper. Rather than spending days on a paper, I'll typically spend 10







#computer-science #machine-learning #reinforcement-learning
A slightly better algorithm can be derived by doing a few more analytic steps before substituting in v_t. Continuing from (11.29):

\begin{align*}
\mathbf{w}_{t+1} &= \mathbf{w}_t + \alpha\,\mathbb{E}\!\left[\rho_t(\mathbf{x}_t - \gamma\mathbf{x}_{t+1})\mathbf{x}_t^{\top}\right]\mathbb{E}\!\left[\mathbf{x}_t\mathbf{x}_t^{\top}\right]^{-1}\mathbb{E}\!\left[\rho_t\delta_t\mathbf{x}_t\right] \\
&= \mathbf{w}_t + \alpha\left(\mathbb{E}\!\left[\rho_t\mathbf{x}_t\mathbf{x}_t^{\top}\right] - \gamma\,\mathbb{E}\!\left[\rho_t\mathbf{x}_{t+1}\mathbf{x}_t^{\top}\right]\right)\mathbb{E}\!\left[\mathbf{x}_t\mathbf{x}_t^{\top}\right]^{-1}\mathbb{E}\!\left[\rho_t\delta_t\mathbf{x}_t\right] \\
&= \mathbf{w}_t + \alpha\left(\mathbb{E}\!\left[\mathbf{x}_t\mathbf{x}_t^{\top}\right] - \gamma\,\mathbb{E}\!\left[\rho_t\mathbf{x}_{t+1}\mathbf{x}_t^{\top}\right]\right)\mathbb{E}\!\left[\mathbf{x}_t\mathbf{x}_t^{\top}\right]^{-1}\mathbb{E}\!\left[\rho_t\delta_t\mathbf{x}_t\right] \\
&= \mathbf{w}_t + \alpha\left(\mathbb{E}\!\left[\mathbf{x}_t\rho_t\delta_t\right] - \gamma\,\mathbb{E}\!\left[\rho_t\mathbf{x}_{t+1}\mathbf{x}_t^{\top}\right]\mathbb{E}\!\left[\mathbf{x}_t\mathbf{x}_t^{\top}\right]^{-1}\mathbb{E}\!\left[\rho_t\delta_t\mathbf{x}_t\right]\right) \\
&\approx \mathbf{w}_t + \alpha\left(\mathbb{E}\!\left[\mathbf{x}_t\rho_t\delta_t\right] - \gamma\,\mathbb{E}\!\left[\rho_t\mathbf{x}_{t+1}\mathbf{x}_t^{\top}\right]\mathbf{v}_t\right) &&\text{(based on (11.28))} \\
&\approx \mathbf{w}_t + \alpha\,\rho_t\left(\delta_t\mathbf{x}_t - \gamma\,\mathbf{x}_{t+1}\mathbf{x}_t^{\top}\mathbf{v}_t\right), &&\text{(sampling)}
\end{align*}

which again is O(d) if the final product (\(\mathbf{x}_t^{\top}\mathbf{v}_t\)) is done first. This algorithm is known as either TD(0) with gradient correction (TDC) or, alternatively, as GTD(0).
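Transcribing the final sampled update into code may help make the O(d) claim concrete. A minimal sketch in Python/NumPy; the function name and the step-size names alpha and beta are mine, and the v update is the standard Gradient-TD LMS rule rather than anything stated in the excerpt:

```python
import numpy as np

def tdc_update(w, v, x, x_next, reward, rho, gamma, alpha, beta):
    """One TDC (GTD(0)) step.

    w: main weight vector; v: secondary weight vector;
    x, x_next: feature vectors of S_t and S_{t+1};
    rho: importance-sampling ratio pi(A_t|S_t) / b(A_t|S_t).
    """
    delta = reward + gamma * (x_next @ w) - (x @ w)               # TD error
    w = w + alpha * rho * (delta * x - gamma * (x @ v) * x_next)  # main update
    v = v + beta * rho * (delta - (x @ v)) * x                    # LMS update for v
    return w, v
```

The inner product (x @ v) is taken first, so the d × d outer product x_{t+1} x_t^⊤ is never formed and each step stays O(d).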
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




#computer-science #machine-learning #reinforcement-learning
In off-policy learning, we reweight the state transitions using importance sampling so that they become appropriate for learning about the target policy, but the state distribution is still that of the behavior policy. There is a mismatch. A natural idea is to somehow reweight the states, emphasizing some and de-emphasizing others, so as to return the distribution of updates to the on-policy distribution. There would then be a match, and stability and convergence would follow from existing results.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




An Emphatic Approach to the Problem of Off-policy Temporal-Difference Learning
#to-read
In this paper we introduce the idea of improving the performance of parametric temporal-difference (TD) learning algorithms by selectively emphasizing or de-emphasizing their updates on different time steps. In particular, we show that varying the emphasis of linear TD(λ)'s updates in a particular way causes its expected update to become stable under off-policy training. The only prior model-free TD methods to achieve this with per-step computation linear in the number of function approximation parameters are the gradient-TD family of methods including TDC, GTD(λ), and GQ(λ). Compared to these methods, our _emphatic TD(λ)_ is simpler and easier to use; it has only one learned parameter vector and one step-size parameter. Our treatment includes general state-dependent discounting and bootstrapping functions, and a way of specifying varying degrees of interest in accurately valuing different states.
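For orientation, the core of the method in its simplest setting (TD(0), constant discount γ, uniform interest) is, as I recall it from the paper (the general λ form with state-dependent discounting is more involved, so check the source):

\[
F_t = \gamma\,\rho_{t-1}F_{t-1} + 1, \qquad F_0 = 1,
\]
\[
\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha\,F_t\,\rho_t\,\delta_t\,\mathbf{x}_t .
\]

Here F_t is a scalar "follow-on" trace that carries the emphasis; unlike the gradient-TD methods, there is no second learned parameter vector.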
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on




#SRS #anki #incremental #memory #reading

So, for fun, I wondered if it might be possible to use Anki to essentially completely memorize a (short) book about the Unix command line.

It was!

I chose O'Reilly Media's “Macintosh Terminal Pocket Guide”, by Daniel Barrett. I don't mean I literally memorized the entire text of the book** I later did an experiment with Charles Dickens' “A Tale of Two Cities”, seeing if it might actually be possible to memorize the entire text. After a few weeks I concluded that it would be possible, but would not be worth the time. So I deleted all the cards. An interesting thing has occurred post-deletion: the first few sentences of the book have gradually decayed in my memory, and I now have no more than fragments. I occasionally wonder what the impact would be of memorizing a good book in its entirety; I wouldn't be surprised if it greatly influenced my own language and writing.. But I did memorize much of the conceptual knowledge in the book, as well as the names, syntax, and options for most of the commands in the book. The exceptions were things I had no frame of reference to imagine using. But I did memorize most things I could imagine using. In the end I covered perhaps 60 to 70 percent of the book, skipping or skimming pieces that didn't seem relevant to me. Still, my knowledge of the command line increased enormously.

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

Augmenting Long-term Memory
ally learning the Unix command line. I'd only ever learned the most basic commands. Learning the command line is a superpower for people who program, so it seemed highly desirable to know well. <span>So, for fun, I wondered if it might be possible to use Anki to essentially completely memorize a (short) book about the Unix command line. It was! I chose O'Reilly Media's “Macintosh Terminal Pocket Guide”, by Daniel Barrett. I don't mean I literally memorized the entire text of the book** I later did an experiment with Charles Dickens' “A Tale of Two Cities”, seeing if it might actually be possible to memorize the entire text. After a few weeks I concluded that it would be possible, but would not be worth the time. So I deleted all the cards. An interesting thing has occurred post-deletion: the first few sentences of the book have gradually decayed in my memory, and I now have no more than fragments. I occasionally wonder what the impact would be of memorizing a good book in its entirety; I wouldn't be surprised if it greatly influenced my own language and writing.. But I did memorize much of the conceptual knowledge in the book, as well as the names, syntax, and options for most of the commands in the book. The exceptions were things I had no frame of reference to imagine using. But I did memorize most things I could imagine using. In the end I covered perhaps 60 to 70 percent of the book, skipping or skimming pieces that didn't seem relevant to me. Still, my knowledge of the command line increased enormously. Choosing this rather ludicrous, albeit extremely useful, goal gave me a great deal of confidence in Anki. It was exciting, making it obvious that Anki would make it easy to learn things




#SRS #anki #incremental #memory #reading
Choosing this rather ludicrous, albeit extremely useful, goal gave me a great deal of confidence in Anki. It was exciting, making it obvious that Anki would make it easy to learn things that would formerly have been quite tedious and difficult for me to learn. This confidence, in turn, made it much easier to build an Anki habit. At the same time, the project also helped me learn the Anki interface, and got me to experiment with different ways of posing questions. That is, it helped me build the skills necessary to use Anki well.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

Augmenting Long-term Memory
gine using. In the end I covered perhaps 60 to 70 percent of the book, skipping or skimming pieces that didn't seem relevant to me. Still, my knowledge of the command line increased enormously. <span>Choosing this rather ludicrous, albeit extremely useful, goal gave me a great deal of confidence in Anki. It was exciting, making it obvious that Anki would make it easy to learn things that would formerly have been quite tedious and difficult for me to learn. This confidence, in turn, made it much easier to build an Anki habit. At the same time, the project also helped me learn the Anki interface, and got me to experiment with different ways of posing questions. That is, it helped me build the skills necessary to use Anki well. Using Anki to thoroughly read a research paper in an unfamiliar field I find Anki a great help when reading research papers, particularly in fields outside my expertise. As an example o




#SRS #anki #incremental #memory #reading

Using Anki to thoroughly read a research paper in an unfamiliar field

I find Anki a great help when reading research papers, particularly in fields outside my expertise. As an example of how this can work, I'll describe my experience reading a 2016 paper** David Silver, Aja Huang, Chris J. Maddison, Arthur Guez et al, Mastering the game of Go with deep neural networks and tree search, Nature (2016). describing AlphaGo, the computer system from Google DeepMind that beat some of the world's strongest players of the game Go.

After the match where AlphaGo beat Lee Sedol, one of the strongest human Go players in history, I suggested to Quanta Magazine that I write an article about the system** Michael Nielsen, Is AlphaGo Really Such a Big Deal?, Quanta (2016).. AlphaGo was a hot media topic at the time, and the most common angle in stories was human interest, viewing AlphaGo as part of a long-standing human-versus-machine narrative, with a few technical details filled in, mostly as color.

I wanted to take a different angle. Through the 1990s and first decade of the 2000s, I believed human-or-better general artificial intelligence was far, far away. The reason was that over that time researchers made only slow progress building systems to do intuitive pattern matching, of the kind that underlies human sight and hearing, as well as in playing games such as Go. Despite enormous effort by AI researchers, many pattern-matching feats which humans find effortless remained impossible for machines.

While we made only very slow progress on this set of problems for a long time, around 2011 progress began to speed up, driven by advances in deep neural networks. For instance, machine vision systems rapidly went from being terrible to being comparable to human beings for certain limited tasks. By the time AlphaGo was released, it was no longer correct to say we had no idea how to build computer systems to do intuitive pattern matching. While we hadn't yet nailed the problem, we were making rapid progress. AlphaGo was a big part of that story, and I wanted my article to explore this notion of building computer systems to capture human intuition.

While I was excited, writing such an article was going to be difficult. It was going to require a deeper understanding of the technical details of AlphaGo than a typical journalistic article. Fortunately, I knew a fair amount about neural networks – I'd written a book about them** Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press (2015).. But I knew nothing about the game of Go, or about many of the ideas used by AlphaGo, based on a field known as reinforcement learning. I was going to need to learn this material from scratch, and to write a good article I was going to need to really understand the underlying technical material.

Here's how I went about it.

I began with the AlphaGo paper itself. I began reading it quickly, almost skimming. I wasn't looking for a comprehensive understanding. Rather, I was doing two things. One, I was trying to simply identify the most important ideas in the paper. What were the names of the key techniques I'd need to learn about? Second, there was a kind of hoovering process, looking for basic facts that I could understand easily, and that would obviously benefit me. Things like basic terminology, the rules of Go, and so on.

Here's a few examples of the kind of question I entered into Anki at this stage: “What's the size of a Go board?”; “Who

...
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

Augmenting Long-term Memory
e time, the project also helped me learn the Anki interface, and got me to experiment with different ways of posing questions. That is, it helped me build the skills necessary to use Anki well. <span>Using Anki to thoroughly read a research paper in an unfamiliar field I find Anki a great help when reading research papers, particularly in fields outside my expertise. As an example of how this can work, I'll describe my experience reading a 2016 paper** David Silver, Aja Huang, Chris J. Maddison, Arthur Guez et al, Mastering the game of Go with deep neural networks and tree search , Nature (2016). describing AlphaGo, the computer system from Google DeepMind that beat some of the world's strongest players of the game Go. After the match where AlphaGo beat Lee Sedol, one of the strongest human Go players in history, I suggested to Quanta Magazine that I write an article about the system** Michael Nielsen, Is AlphaGo Really Such a Big Deal? , Quanta (2016).. AlphaGo was a hot media topic at the time, and the most common angle in stories was human interest, viewing AlphaGo as part of a long-standing human-versus-machine narrative, with a few technical details filled in, mostly as color. I wanted to take a different angle. Through the 1990s and first decade of the 2000s, I believed human-or-better general artificial intelligence was far, far away. The reason was that over that time researchers made only slow progress building systems to do intuitive pattern matching, of the kind that underlies human sight and hearing, as well as in playing games such as Go. Despite enormous effort by AI researchers, many pattern-matching feats which humans find effortless remained impossible for machines. While we made only very slow progress on this set of problems for a long time, around 2011 progress began to speed up, driven by advances in deep neural networks. For instance, machine vision systems rapidly went from being terrible to being comparable to human beings for certain limited tasks. By the time AlphaGo was released, it was no longer correct to say we had no idea how to build computer systems to do intuitive pattern matching. While we hadn't yet nailed the problem, we were making rapid progress. AlphaGo was a big part of that story, and I wanted my article to explore this notion of building computer systems to capture human intuition. While I was excited, writing such an article was going to be difficult. It was going to require a deeper understanding of the technical details of AlphaGo than a typical journalistic article. Fortunately, I knew a fair amount about neural networks – I'd written a book about them** Michael A. Nielsen, "Neural Networks and Deep Learning" , Determination Press (2015).. But I knew nothing about the game of Go, or about many of the ideas used by AlphaGo, based on a field known as reinforcement learning. I was going to need to learn this material from scratch, and to write a good article I was going to need to really understand the underlying technical material. Here's how I went about it. I began with the AlphaGo paper itself. I began reading it quickly, almost skimming. I wasn't looking for a comprehensive understanding. Rather, I was doing two things. One, I was trying to simply identify the most important ideas in the paper. What were the names of the key techniques I'd need to learn about? Second, there was a kind of hoovering process, looking for basic facts that I could understand easily, and that would obviously benefit me. 
Things like basic terminology, the rules of Go, and so on. Here's a few examples of the kind of question I entered into Anki at this stage: “What's the size of a Go board?”; “Who plays first in Go?”; “How many human game positions did AlphaGo learn from?”; “Where did AlphaGo get its training data?”; “What were the names of the two main types of neural network AlphaGo used?” As you can see, these are all elementary questions. They're the kind of thing that are very easily picked up during an initial pass over the paper, with occasional digressions to search Google and Wikipedia, and so on. Furthermore, while these facts were easy to pick up in isolation, they also seemed likely to be useful in building a deeper understanding of other material in the paper. I made several rapid passes over the paper in this way, each time getting deeper and deeper. At this stage I wasn't trying to obtain anything like a complete understanding of AlphaGo. Rather, I was trying to build up my background understanding. At all times, if something wasn't easy to understand, I didn't worry about it, I just keep going. But as I made repeat passes, the range of things that were easy to understand grew and grew. I found myself adding questions about the types of features used as inputs to AlphaGo's neural networks, basic facts about the structure of the networks, and so on. After five or six such passes over the paper, I went back and attempted a thorough read. This time the purpose was to understand AlphaGo in detail. By now I understood much of the background context, and it was relatively easy to do a thorough read, certainly far easier than coming into the paper cold. Don't get me wrong: it was still challenging. But it was far easier than it would have been otherwise. After doing one thorough pass over the AlphaGo paper, I made a second thorough pass, in a similar vein. Yet more fell into place. By this time, I understood the AlphaGo system reasonably well. Many of the questions I was putting into Anki were high level, sometimes on the verge of original research directions. I certainly understood AlphaGo well enough that I was confident I could write the sections of my article dealing with it. (In practice, my article ranged over several systems, not just AlphaGo, and I had to learn about those as well, using a similar process, though I didn't go as deep.) I continued to add questions as I wrote my article, ending up adding several hundred questions in total. But by this point the hardest work had been done. Of course, instead of using Anki I could have taken conventional notes, using a similar process to build up an understanding of the paper. But using Anki gave me confidence I would retain much of the understanding over the long term. A year or so later DeepMind released papers describing followup systems, known as AlphaGo Zero and AlphaZero** For AlphaGo Zero, see: David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou et al, Mastering the game of Go without human knowledge , Nature (2017). For AlphaZero, see: David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou et al, Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm (2017).. Despite the fact that I'd thought little about AlphaGo or reinforcement learning in the intervening time, I found I could read those followup papers with ease. 
While I didn't attempt to understand those papers as thoroughly as the initial AlphaGo paper, I found I could get a pretty good understanding of the papers in less than an hour. I'd retained much of my earlier understanding! By contrast, had I used conventional note-taking in my original reading of the AlphaGo paper, my understanding would have more rapidly evaporated, and it would have taken longer to read the later papers. And so using Anki in this way gives confidence you will retain understanding over the long term. This confidence, in turn, makes the initial act of understanding more pleasurable, since you believe you're learning something for the long haul, not something you'll forget in a day or a week. OK, but what does one do with it? … [N]ow that I have all this power – a mechanical golem that will never forget and never let me forget whatever I chose to – what do I choose to remember? – Gwern Branwen This entire process took a few days of my time, spread over a few weeks. That's a lot of work. However, the payoff was that I got a pretty good basic grounding in modern deep reinforcement learning. This is an immensely important field, of great use in robotics, and many researchers believe it will play an important role in achieving general artificial intelligence. With a few days work I'd gone from knowing nothing about deep reinforcement learning to a durable understanding of a key paper in the field, a paper that made use of many techniques that were used across the entire field. Of course, I was still a long way from being an expert. There were many important details about AlphaGo I hadn't understood, and I would have had to do far more work to build my own system in the area. But this foundational kind of understanding is a good basis on which to build deeper expertise. It's notable that I was reading the AlphaGo paper in support of a creative project of my own, namely, writing an article for Quanta Magazine. This is important: I find Anki works much b




#SRS #anki #incremental #memory #reading

Using Anki to do shallow reads of papers

Most of my Anki-based reading is much shallower than my read of the AlphaGo paper. Rather than spending days on a paper, I'll typically spend 10 to 60 minutes, sometimes longer for very good papers. Here's a few notes on some patterns I've found useful in shallow reading.

As mentioned above, I'm usually doing such reading as part of the background research for some project. I will find a new article (or set of articles), and typically spend a few minutes assessing it. Does the article seem likely to contain substantial insight or provocation relevant to my project – new questions, new ideas, new methods, new results? If so, I'll have a read.

This doesn't mean reading every word in the paper. Rather, I'll add to Anki questions about the core claims, core questions, and core ideas of the paper. It's particularly helpful to extract Anki questions from the abstract, introduction, conclusion, figures, and figure captions. Typically I will extract anywhere from 5 to 20 Anki questions from the paper. It's usually a bad idea to extract fewer than 5 questions – doing so tends to leave the paper as a kind of isolated orphan in my memory. Later I find it difficult to feel much connection to those questions. Put another way: if a paper is so uninteresting that it's not possible to add 5 good questions about it, it's usually better to add no questions at all.

One failure mode of this process is if you Ankify** I.e., enter into Anki. Also useful are forms such as Ankification etc. misleading work. Many papers contain wrong or misleading statements, and if you commit such items to memory, you're actively making yourself stupider.

How to avoid Ankifying misleading work?

As an example, let me describe how I Ankified a paper I recently read, by the economists Benjamin Jones and Bruce Weinberg** Benjamin F. Jones and Bruce A. Weinberg, Age Dynamics in Scientific Creativity, Proceedings of the National Academy of Sciences (2011).. The paper studies the ages at which scientists make their greatest discoveries.

I should say at the outset: I have no reason to think this paper is misleading! But it's also worth being cautious. As an example of that caution, one of the questions I added to Anki was: “What does Jones 2011 claim is the average age at which physics Nobelists made their prizewinning discovery, over 1980-2011?” (Answer: 48). Another variant question was: “Which paper claimed that physics Nobelists made their prizewinning discovery at average age 48, over the period 1980-2011?” (Answer: Jones 2011). And so on.

Such questions qualify the underlying claim: we now know it was a claim made in Jones 2011, and that we're relying on the quality of Jones and Weinberg's data analysis. In fact, I haven't examined that analysis carefully enough to regard it as a fact that the average age of those Nobelists is 48. But it is certainly a fact that their paper claimed it was 48. Those are different things, and the latter is better to Ankify.

If I'm particularly concerned about the quality of the analysis, I may add one or more questions about what makes such work difficult, e.g.: “What's one challenge in determining the age of Nobel winners at the time of their discovery, as discussed in Jones 2011?” Good answers include: the difficulty of figuring out which paper contained the Nobel-winning work; the fact that publication of papers is sometimes delayed by years; that sometimes work is spread over multiple papers; and so on. Thinking about such challenges reminds me that if Jones and Weinberg were sloppy, or simply made an understandable mistake, their numbers might be off. Now, it so happens that for this particular paper, I'm not too worried about such issues. And so I didn't Ankify any such question. But it's worth being careful in framing questions so you're not misleading yourself.

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

Augmenting Long-term Memory
difference. So while it's tempting to use Anki cards to study in preparation for some (possibly hypothetical) future use, it's better to find a way to use Anki as part of some creative project. <span>Using Anki to do shallow reads of papers Most of my Anki-based reading is much shallower than my read of the AlphaGo paper. Rather than spending days on a paper, I'll typically spend 10 to 60 minutes, sometimes longer for very good papers. Here's a few notes on some patterns I've found useful in shallow reading. As mentioned above, I'm usually doing such reading as part of the background research for some project. I will find a new article (or set of articles), and typically spend a few minutes assessing it. Does the article seem likely to contain substantial insight or provocation relevant to my project – new questions, new ideas, new methods, new results? If so, I'll have a read. This doesn't mean reading every word in the paper. Rather, I'll add to Anki questions about the core claims, core questions, and core ideas of the paper. It's particularly helpful to extract Anki questions from the abstract, introduction, conclusion, figures, and figure captions. Typically I will extract anywhere from 5 to 20 Anki questions from the paper. It's usually a bad idea to extract fewer than 5 questions – doing so tends to leave the paper as a kind of isolated orphan in my memory. Later I find it difficult to feel much connection to those questions. Put another way: if a paper is so uninteresting that it's not possible to add 5 good questions about it, it's usually better to add no questions at all. One failure mode of this process is if you Ankify** I.e., enter into Anki. Also useful are forms such as Ankification etc. misleading work. Many papers contain wrong or misleading statements, and if you commit such items to memory, you're actively making yourself stupider. How to avoid Ankifying misleading work? As an example, let me describe how I Ankified a paper I recently read, by the economists Benjamin Jones and Bruce Weinberg** Benjamin F. Jones and Bruce A. Weinberg, Age Dynamics in Scientific Creativity , Proceedings of the National Academy of Sciences (2011).. The paper studies the ages at which scientists make their greatest discoveries. I should say at the outset: I have no reason to think this paper is misleading! But it's also worth being cautious. As an example of that caution, one of the questions I added to Anki was: “What does Jones 2011 claim is the average age at which physics Nobelists made their prizewinning discovery, over 1980-2011?” (Answer: 48). Another variant question was: “Which paper claimed that physics Nobelists made their prizewinning discovery at average age 48, over the period 1980-2011?” (Answer: Jones 2011). And so on. Such questions qualify the underlying claim: we now know it was a claim made in Jones 2011, and that we're relying on the quality of Jones and Weinberg's data analysis. In fact, I haven't examined that analysis carefully enough to regard it as a fact that the average age of those Nobelists is 48. But it is certainly a fact that their paper claimed it was 48. Those are different things, and the latter is better to Ankify. 
If I'm particularly concerned about the quality of the analysis, I may add one or more questions about what makes such work difficult, e.g.: “What's one challenge in determining the age of Nobel winners at the time of their discovery, as discussed in Jones 2011?” Good answers include: the difficulty of figuring out which paper contained the Nobel-winning work; the fact that publication of papers is sometimes delayed by years; that sometimes work is spread over multiple papers; and so on. Thinking about such challenges reminds me that if Jones and Weinberg were sloppy, or simply made an understandable mistake, their numbers might be off. Now, it so happens that for this particular paper, I'm not too worried about such issues. And so I didn't Ankify any such question. But it's worth being careful in framing questions so you're not misleading yourself. Another useful pattern while reading papers is Ankifying figures. For instance, here's a graph from Jones 2011 showing the probability a physicist made their prizewinning discovery by a




#SRS #anki #has-images #incremental #memory #reading

Another useful pattern while reading papers is Ankifying figures. For instance, here's a graph from Jones 2011 showing the probability a physicist made their prizewinning discovery by age 40 (blue line) and by age 30 (black line):

I have an Anki question which simply says: “Visualize the graph Jones 2011 made of the probability curves for physicists making their prizewinning discoveries by age 30 and 40”. The answer is the image shown above, and I count myself as successful if my mental image is roughly along those lines. I could deepen my engagement with the graph by adding questions such as: “In Jones 2011's graph of physics prizewinning discoveries, what is the peak probability of great achievement by age 40 [i.e., the highest point in the blue line in the graph above]?” (Answer: about 0.8.) Indeed, one could easily add dozens of interesting questions about this graph. I haven't done that, because of the time commitment associated to such questions. But I do find the broad shape of the graph fascinating, and it's also useful to know the graph exists, and where to consult it if I want more details.

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

Augmenting Long-term Memory
for this particular paper, I'm not too worried about such issues. And so I didn't Ankify any such question. But it's worth being careful in framing questions so you're not misleading yourself. <span>Another useful pattern while reading papers is Ankifying figures. For instance, here's a graph from Jones 2011 showing the probability a physicist made their prizewinning discovery by age 40 (blue line) and by age 30 (black line): I have an Anki question which simply says: “Visualize the graph Jones 2011 made of the probability curves for physicists making their prizewinning discoveries by age 30 and 40”. The answer is the image shown above, and I count myself as successful if my mental image is roughly along those lines. I could deepen my engagement with the graph by adding questions such as: “In Jones 2011's graph of physics prizewinning discoveries, what is the peak probability of great achievement by age 40 [i.e., the highest point in the blue line in the graph above]?” (Answer: about 0.8.) Indeed, one could easily add dozens of interesting questions about this graph. I haven't done that, because of the time commitment associated to such questions. But I do find the broad shape of the graph fascinating, and it's also useful to know the graph exists, and where to consult it if I want more details. I said above that I typically spend 10 to 60 minutes Ankifying a paper, with the duration depending on my judgment of the value I'm getting from the paper. However, if I'm learning a gr




The most important clinical questions are location of lesions (arms, head, legs, etc.), symptoms (pruritus, pain, etc.), duration (acute or chronic), arrangement of lesions (solitary, generalized, annular, linear, etc.), morphology (macules, papules, plaques, vesicles, etc.), and color (red, blue, brown, black, white, yellow, etc.). The smart pathologist will not read out a skin biopsy of an inflammatory condition without calling for clinical information. Some pseudomalignancies are distinguished from bona fide malignancies mainly by clinical differences (1.118). The difference between a lichenoid keratosis and lichen planus, which may be nearly identical histologically, for example, primarily rests upon the former being a solitary papule and the latter being a rash with more than one lesion.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




Pathologists organize most of the histologic findings of skin lesions according to epidermal changes, dermal changes, adipose changes (panniculitis, if inflammatory), patterns or arrangements of inflammatory or neoplastic cells, and specific types of cells found. Architecture of skin lesions (arrangements of cells) is considered along with cytologic changes such as atypia. Cytologic atypia (hyperchromatism, pleomorphism, prominent nucleoli, increased nuclear to cytoplasmic ratio, and abnormal mitoses) is subjective, and the precise quantification of atypia into mild, moderate or severe is in the eye of the beholder to some extent. Atypia is often used as evidence of malignancy, but it must be considered along with clinical findings and the lesion architecture. Dermatopathologists with a background in clinical dermatology tend to emphasize the clinical information and cellular architecture over the cytologic features. Those with a pathology background often stress cytology at the expense of clinical features and architecture. If the histologic findings do not fit the clinical situation, then the disparity must be rectified!
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




The home-run hitters try to “force” a diagnosis, and give only one most likely diagnosis. They are either very, very correct, and look very smart, or else they strike out and miss the diagnosis completely.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




Hedger pathologists, by contrast, seldom make a specific diagnosis, and instead often give a long differential diagnostic list, even to the point of listing histologic possibilities that are ridiculous from a clinical standpoint. They rarely strike out, but they are sometimes not very helpful, and are not appreciated by clinicians.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




1.37 Epidermotropism and pagetoid cells (see also Clear cell neoplasms 1.22)

Epidermotropism refers to migration of malignant cells into the epidermis, usually without significant spongiosis. Exocytosis refers to the migration of cells (usually lymphocytes, neutrophils, or eosinophils) into the epidermis, usually in association with spongiosis (1.132), and usually used for benign conditions. Pagetoid cells are cells in the epidermis, often pale or atypical, resembling Paget’s disease. Pagetoid cells may or may not arrive in the epidermis by means of epidermotropism. Pagetoid melanocytes (Chapter 20) are best known in melanoma, but are also seen in benign melanocytic neoplasms such as Spitz nevus, pigmented spindle cell nevus, congenital nevus in neonates, recurrent nevus, and acral nevus. Care must be taken not to mistake melanocytes in tangentially sectioned epidermis for pagetoid cells,160 nor to mistake common artifactual vacuoles around keratinocytes for pagetoid cells (1.144).
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




#computer-science #machine-learning #reinforcement-learning
If γ = 0.9, then we can consider that with probability 0.1 the process terminates on every time step and then immediately restarts in the state that is transitioned to.
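As a quick sanity check on this interpretation (simple arithmetic, not from the source): with continuation probability γ per step, the time to termination is geometrically distributed, so the expected horizon is

\[
\mathbb{E}[T] = \sum_{k=0}^{\infty} \gamma^{k} = \frac{1}{1-\gamma} = 10 \text{ steps},
\]

which matches the usual reading of discounting with rate γ as a soft horizon of 1/(1 − γ).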
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




Paget’s disease: pale cells with adenocarcinoma staining features. Carcinoembryonic antigen (CEA), EMA, CK-7, CK-8, usually positive. Mucin in cytoplasm is often positive with mucicarmine, Alcian blue, colloidal iron, and PAS with and without diastase. Basal cells are often compressed and uninvolved. No dyskeratosis

Bowen’s disease (squamous cell carcinoma in situ, 18.10): pale keratinocytes may be present which contain glycogen (PAS positive, diastase labile) with frequent dyskeratosis (1.27). Full-thickness atypia often involves basal cells also. High molecular weight keratin positive. Mucin stains, CEA, EMA, CK-7, CK-8 and low molecular weight keratin negative. However, cases have been published with exceptions, such as positive staining for CK-7 and EMA. Usually no pagetoid cells in the stratum corneum, which sometimes occurs with Paget’s disease and melanoma

Borst–Jadassohn phenomenon: discrete clones of basaloid, squamatized, or pale keratinocytes in the epidermis that appear different than their neighbors. This can be benign or malignant. It is mainly seen with irritated seborrheic keratosis (18.2) or Bowen’s disease (18.10), and rarely with hidroacanthoma simplex (a form of eccrine poroma limited to the epidermis, 23.10).

Melanoma (20.11): S-100 (very sensitive, but not specific), HMB-45 and MART-1 (both very specific, but not sensitive) stains positive. Fontana melanin stain is also positive, but is less useful because keratinocytes may contain melanin transferred from melanocytes. Melanosomes by electron microscopy

Mycosis fungoides (24.1): malignant T-lymphocytes (sometimes cerebriform) in spaces called Pautrier microabscesses. Stains such as CD45 (leukocyte common antigen, used for screening), CD4 (T-helper cells), and CD3 (pan-T cells) usually positive

Lymphomatoid papulosis (24.5): CD30 positive large atypical lymphocytes, and most smaller ones in the epidermis stain with T-cell markers

Langerhans cell histiocytosis (24.18): malignant Langerhans cells, often kidney-shaped nuclei, CD1a or S-100 positive
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




Sebaceous carcinoma (21.5): oil red-O stain positive (need frozen section). Sometimes Bowenoid changes (oil red-O negative carcinoma) in the epidermis coexist with sebaceous carcinoma in the dermis. EMA positivity is useful if frozen section not available

Merkel cell tumor (26.7): small cell tumor (1.130) is almost always present also in the dermis, in addition to the epidermotropic cells. Sometimes Bowen’s disease can coexist, or the small cells enter the epidermis. Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase

Clear cell acanthoma (18.6): discrete clone of pale keratinocytes in a psoriasiform epidermis, positive for glycogen and keratin

Hidroacanthoma simplex (23.10): this is an eccrine poroma with Borst–Jadassohn features, and sweat ducts are present

Pagetoid dyskeratosis 164 (1.27) Epidermotropic adnexal carcinoma (23.13): rare

Epidermotropic metastatic carcinoma or melanoma: very rare. Usually carcinoma or melanoma cells within the epidermis imply that a neoplasm is primary, but in this case metastatic melanocytes or epithelial cells actually infiltrate the epidermis. This usually can only be diagnosed with certainty when more than one lesion is present, making it more apparent that the lesions are metastatic

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




#computer-science #machine-learning #reinforcement-learning
A simple complete example of divergence is Baird’s counterexample. Consider the episodic seven-state, two-action MDP shown in Figure 11.1. The dashed action takes the system to one of the six upper states with equal probability, whereas the solid action takes the system to the seventh state. The behavior policy b selects the dashed and solid actions with probabilities 6/7 and 1/7, so that the next-state distribution under it is uniform (the same for all nonterminal states), which is also the starting distribution for each episode. The target policy π always takes the solid action, and so the on-policy distribution (for π) is concentrated in the seventh state. The reward is zero on all transitions. The discount rate is γ = 0.99. Consider estimating the state-value under the linear parameterization indicated by the expression shown in each state circle. For example, the estimated value of the leftmost state is 2w_1 + w_8, where the subscript corresponds to the component of the overall weight vector w.

[Figure 11.1, Baird's counterexample: the six upper states have estimated values 2w_1 + w_8, ..., 2w_6 + w_8; the seventh state has estimated value w_7 + 2w_8; b(dashed|·) = 6/7, b(solid|·) = 1/7, π(solid|·) = 1, γ = 0.99.]
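To see the instability numerically, here is a minimal simulation sketch of semi-gradient off-policy TD(0) on this MDP in Python/NumPy. The step size α = 0.01 and the initialization with w_7 = 10 follow the book's figure as I recall it, and collapsing the episodic restarts into a single uniform-next-state loop is my simplification:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99

# Feature matrix: row s holds the features of state s (7 states, 8 weights).
# Upper states s = 0..5 are valued as 2*w_s + w_8; state 6 as w_7 + 2*w_8.
X = np.zeros((7, 8))
for s in range(6):
    X[s, s], X[s, 7] = 2.0, 1.0
X[6, 6], X[6, 7] = 1.0, 2.0

w = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 10.0, 1.0])
alpha = 0.01

s = int(rng.integers(7))                 # uniform start state
for t in range(1000):
    if rng.random() < 6 / 7:             # behavior takes dashed w.p. 6/7
        rho, s_next = 0.0, int(rng.integers(6))   # pi(dashed|s) = 0
    else:                                # behavior takes solid w.p. 1/7
        rho, s_next = 7.0, 6             # pi(solid|s)/b(solid|s) = 1/(1/7)
    delta = 0.0 + gamma * (X[s_next] @ w) - (X[s] @ w)   # reward is always 0
    w += alpha * rho * delta * X[s]      # semi-gradient off-policy TD(0)
    s = s_next

print(w)  # components grow without bound, as in the figure
```

Intuitively, ρ = 0 kills the dashed updates, so learning is driven entirely by transitions into the seventh state, and the shared w_8 component spreads those updates across every state's estimate at once.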
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




#SRS #anki #incremental #memory #reading

Here's how I went about it.

I began with the AlphaGo paper itself. I began reading it quickly, almost skimming. I wasn't looking for a comprehensive understanding. Rather, I was doing two things. One, I was trying to simply identify the most important ideas in the paper. What were the names of the key techniques I'd need to learn about? Second, there was a kind of hoovering process, looking for basic facts that I could understand easily, and that would obviously benefit me. Things like basic terminology, the rules of Go, and so on.

Here's a few examples of the kind of question I entered into Anki at this stage: “What's the size of a Go board?”; “Who plays first in Go?”; “How many human game positions did AlphaGo learn from?”; “Where did AlphaGo get its training data?”; “What were the names of the two main types of neural network AlphaGo used?”

As you can see, these are all elementary questions. They're the kind of thing that are very easily picked up during an initial pass over the paper, with occasional digressions to search Google and Wikipedia, and so on. Furthermore, while these facts were easy to pick up in isolation, they also seemed likely to be useful in building a deeper understanding of other material in the paper.

I made several rapid passes over the paper in this way, each time getting deeper and deeper. At this stage I wasn't trying to obtain anything like a complete understanding of AlphaGo. Rather, I was trying to build up my background understanding. At all times, if something wasn't easy to understand, I didn't worry about it, I just kept going. But as I made repeat passes, the range of things that were easy to understand grew and grew. I found myself adding questions about the types of features used as inputs to AlphaGo's neural networks, basic facts about the structure of the networks, and so on.

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
known as reinforcement learning. I was going to need to learn this material from scratch, and to write a good article I was going to need to really understand the underlying technical material. <span>Here's how I went about it. I began with the AlphaGo paper itself. I began reading it quickly, almost skimming. I wasn't looking for a comprehensive understanding. Rather, I was doing two things. One, I was trying to simply identify the most important ideas in the paper. What were the names of the key techniques I'd need to learn about? Second, there was a kind of hoovering process, looking for basic facts that I could understand easily, and that would obviously benefit me. Things like basic terminology, the rules of Go, and so on. Here's a few examples of the kind of question I entered into Anki at this stage: “What's the size of a Go board?”; “Who plays first in Go?”; “How many human game positions did AlphaGo learn from?”; “Where did AlphaGo get its training data?”; “What were the names of the two main types of neural network AlphaGo used?” As you can see, these are all elementary questions. They're the kind of thing that are very easily picked up during an initial pass over the paper, with occasional digressions to search Google and Wikipedia, and so on. Furthermore, while these facts were easy to pick up in isolation, they also seemed likely to be useful in building a deeper understanding of other material in the paper. I made several rapid passes over the paper in this way, each time getting deeper and deeper. At this stage I wasn't trying to obtain anything like a complete understanding of AlphaGo. Rather, I was trying to build up my background understanding. At all times, if something wasn't easy to understand, I didn't worry about it, I just keep going. But as I made repeat passes, the range of things that were easy to understand grew and grew. I found myself adding questions about the types of features used as inputs to AlphaGo's neural networks, basic facts about the structure of the networks, and so on. After five or six such passes over the paper, I went back and attempted a thorough read. This time the purpose was to understand AlphaGo in detail. By now I understood much of the backg

Original toplevel document

Augmenting Long-term Memory
e time, the project also helped me learn the Anki interface, and got me to experiment with different ways of posing questions. That is, it helped me build the skills necessary to use Anki well.

Using Anki to thoroughly read a research paper in an unfamiliar field

I find Anki a great help when reading research papers, particularly in fields outside my expertise. As an example of how this can work, I'll describe my experience reading a 2016 paper** David Silver, Aja Huang, Chris J. Maddison, Arthur Guez et al, Mastering the game of Go with deep neural networks and tree search, Nature (2016). describing AlphaGo, the computer system from Google DeepMind that beat some of the world's strongest players of the game Go.

After the match where AlphaGo beat Lee Sedol, one of the strongest human Go players in history, I suggested to Quanta Magazine that I write an article about the system** Michael Nielsen, Is AlphaGo Really Such a Big Deal?, Quanta (2016). AlphaGo was a hot media topic at the time, and the most common angle in stories was human interest, viewing AlphaGo as part of a long-standing human-versus-machine narrative, with a few technical details filled in, mostly as color. I wanted to take a different angle. Through the 1990s and first decade of the 2000s, I believed human-or-better general artificial intelligence was far, far away. The reason was that over that time researchers made only slow progress building systems to do intuitive pattern matching, of the kind that underlies human sight and hearing, as well as in playing games such as Go. Despite enormous effort by AI researchers, many pattern-matching feats which humans find effortless remained impossible for machines.

While we made only very slow progress on this set of problems for a long time, around 2011 progress began to speed up, driven by advances in deep neural networks. For instance, machine vision systems rapidly went from being terrible to being comparable to human beings for certain limited tasks. By the time AlphaGo was released, it was no longer correct to say we had no idea how to build computer systems to do intuitive pattern matching. While we hadn't yet nailed the problem, we were making rapid progress. AlphaGo was a big part of that story, and I wanted my article to explore this notion of building computer systems to capture human intuition.

While I was excited, writing such an article was going to be difficult. It was going to require a deeper understanding of the technical details of AlphaGo than a typical journalistic article. Fortunately, I knew a fair amount about neural networks – I'd written a book about them** Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press (2015). But I knew nothing about the game of Go, or about many of the ideas used by AlphaGo, based on a field known as reinforcement learning. I was going to need to learn this material from scratch, and to write a good article I was going to need to really understand the underlying technical material.

Here's how I went about it. I began with the AlphaGo paper itself. I began reading it quickly, almost skimming. I wasn't looking for a comprehensive understanding. Rather, I was doing two things. One, I was trying to simply identify the most important ideas in the paper. What were the names of the key techniques I'd need to learn about? Second, there was a kind of hoovering process, looking for basic facts that I could understand easily, and that would obviously benefit me. Things like basic terminology, the rules of Go, and so on.

Here's a few examples of the kind of question I entered into Anki at this stage: “What's the size of a Go board?”; “Who plays first in Go?”; “How many human game positions did AlphaGo learn from?”; “Where did AlphaGo get its training data?”; “What were the names of the two main types of neural network AlphaGo used?” As you can see, these are all elementary questions. They're the kind of thing that are very easily picked up during an initial pass over the paper, with occasional digressions to search Google and Wikipedia, and so on. Furthermore, while these facts were easy to pick up in isolation, they also seemed likely to be useful in building a deeper understanding of other material in the paper.

I made several rapid passes over the paper in this way, each time getting deeper and deeper. At this stage I wasn't trying to obtain anything like a complete understanding of AlphaGo. Rather, I was trying to build up my background understanding. At all times, if something wasn't easy to understand, I didn't worry about it, I just kept going. But as I made repeat passes, the range of things that were easy to understand grew and grew. I found myself adding questions about the types of features used as inputs to AlphaGo's neural networks, basic facts about the structure of the networks, and so on.

After five or six such passes over the paper, I went back and attempted a thorough read. This time the purpose was to understand AlphaGo in detail. By now I understood much of the background context, and it was relatively easy to do a thorough read, certainly far easier than coming into the paper cold. Don't get me wrong: it was still challenging. But it was far easier than it would have been otherwise.

After doing one thorough pass over the AlphaGo paper, I made a second thorough pass, in a similar vein. Yet more fell into place. By this time, I understood the AlphaGo system reasonably well. Many of the questions I was putting into Anki were high level, sometimes on the verge of original research directions. I certainly understood AlphaGo well enough that I was confident I could write the sections of my article dealing with it. (In practice, my article ranged over several systems, not just AlphaGo, and I had to learn about those as well, using a similar process, though I didn't go as deep.) I continued to add questions as I wrote my article, ending up adding several hundred questions in total. But by this point the hardest work had been done.

Of course, instead of using Anki I could have taken conventional notes, using a similar process to build up an understanding of the paper. But using Anki gave me confidence I would retain much of the understanding over the long term. A year or so later DeepMind released papers describing followup systems, known as AlphaGo Zero and AlphaZero** For AlphaGo Zero, see: David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou et al, Mastering the game of Go without human knowledge, Nature (2017). For AlphaZero, see: David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou et al, Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm (2017). Despite the fact that I'd thought little about AlphaGo or reinforcement learning in the intervening time, I found I could read those followup papers with ease. While I didn't attempt to understand those papers as thoroughly as the initial AlphaGo paper, I found I could get a pretty good understanding of the papers in less than an hour. I'd retained much of my earlier understanding! By contrast, had I used conventional note-taking in my original reading of the AlphaGo paper, my understanding would have more rapidly evaporated, and it would have taken longer to read the later papers.

And so using Anki in this way gives confidence you will retain understanding over the long term. This confidence, in turn, makes the initial act of understanding more pleasurable, since you believe you're learning something for the long haul, not something you'll forget in a day or a week.

OK, but what does one do with it? … [N]ow that I have all this power – a mechanical golem that will never forget and never let me forget whatever I chose to – what do I choose to remember? – Gwern Branwen

This entire process took a few days of my time, spread over a few weeks. That's a lot of work. However, the payoff was that I got a pretty good basic grounding in modern deep reinforcement learning. This is an immensely important field, of great use in robotics, and many researchers believe it will play an important role in achieving general artificial intelligence. With a few days work I'd gone from knowing nothing about deep reinforcement learning to a durable understanding of a key paper in the field, a paper that made use of many techniques that were used across the entire field. Of course, I was still a long way from being an expert. There were many important details about AlphaGo I hadn't understood, and I would have had to do far more work to build my own system in the area. But this foundational kind of understanding is a good basis on which to build deeper expertise.

It's notable that I was reading the AlphaGo paper in support of a creative project of my own, namely, writing an article for Quanta Magazine. This is important: I find Anki works much b
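As a concrete illustration of this card-making stage: the elementary questions quoted above could just as well be batch-generated as an Anki deck. The sketch below assumes the third-party Python library genanki (Nielsen himself typed cards into Anki by hand, so this is only an illustration; the IDs and deck name are arbitrary), with answers taken from basic Go facts and from Silver et al. (2016):

import genanki

# Arbitrary but stable IDs: genanki wants fixed integers so that
# re-importing updates the deck and model instead of duplicating them.
MODEL_ID = 1607392319
DECK_ID = 2059400110

model = genanki.Model(
    MODEL_ID,
    'Basic question/answer',
    fields=[{'name': 'Question'}, {'name': 'Answer'}],
    templates=[{
        'name': 'Card 1',
        'qfmt': '{{Question}}',
        'afmt': '{{FrontSide}}<hr id="answer">{{Answer}}',
    }],
)

deck = genanki.Deck(DECK_ID, 'AlphaGo paper')

# Elementary first-pass questions of the kind quoted above; answers come
# from basic Go facts and from the Silver et al. (2016) paper.
cards = [
    ("What's the size of a Go board?", "19 × 19"),
    ("Who plays first in Go?", "Black"),
    ("How many human game positions did AlphaGo learn from?", "About 30 million"),
    ("Where did AlphaGo get its training data?", "Games from the KGS Go Server"),
    ("What were the names of the two main types of neural network AlphaGo used?",
     "Policy networks and value networks"),
]
for question, answer in cards:
    deck.add_note(genanki.Note(model=model, fields=[question, answer]))

genanki.Package(deck).write_to_file('alphago.apkg')

Running this produces alphago.apkg, which can be imported into Anki and reviewed on the usual spaced-repetition schedule.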




#SRS #anki #incremental #memory #reading

After five or six such passes over the paper, I went back and attempted a thorough read. This time the purpose was to understand AlphaGo in detail. By now I understood much of the background context, and it was relatively easy to do a thorough read, certainly far easier than coming into the paper cold. Don't get me wrong: it was still challenging. But it was far easier than it would have been otherwise.

After doing one thorough pass over the AlphaGo paper, I made a second thorough pass, in a similar vein. Yet more fell into place. By this time, I understood the AlphaGo system reasonably well. Many of the questions I was putting into Anki were high level, sometimes on the verge of original research directions. I certainly understood AlphaGo well enough that I was confident I could write the sections of my article dealing with it. (In practice, my article ranged over several systems, not just AlphaGo, and I had to learn about those as well, using a similar process, though I didn't go as deep.) I continued to add questions as I wrote my article, ending up adding several hundred questions in total. But by this point the hardest work had been done.

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
to understand grew and grew. I found myself adding questions about the types of features used as inputs to AlphaGo's neural networks, basic facts about the structure of the networks, and so on. After five or six such passes over the paper, I went back and attempted a thorough read. This time the purpose was to understand AlphaGo in detail. By now I understood much of the background context, and it was relatively easy to do a thorough read, certainly far easier than coming into the paper cold. Don't get me wrong: it was still challenging. But it was far easier than it would have been otherwise. After doing one thorough pass over the AlphaGo paper, I made a second thorough pass, in a similar vein. Yet more fell into place. By this time, I understood the AlphaGo system reasonably well. Many of the questions I was putting into Anki were high level, sometimes on the verge of original research directions. I certainly understood AlphaGo well enough that I was confident I could write the sections of my article dealing with it. (In practice, my article ranged over several systems, not just AlphaGo, and I had to learn about those as well, using a similar process, though I didn't go as deep.) I continued to add questions as I wrote my article, ending up adding several hundred questions in total. But by this point the hardest work had been done. Of course, instead of using Anki I could have taken conventional notes, using a similar process to build up an understanding of the paper. But using Anki gave me confidence I would reta

Original toplevel document

Augmenting Long-term Memory




Flashcard 4369913351436

Tags
#reinforcement-learning
Question
Why is the conventional Euclidean norm not an appropriate norm for calculating errors of value functions in RL?
Answer
Some states are more important than others because they occur more frequently or because we are more interested in them (Section 9.11)

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The conventional Euclidean norm is not appropriate because, as discussed in Section 9.2, some states are more important than others because they occur more frequently or because we are more interested in them (Section 9.11)

Original toplevel document (pdf)

cannot see any pdfs
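For context on the answer above: in Sutton and Barto's notation (Section 9.2), the usual fix is to weight each state's squared value error by a state distribution \(\mu\) (the on-policy distribution, or one adjusted for interest as in Section 9.11), rather than weighting all states equally as the Euclidean norm does:

\[
\overline{\mathrm{VE}}(\mathbf{w}) \doteq \sum_{s \in \mathcal{S}} \mu(s)\,\bigl[\, v_\pi(s) - \hat{v}(s, \mathbf{w}) \,\bigr]^2
\]

States that the policy rarely visits, or that we care little about, then contribute correspondingly little to the objective being minimised.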







Building a Second Brain
#information #management

Building a Second Brain: An Overview

Posted by Tiago Forte | Feb 20, 2019

https://tyler.is/wp-content/uploads/2019/03/Building-a-Second-Brain_-An-Overview-_-Praxis-1.pdf

This is a summary of Building a Second Brain, my online course on capturing, organizing, and sharing your knowledge using digital notes.

How many brilliant ideas have you had and forgotten? How many insights have you failed to take action on? How much useful advice have you slowly forgotten as the years have passed? We feel a constant pressure to be learning, improving ourselves, and making progress. We spend countless hours every year reading, listening, and watching informational content. And yet, where has all that valuable knowledge gone? Where is it when we need it? Our brain can only store a few thoughts at any one time. Our brain is for having ideas, not storing them.

Building A Second Brain is a methodology for saving and systematically reminding us of the ideas, inspirations, insights, and connections we’ve gained through our experience. It expands our memory and our intellect using the modern tools of technology and networks. This methodology is not only for preserving those ideas, but turning them into reality. It provides a clear, actionable path to creating a “second brain” – an external, centralized, digital repository for the things you learn and the resources from which they come.

Being effective in the world today requires managing many different kinds of information – emails, text messages, messaging apps, online articles, books, podcasts, webinars, memos, and many others. All of these kinds of content have value, but trying to remember all of it is overwhelming and impractical. By consolidating ideas from these sources, you’ll develop a valuable body of work to advance your projects and goals. You’ll have an ongoing record of personal discoveries, lessons learned, and actionable insights for any situation.

We are already doing most of the work required to consume this content. We spend a significant portion of our careers creating snippets of text, outlines, photos, videos, sketches, diagrams, webpages, notes, or documents. Yet without a little extra care to preserve these valuable resources, our precious knowledge remains siloed and scattered across dozens of different locations. We fail to build a collection of knowledge that both appreciates in value and can be reused again and again.

By offloading our thinking onto a “second brain,” we free our biological brain to imagine, create, and simply be present. We can move through life confident that we will remember everything that matters, instead of floundering through our days struggling to keep track of every detail. Your second brain will serve as an extension of your mind, not only protecting you from the ravages of forgetfulness but also amplifying your efforts as you take on creative challenges.

The Building a Second Brain methodology will teach you how to:

1. Consistently move your projects and goals to completion by organizing and accessing your knowledge in a results-oriented way
2. Transform your personal knowledge into income, taking advantage of a rapidly growing knowledge economy
3. Uncover unexpected patterns and connections between ideas
4. Reduce stress and “information overload” by expertly curating and managing your personal information stream
5. Develop valuable expertise, specialized knowledge, and the skills to deploy it in a new job, career, or business
6. Cultivate a collection of valuable knowledge and insights over time without having to follow rigid, time-consuming rules
7. Unlock the full value of the wealth of learning resources all around you, such as online courses, webinars, books, articles, forums, and podcasts

Part I: Remember

The first step in building a second brain is “capturing” the ideas and insights you think are worth saving

...
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on
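To make the “capturing” step above concrete: a minimal sketch of a centralized capture inbox, assuming only the Python standard library (the course itself is tool-agnostic, and the file name and helper function here are invented for illustration):

from datetime import datetime, timezone
from pathlib import Path

# Hypothetical single-file inbox; any notes app could play the same role.
INBOX = Path("second_brain_inbox.md")

def capture(note: str, source: str = "") -> None:
    """Append a timestamped note so it no longer depends on biological memory."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    suffix = f" (source: {source})" if source else ""
    with INBOX.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {note}{suffix}\n")

capture("Our brain is for having ideas, not storing them.",
        source="Building a Second Brain: An Overview")

Every captured idea lands in one append-only file, which is the essential property the methodology asks for: a single, external, always-available repository.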




#SRS #anki #incremental #memory #reading
As an example of how this can work, I'll describe my experience reading a 2016 paper** David Silver, Aja Huang, Chris J. Maddison, Arthur Guez et al, Mastering the game of Go with deep neural networks and tree search,
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
Using Anki to thoroughly read a research paper in an unfamiliar field I find Anki a great help when reading research papers, particularly in fields outside my expertise. As an example of how this can work, I'll describe my experience reading a 2016 paper** David Silver, Aja Huang, Chris J. Maddison, Arthur Guez et al, Mastering the game of Go with deep neural networks and tree search, Nature (2016). describing AlphaGo, the computer system from Google DeepMind that beat some of the world's strongest players of the game Go. After the match where AlphaGo beat Lee Sedol,

Original toplevel document

Augmenting Long-term Memory




#SRS #anki #incremental #memory #reading
Despite enormous effort by AI researchers, many pattern-matching feats which humans find effortless remained impossible for machines.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
over that time researchers made only slow progress building systems to do intuitive pattern matching, of the kind that underlies human sight and hearing, as well as in playing games such as Go. Despite enormous effort by AI researchers, many pattern-matching feats which humans find effortless remained impossible for machines. While we made only very slow progress on this set of problems for a long time, around 2011 progress began to speed up, driven by advances in deep neural networks. For instance, machine

Original toplevel document

Augmenting Long-term Memory




#SRS #anki #incremental #memory #reading
around 2011 progress began to speed up, driven by advances in deep neural networks
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
fort by AI researchers, many pattern-matching feats which humans find effortless remained impossible for machines. While we made only very slow progress on this set of problems for a long time, around 2011 progress began to speed up, driven by advances in deep neural networks. For instance, machine vision systems rapidly went from being terrible to being comparable to human beings for certain limited tasks. By the time AlphaGo was released, it was no longer

Original toplevel document

Augmenting Long-term Memory




#SRS #anki #incremental #memory #reading
By the time AlphaGo was released, it was no longer correct to say we had no idea how to build computer systems to do intuitive pattern matching.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
gan to speed up, driven by advances in deep neural networks. For instance, machine vision systems rapidly went from being terrible to being comparable to human beings for certain limited tasks. By the time AlphaGo was released, it was no longer correct to say we had no idea how to build computer systems to do intuitive pattern matching. While we hadn't yet nailed the problem, we were making rapid progress. AlphaGo was a big part of that story, and I wanted my article to explore this notion of building computer systems

Original toplevel document

Augmenting Long-term Memory




#SRS #anki #incremental #memory #reading
Anki gave me confidence I would retain much of the understanding over the long term.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
y this point the hardest work had been done. Of course, instead of using Anki I could have taken conventional notes, using a similar process to build up an understanding of the paper. But using <span>Anki gave me confidence I would retain much of the understanding over the long term. A year or so later DeepMind released papers describing followup systems, known as AlphaGo Zero and AlphaZero** For AlphaGo Zero, see: David Silver, Julian Schrittwieser, Karen Simonyan,

Original toplevel document

Augmenting Long-term Memory
e time, the project also helped me learn the Anki interface, and got me to experiment with different ways of posing questions. That is, it helped me build the skills necessary to use Anki well. <span>Using Anki to thoroughly read a research paper in an unfamiliar field I find Anki a great help when reading research papers, particularly in fields outside my expertise. […] But this foundational kind of understanding is a good basis on which to build deeper expertise. It's notable that I was reading the AlphaGo paper in support of a creative project of my own, namely, writing an article for Quanta Magazine. This is important: I find Anki works much b




#SRS #anki #incremental #memory #reading
using Anki in this way gives confidence you will retain understanding over the long term. This confidence, in turn, makes the initial act of understanding more pleasurable
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
I used conventional note-taking in my original reading of the AlphaGo paper, my understanding would have more rapidly evaporated, and it would have taken longer to read the later papers. And so <span>using Anki in this way gives confidence you will retain understanding over the long term. This confidence, in turn, makes the initial act of understanding more pleasurable, since you believe you're learning something for the long haul, not something you'll forget in a day or a week. OK, but what does one do with it? … [N]ow that I have all this power – a

Original toplevel document

Augmenting Long-term Memory
e time, the project also helped me learn the Anki interface, and got me to experiment with different ways of posing questions. That is, it helped me build the skills necessary to use Anki well. <span>Using Anki to thoroughly read a research paper in an unfamiliar field I find Anki a great help when reading research papers, particularly in fields outside my expertise. […] But this foundational kind of understanding is a good basis on which to build deeper expertise. It's notable that I was reading the AlphaGo paper in support of a creative project of my own, namely, writing an article for Quanta Magazine. This is important: I find Anki works much b




#SRS #anki #incremental #memory #reading
this foundational kind of understanding is a good basis on which to build deeper expertise.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
was still a long way from being an expert. There were many important details about AlphaGo I hadn't understood, and I would have had to do far more work to build my own system in the area. But <span>this foundational kind of understanding is a good basis on which to build deeper expertise. <span>

Original toplevel document

Augmenting Long-term Memory
e time, the project also helped me learn the Anki interface, and got me to experiment with different ways of posing questions. That is, it helped me build the skills necessary to use Anki well. <span>Using Anki to thoroughly read a research paper in an unfamiliar field I find Anki a great help when reading research papers, particularly in fields outside my expertise. […] But this foundational kind of understanding is a good basis on which to build deeper expertise. It's notable that I was reading the AlphaGo paper in support of a creative project of my own, namely, writing an article for Quanta Magazine. This is important: I find Anki works much b




Flashcard 4370012966156

Question
The [...] hitters try to “force” a diagnosis, and give only one most likely diagnosis. They are either very, very correct, and look very smart, or else they strike out and miss the diagnosis completely.
Answer
home-run

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The home-run hitters try to “force” a diagnosis, and give only one most likely diagnosis. They are either very, very correct, and look very smart, or else they strike out and miss the diagnosis comp

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370014539020

Question
The home-run hitters try to “[...]” a diagnosis, and give only one most likely diagnosis. They are either very, very correct, and look very smart, or else they strike out and miss the diagnosis completely.
Answer
force

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The home-run hitters try to “force” a diagnosis, and give only one most likely diagnosis. They are either very, very correct, and look very smart, or else they strike out and miss the diagnosis completely.

Original toplevel document (pdf)

cannot see any pdfs







Sebaceous carcinoma (21.5): oil red-O stain positive (need frozen section). Sometimes Bowenoid changes (oil red-O negative carcinoma) in the epidermis coexist with sebaceous carcinoma in the dermis. EMA positivity is useful if frozen section not available
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
Sebaceous carcinoma (21.5): oil red-O stain positive (need frozen section). Sometimes Bowenoid changes (oil red-O negative carcinoma) in the epidermis coexist with sebaceous carcinoma in the dermis. EMA positivity is useful if frozen section not available Merkel cell tumor (26.7): small cell tumor (1.130) is almost always present also in the dermis, in addition to the epidermotropic cells. Sometimes Bowen’s disease can coexist, or the sm

Original toplevel document (pdf)

cannot see any pdfs




Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
zen section). Sometimes Bowenoid changes (oil red-O negative carcinoma) in the epidermis coexist with sebaceous carcinoma in the dermis. EMA positivity is useful if frozen section not available <span>Merkel cell tumor (26.7): small cell tumor (1.130) is almost always present also in the dermis, in addition to the epidermotropic cells. Sometimes Bowen’s disease can coexist, or the small cells enter the epidermis. Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase Clear cell acanthoma (18.6): discrete clone of pale keratinocytes in a psoriasiform epidermis, positive for glycogen and keratin Hidroacanthoma simplex (23.10): this is an eccrine porom

Original toplevel document (pdf)

cannot see any pdfs




Clear cell acanthoma (18.6): discrete clone of pale keratinocytes in a psoriasiform epidermis, positive for glycogen and keratin
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
s. Sometimes Bowen’s disease can coexist, or the small cells enter the epidermis. Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase <span>Clear cell acanthoma (18.6): discrete clone of pale keratinocytes in a psoriasiform epidermis, positive for glycogen and keratin Hidroacanthoma simplex (23.10): this is an eccrine poroma with Borst–Jadassohn features, and sweat ducts are present Pagetoid dyskeratosis 164 (1.27) Epidermotropic adnexal carcinoma (2

Original toplevel document (pdf)

cannot see any pdfs




Hidroacanthoma simplex: this is an eccrine poroma with Borst–Jadassohn phenomenon, and sweat ducts are present
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
e for CK20 (perinuclear dot pattern) and neuron-specific enolase Clear cell acanthoma (18.6): discrete clone of pale keratinocytes in a psoriasiform epidermis, positive for glycogen and keratin <span>Hidroacanthoma simplex (23.10): this is an eccrine poroma with Borst–Jadassohn features, and sweat ducts are present Pagetoid dyskeratosis 164 (1.27) Epidermotropic adnexal carcinoma (23.13): rare Epidermotropic metastatic carcinoma or melanoma: very rare. Usually carcinoma or melanoma cells within th

Original toplevel document (pdf)

cannot see any pdfs




Getting one's hands dirty in programming

This is what I do…

I've learnt a new programming language and now I want to put my skill into action. I pick up a project which I wrote in another language and start re-creating it with this new skill.

But sometimes I don't want to waste my time re-writing the same project. So I pick up an online challenge and show my skills.

But we don't get these challenges daily; in that case I solve earlier challenge questions and show how strong I have become at programming.

Ah no… I don't want to solve an older challenge, I need a real-life challenge. So I go through the old ideas I had in the past and put them into action: create a new app or website and update my GitHub repository.

Oh, and if I never got any such idea, I log into Stack Overflow and solve other people's problems.

In short, there are various ways you can get your hands dirty.

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

After finishing a programming language course, how do I get real dirty with coding real-world-like apps? - Quora
ly language, I still do not know where to start on a project. How do I approach it and figure out wha... Rajesh Upadhayaya, Web developer, Program digger, technology lover Answered Mar 14, 2018 <span>This is what i do… […] In short there are various way you can get your hands dirty.</span>




Flashcard 4370028956940

Question
[...] carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase
Answer
Merkel cell

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370030529804

Question
Merkel cell carcinoma cells are usually positive for [...] (perinuclear dot pattern) and neuron-specific enolase
Answer
CK20

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370032102668

Question
Merkel cell carcinoma cells are usually positive for CK20 (perinuclear [...] pattern) and neuron-specific enolase
Answer
dot

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370033675532

Question
Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and [non-specific marker]
Answer
neuron-specific enolase

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Merkel cell carcinoma cells are usually positive for CK20 (perinuclear dot pattern) and neuron-specific enolase

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370039442700

Tags
#reinforcement-learning
Question
What is the form of a universal successor features approximator (USFA)?
Answer
Universal successor features are defined as \(\psi(s, a, \pi)\equiv \psi^{\pi}(s, a) \equiv \mathrm{E}^{\pi}\left[\sum_{i=t}^{\infty} \gamma^{i-t} \boldsymbol{\phi}_{i+1} | S_{t}=s, A_{t}=a\right]\); that is, they are distinguished from SFs by taking the policy as an additional argument. Based on this definition, we call \(\tilde{\boldsymbol{\psi}}(\boldsymbol{s}, \boldsymbol{a}, \pi) \approx \boldsymbol{\psi}(\boldsymbol{s}, \boldsymbol{a}, \pi)\) a universal successor features approximator (USFA).

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

pdf

cannot see any pdfs
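To make the definition concrete, here is a minimal numpy sketch of a USFA under an assumed linear parameterisation ψ̃(s, a, z) = θ_a · feat(s, z), together with GPI action selection over a set of candidate policy vectors. The names feat, usfa and gpi_action, and all dimensions, are illustrative assumptions rather than anything from the paper.

import numpy as np

# A minimal sketch, assuming a linear USFA: psi~(s, a, z) = theta[a] @ feat(s, z).
n_actions, dim_phi, dim_s, dim_z = 4, 8, 8, 8
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.1, size=(n_actions, dim_phi, dim_s + dim_z))

def feat(s, z):
    # Hypothetical joint embedding of state s and policy vector z.
    return np.concatenate([s, z])

def usfa(s, a, z):
    # psi~(s, a, z): predicted discounted sum of cumulants phi under policy pi_z.
    return theta[a] @ feat(s, z)

def gpi_action(s, w, zs):
    # Generalised policy improvement: argmax_b max_i psi~(s, b, z_i)^T w.
    q = np.array([[usfa(s, b, z) @ w for z in zs] for b in range(n_actions)])
    return int(np.argmax(q.max(axis=1)))

s, w = rng.normal(size=dim_s), rng.normal(size=dim_phi)
zs = [w + 0.1 * rng.normal(size=dim_z) for _ in range(5)]  # candidate policies near the task
print(gpi_action(s, w, zs))

The point of taking z as an input is that one set of parameters θ generalises across policies, which is what lets GPI evaluate many πz without learning a separate SF per policy.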







Flashcard 4370042326284

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370047307020

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370052287756

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370059103500

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370064084236

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370069851404

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370074832140

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370079812876

Tags
#bones #bony #has-images #landmarks #pelvis



statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370084793612

Tags
#bones #bony #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370089774348

Tags
#bones #bony #has-images #landmarks #master #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370094755084

Tags
#bones #bony #femur #has-images #landmarks #master #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370099735820

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370104716556

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370109697292

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370114678028

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370119658764

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370124639500

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370129620236

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370134600972

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370139581708


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370140892428

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370147708172

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370152688908

Tags
#bones #bony #femur #has-images #landmarks #left #master #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370157669644

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370162650380

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370167631116

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370172611852

Tags
#2019-08-29_12-21-02 #has-images





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370177592588

Tags
#2019-08-29_12-21-02 #has-images





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370182573324

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370187554060

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370192534796

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370197515532

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370202496268

Tags
#bones #bony #femur #has-images #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370207477004

Tags
#bones #bony #femur #has-images #landmarks #pelvis #right





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370212457740

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370217438476

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370222419212

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370227399948

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370232380684

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370237361420

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370242342156

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370249157900

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370254138636

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370259119372

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370265935116

Tags
#bones #bony #has-images #inferior #landmarks #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370270915852

Tags
#bones #bony #has-images #inferior #landmarks #master #pelvis





statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill






Flashcard 4370275896588

Tags
#reinforcement-learning
Question

Algorithm 1 Learn USFA with ε-greedy Q-learning

Require: ε, training tasks M, distribution D_z over R^d, number of policies n_z

  1: select initial state s ∈ S
  2: for n_s steps do
  3:   sample w uniformly at random from M
  4:   {sample policies, possibly based on current task}
  5:   for i ← 1, 2, ..., n_z do z_i ∼ D_z(·|w)
  6:   if Bernoulli(ε) = 1 then a ← Uniform(A)
  7:   else a ← [GPI]
  8:   Execute action a and observe φ and s′
  9:   for i ← 1, 2, ..., n_z do {Update ψ̃}
 10:     a′ ← [\(a' \equiv \pi_i(s')\)]
 11:     θ ← θ + α[φ + γψ̃(s′, a′, z_i) − ψ̃(s, a, z_i)] ∇_θψ̃(s, a, z_i)
 12:   s ← s′
 13: return θ

Answer

Line 7 (GPI): \(a \leftarrow \operatorname{argmax}_{b} \max_{i} \tilde{\boldsymbol{\psi}}\left(s, b, \mathbf{z}_{i}\right)^{\top} \mathbf{w}\)

Line 10: \(a' \leftarrow \operatorname{argmax}_{b} \tilde{\boldsymbol{\psi}}\left(s', b, \mathbf{z}_{i}\right)^{\top} \mathbf{z}_{i}\)


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

pdf

cannot see any pdfs
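
For readers who want to see the two cloze steps concretely, below is a minimal NumPy sketch of the GPI action choice (line 7) and the per-policy action choice (line 10). The array layout and the function names here are assumptions made for this illustration; they are not taken from the paper's code.

import numpy as np

# Hypothetical layout for illustration: psi_s[b, i, :] holds the successor
# features psi~(s, b, z_i) for action b and sampled policy vector z_i.

def gpi_action(psi_s, w):
    """Line 7: a <- argmax_b max_i psi~(s, b, z_i)^T w  (GPI over sampled policies)."""
    q = psi_s @ w                          # (n_actions, n_z) action-values Q~(s, b; z_i, w)
    return int(np.argmax(q.max(axis=1)))   # max over policies, argmax over actions

def policy_action(psi_s_next, z, i):
    """Line 10: a' <- argmax_b psi~(s', b, z_i)^T z_i, i.e. a' = pi_i(s')."""
    q_i = psi_s_next[:, i, :] @ z[i]       # (n_actions,) values under policy i
    return int(np.argmax(q_i))

# Toy usage with random numbers: 4 actions, 3 sampled policies, d = 5 features.
psi_s = np.random.rand(4, 3, 5)
z = np.random.rand(3, 5)
w = np.random.rand(5)
print(gpi_action(psi_s, w), policy_action(psi_s, z, 0))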







Flashcard 4370286382348

Question
[...]: this is an eccrine poroma with Borst–Jadassohn features, and sweat ducts are present
Answer
Hidroacanthoma simplex

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Hidroacanthoma simplex: this is an eccrine poroma with Borst–Jadassohn features, and sweat ducts are present

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370287955212

Question
Hidroacanthoma simplex: this is an eccrine [...] with Borst–Jadassohn features, and sweat ducts are present
Answer
poroma

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Hidroacanthoma simplex: this is an eccrine poroma with Borst–Jadassohn features, and sweat ducts are present

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370290576652

Question
Hidroacanthoma simplex: this is an eccrine poroma with [..phenomenon.], and sweat ducts are present
Answer
Borst–Jadassohn phenomenon

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Hidroacanthoma simplex: this is an eccrine poroma with Borst–Jadassohn phenomenon, and sweat ducts are present

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4370298965260

Question
context
Answer
a shared object, which can contain states, across all processors of the strategy pipeline

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill
Unknown title
late Strategy 1.1 Terminologies If you are not familiar with strategy pipeline, please take a look at this video https://broadcast.amazon.com/videos/46919 (from 0:00 till 15:00). Terminologies: context a shared object, which can contain states, across all processors of the strategy pipeline sources anything that is retrieved by calling dependency services and is used for generating and/o







Flashcard 4370302373132

Question
sources
Answer
anything that is retrieved by calling dependency services and is used for generating and/or post-processing/filtering candidates (recommendations)

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill
Unknown title
ook at this video https://broadcast.amazon.com/videos/46919 (from 0:00 till 15:00). Terminologies: context a shared object, which can contain states, across all processors of the strategy pipeline sources anything that is retrieved by calling dependency services and is used for generating and/or post-processing/filtering candidates (recommendations) candidates recommendations that are retur







Flashcard 4370305781004

Question
candidates
Answer
recommendations that are returned to the client

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill
Unknown title
cross all processors of the strategy pipeline sources anything that is retrieved by calling dependency services and is used for generating and/or post-processing/filtering candidates (recommendations) candidates recommendations that are returned to the client. widgets a container for a list of recommendations and additional metadata (title, reftag, etc.) A brief description of the stages in the pipel







Flashcard 4370308664588

Question
widgets
Answer
a container for a list of recommendations and additional metadata (title, reftag, etc.)

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill
Unknown title
ng that is retrieved by calling dependency services and is used for generating and/or post-processing/filtering candidates (recommendations) candidates recommendations that are returned to the client. widgets a container for a list of recommendations and additional metadata (title, reftag, etc.) A brief description of the stages in the pipeline (in the order of being executed) contextProcessor
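
Taken together, the four terms above describe one pass through a strategy pipeline: processors fetch sources, generate and filter candidates, and package them into widgets, all while sharing a single context. The sketch below is a hypothetical illustration of that flow; every class and function name in it is invented for the example and is not the actual pipeline API.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Context:
    """Shared object carrying state across all processors in the pipeline."""
    sources: Dict[str, Any] = field(default_factory=dict)  # results of dependency-service calls
    candidates: List[str] = field(default_factory=list)    # recommendations to return to the client
    widgets: List[dict] = field(default_factory=list)      # recommendations plus metadata (title, reftag, ...)

Processor = Callable[[Context], None]

def fetch_sources(ctx: Context) -> None:
    # Stand-in for calling a dependency service.
    ctx.sources["purchase_history"] = ["asin-1", "asin-2", "asin-3"]

def generate_candidates(ctx: Context) -> None:
    ctx.candidates = list(ctx.sources["purchase_history"])

def filter_candidates(ctx: Context) -> None:
    # Post-processing/filtering step.
    ctx.candidates = [a for a in ctx.candidates if a != "asin-2"]

def build_widget(ctx: Context) -> None:
    ctx.widgets.append({"title": "Recommended for you",
                        "reftag": "example_reftag",
                        "items": ctx.candidates})

def run_pipeline(processors: List[Processor]) -> Context:
    ctx = Context()
    for step in processors:
        step(ctx)  # every processor reads and mutates the same shared context
    return ctx

print(run_pipeline([fetch_sources, generate_candidates,
                    filter_candidates, build_widget]).widgets)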