Edited, memorised or added to reading queue on 15-Dec-2020 (Tue)


Flashcard 6073230953740

Tags
#SVM
Question
advantages
Answer
Effective in high dimensional spaces. Still effective in cases where number of dimensions is greater than the number of samples. Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient. Versatile: different Kernel functions can be specified for the decision function.

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0
1.4. Support Vector Machines — scikit-learn 0.23.2 documentation
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection. The advantages of support vector machines are: Effective in high dimensional spaces. Still effective in cases where number of dimensions is greater than the number of samples. Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
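A minimal sketch (not part of the card, toy data assumed) of the memory-efficiency point: a fitted scikit-learn SVC keeps only the support vectors, and they can be inspected directly.

# Minimal sketch: a fitted SVC retains only the support vectors,
# which is why SVMs are memory efficient. The toy data is illustrative.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = SVC(kernel="rbf").fit(X, y)

print(clf.support_vectors_.shape)  # (n_support_vectors, n_features)
print(clf.n_support_)              # number of support vectors per class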







Flashcard 6073232002316

Tags
#SVM
Question
How do SVMs perform when the number of dimensions is greater than the number of samples?
Answer
Still effective in cases where number of dimensions is greater than the number of samples.

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0
1.4. Support Vector Machines — scikit-learn 0.23.2 documentation
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection. The advantages of support vector machines are: Effective in high dimensional spaces. Still effective in cases where number of dimensions is greater than the number of samples. Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient. Versatile: different Kernel functions can be specified for the decision function.
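A quick sketch of this claim with illustrative sizes (50 samples, 500 features); the over-fitting caveat from the next extract still applies.

# Minimal sketch: an SVM can still be fit when n_features > n_samples.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=50, n_features=500,
                           n_informative=10, random_state=0)
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))  # training accuracy; beware over-fitting in this regime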







#SVM
If the number of features is much greater than the number of samples, avoiding over-fitting through the choice of kernel function and regularization term is crucial.
status: not read

1.4. Support Vector Machines — scikit-learn 0.23.2 documentation
Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels. The disadvantages of support vector machines include: If the number of features is much greater than the number of samples, avoiding over-fitting through the choice of kernel function and regularization term is crucial. SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below).
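One common way to act on this caveat, sketched here with an illustrative parameter grid, is to cross-validate the kernel and the regularization parameter C rather than accepting the defaults.

# Hedged sketch: cross-validate the kernel and regularization strength C
# when n_features is much larger than n_samples. Grid values are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=300,
                           n_informative=5, random_state=0)
param_grid = {"kernel": ["linear", "rbf"], "C": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_)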




Flashcard 6073236983052

Tags
#SVM
Question
probability estimate
Answer
SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation.

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0
1.4. Support Vector Machines — scikit-learn 0.23.2 documentation
The disadvantages of support vector machines include: SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below).
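A minimal sketch of the behaviour described above: SVC only exposes predict_proba when probability=True, and the cross-validated calibration cost is paid at fit time (toy data assumed).

# Minimal sketch: probability estimates require probability=True,
# which triggers the expensive internal cross-validated calibration.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = SVC(probability=True).fit(X, y)  # extra cost is paid here, at fit time
print(clf.predict_proba(X[:3]))        # per-class probability estimates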







Flashcard 6073238293772

Tags
#SVM
Question
Versatile
Answer
Different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0
1.4. Support Vector Machines — scikit-learn 0.23.2 documentation
Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient. Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.
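A short illustrative sketch of the versatility point, looping over the common built-in kernels (data and kernel list are illustrative, not from the card).

# Minimal sketch: the same estimator with different built-in kernels.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, clf.score(X, y))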







Flashcard 6073239342348

Tags
#SVM
Question
In what sense are SVMs versatile?
Answer
different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0
1.4. Support Vector Machines — scikit-learn 0.23.2 documentation
Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.
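A hedged sketch of the custom-kernel option: SVC also accepts a callable that returns the Gram matrix between two sets of samples; the plain dot-product kernel below is only an illustration.

# Hedged sketch: a custom kernel passed as a callable returning the Gram matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def my_kernel(X, Y):
    """Illustrative dot-product (linear) kernel."""
    return np.dot(X, Y.T)

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
clf = SVC(kernel=my_kernel).fit(X, y)
print(clf.predict(X[:5]))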







Flashcard 6073245633804

Question
Scalability
Answer
K-Means scales to very large n_samples and medium n_clusters with the MiniBatch code.

status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0
2.3. Clustering — scikit-learn 0.23.2 documentation
2.3.1. Overview of clustering methods: a comparison of the clustering algorithms in scikit-learn. K-Means: Parameters: number of clusters. Scalability: very large n_samples, medium n_clusters with MiniBatch code. Usecase: general-purpose, even cluster size, flat geometry, not too many clusters.
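A minimal sketch of the scalability note in this row: for very large n_samples, MiniBatchKMeans fits incrementally and is much faster than plain KMeans, at a small cost in cluster quality (sizes are illustrative).

# Minimal sketch: MiniBatchKMeans for very large n_samples.
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100_000, centers=8, random_state=0)
labels_full = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
labels_mini = MiniBatchKMeans(n_clusters=8, n_init=10,
                              random_state=0).fit_predict(X)
print(labels_full[:10], labels_mini[:10])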