#Autoregressive #BERT #nlp
Unsupervised representation learning has been highly successful in the domain of natural language processing [7, 19, 24, 25, 10]. Typically, these methods first pretrain neural networks on large-scale unlabeled text corpora, and then finetune the models or representations on downstream tasks.
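As a concrete illustration of the pretrain-then-finetune workflow described above, here is a minimal sketch using the Hugging Face `transformers` library with PyTorch. This is an assumption for illustration only; the paper does not prescribe a toolkit, and the checkpoint name `xlnet-base-cased`, the two-label classification head, and the toy data are all hypothetical choices.

```python
# Minimal pretrain-then-finetune sketch (assumes Hugging Face
# `transformers` and PyTorch; checkpoint and task are illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Step 1: load a model already pretrained on large-scale unlabeled text.
checkpoint = "xlnet-base-cased"  # hypothetical choice of pretrained weights
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Step 2: finetune on a (toy) labeled downstream task.
texts = ["great movie", "terrible movie"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps, just to show the loop shape
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The same pattern applies whether one finetunes the full model, as here, or freezes the pretrained weights and trains only a task-specific head on the representations.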
Source: XLNet: Generalized Autoregressive Pretraining for Language Understanding, p. 1