GitHub – labmlai/annotated_deep_learning_paper_implementations: 🧑‍🏫 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit), optimizers (adam, radam, adabelief), gans(…

https://github.com/labmlai/annotated_deep_learning_paper_implementations

2 years ago

Deep learning on graphs for NLP

https://drive.google.com/file/d/1A9Gtzyan4tqFTgmNsNfwOkO4ELR77iNh/view

2 years ago

Do Wide and Deep Networks Learn the Same Things?

https://ai.googleblog.com/2021/05/do-wide-and-deep-networks-learn-same.html (Google AI Blog)

2 years ago

DeepMind, Microsoft, Allen AI & UW Researchers Convert Pretrained Transformers into RNNs, Lowering Memory Cost While Retaining High Accuracy | by Synced | SyncedReview | Apr, 2021 | Medium

https://medium.com/syncedreview/deepmind-microsoft-allen-ai-uw-researchers-convert-pretrained-transformers-into-rnns-lowering-806b94bf0521

2 years ago

DeepMoji

https://medium.com/@bjarkefelbo/what-can-we-learn-from-emojis-6beb165a5ea0
https://github.com/bfelbo/DeepMoji

3 years ago

Satoshi Iizuka — DeepRemaster

http://iizuka.cs.tsukuba.ac.jp/projects/remastering/en/index.html

4 years ago