Category: Notes
-
2105.13626 ByT5: Towards a token-free future with pre-trained byte-to-byte models
https://arxiv.org/abs/2105.13626
-
1 June, 2021 07:29
https://yashuseth.blog/2019/06/12/bert-explained-faqs-understand-bert-working/
-
2102.09130v1 Entity-level Factual Consistency of Abstractive Text Summarization
https://arxiv.org/abs/2102.09130v1
-
Windows Package Manager 1.0
Demitrius May 26th, 2021 We started a journey to build a native package manager for Windows 10 when we announced the Windows Package Manager preview at Microsoft Build 2020. We released the project on GitHub as an open-source collaborative effort and the community engagement has been wonderful to experience! Here we are today at Microsoft…
-
Windows Package Manager 1.0 | Windows Command Line
https://devblogs.microsoft.com/commandline/windows-package-manager-1-0/
-
Infographic: Sentiment Scale Reveals Which Words Pack the Most Punch
https://www.visualcapitalist.com/word-sentiment-scale/
-
GitHub – Jcharis/DataScienceTools: Useful Data Science and Machine Learning Tools, Libraries and Packages
https://github.com/Jcharis/DataScienceTools
-
Zero shot classification
https://nlp.town/blog/zero-shot-classification/
-
Modern Digital Infrastructure
-
Machine Learning Projects – YouTube
https://m.youtube.com/playlist?list=PL3N9eeOlCrP45DNfnYOiEOyFfv8Jihcok
-
DeepSpeed: Accelerating large-scale model inference and training via system optimizations and compression – Microsoft Research
https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/
-
Essential Parameter Estimation Techniques in Machine Learning, Data Science, and Signal Processing
https://towardsdatascience.com/essential-parameter-estimation-techniques-in-machine-learning-and-signal-processing-d671c6607aa0 Essential Parameter Estimation Techniques in Machine Learning, Data Science, and Signal Processing | by MANIE TADAYON | Towards Data Science Parameter estimation plays a vital role in machine learning, statistics, communication system, radar, and many other domains. For example, in a digital communication system, you sometimes need to… towardsdatascience.com
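Not from the article; a minimal sketch of the most common estimation technique it covers, maximum likelihood, for which a Gaussian has closed-form estimates:

```python
def gaussian_mle(samples):
    """Closed-form maximum-likelihood estimates for a Gaussian:
    the MLE mean is the sample mean, and the MLE variance is the
    biased (divide-by-n) sample variance."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

mu, var = gaussian_mle([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(mu, var)  # 5.0 4.0
```

Note the divide-by-n variance is biased; the unbiased estimator divides by n − 1 instead.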
-
24 May, 2021 08:54
https://betterprogramming.pub/stop-using-python-lists-everywhere-consider-using-deques-instead-74d37441be4e Stop Using Python Lists Everywhere — Consider Using Deques Instead | by Yong Cui | May, 2021 | Better Programming Lists for FIFO implementation. We use a list object (clients) to hold data. When a client enters the system, we append the client to the end of the waiting list. Whenever an associate becomes available,…
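The waiting-list scenario from the snippet, sketched with `collections.deque` (client names are invented):

```python
from collections import deque

# FIFO queue of waiting clients: deque gives O(1) appends and pops
# at both ends, whereas list.pop(0) is O(n) because every remaining
# element must shift one position left.
waiting = deque()
waiting.append("client-1")       # a client enters the system
waiting.append("client-2")
next_client = waiting.popleft()  # an associate becomes available
print(next_client)  # client-1
```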
-
23 May, 2021 11:22
https://syncedreview.com/2021/05/19/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-22/ Intelligent Graphic Design: Adobe’s Directional GAN Automates Image Content Generation for Marketing Campaigns | Synced The design and content of contemporary marketing campaigns, websites and banners have become increasingly targeted and sophisticated, and compelling image content is crucial for companies striving to stand out from the competition. Human graphic designers can spend a great…
-
23 May, 2021 11:14
https://syncedreview.com/2021/05/21/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-24/ ETH Zürich & Microsoft Study: Demystifying Serverless ML Training | Synced Serverless computing is a new type of cloud-based computation infrastructure initially developed for web microservices and IoT applications. As it frees model developers from concerns regarding capacity planning, configuration, management, maintenance, operating and scaling of containers, VMs and physical servers, serverless computing has…
-
How to get started with Reinforcement Learning (RL)
https://gordicaleksa.medium.com/how-to-get-started-with-reinforcement-learning-rl-4922fafeaf8c How to get started with Reinforcement Learning (RL) | by Aleksa Gordić | May, 2021 | Medium RL framework = an agent acts in the environment and learns from scalar rewards. You have an agent interacting with the environment. It makes some actions and the environment sends back the reward for that particular action and…
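Not from the post; a toy sketch of the agent–environment loop it describes, using an invented two-armed bandit environment and an epsilon-greedy agent:

```python
import random

class TwoArmedBandit:
    """Toy environment: action 1 pays reward 1 with prob 0.8, action 0 with prob 0.2."""
    def step(self, action):
        p = 0.8 if action == 1 else 0.2
        return 1.0 if random.random() < p else 0.0

# Epsilon-greedy agent learning action values purely from scalar rewards.
values = [0.0, 0.0]
counts = [0, 0]
env = TwoArmedBandit()
random.seed(0)
for _ in range(2000):
    if random.random() < 0.1:                      # explore
        action = random.randrange(2)
    else:                                          # exploit current estimate
        action = max(range(2), key=lambda a: values[a])
    reward = env.step(action)                      # environment returns a scalar reward
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # incremental mean

print(values)  # action 1 should end up with the higher estimated value
```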
-
AI has cracked a key mathematical puzzle for understanding our world
https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/ AI has cracked a key mathematical puzzle for understanding our world | MIT Technology Review Unless you’re a physicist or an engineer, there really isn’t much reason for you to know about partial differential equations. I know. After years of poring over them in undergrad while … www.technologyreview.com
-
Here are 15 Common Data Fallacies to Avoid
https://www.visualcapitalist.com/here-are-15-common-data-fallacies-to-avoid/ Here are 15 Common Data Fallacies to Avoid With so much data available, it’s easy to make big mistakes when analyzing and interpreting it. Here are 15 of the most common data fallacies to avoid. www.visualcapitalist.com
-
Killer Combo: Softmax and Cross Entropy | by Paolo Perrotta | Level Up Coding
https://levelup.gitconnected.com/killer-combo-softmax-and-cross-entropy-5907442f60ba?gi=e320910f8809
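Not from the article; a minimal sketch of why the pairing is a "killer combo": when class probabilities come from a softmax, the gradient of cross-entropy with respect to the logits collapses to p − y (probabilities minus the one-hot target):

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, target):
    """Negative log-likelihood of the true class index."""
    return -math.log(probs[target])

logits = [2.0, 1.0, 0.1]
target = 0
p = softmax(logits)
loss = cross_entropy(p, target)
grad = [pi - (1.0 if i == target else 0.0) for i, pi in enumerate(p)]
print(loss, grad)
```

The gradient components always sum to zero, since both the probabilities and the one-hot target sum to one.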
-
(PDF) Logic and Complexity in Cognitive Science | Rineke Verbrugge – Academia.edu
https://www.academia.edu/17703714/Logic_and_Complexity_in_Cognitive_Science
-
GitHub – cdpierse/transformers-interpret: Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
https://github.com/cdpierse/transformers-interpret
-
22 May, 2021 06:19
https://medium.com/syncedreview/eth-z%C3%BCrich-identifies-priors-that-boost-bayesian-deep-learning-models-311b07f457e4
-
For language models, analogies are a tough nut to crack, study shows
https://venturebeat.com/2021/05/13/for-language-models-analogies-are-a-tough-nut-to-crack-study-shows/
-
Language Modelling as a Multi-Task Problem – Facebook Research
https://research.fb.com/publications/language-modelling-as-a-multi-task-problem/
-
GitHub OCTO | Flat Data
https://octo.github.com/projects/flat-data
-
Linked brushing with HoloViz · GitHub
https://gist.github.com/MarcSkovMadsen/c31afa6db9e55e3f50f582b24ad60a34
-
Vaex: Pandas but 1000x faster – KDnuggets
https://www.kdnuggets.com/2021/05/vaex-pandas-1000x-faster.html#.YKKMNq4wlA8.linkedin
-
Google Cloud launches Vertex AI, unified platform for MLOps | Google Cloud Blog
https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-launches-vertex-ai-unified-platform-for-mlops
-
How Airbnb Achieved Metric Consistency at Scale
https://medium.com/airbnb-engineering/how-airbnb-achieved-metric-consistency-at-scale-f23cc53dea70
-
18 May, 2021 08:38
https://bdtechtalks.com/2021/05/17/ibms-codenet-machine-learning-programming/
-
RPubs – Dynamic Time Warping (DTW) and time series clustering
https://rpubs.com/esobolewska/dtw-time-series
-
GitHub – vinta/awesome-python: A curated list of awesome Python frameworks, libraries, software and resources
https://github.com/vinta/awesome-python
-
Intel made GTA V photorealistic using machine learning
https://www.01net.com/actualites/intel-a-rendu-gta-v-photorealiste-grace-a-l-apprentissage-machine-2042736.html
-
Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs | Synced
https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-19/amp/
-
How to Estimate and Lower the Costs of Machine Learning Products | by Sven Balnojan | Towards Data Science
https://towardsdatascience.com/how-to-estimate-and-lower-the-costs-of-machine-learning-products-c3cc32f13a05
-
12 May, 2021 07:03
https://medium.com/@tjwaterman99/web-scraping-is-now-legal-6bf0e5730a78
-
Must-have Chrome Extensions For Machine Learning Engineers And Data Scientists
https://www.kdnuggets.com/2021/05/chrome-extensions-machine-learning-engineers-data-scientists.html
-
GitHub – nyu-mll/jiant: jiant is an NLP toolkit
https://github.com/nyu-mll/jiant
-
Are Pre-trained Convolutions Better than Pre-trained Transformers?
https://arxiv.org/abs/2105.03322
-
AutoCluster AutoML for clustering models in scikit learn
https://github.com/wywongbd/autocluster GitHub – wywongbd/autocluster: AutoML for clustering models in sklearn. autocluster is an automated machine learning (AutoML) toolkit for performing clustering tasks. Report and presentation slides can be found here and here. Prerequisites: Python 3.5 or above; Linux OS (Windows WSL is also possible). How to get started? First, install SMAC: sudo apt-get…
-
Continuous Machine Learning (CML)
https://github.com/iterative/cml GitHub – iterative/cml: ♾️ CML – Continuous Machine Learning | CI/CD for ML In GitHub, open up a Pull Request to compare the experiment branch to master. Shortly, you should see a comment from github-actions appear in the Pull Request with your CML report. This is a result of the cml-send-comment function in your…
-
Postman I ❤️ you, but I met Thunder Client for Visual Studio Code ❤️❤️❤️ – Anthony Giretti’s .NET blog
https://anthonygiretti.com/2021/05/05/postman-i-love-you-but-i-met-thunder-client-for-visual-studio-code/
-
Realistic Lighting on Different Backgrounds
https://www.louisbouchard.ai/backgrounds-with-lighting/
-
Transformers Explained Visually
https://towardsdatascience.com/transformers-explained-visually-part-3-multi-head-attention-deep-dive-1c1ff1024853 Transformers Explained Visually (Part 3): Multi-head Attention, deep dive | by Ketan Doshi | Towards Data Science This is the third article in my series on Transformers. We are covering its functionality in a top-down manner. In the previous articles, we learned what a Transformer is, its architecture, and how it works.
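Not from the article; a minimal sketch of the single head at the core of multi-head attention, scaled dot-product attention softmax(QKᵀ/√d_k)V, on plain lists:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: each query scores every key,
    the scores are softmax-normalized, and the output is the
    weighted average of the values. Multi-head attention runs
    several of these on learned projections and concatenates."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))  # a convex combination of the two value rows, biased toward the first
```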
-
GitHub – priceloop/conventions: Priceloop Engineering Conventions for Python, Golang, Git Workflow etc
https://github.com/priceloop/conventions
-
Train Your GAN With 1/10th of the Data! NVIDIA ADA Explained
https://www.louisbouchard.ai/nvidia-ada Train Your GAN With 1/10th of the Data! NVIDIA ADA Explained With this new training method developed by NVIDIA, you can train a powerful generative model with one-tenth of the images! Making possible many applications that do not have access to so many images! www.louisbouchard.ai https://github.com/NVlabs/stylegan2-ada GitHub – NVlabs/stylegan2-ada: StyleGAN2 with adaptive discriminator augmentation…
-
Do Wide and Deep Networks Learn the Same Things?
https://ai.googleblog.com/2021/05/do-wide-and-deep-networks-learn-same.html Google AI Blog: Do Wide and Deep Networks Learn the Same Things? Posted by Thao Nguyen, AI Resident, Google Research. A common practice to improve a neural network’s performance and tailor it to available computational resources is to adjust the architecture depth and width. Indeed, popular families of neural networks, including EfficientNet, ResNet and Transformers,…