Category: Machine Learning
-
WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU
https://github.com/salesforce/warp-drive GitHub – salesforce/warp-drive WarpDrive is a flexible, lightweight, and easy-to-use open-source reinforcement learning (RL) framework that implements end-to-end multi-agent RL on a single GPU (Graphics Processing Unit). Using the extreme parallelization capability of GPUs, WarpDrive enables orders-of-magnitude faster RL compared … github.com
-
Accelerate Transformers on State of the Art Hardware
https://huggingface.co/hardware Optimum: the ML Hardware Optimization Toolkit for Production We’re on a journey to advance and democratize artificial intelligence through open source and open science. huggingface.co
-
SummPip: Unsupervised Multi-Document Summarization with Sentence Graph Compression
https://arxiv.org/abs/2007.08954 [2007.08954] SummPip: Unsupervised Multi-Document Summarization with Sentence Graph Compression – arXiv.org Obtaining training data for multi-document summarization (MDS) is time consuming and resource-intensive, so recent neural models can only be trained for limited domains. In this paper, we propose SummPip: an unsupervised method for multi-document summarization, in which we convert the original documents to…
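A toy sketch of the sentence-graph idea behind SummPip (not the paper's implementation, which uses learned sentence representations and spectral clustering): group similar sentences into clusters, then keep one representative per cluster as a crude multi-document summary. All data and thresholds here are made up for illustration.

```python
def jaccard(a, b):
    """Word-overlap similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster_sentences(sentences, threshold=0.3):
    """Greedy single-link clustering over the sentence similarity graph."""
    clusters = []
    for s in sentences:
        for c in clusters:
            if any(jaccard(s, t) >= threshold for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def summarize(sentences, threshold=0.3):
    """Pick the longest sentence of each cluster as its representative."""
    return [max(c, key=len) for c in cluster_sentences(sentences, threshold)]

docs = [
    "The storm hit the coast on Monday.",
    "A powerful storm hit the coast on Monday morning.",
    "Officials ordered evacuations in low-lying areas.",
]
print(summarize(docs))
```

The first two sentences overlap heavily and collapse into one cluster, so the summary keeps only their longest member plus the unrelated third sentence.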
-
Inferrd – Deploy your AI models on a GPU 10x faster than any cloud, at up to 90% lower cost.
https://inferrd.com/ Inferrd | Deploy AI on GPUs Deploy API on GPUs, in less than a minute, without cold starts, starting at $10 for a 1GB model. inferrd.com
-
FLAML – Fast and Lightweight AutoML
https://github.com/microsoft/FLAML GitHub – microsoft/FLAML: A fast and lightweight AutoML library. Advantages. For common machine learning tasks like classification and regression, find quality models with small computational resources. Users can choose their desired customizability: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space and metric), full customization (arbitrary training and evaluation code). github.com
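A plain-Python sketch of the "minimal customization (computational resource budget)" idea the FLAML README describes; this is not FLAML's API, just the underlying pattern: keep trying configurations until the time budget runs out and return the best one found. The loss function and config space are toy stand-ins.

```python
import random
import time

def toy_loss(cfg):
    # Stand-in for "train a model and measure validation error".
    return (cfg["lr"] - 0.1) ** 2 + (cfg["depth"] - 6) ** 2 * 1e-4

def budgeted_search(time_budget_s=0.05, seed=0):
    """Random search under a wall-clock budget; returns the best config found."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        cfg = {"lr": rng.uniform(1e-3, 1.0), "depth": rng.randint(2, 12)}
        loss = toy_loss(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

cfg, loss = budgeted_search()
print(cfg, loss)
```

Real AutoML libraries replace the blind random sampling with cost-aware search strategies, which is exactly the part FLAML optimizes.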
-
The Machine & Deep Learning Compendium
https://github.com/orico/www.mlcompendium.com/
-
GitHub – Yale-LILY/SummerTime: An open-source text summarization toolkit for non-experts.
https://github.com/Yale-LILY/SummerTime
-
GitHub – labmlai/annotated_deep_learning_paper_implementations: 🧑‍🏫 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit), optimizers (adam, radam, adabelief), gans (…
https://github.com/labmlai/annotated_deep_learning_paper_implementations
-
Serverless Your Machine Learning Model with PyCaret and AWS Lambda
https://medium.com/analytics-vidhya/serverless-your-machine-learning-model-with-pycaret-and-aws-lambda-c33334ee6011
-
Small text: Active learning for text classification in Python
https://github.com/webis-de/small-text GitHub – webis-de/small-text: Active learning for text classification in Python Requires Python 3.7 or newer. For using the GPU, CUDA 10.1 or newer is required. Quick Start. For a quick start, see the provided examples for binary classification, pytorch multi-class classification, or transformer-based multi-class classification. Docs github.com
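A minimal sketch of the pool-based active-learning loop that libraries like small-text automate (this is not small-text's API): query the unlabeled example the model is least sure about, so a human labels the most informative items first. The toy "model" below is an assumption for illustration only.

```python
def query_most_uncertain(pool, predict_proba):
    """Uncertainty sampling: index of the item whose probability is closest to 0.5."""
    return min(range(len(pool)),
               key=lambda i: abs(predict_proba(pool[i]) - 0.5))

# Toy stand-in classifier: longer texts are scored as more likely positive.
predict_proba = lambda text: min(len(text) / 40, 1.0)

pool = ["great", "this was a fairly long and detailed rev", "ok movie I guess"]
print(query_most_uncertain(pool, predict_proba))
```

In a real loop, the queried example is labeled, added to the training set, and the model is retrained before the next query.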
-
Open-Ended Learning Leads to Generally Capable Agents | DeepMind
https://deepmind.com/research/publications/open-ended-learning-leads-to-generally-capable-agents
-
Introducing Triton: Open-Source GPU Programming for Neural Networks
https://openai.com/blog/triton/
-
NLP needs to be open. 500+ researchers are trying to make it happen | VentureBeat
https://venturebeat-com.cdn.ampproject.org/c/s/venturebeat.com/2021/07/14/nlp-needs-to-be-open-500-researchers-are-trying-to-make-it-happen/amp/
-
artefactory/NLPretext: All the go-to functions you need to handle NLP use-cases, integrated in NLPretext
https://github.com/artefactory/NLPretext
-
Deep Learning on Graphs for NLP
https://drive.google.com/file/d/1A9Gtzyan4tqFTgmNsNfwOkO4ELR77iNh/view
-
Dataiku – Analyze text data with ontology tagging
https://www.dataiku.com/product/plugins/nlp-analysis/
-
BERT as a service
https://github.com/dimitreOliveira/bert-as-a-service_TFX GitHub – dimitreOliveira/bert-as-a-service_TFX: End-to-end pipeline with TFX to train and deploy a BERT model for sentiment analysis. BERT as a service This repository is designed to demonstrate a simple yet complete machine learning solution that uses a BERT model for text sentiment analysis using a TensorFlow Extended end-to-end pipeline, and making use of some…
-
Few-shot learning in practice: GPT-Neo and the 🤗 Accelerated Inference API
https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api
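The core of few-shot learning with a model like GPT-Neo is the prompt itself: a handful of labeled examples followed by the query. This sketch only builds such a prompt; it does not call the Inference API, and the task framing and labels are illustrative assumptions.

```python
def build_few_shot_prompt(examples, query, task="Sentiment"):
    """Format (text, label) pairs plus an open-ended query into one prompt."""
    lines = [f"{task}: {text} => {label}" for text, label in examples]
    lines.append(f"{task}: {query} =>")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("I loved it", "positive"), ("Terrible plot", "negative")],
    "Great acting",
)
print(prompt)
```

The model is then asked to continue the text after the final `=>`, and its completion is read off as the predicted label.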
-
An introduction to Recommendation Systems: an overview of machine and deep learning architectures
https://theaisummer.com/recommendation-systems An introduction to Recommendation Systems: an overview of machine and deep learning architectures | AI Summer Learn about the SOTA recommender system models. From collaborative filtering and factorization machines to DCN and DLRM theaisummer.com
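A toy user-based collaborative-filtering sketch, the simplest family the article surveys: recommend the item the most similar user rated highly that the target user has not seen. The ratings matrix and names are made up for illustration.

```python
import math

ratings = {
    "alice": {"matrix": 5, "inception": 4},
    "bob":   {"matrix": 5, "inception": 5, "dune": 4},
    "carol": {"titanic": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm = lambda r: math.sqrt(sum(x * x for x in r.values()))
    return dot / (norm(u) * norm(v))

def recommend(user):
    """Top unseen item from the nearest neighbor by rating similarity."""
    others = [n for n in ratings if n != user]
    nearest = max(others, key=lambda n: cosine(ratings[user], ratings[n]))
    seen = set(ratings[user])
    candidates = {i: r for i, r in ratings[nearest].items() if i not in seen}
    return max(candidates, key=candidates.get) if candidates else None

print(recommend("alice"))
```

Factorization machines, DCN, and DLRM from the article learn dense embeddings instead of comparing raw rating rows, but the neighborhood intuition is the same.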
-
Entity-level Factual Consistency of Abstractive Text Summarization (arXiv:2102.09130v1)
https://arxiv.org/abs/2102.09130v1
-
Infographic: Sentiment Scale Reveals Which Words Pack the Most Punch
https://www.visualcapitalist.com/word-sentiment-scale/
-
DeepSpeed: Accelerating large-scale model inference and training via system optimizations and compression – Microsoft Research
https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/
-
Google Cloud launches Vertex AI, unified platform for MLOps | Google Cloud Blog
https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-launches-vertex-ai-unified-platform-for-mlops
-
DeepMind's PodRacer TPU-Based RL Frameworks Deliver Exceptional Performance at Low Cost | Synced
https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-19/amp/
-
GitHub – nyu-mll/jiant: jiant is an NLP toolkit
https://github.com/nyu-mll/jiant
-
AutoCluster – AutoML for clustering models in scikit-learn
https://github.com/wywongbd/autocluster GitHub – wywongbd/autocluster: AutoML for clustering models in sklearn. autocluster is an automated machine learning (AutoML) toolkit for performing clustering tasks. Report and presentation slides can be found here and here. Prerequisites: Python 3.5 or above; Linux OS (Windows WSL also works). To get started, first install SMAC: sudo apt-get…
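For context, a tiny 1-D k-means in plain Python, the kind of clustering model an AutoML-for-clustering tool searches over and tunes (this is not autocluster's API; data is synthetic).

```python
def kmeans_1d(points, k=2, iters=20):
    """Lloyd's algorithm on scalars; returns the sorted final centers."""
    # Initialize centers on evenly spaced sorted points (fine for toy data).
    centers = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

centers = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.4, 9.6], k=2)
print(centers)
```

What tools like autocluster automate is the part this sketch hard-codes: choosing the algorithm, the number of clusters k, and the other hyperparameters.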
-
Do Wide and Deep Networks Learn the Same Things?
https://ai.googleblog.com/2021/05/do-wide-and-deep-networks-learn-same.html Google AI Blog: Do Wide and Deep Networks Learn the Same Things? Posted by Thao Nguyen, AI Resident, Google Research. A common practice to improve a neural network’s performance and tailor it to available computational resources is to adjust the architecture depth and width.Indeed, popular families of neural networks, including EfficientNet, ResNet and Transformers,…
-
Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker
https://huggingface.co/blog/sagemaker-distributed-training-seq2seq Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker Tutorial We will use the new Hugging Face DLCs and Amazon SageMaker extension to train a distributed Seq2Seq-transformer model on the summarization task using the transformers and datasets libraries, and then upload the model to huggingface.co and test it. As distributed training…
-
Summer of Language Models 21
https://bigscience.huggingface.co/en/#!index.md
-
Deploy T5 transformer model as a serverless FastAPI service on Google Cloud Run – YouTube
https://m.youtube.com/watch?v=OzV21spbCfI
-
Scaling up BERT-like Model Inference on Modern CPU – Part 1
https://huggingface.co/blog/bert-cpu-scaling-part-1
-
NVIDIA, Stanford & Microsoft Propose Efficient Trillion-Parameter Language Model Training on GPU Clusters | by Synced | SyncedReview | Apr, 2021 | Medium
https://medium.com/syncedreview/nvidia-stanford-microsoft-propose-efficient-trillion-parameter-language-model-training-on-gpu-7e415235313c
-
GooAQ 🥑: Google Answers to Google Questions!
https://github.com/allenai/gooaq GitHub – allenai/gooaq: Question-answer pairs collected from Google, where the questions come from Google auto-complete. The answers (short_answer and answer) were collected from Google's answer boxes. The answer types (answer_type) are inferred from the HTML content of Google's response. The dominant type in the current dataset is feat_snip: explanatory responses; the majority…
-
Multiple Time Series Forecasting with PyCaret | Towards Data Science – PyCaret is an open-source, low-code machine learning library and end-to-end model management tool built in Python for automating ML workflows.
https://towardsdatascience.com/multiple-time-series-forecasting-with-pycaret-bc0a779a22fe
-
PyCaret 2.2 Is Here – What's New?
https://towardsdatascience.com/pycaret-2-2-is-here-whats-new-ad7612ca63b
-
textflint/textflint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing
https://github.com/textflint/textflint
-
The Super Duper NLP Repo
https://notebooks.quantumstat.com/
-
open-mmlab/mmocr: OpenMMLab Text Detection, Recognition and Understanding Toolbox
https://github.com/open-mmlab/mmocr
-
neo4j/graphql: A GraphQL to Cypher query execution layer for Neo4j and JavaScript GraphQL implementations.
https://github.com/neo4j/graphql
-
What is MLOps? Machine Learning Operations Explained
https://www-freecodecamp-org.cdn.ampproject.org/c/s/www.freecodecamp.org/news/what-is-mlops-machine-learning-operations-explained/amp/
-
Words in context: tracking context-processing during language comprehension using computational language models and MEG
https://www.biorxiv.org/content/10.1101/2020.06.19.161190v1.full Words in context: tracking context-processing during language comprehension using computational language models and MEG www.biorxiv.org
-
DeepMind, Microsoft, Allen AI & UW Researchers Convert Pretrained Transformers into RNNs, Lowering Memory Cost While Retaining High Accuracy | by Synced | SyncedReview | Apr, 2021 | Medium
https://medium.com/syncedreview/deepmind-microsoft-allen-ai-uw-researchers-convert-pretrained-transformers-into-rnns-lowering-806b94bf0521
-
PAIR-code/lit: The Language Interpretability Tool: Interactively analyze NLP models for model understanding in an extensible and framework agnostic interface.
https://github.com/PAIR-code/lit/
-
Dodrio – An interactive visualization system designed to help NLP researchers and practitioners analyze and compare attention weights in transformer-based models with linguistic knowledge.
https://github.com/poloclub/dodrio GitHub – poloclub/dodrio: Exploring attention weights in transformer-based models with linguistic knowledge. Dodrio . An interactive visualization system designed to help NLP researchers and practitioners analyze and compare attention weights in transformer-based models with linguistic knowledge. github.com
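The quantities such tools visualize are scaled dot-product attention weights. Here is a plain-Python illustration for a single query over toy key vectors (not Dodrio's code, and not tied to any real model):

```python
import math

def attention_weights(query, keys):
    """softmax(q . k_i / sqrt(d)) over the keys for one attention head."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Numerically stable softmax.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

w = attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(w)
```

The query aligns with the first key, so the first weight dominates; visualizers like Dodrio render one such distribution per token, per head, per layer.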
-
GokuMohandas/mlops: https://madewithml.com/
https://github.com/GokuMohandas/mlops
-
The NLP Cypher | 04.04.21
https://pub.towardsai.net/the-nlp-cypher-04-04-21-9964ab34df17?source=rss—-98111c9905da—4?source=social.tw