Category: GPU
-
WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU
https://github.com/salesforce/warp-drive
WarpDrive is a flexible, lightweight, and easy-to-use open-source reinforcement learning (RL) framework that implements end-to-end multi-agent RL on a single GPU (Graphics Processing Unit). Using the extreme parallelization capability of GPUs, WarpDrive enables orders-of-magnitude faster RL compared …
-
Inferrd – Deploy your AI models on a GPU 10x faster than any cloud and up to 90% cheaper.
https://inferrd.com/
Deploy APIs on GPUs in less than a minute, without cold starts, starting at $10 for a 1 GB model.
-
small-text: Active learning for text classification in Python
https://github.com/webis-de/small-text
Requires Python 3.7 or newer. For using the GPU, CUDA 10.1 or newer is required. For a quick start, see the provided examples for binary classification, PyTorch multi-class classification, or transformer-based multi-class classification.
-
Introducing Triton: Open-Source GPU Programming for Neural Networks
https://openai.com/blog/triton/
-
Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs | Synced
https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-19/amp/
-
NVIDIA, Stanford & Microsoft Propose Efficient Trillion-Parameter Language Model Training on GPU Clusters | by Synced | SyncedReview | Apr, 2021 | Medium
https://medium.com/syncedreview/nvidia-stanford-microsoft-propose-efficient-trillion-parameter-language-model-training-on-gpu-7e415235313c
-
Open GPU Data Science | RAPIDS
https://rapids.ai/
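RAPIDS is best known for cuDF, its pandas-like GPU dataframe library. The sketch below is a minimal, assumed usage example (column names and values are made up for illustration) and requires a CUDA-capable GPU with the cudf package installed.
```python
import cudf

# cuDF mirrors the pandas API but executes on the GPU.
df = cudf.DataFrame({"key": ["a", "b", "a", "b"], "value": [1.0, 2.0, 3.0, 4.0]})

# Group-by aggregation runs entirely on the device.
totals = df.groupby("key").value.sum()
print(totals)

# Convert back to pandas when CPU-side interop is needed.
pdf = totals.to_pandas()
```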
-
turbo_transformers: a fast and user-friendly runtime for transformer inference on CPU and GPU
Transformer is the most important algorithmic innovation in the NLP field in recent years. It brings higher model accuracy but also introduces far more computation, so the efficient deployment of online Transformer-based services faces enormous challenges. To make costly Transformer online services more efficient, WeChat AI open-sourced a Transformer inference acceleration tool called TurboTransformers, …
-
Differentiable SDE solvers with GPU support and efficient sensitivity analysis.
https://github.com/google-research/torchsde
demo.ipynb in the examples folder is a short guide on how one may use the codebase for solving SDEs without considering gradient computation. It covers subtle points such as fixing the randomness in the solver and the consequence of noise…
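Below is a minimal sketch of the sdeint call pattern the demo notebook covers, using an assumed geometric Brownian motion as the SDE; the drift and diffusion coefficients are illustrative, not taken from the repo.
```python
import torch
import torchsde

class GeometricBrownianMotion(torch.nn.Module):
    # torchsde expects drift f(t, y) and diffusion g(t, y) methods,
    # plus noise_type / sde_type attributes on the SDE object.
    noise_type = "diagonal"
    sde_type = "ito"

    def __init__(self, mu=0.5, sigma=0.2):
        super().__init__()
        self.mu, self.sigma = mu, sigma

    def f(self, t, y):  # drift term
        return self.mu * y

    def g(self, t, y):  # diffusion term (diagonal noise)
        return self.sigma * y

sde = GeometricBrownianMotion()
y0 = torch.full((4, 1), 0.1)       # batch of 4 one-dimensional initial states
ts = torch.linspace(0, 1, 20)      # time grid to report the solution on
ys = torchsde.sdeint(sde, y0, ts)  # solution tensor of shape (20, 4, 1)
```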
-
Microsoft’s OpenAI supercomputer has 285,000 CPU cores, 10,000 GPUs | Engadget
https://www-engadget-com.cdn.ampproject.org/c/s/www.engadget.com/amp/microsoft-openai-supercomputer-azure-150001119.html
-
Monk_Object_Detection/Example – Indoor Image Object Detection and Tagging.ipynb at master · Tessellate-Imaging/Monk_Object_Detection · GitHub
Experimented with multi-GPU training of an indoor object detector using RetinaNet and the Open Images V5 dataset. The detector covers 24 classes such as tables, beds, sofas, home and kitchen appliances, etc. Training ran on an AWS P3 instance with 4 Nvidia V100 GPUs, 244 GB CPU RAM, and 32 CPUs. Training time: 5 hours for 10…
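Monk's own training wrapper isn't reproduced here; as a stand-in, the sketch below shows the generic PyTorch data-parallel pattern such a multi-GPU run relies on, with a placeholder classification backbone and dummy data rather than the actual RetinaNet detection pipeline.
```python
import torch
import torchvision

# Placeholder backbone with 24 output classes; the experiment itself used a RetinaNet detector.
model = torchvision.models.resnet50(num_classes=24)

# Replicate the model across all visible GPUs (e.g. the 4 V100s mentioned above).
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
model = model.cuda()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

# One training step on a dummy batch; DataParallel splits the batch across GPUs automatically.
images = torch.randn(32, 3, 224, 224).cuda()
labels = torch.randint(0, 24, (32,)).cuda()

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```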
-
Facebook's latest giant language AI hits computing wall at 500 Nvidia GPUs | ZDNet
https://www.zdnet.com/google-amp/article/facebooks-latest-giant-language-ai-hits-computing-wall-at-500-nvidia-gpus/
-
The Best 4-GPU Deep Learning Rig only costs $7000 not $11,000.
https://l7.curtisnorthcutt.com/the-best-4-gpu-deep-learning-rig
-
GraphVite: a general and high-performance graph embedding system for various applications, designed for a CPU-GPU hybrid architecture
https://graphvite.io/
-
Which GPU(s) to Get for Deep Learning
https://timdettmers.com/2019/04/03/which-gpu-for-deep-learning/
-
Uber Introduces AresDB: GPU-Powered, Open-Source, Real-Time Analytics Engine
https://www.infoq.com/news/2019/02/uber-aresdb-analytics
-
Google’s on-device text classification AI achieves 86.7% accuracy | VentureBeat
In a paper presented this week at the Conference on Empirical Methods in Natural Language Processing in Brussels, Belgium, Google researchers described offline, on-device AI systems, Self-Governing Neural Networks (SGNNs), that achieve state-of-the-art results in specific dialog-related tasks. “The main challenges with developing and deploying deep neural network models on-device are (1) the…
-
Startup’s AI Chip Beats GPU
Habana outruns Nvidia in inference
https://www.eetimes.com/document.asp?doc_id=1333719
-
GPU-Enabled Docker Container
https://www.nvidia.com/object/docker-container.html