In a paper presented this week at the Conference on Empirical Methods in Natural Language Processing in Brussels, Belgium, Google researchers described offline, on-device AI systems — Self-Governing Neural Networks (SGNNs) — that achieve state-of-the-art results in specific dialog-related tasks.
“The main challenges with developing and deploying deep neural network models on-device are (1) the tiny memory footprint, (2) inference latency and (3) significantly low computational capacity compared to high-performance computing systems, such as CPUs, GPUs, and TPUs on the cloud,” the team wrote.
Afshine Amidi (Ecole Centrale Paris, MIT) and Shervine Amidi (Ecole Centrale Paris, Stanford University) offer here the French translation of the cheatsheets from Stanford's machine learning course (https://stanford.edu/%7Eshervine/teaching/cs-229.html).
How is GloVe different from word2vec? – Quora
Both models learn geometrical encodings (vectors) of words from their co-occurrence information (how frequently they appear together in large text corpora). They differ in that word2vec is a "predictive" model, whereas GloVe is a "count-based" model.
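To make "count-based" concrete, here is a minimal sketch of the raw statistic GloVe starts from: a windowed co-occurrence count over a corpus. The function name and toy corpus are hypothetical, and this shows only the counting step, not GloVe's weighted least-squares factorization of those counts.

```python
from collections import Counter

def cooccurrence_counts(sentences, window=2):
    """Count how often unordered word pairs appear within `window`
    positions of each other across a list of sentences."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        for i, w in enumerate(tokens):
            # Only look ahead, and store pairs in sorted order,
            # so each unordered pair is counted once per occurrence.
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                counts[tuple(sorted((w, tokens[j])))] += 1
    return counts

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
counts = cooccurrence_counts(corpus)
print(counts[("sat", "the")])  # → 4
```

A predictive model like word2vec never materializes this matrix; it slides over the same windows and trains a network to predict context words, arriving at related vectors by a different route.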
The Amazon Echo as an anatomical map of human labor, data and planetary resources
By Kate Crawford and Vladan Joler (2018)
The GeniSys NLU Engine combines a custom-trained DNN (Deep Neural Network) built with TFLearn for intent classification and a custom-trained MITIE model for entity classification.
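To illustrate the intent-classification half of such a pipeline, here is a minimal bag-of-words sketch in plain Python. It is a stand-in for the TFLearn DNN, not GeniSys's actual code: the intent labels, training patterns, and function names are all hypothetical, and the scoring is simple word overlap rather than a trained network.

```python
from collections import Counter

# Hypothetical training patterns; GeniSys trains a neural net on data like this.
INTENTS = {
    "greeting": ["hello there", "hi how are you"],
    "goodbye": ["bye for now", "see you later"],
}

def bag_of_words(text):
    """Lowercase and tokenize text into a word-count multiset."""
    return Counter(text.lower().split())

def classify_intent(text):
    """Return the intent whose training patterns share the most words
    with the query (a crude substitute for a learned classifier)."""
    query = bag_of_words(text)
    best, best_score = None, -1
    for intent, patterns in INTENTS.items():
        vocab = Counter()
        for p in patterns:
            vocab += bag_of_words(p)
        score = sum(min(query[w], vocab[w]) for w in query)
        if score > best_score:
            best, best_score = intent, score
    return best

print(classify_intent("hello how are you"))  # → greeting
```

Entity classification (the MITIE half) would then run separately over the same utterance to extract slots such as names or dates.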