Google Research: BERT, or Bidirectional Encoder Representations from Transformers

BERT

BERT, or Bidirectional Encoder Representations from Transformers, is a new method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.

Our academic paper, which describes BERT in detail and provides full results on a number of tasks, can be found here: https://arxiv.org/abs/1810.04805.

https://github.com/google-research/bert/blob/master/README.md
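As a quick illustration of what "pre-trained language representations" means in practice, the sketch below extracts contextual token embeddings from a pre-trained BERT checkpoint. It uses the Hugging Face `transformers` package as a convenient wrapper rather than the TensorFlow scripts in the repo above, so the library and model name here are assumptions about one possible setup, not the official google-research/bert workflow.

```python
# Minimal sketch (not from the google-research/bert repo): pull contextual
# embeddings from a pre-trained BERT model via the Hugging Face
# `transformers` package. Install with: pip install transformers torch
import torch
from transformers import BertModel, BertTokenizer

# "bert-base-uncased" is the 12-layer English checkpoint released with the paper.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()  # inference only; no fine-tuning here

sentence = "BERT reads text bidirectionally."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per WordPiece token; because the encoder is
# bidirectional, each vector reflects both left and right context.
token_embeddings = outputs.last_hidden_state  # shape: [1, num_tokens, 768]
print(token_embeddings.shape)
```

Fine-tuning for a downstream task adds a task-specific output layer on top of these representations, which is the setup the paper evaluates across its NLP benchmarks.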

Google’s on-device text classification AI achieves 86.7% accuracy | VentureBeat

In a paper presented this week at the Conference on Empirical Methods in Natural Language Processing in Brussels, Belgium, Google researchers described Self-Governing Neural Networks (SGNNs), offline on-device AI systems that achieve state-of-the-art results on specific dialog-related tasks.

“The main challenges with developing and deploying deep neural network models on-device are (1) the tiny memory footprint, (2) inference latency and (3) significantly low computational capacity compared to high-performance computing systems, such as CPUs, GPUs, and TPUs on the cloud,” the team wrote.
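To make the memory-footprint point concrete, here is a toy sketch of the projection idea behind SGNNs: instead of storing a large embedding table, input text is hashed into a small fixed-size binary feature vector that feeds a tiny dense network. The hashing scheme, dimensions, and layer size below are illustrative assumptions, not the paper's exact architecture.

```python
# Toy sketch of an SGNN-style projection layer (illustrative only; the real
# model uses the locality-sensitive projections described in the paper).
import hashlib
import numpy as np

PROJECTION_BITS = 256  # fixed feature size, independent of vocabulary size


def project(text: str, n: int = 2) -> np.ndarray:
    """Hash word n-grams into a fixed-size binary vector (no embedding table)."""
    words = text.lower().split()
    vec = np.zeros(PROJECTION_BITS, dtype=np.float32)
    for i in range(len(words) - n + 1):
        ngram = " ".join(words[i : i + n])
        h = int(hashlib.md5(ngram.encode()).hexdigest(), 16)
        vec[h % PROJECTION_BITS] = 1.0
    return vec


# A tiny dense layer on top; the weights here are random stand-ins for a
# trained model, just to show the shapes involved.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(PROJECTION_BITS, 2))  # 2 classes, e.g. dialog acts

features = project("could you book a table for two")
logits = features @ W
print(logits)  # class scores from a model measured in KB, not hundreds of MB
```

Because the projection replaces a stored lookup table with on-the-fly hashing, the parameter count stays tiny, which is what lets models like this run offline on-device despite the constraints the authors list above.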

https://venturebeat.com/2018/11/01/googles-on-device-text-classification-ai-achieves-86-7-percent-accuracy/