GPT-Neo 2.7B

GPT-Neo 2.7B is a transformer model designed using EleutherAI’s replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model. This model is the same size as OpenAI’s "Ada" model.

https://huggingface.co/EleutherAI/gpt-neo-2.7B

The GPT-Neo models are an open-source replication of OpenAI's GPT-3 and were pretrained on the Pile, an 825 GiB English text corpus targeted at training large-scale language models. The family is released in several checkpoints, including the 1.3B and 2.7B variants.
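As a minimal sketch, the checkpoint can be loaded through the Hugging Face transformers text-generation pipeline (assuming transformers and PyTorch are installed and there is enough memory for the roughly 2.7 billion parameters; the prompt and sampling settings below are illustrative only):

```python
from transformers import pipeline

# Download the EleutherAI/gpt-neo-2.7B checkpoint from the Hugging Face Hub
# and wrap it in a text-generation pipeline (the weights are ~10 GB on disk).
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")

# Sample a short continuation of an example prompt.
output = generator(
    "EleutherAI's GPT-Neo is",
    do_sample=True,
    max_length=50,
)
print(output[0]["generated_text"])
```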
