We have hosted the application GPT-Neo so that you can run it in our online workstations, either with Wine or directly.


Quick description of GPT-Neo:

An implementation of model- and data-parallel GPT-3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the HuggingFace Transformers integration.

Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. Note that while GPT-Neo can technically run a training step at 200B+ parameters, it is very inefficient at those scales. This, as well as the fact that many GPUs became available to us, among other things, prompted us to move development over to GPT-NeoX.

All evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness.
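
Since the description recommends the HuggingFace Transformers integration for playing with the pre-trained models, here is a minimal sketch of that route. It assumes the publicly released EleutherAI/gpt-neo-1.3B checkpoint; the 125M and 2.7B variants work the same way, and the prompt and sampling settings are placeholders.

    from transformers import pipeline

    # Load a pre-trained GPT-Neo checkpoint from the HuggingFace Hub
    # (EleutherAI/gpt-neo-1.3B is assumed here).
    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

    # Sample a continuation for a short prompt.
    output = generator(
        "EleutherAI has",
        max_length=50,
        do_sample=True,
        temperature=0.9,
    )
    print(output[0]["generated_text"])

The larger checkpoints need correspondingly more RAM or GPU memory, so the 125M model is the easiest way to check that the setup works before moving up.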

Features:
  • Sign up for Google Cloud Platform, and create a storage bucket
  • You can also choose to train GPTNeo locally on your GPUs
  • Download one of our pre-trained models
  • Generating text is as simple as running the main.py script
  • Create your tokenizer
  • Tokenize your dataset (a rough sketch of these two steps follows this list)
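
The repository ships its own scripts for building a tokenizer and converting a dataset for training; the sketch below is not that tooling, just a hypothetical illustration of the two steps using the HuggingFace tokenizers library, with placeholder file names and vocabulary size.

    from tokenizers import ByteLevelBPETokenizer

    # Step 1: train a byte-level BPE tokenizer on your raw text corpus
    # ("corpus.txt", the vocab size, and the special token are assumptions).
    tokenizer = ByteLevelBPETokenizer()
    tokenizer.train(
        files=["corpus.txt"],
        vocab_size=50257,
        min_frequency=2,
        special_tokens=["<|endoftext|>"],
    )
    tokenizer.save("my-tokenizer.json")

    # Step 2: tokenize the dataset by encoding raw text into token IDs.
    with open("corpus.txt", encoding="utf-8") as f:
        ids = tokenizer.encode(f.read()).ids
    print(f"Encoded {len(ids)} tokens")

The actual training pipeline additionally writes the encoded IDs into the on-disk format the trainer expects, so treat this only as a picture of what "create your tokenizer, then tokenize your dataset" means.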


Programming Language: Python.
Categories:
Large Language Models, ChatGPT Apps, Generative AI
