TrainMyAI currently supports three large language models (LLMs): GPT-4o, GPT-4o mini and Llama 3 (8B parameters). With some simple customization, older GPT models such as GPT-3.5 and GPT-4 can also be used.
For GPT models, TrainMyAI communicates with the external ChatGPT API, provided either by OpenAI or Azure, so server requirements are minimal. For Llama 3, TrainMyAI runs 100% locally on your own server and uses no external APIs, so server requirements are higher; an NVIDIA GPU is recommended for best performance.
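For reference, the sketch below shows what a chat completions call to the OpenAI-hosted API or to Azure OpenAI looks like in Python. TrainMyAI makes these calls internally when using GPT models; the client setup, model name and prompt here are illustrative placeholders, not TrainMyAI configuration.

```python
# Illustrative only: TrainMyAI handles these API calls internally for GPT models.
from openai import OpenAI, AzureOpenAI

# Option 1: the OpenAI-hosted API
client = OpenAI(api_key="sk-...")  # your OpenAI API key

# Option 2: the same API served through Azure OpenAI (uncomment to use)
# client = AzureOpenAI(
#     azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
#     api_key="...",
#     api_version="2024-02-01",
# )

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)
print(response.choices[0].message.content)
```

Either way, only the API endpoint changes; the request and response format is the same, which is why switching between OpenAI and Azure does not affect the rest of the setup.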
Use the comparison table below to decide which language model best suits your needs.
| | GPT-4o | GPT-4o mini | Llama 3 (8B) |
|---|---|---|---|
| Content and chat privacy | Fragments sent to OpenAI or Azure in random order | Fragments sent to OpenAI or Azure in random order | Absolute privacy |
| Languages | Approx. 100 languages | Approx. 100 languages | Optimized for English (contact us for other languages) |
| TrainMyAI license price | $5,000/year for all GPT models | $5,000/year for all GPT models | $6,000/year (GPT models included) |
| Cost per question | 0.5–2 US cents * (to OpenAI or Azure) | 0.015–0.06 US cents * (to OpenAI or Azure) | None |
| Linux version | Ubuntu 18+, Debian 10+, CentOS 8+ | Ubuntu 18+, Debian 10+, CentOS 8+ | Ubuntu 22 |
| Minimum server RAM | 4 GB | 4 GB | 16 GB |
| Disk requirement | 20 GB SSD | 20 GB SSD | 40 GB SSD |
| GPU/CPU recommendations | None | None | Best: NVIDIA GPU (12+ GB RAM); Reasonable: 16+ CPU cores |
* Estimated cost. For GPT models, the cost per question is set by the OpenAI or Azure API. It depends primarily on the length of the references used per question, the length of the conversation so far, and the number of tokens per word in the language used. More on tokens.
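As a rough illustration of how the per-question cost scales, the sketch below multiplies assumed per-token prices by the number of input tokens (references plus conversation so far) and output tokens. The prices, model names and token counts are placeholders; check current OpenAI or Azure pricing for real figures.

```python
# Rough per-question cost estimate. Prices below are assumed for illustration
# and change over time; token counts are placeholders.

PRICES_PER_1M_TOKENS = {              # US dollars per 1M tokens (assumed)
    "gpt-4o":      {"input": 5.00, "output": 15.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost_cents(model, reference_tokens, conversation_tokens, answer_tokens):
    """Estimate the API cost of one question, in US cents."""
    p = PRICES_PER_1M_TOKENS[model]
    input_tokens = reference_tokens + conversation_tokens
    dollars = (input_tokens * p["input"] + answer_tokens * p["output"]) / 1_000_000
    return dollars * 100

# Example: 2,000 tokens of references, 500 tokens of prior conversation, 300-token answer
print(round(estimate_cost_cents("gpt-4o", 2000, 500, 300), 2))       # ~1.7 cents
print(round(estimate_cost_cents("gpt-4o-mini", 2000, 500, 300), 2))  # ~0.06 cents
```

Under these assumptions, longer references and longer conversations raise the input token count, which is why they dominate the per-question cost for GPT models.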