The grok-beta, grok-2-1212, and gemini-2.0-flash-exp models are now available to premium users only
😢3👀2
There may be errors today while we migrate to another server
👍1🔥1🤩1
🔥 New update
⚡️ New models gpt-4.1 (premium), gpt-4.1-mini (free), gpt-4.1-nano (free) added
👍5❤1
🔥 Small update
⚡️ Many open-source models have been removed (check api.gpt4-all.xyz/v1/models for the current list). Image generation has also been permanently removed
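To see which models survived the cleanup, you can query the `/v1/models` endpoint mentioned above. A minimal sketch using only the standard library, assuming the usual OpenAI-compatible response shape (an object with a `data` array of `{"id": ...}` entries); only the base URL comes from the post:

```python
import json
import urllib.request

BASE_URL = "https://api.gpt4-all.xyz/v1"  # from the post


def extract_model_ids(payload: dict) -> list[str]:
    """Pull the model ids out of an OpenAI-style /v1/models response."""
    return [m["id"] for m in payload.get("data", [])]


def list_models(base_url: str = BASE_URL) -> list[str]:
    """GET /models and return the ids of the models still available."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return extract_model_ids(json.load(resp))
```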
🔥3👍1
🔥 New update
Removed models gemini-1.5-pro, gemini-1.5-flash, gemini-1.5-flash-8b, gemini-2.0-flash-exp, gpt-3.5-turbo
Added new models gemini-2.5-pro, gemini-2.5-flash
The API is now more stable
🔥4
🔥 New update
Added new models gpt-oss-20b and gpt-oss-120b, open-source models from OpenAI
(o3 currently works poorly, but you can use the new models instead)
🔥5
🔥 Small change
The default value for max_tokens is now 1024
If you get an empty response from a thinking model, just increase max_tokens; thinking tokens also count toward the limit
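In practice that means passing a larger `max_tokens` in the request when a thinking model's visible answer comes back empty. A sketch of the payload, assuming the standard OpenAI-compatible chat-completions shape; the model name is just an example:

```python
DEFAULT_MAX_TOKENS = 1024  # the new server-side default from the post


def build_request(prompt: str, model: str,
                  max_tokens: int = DEFAULT_MAX_TOKENS) -> dict:
    """Build a chat-completions payload. Thinking tokens count toward
    max_tokens, so raise it if the visible answer comes back empty."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# If a thinking model returns an empty message, retry with a larger budget:
retry = build_request("Explain quicksort.", "gemini-2.5-pro", max_tokens=4096)
```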
🔥 New update
Added new models gpt-5, gpt-5-mini, gpt-5-nano
Only gpt-5-nano is free for now, but gpt-5-mini may become free too
🔥5❤2
GPT4All
(re: the gpt-5 update) The error with these models has been fixed, sorry!
GPT4All
(re: the max_tokens change) The default limit on max_tokens has been removed. For that reason, please try not to spend too many tokens; just add something like "answer briefly without unnecessary words" to your system prompt.
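The brevity instruction above fits naturally as a system message. A minimal sketch, assuming the standard OpenAI-compatible messages format; the prompt wording is taken from the post:

```python
BRIEF_SYSTEM_PROMPT = "Answer briefly without unnecessary words."


def with_brief_system_prompt(user_message: str, model: str) -> dict:
    """Prepend the brevity instruction as a system message so replies
    stay short now that there is no default max_tokens cap."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": BRIEF_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }
```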
👌1