useapi.net
This weekend a few users reported that they were getting 401 errors when calling the /kolors endpoints:

{ "result": -401, "error_msg": "token value error", "error_id": "E1234567890" }

That 401 is from the Kling internal API. After re-adding the…

We implemented Kling forced token expiration mitigation logic, which will handle the case above and refresh the token for you. We tested it extensively and hope this will be a solid solution going forward. If you notice anything odd or out of order, please let us know.
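For reference, here is a minimal client-side sketch of how one might detect this specific error shape and retry after a short pause; the base URL, endpoint path, and payload are assumptions for illustration, not part of the announcement.

```python
import time
import requests

API_ROOT = "https://api.useapi.net/v1/kling"  # assumed base URL, for illustration only
TOKEN = "YOUR_USEAPI_TOKEN"                   # assumed bearer token

def call_kolors(payload: dict, retries: int = 1) -> dict:
    """POST to a /kolors endpoint and retry once if the Kling internal API
    reports the 'token value error' shown above (result: -401)."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    for attempt in range(retries + 1):
        resp = requests.post(f"{API_ROOT}/kolors", json=payload, headers=headers)
        is_json = resp.headers.get("Content-Type", "").startswith("application/json")
        body = resp.json() if is_json else {}
        if resp.status_code == 401 and body.get("result") == -401 and attempt < retries:
            time.sleep(5)  # give the service a moment to refresh the token
            continue
        resp.raise_for_status()
        return body
```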
Midjourney API v2 added the GET proxy/cdn-midjourney endpoint to proxy asset retrieval from the Midjourney CDN. Use this endpoint to download the imageUx and videoUx URLs returned by GET jobs/jobid.
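A rough usage sketch under stated assumptions: the v2 base URL, bearer-token auth, a url query parameter on the proxy endpoint, and imageUx/videoUx being lists of CDN URLs; none of these details are confirmed by the note above.

```python
import requests

API_ROOT = "https://api.useapi.net/v2"                    # assumed Midjourney API v2 base URL
HEADERS = {"Authorization": "Bearer YOUR_USEAPI_TOKEN"}   # assumed auth scheme

# 1. Fetch the job to obtain its CDN asset URLs (imageUx / videoUx).
job_id = "YOUR_JOB_ID"
job = requests.get(f"{API_ROOT}/jobs/{job_id}", headers=HEADERS).json()

# 2. Download each asset through the proxy endpoint instead of hitting the
#    Midjourney CDN directly. The "url" parameter name is an assumption.
for cdn_url in (job.get("imageUx") or []) + (job.get("videoUx") or []):
    asset = requests.get(
        f"{API_ROOT}/proxy/cdn-midjourney",
        params={"url": cdn_url},
        headers=HEADERS,
    )
    asset.raise_for_status()
    filename = cdn_url.rsplit("/", 1)[-1].split("?")[0] or "asset.bin"
    with open(filename, "wb") as fh:
        fh.write(asset.content)
```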
We have a working prototype of the Real-Time Midjourney API.
You read that right—it provides the same experience as your browser, allowing you to instantly see results without any delay.
We're hoping to release it in the very near future and are trying to collect feedback.
This will be an addition to our long-standing and very popular third-party Midjourney API v2.
The API returns a live data feed as an SSE text/event-stream, so we are not planning to release a callback feature, as it is somewhat pointless in this case.
We also want to ask how important the webhook/callback feature is for you, as we're not sure how well SSE streaming is supported by make.com or n8n—both have almost no documentation on that.
This is the same streaming approach used by the MiniMax v1 LLM API and is common among LLM APIs.
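For anyone unsure what consuming such a stream looks like, here is a generic SSE-reading sketch; the endpoint path, payload, and [DONE] terminator are placeholders, since the final Real-Time API shape has not been published.

```python
import json
import requests

# Placeholder endpoint and payload; the final Real-Time API is not released yet.
STREAM_URL = "https://api.useapi.net/v2/realtime/imagine"   # hypothetical
HEADERS = {
    "Authorization": "Bearer YOUR_USEAPI_TOKEN",             # assumed auth scheme
    "Accept": "text/event-stream",
}

with requests.post(STREAM_URL, json={"prompt": "a red bicycle"},
                   headers=HEADERS, stream=True) as resp:
    resp.raise_for_status()
    for raw in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive line by line; data lines start with "data:".
        if raw and raw.startswith("data:"):
            chunk = raw[len("data:"):].strip()
            if chunk == "[DONE]":        # common stream terminator (assumption)
                break
            print(json.loads(chunk))     # live progress / result payload
```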
The API is extremely fast and will allow you to post multiple jobs to the same channel as fast as the Discord rate limit allows (usually 1–2 interaction events per second). This can be a big advantage for folks looking for ultimate performance, but it also poses challenges, as this may be too fast and cause bans. Naturally, we can let you apply throttling if you choose.
This API will also support uploading images directly to the Discord CDN, so you can host your images there and refer to those hosted URLs from your prompts.
It would be very helpful for us if you could answer the questions below:
1. I'm interested in the Real-Time Midjourney API and do not care about the webhook/callback feature at all.
2. I'm interested in the Real-Time Midjourney API, but the webhook/callback feature is a must for me.
3. I do not care for the Real-Time Midjourney API at all.
Real-Time Midjourney API (Anonymous Poll)
• 33%: I'm interested in the Real-Time Midjourney API and do not care about the webhook/callback at all.
• 47%: I'm interested in the Real-Time Midjourney API, but the webhook/callback feature is a must for me.
• 20%: I do not care for the Real-Time Midjourney API at all.
MiniMax released changes to their website. We're working on a fix and will keep you posted on our progress.
PixVerse recently added the ability to generate images rather than videos.
The GET videos/effects endpoint now returns two new fields:
• template_type: 1 | 2, where 1 is a video response and 2 is an image response (new)
• template_model: `` | image_v6, where image_v6 (new) is used for template_type: 2

There are only 3 image response templates for now: Sharkie Plush Delivery, Self-Figurine Roast and Your game of life, archived forever.

We added a quick fix to deal with the new template_type: 2. The POST videos/create endpoint, when an image template_id is used, will properly place the image generation task and respond with "image_id": "user:12345-pixverse:user@email.com-image:123456789".

You can use the returned image_id value to retrieve the generated image via GET videos/<image_id> or delete it via DELETE videos/<image_id>.

We will monitor how PixVerse plans to support image generation; they will most likely add more soon, so we will move the above temporary image retrieval/delete endpoints to designated pixverse/images/ endpoints.
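A hedged sketch of that flow, assuming a base URL of https://api.useapi.net/v2/pixverse and bearer-token auth (both unconfirmed here); the template_id value is a placeholder.

```python
import requests

API_ROOT = "https://api.useapi.net/v2/pixverse"           # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_USEAPI_TOKEN"}   # assumed auth scheme

# 1. Place an image generation task by using an image template (template_type: 2).
#    The template_id below is a placeholder.
create = requests.post(
    f"{API_ROOT}/videos/create",
    json={"template_id": 123456789},
    headers=HEADERS,
)
create.raise_for_status()
image_id = create.json()["image_id"]   # e.g. "user:12345-pixverse:...-image:..."

# 2. Retrieve the generated image task via the returned image_id.
result = requests.get(f"{API_ROOT}/videos/{image_id}", headers=HEADERS)
result.raise_for_status()
print(result.json())

# 3. Delete it once it is no longer needed.
requests.delete(f"{API_ROOT}/videos/{image_id}", headers=HEADERS)
```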
Kling API v1 has been updated to support model v2.5:
• Added support for model kling-v2-5 to POST videos/text2video.
• Added support for model kling-v2-5 to POST videos/image2video-frames.
Examples.
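Selecting the new model might look like the sketch below; the base URL, auth scheme, and parameters other than model are assumptions for illustration.

```python
import requests

API_ROOT = "https://api.useapi.net/v1/kling"              # assumed Kling API v1 base URL
HEADERS = {"Authorization": "Bearer YOUR_USEAPI_TOKEN"}   # assumed auth scheme

# Text-to-video request targeting the new kling-v2-5 model.
# Parameter names other than "model" are placeholders.
resp = requests.post(
    f"{API_ROOT}/videos/text2video",
    json={
        "model": "kling-v2-5",
        "prompt": "a paper boat drifting down a rainy street",
    },
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json())
```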
Mureka released a website update; attempting to generate V7 songs currently returns 400. We'll be releasing an update to fix this issue today. Meanwhile, you can use V6.
Mureka API v1 has been updated:
• Model V7.5 is now the default for all music generation endpoints, replacing V7.
• POST music/create-advanced supports an optional parameter voice_gender, which can be set to female or male.
• POST music/instrumental supports models V7.5, O1 and V6.
Examples
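A brief sketch of the new parameter in use, assuming a base URL of https://api.useapi.net/v1/mureka and bearer-token auth; fields other than voice_gender are placeholders.

```python
import requests

API_ROOT = "https://api.useapi.net/v1/mureka"             # assumed Mureka API v1 base URL
HEADERS = {"Authorization": "Bearer YOUR_USEAPI_TOKEN"}   # assumed auth scheme

# Advanced music generation; V7.5 is now the default model, so no model
# field is set here. The new optional voice_gender parameter accepts
# "female" or "male". Other fields are placeholders for illustration.
resp = requests.post(
    f"{API_ROOT}/music/create-advanced",
    json={
        "lyrics": "Verse one goes here...",
        "voice_gender": "female",
    },
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json())
```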