We have a working prototype of the Real-Time Midjourney API.
You read that right—it provides the same experience as your browser, allowing you to instantly see results without any delay.
We're hoping to release it in the very near future and are trying to collect feedback.
This will be an addition to our long-standing and very popular third-party Midjourney API v2.
The API returns a live data feed as an SSE (text/event-stream) stream, so we are not planning to release a callback feature; it would be somewhat pointless in this case.
We also want to ask how important the webhook/callback feature is for you, as we're not sure how well SSE streaming is supported by make.com or n8n; both have almost no documentation on that.
This approach is similar to the MiniMax v1 LLM API and is commonly used by LLM APIs.
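For reference, here is a minimal sketch of how a client could consume such an SSE feed in Python; the endpoint URL, request body, and event payload shape are assumptions for illustration only:

```python
import json
import requests  # third-party HTTP client

# Hypothetical endpoint and token, for illustration only.
url = "https://api.useapi.net/v2/jobs/imagine-stream"
headers = {"Authorization": "Bearer <your-api-token>", "Accept": "text/event-stream"}

with requests.post(url, headers=headers, json={"prompt": "a red bicycle"}, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as "data: <payload>" lines separated by blank lines.
        if line and line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())  # assumes JSON payloads
            print(event)  # live job progress as it happens
```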
The API is extremely fast and will allow you to post multiple jobs to the same channel as fast as the Discord rate limit allows (usually 1–2 interaction events per second). This can be a big advantage for folks looking for ultimate performance, but it also poses challenges: going too fast may cause bans. Naturally, we can allow you to apply throttling if you choose.
This API will also support uploading images directly to the Discord CDN, so you can host your images there and refer to those hosted URLs from your prompts.
It would be very helpful for us if you could answer the questions below:
1. I'm interested in the Real-Time Midjourney API and do not care about the webhook/callback feature at all.
2. I'm interested in the Real-Time Midjourney API, but the webhook/callback feature is a must for me.
3. I do not care for the Real-Time Midjourney API at all.
Anonymous poll results for the Real-Time Midjourney API:
• 31%: I'm interested in the Real-Time Midjourney API and do not care about the webhook/callback at all.
• 50%: I'm interested in the Real-Time Midjourney API, but the webhook/callback feature is a must for me.
• 19%: I do not care for the Real-Time Midjourney API at all.
MiniMax released changes to their website. We're working on a fix and will keep you posted on our progress.
PixVerse recently added templates that generate images rather than videos.
The GET videos/effects endpoint now returns two new fields:
• template_type: 1 | 2, where 1 indicates a video response and 2 an image response (new)
• template_model: `` | image_v6, where image_v6 (new) is returned for template_type: 2
There are only 3 image response templates for now: Sharkie Plush Delivery, Self-Figurine Roast, and Your game of life, archived forever.
We added a quick fix to deal with the new template_type: 2. When an image template_id is used, the POST videos/create endpoint will properly place the image generation task and respond with "image_id": "user:12345-pixverse:user@email.com-image:123456789".
You can use the returned image_id value to retrieve the generated image via GET videos/<image_id> or delete it via DELETE videos/<image_id>.
We will monitor how PixVerse plans to support image generation; they will most likely add more soon, so we will move the above temporary image retrieval/delete endpoints to designated pixverse/images/ endpoints.
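For reference, a minimal sketch of this flow in Python; the base URL and auth header format are assumptions, while the endpoints and image_id shape come from the changelog above:

```python
import requests  # third-party HTTP client

base = "https://api.useapi.net/v2/pixverse"           # assumed base URL
headers = {"Authorization": "Bearer <your-api-token>"}

# Using an image template_id (template_type: 2) places an image generation task.
create = requests.post(f"{base}/videos/create",
                       headers=headers,
                       json={"template_id": 123456789})  # hypothetical template id
image_id = create.json()["image_id"]  # e.g. "user:12345-pixverse:user@email.com-image:123456789"

# Retrieve the generated image via the temporary endpoint...
image = requests.get(f"{base}/videos/{image_id}", headers=headers)

# ...and delete it once it's no longer needed.
requests.delete(f"{base}/videos/{image_id}", headers=headers)
```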
Kling API v1 has been updated to support model v2.5:
• Added support for model kling-v2-5 to POST videos/text2video.
• Added support for model kling-v2-5 to POST videos/image2video-frames.
Examples.
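For reference, a minimal sketch of a text2video call with the new model; the base URL and request body fields other than model are assumptions:

```python
import requests  # third-party HTTP client

base = "https://api.useapi.net/v1/kling"              # assumed base URL
headers = {"Authorization": "Bearer <your-api-token>"}

# Request a text-to-video generation with the new kling-v2-5 model.
resp = requests.post(f"{base}/videos/text2video",
                     headers=headers,
                     json={"model": "kling-v2-5",            # model name from this update
                           "prompt": "a sailboat at dawn"})  # hypothetical field
print(resp.json())
```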
Mureka released a website update; attempting to generate V7 songs currently returns a 400 error. We'll be releasing an update to fix this issue today. Meanwhile, you can use V6.
Mureka API v1 has been updated:
• Model V7.5 is now the default for all music generation endpoints, replacing V7.
• POST music/create-advanced supports an optional parameter voice_gender, which can be set to female or male.
• POST music/instrumental supports models V7.5, O1 and V6.
Examples
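For reference, a minimal sketch of the updated endpoints; the base URL and body fields other than voice_gender are assumptions:

```python
import requests  # third-party HTTP client

base = "https://api.useapi.net/v1/mureka"             # assumed base URL
headers = {"Authorization": "Bearer <your-api-token>"}

# V7.5 is now the default model, so no model needs to be specified here.
song = requests.post(f"{base}/music/create-advanced",
                     headers=headers,
                     json={"lyrics": "hello world",      # hypothetical field
                           "voice_gender": "female"})    # new optional parameter: female or male

# Instrumental generation, explicitly picking one of the supported models
# (parameter name assumed).
inst = requests.post(f"{base}/music/instrumental",
                     headers=headers,
                     json={"model": "V6"})               # V7.5, O1, or V6
```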
👉 LTX Studio released a website update.
We're planning to release a LTX Studio API update tomorrow to align with the recent website changes.
As of now the LTX Studio API may experience issues and not function properly.
👉 LTX Studio FLUX.1 Kontext image generations are fixed; please give it a try.
We're planning to release an update to support both FLUX.1 Kontext Premium and Nano Banana this week as well, since both are now available via LTX.
LTX Studio API v1 major update:
• New endpoint POST images/upscale (https://useapi.net/docs/api-ltxstudio-v1/post-ltxstudio-images-upscale) to upscale previously generated images to higher resolution
• New AI models for image generation and editing:
∘ FLUX.1 Kontext (https://bfl.ai/models/flux-kontext) - flux - default model
∘ FLUX.1 Kontext Max (https://bfl.ai/models/flux-kontext) - flux-premium - supports 3 reference images
∘ Nano Banana / Gemini 2.5 Flash Image (https://deepmind.google/technologies/gemini/flash/) - nano-banana - supports 3 reference images
• New parameters for both edit and create endpoints:
∘ model - select AI model: flux, flux-premium, nano-banana
∘ location - location description, max 2000 chars
∘ weather - weather conditions description, max 2000 chars
∘ lighting - lighting description, max 2000 chars
∘ referenceAssetId2 - second reference image, all models
∘ referenceAssetId3 - third reference image, flux-premium and nano-banana only
• Endpoint renames, backward compatible:
∘ images/flux-edit renamed to images/edit (https://useapi.net/docs/api-ltxstudio-v1/post-ltxstudio-images-flux-edit), old name still supported
∘ images/flux-create renamed to images/create (https://useapi.net/docs/api-ltxstudio-v1/post-ltxstudio-images-flux-create), old name still supported
Examples: https://useapi.net/blog/251003
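For reference, a minimal sketch of an images/create call using the new model and parameters; the base URL, auth header, and prompt field are assumptions, while the parameter names come from the list above:

```python
import requests  # third-party HTTP client

base = "https://api.useapi.net/v1/ltxstudio"          # assumed base URL
headers = {"Authorization": "Bearer <your-api-token>"}

resp = requests.post(f"{base}/images/create",
                     headers=headers,
                     json={"model": "nano-banana",             # flux, flux-premium, or nano-banana
                           "prompt": "a lighthouse in fog",    # hypothetical field
                           "location": "rocky coastline",      # max 2000 chars
                           "weather": "dense morning fog",     # max 2000 chars
                           "lighting": "soft diffuse light",   # max 2000 chars
                           "referenceAssetId2": "<asset-id>",  # second reference image, all models
                           "referenceAssetId3": "<asset-id>"}) # flux-premium and nano-banana only
print(resp.json())
```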