Real-Time Midjourney API
Anonymous Poll
• 31% - I'm interested in the Real-Time Midjourney API and do not care about the webhook/callback at all.
• 50% - I'm interested in the Real-Time Midjourney API, but the webhook/callback feature is a must for me.
• 19% - I do not care for the Real-Time Midjourney API at all.
MiniMax released changes to their website. We're working on a fix and will keep you posted on our progress.
PixVerse recently added the ability to generate images rather than videos.
The GET videos/effects endpoint now returns two new fields:
• template_type: 1 | 2, 1 - video response, 2 - image response (new)
• template_model: `` | image_v6, image_v6 (new) for template_type: 2
There are only 3 image response templates for now: Sharkie Plush Delivery, Self-Figurine Roast and Your game of life, archived forever.
We added a quick fix to deal with the new template_type: 2. The POST videos/create endpoint, when an image template_id is used, will properly place the image generation task and respond with "image_id": "user:12345-pixverse:user@email.com-image:123456789".
You can use the returned image_id value to retrieve the generated image via GET videos/<image_id> or delete it via DELETE videos/<image_id>.
We will monitor how PixVerse plans to support image generation. They will most likely add more soon, so we will move the above temporary image retrieval/delete endpoints to designated pixverse/images/ endpoints.
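Assuming GET videos/effects returns a list of template objects, the two new fields make it straightforward to separate image templates from video ones. A minimal sketch; the sample template records below are illustrative, not real API output:

```python
# Split PixVerse effect templates by the new template_type field:
# 1 = video response, 2 = image response (per the changelog above).
sample_effects = [
    {"template_id": 1001, "name": "Some video effect", "template_type": 1, "template_model": ""},
    {"template_id": 2001, "name": "Sharkie Plush Delivery", "template_type": 2, "template_model": "image_v6"},
    {"template_id": 2002, "name": "Self-Figurine Roast", "template_type": 2, "template_model": "image_v6"},
]

def split_templates(effects):
    """Return (video_templates, image_templates) based on template_type."""
    videos = [e for e in effects if e.get("template_type") == 1]
    images = [e for e in effects if e.get("template_type") == 2]
    return videos, images

videos, images = split_templates(sample_effects)
print([e["name"] for e in images])
```

Image templates are the ones carrying template_model "image_v6"; passing one of their template_id values to POST videos/create triggers the image-generation path described above.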
Experimental API for popular AI services by useapi.net
Kling API v1 has been updated to support model v2.5:
• Added support for model kling-v2-5 to POST videos/text2video.
• Added support for model kling-v2-5 to POST videos/image2video-frames.
Examples.
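A minimal sketch of request bodies selecting the new model for the two updated endpoints. Only the "model" value comes from the changelog; the "prompt" field is an assumed typical text-to-video parameter, not confirmed here:

```python
import json

# Request body for POST videos/text2video with the new v2.5 model.
# "model" comes from the changelog; "prompt" is an assumed parameter name.
text2video = {
    "model": "kling-v2-5",
    "prompt": "A lighthouse in a storm, cinematic wide shot",
}

# POST videos/image2video-frames accepts the same model value; the
# start/end frame reference fields would go here (names not shown, assumed).
image2video_frames = {
    "model": "kling-v2-5",
}

print(json.dumps(text2video))
```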
Mureka released a website update; attempting to generate V7 songs currently returns a 400 error. We'll be releasing an update to fix this issue today. Meanwhile, you can use V6.
Mureka API v1 has been updated:
• Model V7.5 is now the default for all music generation endpoints, replacing V7.
• POST music/create-advanced supports an optional parameter voice_gender, which can be set to female or male.
• POST music/instrumental supports models V7.5, O1 and V6.
Examples
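A minimal sketch of request bodies reflecting the changes above. Since V7.5 is now the default, omitting any model selection targets V7.5. The voice_gender values come from the changelog; the "lyrics" and "model" field names are assumptions for illustration:

```python
import json

# POST music/create-advanced with the new optional parameter.
create_advanced = {
    "lyrics": "[Verse]\nNeon rivers under midnight glass",  # assumed field name
    "voice_gender": "female",  # new optional parameter: "female" or "male"
}

# POST music/instrumental selecting one of the supported models.
instrumental = {
    "model": "O1",  # instrumental supports V7.5, O1 and V6
}

print(json.dumps(create_advanced))
print(json.dumps(instrumental))
```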
LTX Studio released a website update.
We're planning to release a LTX Studio API update tomorrow to align with the recent website changes.
As of now the LTX Studio API may experience issues and not function properly.
LTX Studio FLUX.1 Kontext image generations are fixed; please give it a try.
We're planning to release an update to support both FLUX.1 Kontext Premium and Nano Banana this week as well since both are now available via LTX.
LTX Studio API v1 major update:
• New endpoint POST images/upscale (https://useapi.net/docs/api-ltxstudio-v1/post-ltxstudio-images-upscale) to upscale previously generated images to higher resolution.
• New AI models for image generation and editing:
  ◦ FLUX.1 Kontext (https://bfl.ai/models/flux-kontext) - flux - default model
  ◦ FLUX.1 Kontext Max (https://bfl.ai/models/flux-kontext) - flux-premium - supports 3 reference images
  ◦ Nano Banana / Gemini 2.5 Flash Image (https://deepmind.google/technologies/gemini/flash/) - nano-banana - supports 3 reference images
• New parameters for both edit and create endpoints:
  ◦ model - select AI model: flux, flux-premium, nano-banana
  ◦ location - location description, max 2000 chars
  ◦ weather - weather conditions description, max 2000 chars
  ◦ lighting - lighting description, max 2000 chars
  ◦ referenceAssetId2 - second reference image, all models
  ◦ referenceAssetId3 - third reference image, flux-premium and nano-banana only
• Endpoint renames, backward compatible:
  ◦ images/flux-edit renamed to images/edit (https://useapi.net/docs/api-ltxstudio-v1/post-ltxstudio-images-flux-edit), old name still supported
  ◦ images/flux-create renamed to images/create (https://useapi.net/docs/api-ltxstudio-v1/post-ltxstudio-images-flux-create), old name still supported
Examples: https://useapi.net/blog/251003
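A minimal sketch of a request body for the renamed POST images/create endpoint using the new parameters. The parameter names and limits come from the changelog; the "prompt" field and the asset-ID values are illustrative assumptions:

```python
import json

MAX_DESC = 2000  # location/weather/lighting are capped at 2000 chars

payload = {
    "model": "nano-banana",  # flux | flux-premium | nano-banana
    "prompt": "Product shot of a ceramic mug on a wooden table",  # assumed field
    "location": "small artisan workshop, shelves of glazed pottery",
    "weather": "overcast, soft diffuse daylight through a window",
    "lighting": "warm key light from the left, gentle fill",
    "referenceAssetId2": "asset-id-2",  # second reference image, all models
    "referenceAssetId3": "asset-id-3",  # flux-premium and nano-banana only
}

# Validate description lengths before sending.
for key in ("location", "weather", "lighting"):
    assert len(payload[key]) <= MAX_DESC

print(json.dumps(payload))
```

Note that referenceAssetId3 is only honored here because the selected model is nano-banana; with the default flux model it would not apply.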