LTX Studio API v1 enhancements:
• Added veo3-fast model support to POST videos/veo-create for faster text-to-video generation
• Enhanced audioSFX parameter support across all Veo models (veo2, veo3, veo3-fast) for automatic sound effects generation
• Added shotType and shotAngle parameters to Flux image endpoints for enhanced composition control:
- POST images/flux-create
- POST images/flux-edit
• Extended LTX video duration options to include 15 and 30 seconds in POST videos/ltx-create
Example
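For illustration, a minimal Python (requests) sketch of the new options: veo3-fast with audioSFX on POST videos/veo-create, and shotType/shotAngle on POST images/flux-create. The base URL, route prefix, request body shape, and the shotType/shotAngle values are assumptions here; the linked example has the exact schema.

import os
import requests

# Assumed base URL and route prefix - verify against the LTX Studio docs.
API = "https://api.useapi.net/v1/ltxstudio"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

# Text-to-video with the new veo3-fast model and automatic sound effects.
video = requests.post(f"{API}/videos/veo-create", headers=HEADERS, json={
    "model": "veo3-fast",
    "prompt": "A drone shot gliding over a coastal village at sunrise",
    "audioSFX": True,
})
print(video.json())

# Image generation with the new composition controls (example values only).
image = requests.post(f"{API}/images/flux-create", headers=HEADERS, json={
    "prompt": "Portrait of a lighthouse keeper in a storm",
    "shotType": "close-up",
    "shotAngle": "low-angle",
})
print(image.json())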
Kling API v1 video effects endpoints POST videos/image2video-effects and GET videos/effects updated:
• Removed image_tail parameter - no longer supported in the current API implementation
• Added mode parameter - select between std (standard) and pro (professional) quality modes
• Enhanced effect validation - effects now include effectSupported and promptSupported flags to indicate capabilities
• Updated response structure - reflects current API with detailed user points, tickets, and task information
Currently unsupported effects (require image preprocessing):
• felt (FeltFelt)
• furry (PlushCut)
• jelly (JellyJelly)
• pixel (PixelPixel)
• yearbook (Yearbook)
• polaroid (Instant Film)
• spring_blossoms (BloomBloom)
Note: This list may change and may not be complete. Effects marked as effectSupported: false in the API response require multi-step preprocessing and are not yet available through this endpoint.
Examples
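For illustration, a minimal Python (requests) sketch that inspects the new effectSupported/promptSupported flags via GET videos/effects and then submits POST videos/image2video-effects with the mode parameter. The base URL and all field names other than mode, effectSupported, and promptSupported are assumptions; the linked examples have the exact schema.

import os
import requests

# Assumed base URL and route prefix - verify against the Kling docs.
API = "https://api.useapi.net/v1/kling"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

# Inspect the catalog first: only effects flagged effectSupported can be used here.
effects = requests.get(f"{API}/videos/effects", headers=HEADERS)
print(effects.json())

payload = {
    "effect": "EFFECT_NAME",                      # pick one with effectSupported: true (field name hypothetical)
    "mode": "pro",                                # "std" (standard) or "pro" (professional)
    "image": "https://example.com/portrait.jpg",  # hypothetical field name
}
job = requests.post(f"{API}/videos/image2video-effects", headers=HEADERS, json=payload)
print(job.json())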
Midjourney API v2 now provides full support for video generation, including all video-specific parameters such as --video, --motion low, --motion high, --raw, --loop, and --end, as well as the buttons Animate (High motion), Animate (Low motion), Extend (High motion), and Extend (Low motion). Please refer to the official documentation for details.
The following endpoints were updated:
• POST jobs/imagine
• POST jobs/button
Examples.
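For illustration, a minimal Python (requests) sketch of submitting a video job through POST jobs/imagine and then animating it via POST jobs/button. The base URL, route prefix, and body field names (prompt, jobid, button) are assumptions; the --video/--motion flags and button labels are the ones listed above.

import os
import requests

# Assumed base URL and route prefix - verify against the Midjourney v2 docs.
API = "https://api.useapi.net/v2/jobs"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

payload = {
    "prompt": "a paper boat drifting down a rain-soaked street "
              "--video --motion high --loop",
}
job = requests.post(f"{API}/imagine", headers=HEADERS, json=payload).json()
print(job)

# Later, animate or extend the result via POST jobs/button.
button = requests.post(f"{API}/button", headers=HEADERS, json={
    "jobid": job.get("jobid"),             # hypothetical field names
    "button": "Animate (High motion)",
})
print(button.json())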
The article Discord CDN Proxy was updated to reflect changes made to support preserving additional query parameters like animated=true for Midjourney video generation.
Mureka API v1 has been updated with model improvements and increased character limits:
• Model V7 is now the default for all music generation endpoints, replacing V6
• Increased character limits:
- Prompt maximum length increased from 300 to 1000 characters for POST music/create and POST music/create-instrumental
- Lyrics maximum length increased from 2000 to 3000 characters for POST music/create-advanced and POST music/extend
• For instrumental generation, older models (O1, V6, V5.5) now redirect to V7; we will be removing retired models on September 1, 2025.
Examples
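For illustration, a minimal Python (requests) sketch of POST music/create using the longer prompt limit and the V7 default. The base URL, route prefix, and any fields beyond prompt/model are assumptions; the linked examples have the exact schema.

import os
import requests

# Assumed base URL and route prefix - verify against the Mureka docs.
API = "https://api.useapi.net/v1/mureka"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

payload = {
    # Prompts can now be up to 1000 characters (previously 300).
    "prompt": ("Warm lo-fi hip hop with vinyl crackle, mellow Rhodes chords, "
               "soft brushed drums, and a slow, nostalgic bass line."),
    # V7 is the default; pass "model" explicitly only if you still need an older one.
}
track = requests.post(f"{API}/music/create", headers=HEADERS, json=payload)
print(track.json())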
Runway API v1 has been updated to support Gen-4 Aleph (video-to-video transformation) with the new POST gen4/video endpoint for video-to-video generation using text prompts and optional image conditioning.
Example
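For illustration, a minimal Python (requests) sketch of the new POST gen4/video endpoint. The base URL, route prefix, and field names are assumptions; only the endpoint and its purpose come from the note above.

import os
import requests

# Assumed base URL, route prefix, and field names - verify against the gen4/video docs.
API = "https://api.useapi.net/v1/runwayml"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

payload = {
    "text_prompt": "Repaint this footage as a hand-drawn watercolor animation",
    "video": "https://example.com/source-clip.mp4",      # source video to transform
    "image": "https://example.com/style-reference.jpg",  # optional image conditioning
}
job = requests.post(f"{API}/gen4/video", headers=HEADERS, json=payload)
print(job.json())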
MiniMax API v1 endpoint POST videos/create updated to support new image-to-video resolution options for the Hailuo 02 model:
• 512p-6sec option (image-to-video only, fileID is required)
• 512p-10sec option (image-to-video only, fileID is required)
Example
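For illustration, a minimal Python (requests) sketch of POST videos/create with one of the new options. The base URL, route prefix, the Hailuo 02 model identifier, and the name of the field carrying "512p-6sec" are assumptions; fileID is required per the note above.

import os
import requests

# Assumed base URL, route prefix, and field names - verify against the MiniMax docs.
API = "https://api.useapi.net/v1/minimax"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

payload = {
    "model": "hailuo-02",        # assumed identifier for the Hailuo 02 model
    "resolution": "512p-6sec",   # or "512p-10sec"; image-to-video only
    "fileID": "1234567890",      # required for the 512p options
    "prompt": "The subject slowly turns toward the camera and smiles",
}
job = requests.post(f"{API}/videos/create", headers=HEADERS, json=payload)
print(job.json())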
LTX Studio API v1 endpoint POST videos/veo-create updated: the veo3 model now supports an optional starting frame. Cost estimate updated to reflect current LTX pricing.
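For illustration, a minimal Python (requests) sketch of POST videos/veo-create passing an optional starting frame to veo3. The base URL, route prefix, and the startImage field name are assumptions; the actual parameter name is on the endpoint page.

import os
import requests

# Assumed base URL and route prefix - verify against the videos/veo-create docs.
API = "https://api.useapi.net/v1/ltxstudio"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

payload = {
    "model": "veo3",
    "prompt": "The camera pulls back to reveal a vast desert canyon at dusk",
    "startImage": "https://example.com/first-frame.jpg",  # hypothetical field name
}
job = requests.post(f"{API}/videos/veo-create", headers=HEADERS, json=payload)
print(job.json())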
HeyGen API v1 endpoint POST tts/create updated: added support for voice engine selection with the new engine parameter. Engine options are voice-specific and include: auto, aws, azure, elevenLabs, elevenLabsV3, fish, google, openai, openaiEmo, panda, starfish. Defaults to elevenLabs with fallback to the voice's default engine. The elevenLabsV3 model supports audio tags for enhanced emotional context.
Example
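For illustration, a minimal Python (requests) sketch of POST tts/create selecting the elevenLabsV3 engine with audio tags. The base URL, route prefix, and the text/voiceId field names are assumptions; engine and its allowed values are the ones listed above.

import os
import requests

# Assumed base URL, route prefix, and field names - verify against the tts/create docs.
API = "https://api.useapi.net/v1/heygen"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

payload = {
    "text": "[whispering] The results are in... [excited] and we did it!",
    "voiceId": "YOUR_VOICE_ID",  # hypothetical field name
    "engine": "elevenLabsV3",    # supports audio tags for emotional context
}
speech = requests.post(f"{API}/tts/create", headers=HEADERS, json=payload)
print(speech.json())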
Kling released a website update earlier today. We're working on aligning our API with the recent changes.
Kling API v1 updated: removed deprecated kling-v2-0-master model from POST videos/text2video and POST videos/image2video-frames endpoints.
Kling API v1 enhancements:
• Added new POST videos/add-sound endpoint for AI-generated video-to-audio conversion with optional original sound preservation
• Added new GET assets/uploaded endpoint to retrieve user-uploaded assets with filename filtering support
• Enhanced POST assets response to include fileName field with the generated unique filename
Example
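For illustration, a minimal Python (requests) sketch of the two new endpoints. The base URL, route prefix, the query/body field names, and the original-sound flag are assumptions; fileName is the documented response field from POST assets.

import os
import requests

# Assumed base URL and route prefix - verify against the Kling docs.
API = "https://api.useapi.net/v1/kling"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

# List previously uploaded assets, filtered by filename (query parameter name assumed).
uploaded = requests.get(f"{API}/assets/uploaded", headers=HEADERS,
                        params={"fileName": "demo"})
print(uploaded.json())

# Add AI-generated audio to a video while keeping the original soundtrack.
payload = {
    "video": "https://example.com/clip.mp4",  # hypothetical field name
    "keepOriginalSound": True,                # hypothetical field name
}
job = requests.post(f"{API}/videos/add-sound", headers=HEADERS, json=payload)
print(job.json())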
Midjourney API v1 (legacy) will be fully retired on September 1st, 2025
Starting September 1st, 2025, the Midjourney API will require a configuration to be present in order to make calls. The v1 (legacy) calls, where you were able to call the API without creating a config and by passing discord and channel values with every call, will no longer be supported. To see a list of the currently configured Midjourney accounts, please visit GET account/midjourney.
Post your Discord token to POST account/midjourney to create an API configuration for your Midjourney bot. Please note that the configuration will use your Midjourney Direct Messages channel. If you used your own server or channel and have multiple accounts, you will need to update your code to use the new channel(s). If you only have a single Midjourney account, there's no need to pass the channel value when making API calls.
For your reference:
• Setup Midjourney.
• Account Management & Subscription.
If you have any questions or concerns please contact support@useapi.net.
PS
We have customers who joined in 2023 and early 2024 and still, according to our stats, use v1-style API calls.
We sent you email(s) with the notification above, so this is a duplicate, just in case you missed the email.
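For illustration, a minimal Python (requests) sketch of the one-time migration: create a configuration from your Discord token, then list the configured accounts. The exact route prefix and the name of the token field are assumptions; follow the Setup Midjourney guide for the authoritative steps.

import os
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}
ACCOUNT = "https://api.useapi.net/v2/account/midjourney"  # assumed route

# Create (or update) the configuration for your Midjourney bot.
created = requests.post(ACCOUNT, headers=HEADERS,
                        json={"discord": os.environ["DISCORD_TOKEN"]})  # assumed field name
print(created.json())

# List the currently configured Midjourney accounts (GET account/midjourney).
print(requests.get(ACCOUNT, headers=HEADERS).json())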
Mureka API v1 TTS/speech generation endpoints added:
• POST speech - generate speech from text with voice cloning and multi-speaker conversations
• GET speech - retrieve generated speech recordings
• DELETE speech - delete speech recordings
• GET speech/voices - list available voices including cloned ones
• POST speech/voice - clone voice from MP3 audio samples
• DELETE speech/voice - delete cloned voices
Example
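For illustration, a minimal Python (requests) sketch that clones a voice from an MP3 sample (POST speech/voice) and then generates speech with it (POST speech). The base URL, route prefix, multipart/body field names, and the voice id field are assumptions; only the endpoints themselves come from the list above.

import os
import requests

# Assumed base URL and route prefix - verify against the Mureka docs.
API = "https://api.useapi.net/v1/mureka"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}"}

# 1. Clone a voice from an MP3 audio sample.
with open("sample.mp3", "rb") as f:
    voice = requests.post(f"{API}/speech/voice", headers=HEADERS,
                          files={"file": ("sample.mp3", f, "audio/mpeg")}).json()
print(voice)

# 2. Generate speech with the cloned voice.
payload = {
    "text": "Welcome back! Today we are looking at the new speech endpoints.",
    "voiceId": voice.get("id"),  # hypothetical field names
}
speech = requests.post(f"{API}/speech", headers=HEADERS, json=payload)
print(speech.json())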
MiniMax API POST videos/agent-create enhanced with real-time streaming response using Server-Sent Events (SSE) for live progress updates during video generation.
Examples.
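For illustration, a minimal Python (requests) sketch of consuming the Server-Sent Events stream from POST videos/agent-create. The base URL, route prefix, and body fields are assumptions; the streaming itself is plain SSE lines on the response body.

import os
import requests

# Assumed base URL, route prefix, and body fields - verify against the MiniMax docs.
API = "https://api.useapi.net/v1/minimax"
HEADERS = {"Authorization": f"Bearer {os.environ['USEAPI_TOKEN']}",
           "Accept": "text/event-stream"}

payload = {"prompt": "A product teaser for a smart water bottle"}
with requests.post(f"{API}/videos/agent-create",
                   headers=HEADERS, json=payload, stream=True) as r:
    for line in r.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print(line[5:].strip())  # live progress updates as they arrive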
HeyGen API v1 has been decommissioned; consider switching to the Mureka API for TTS/speech generation instead.
MiniMax music and TTS endpoints have been deprecated and will be removed; consider switching to the Mureka API for speech/music generation.