horseee/DeepCache
DeepCache: Accelerating Diffusion Models for Free (CVPR 2024)
Language: Python
#diffusion_models #efficient_inference #model_compression #stable_diffusion #training_free
Stars: 177 Issues: 5 Forks: 5
https://github.com/horseee/DeepCache
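DeepCache speeds up diffusion inference with no retraining by caching high-level U-Net features across adjacent denoising steps and recomputing only the shallow layers. Below is a minimal sketch of plugging it into a Diffusers Stable Diffusion pipeline; it assumes the pip-installable `DeepCache` package exposes a `DeepCacheSDHelper` with `set_params`/`enable`/`disable` as in the project's README, so verify the exact API against the repo for your installed version.

```python
# Sketch: enabling DeepCache on a Stable Diffusion pipeline.
# Assumes `pip install DeepCache diffusers` and that DeepCacheSDHelper
# exists with set_params/enable/disable as shown in the project's README;
# check the repo for the API of your installed version.
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(
    cache_interval=3,   # how many steps reuse the cached features
    cache_branch_id=0,  # which skip branch's features get cached
)
helper.enable()

image = pipe("a photo of an astronaut riding a horse").images[0]

helper.disable()  # restore the original, uncached pipeline
image.save("astronaut.png")
```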
SqueezeAILab/LLMCompiler
LLMCompiler: An LLM Compiler for Parallel Function Calling (ICML 2024)
Language: Python
#efficient_inference #function_calling #large_language_models #llama #llama2 #llm #llm_agent #llm_agents #llm_framework #llms #natural_language_processing #nlp #parallel_function_call #transformer
Stars: 216 Issues: 0 Forks: 11
https://github.com/SqueezeAILab/LLMCompiler
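LLMCompiler's core idea, parallel function calling, is to have the LLM plan a whole DAG of tool calls with their dependencies up front, so independent calls execute concurrently instead of one at a time. The sketch below illustrates only that execution pattern with plain asyncio; it is not LLMCompiler's API, and the `Task`, `run_dag`, `search`, and `summarize` names are hypothetical, invented for the example.

```python
# Illustrative sketch of parallel function calling (not LLMCompiler's API):
# a planner emits tasks with dependencies; independent tool calls run
# concurrently, and their results feed into dependent calls.
import asyncio
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    name: str
    fn: Callable
    args: dict = field(default_factory=dict)   # literal arguments
    deps: dict = field(default_factory=dict)   # parameter name -> upstream task name

async def search(query: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a slow tool / API call
    return f"results for {query!r}"

async def summarize(a: str, b: str) -> str:
    await asyncio.sleep(0.1)
    return f"summary of [{a}] and [{b}]"

async def run_dag(tasks: list[Task]) -> dict:
    """Start every task immediately; each one awaits only its own dependencies."""
    done: dict[str, asyncio.Task] = {}

    async def run(task: Task):
        dep_results = {param: await done[name] for param, name in task.deps.items()}
        return await task.fn(**task.args, **dep_results)

    # Tasks must be listed after their dependencies for this simple scheduler.
    for t in tasks:
        done[t.name] = asyncio.create_task(run(t))
    return {name: await fut for name, fut in done.items()}

plan = [
    Task("s1", search, {"query": "DeepCache"}),      # independent: runs in parallel
    Task("s2", search, {"query": "LLMCompiler"}),    # independent: runs in parallel
    Task("sum", summarize, deps={"a": "s1", "b": "s2"}),  # joins the two searches
]

print(asyncio.run(run_dag(plan)))
```

Here the two `search` calls overlap in time and `summarize` starts as soon as both finish, which is the latency win the repo targets versus sequential, one-call-at-a-time agents.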