https://arxiv.org/pdf/2408.04840v1
mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models
#Paper
https://encord.com/blog/dimentionality-reduction-techniques-machine-learning/
Dimensionality reduction techniques in one place #FYI #Tips
Top 12 Dimensionality Reduction Techniques for Machine Learning
Dimensionality reduction is a fundamental technique in machine learning (ML) that simplifies datasets by reducing the number of input variables
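As a quick illustration of the most common technique on that list, here is a hedged PCA sketch in plain NumPy (center, SVD, project); libraries like scikit-learn wrap the same operation with more features.

```python
import numpy as np

# Minimal PCA sketch: center the data, take an SVD, and project onto the
# top principal directions. Illustrative only, not a production implementation.
def pca(X, n_components):
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # project onto top directions

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                   # 100 samples, 10 features
X_2d = pca(X, 2)
print(X_2d.shape)  # (100, 2)
```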
NVIDIA RAPIDS cuML: traditional machine learning on GPU: various clustering algorithms, UMAP, t-SNE, PCA, etc. #FYI #library
https://github.com/rapidsai/cuml
https://docs.rapids.ai/api/cuml/stable/
GitHub - rapidsai/cuml: cuML - RAPIDS Machine Learning Library
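cuML's selling point is a drop-in scikit-learn-style API. A hedged sketch: on a machine with RAPIDS installed the `cuml` import runs the same code on the GPU; here it falls back to scikit-learn's CPU implementation.

```python
import numpy as np

# Swap-in GPU clustering: cuml.cluster.KMeans mirrors sklearn.cluster.KMeans.
try:
    from cuml.cluster import KMeans          # GPU path (RAPIDS cuML)
except ImportError:
    from sklearn.cluster import KMeans       # CPU fallback, same interface

X = np.random.default_rng(0).normal(size=(200, 5)).astype(np.float32)
km = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)
print(sorted(set(km.labels_)))  # three clusters: [0, 1, 2]
```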
Numba is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code.
https://numba.pydata.org/ #Frameworks #library
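A minimal Numba sketch: the `@njit` decorator compiles the explicit Python loop to machine code on first call. A no-op fallback (an assumption for portability, not part of Numba) keeps it runnable without the library.

```python
import numpy as np

# @njit JIT-compiles this loop; without Numba the plain-Python version runs.
try:
    from numba import njit
except ImportError:
    def njit(func):          # fallback: identity decorator, no compilation
        return func

@njit
def dot(a, b):
    s = 0.0
    for i in range(a.shape[0]):
        s += a[i] * b[i]
    return s

a = np.arange(3, dtype=np.float64)   # [0., 1., 2.]
b = np.ones(3)
print(dot(a, b))  # 3.0
```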
Dino/Dino v2 explained: Self-distillation with no labels & etc. #FYI #Tips #Explained #Tutorial
1. https://medium.com/@anuj.dutt9/emerging-properties-in-self-supervised-vision-transformers-dino-paper-summary-4c7a6ed68161 Original Dino
2. https://encord.com/blog/dinov2-self-supervised-learning-explained/
3. https://www.picsellia.com/post/dinov2-steps-by-steps-explanations-picsellia
4. https://www.ai-bites.net/dino-v2-learning-robust-visual-features-without-supervision-model-explained/
5. https://blog.marvik.ai/2023/05/16/dinov2-exploring-self-supervised-vision-transformers/
Original papers:
1. https://arxiv.org/abs/2104.14294 Emerging Properties in Self-Supervised Vision Transformers (Dino)
2. https://arxiv.org/abs/2304.07193 DINOv2: Learning Robust Visual Features without Supervision
3. https://arxiv.org/abs/2309.16588 Vision Transformers Need Registers
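The core of DINO's self-distillation can be sketched in a few lines: a student is trained to match a momentum (EMA) teacher on two augmented views, with the teacher output centered and sharpened to avoid collapse. Everything below (shapes, the linear "heads") is a toy stand-in for the actual ViT backbones, not the paper's implementation.

```python
import numpy as np

# Toy DINO step: cross-entropy between sharpened teacher and student softmaxes,
# then an EMA update of the teacher and of the running center.
rng = np.random.default_rng(0)
D, K = 8, 16                        # feature dim, number of output prototypes
Ws = rng.normal(size=(D, K))        # student head (trainable)
Wt = Ws.copy()                      # teacher starts as a copy of the student
center = np.zeros(K)

def softmax(z, temp):
    z = (z - z.max()) / temp
    e = np.exp(z)
    return e / e.sum()

x_view1, x_view2 = rng.normal(size=D), rng.normal(size=D)  # two augmentations
t = softmax(x_view1 @ Wt - center, temp=0.04)  # teacher: centered + sharpened
s = softmax(x_view2 @ Ws, temp=0.1)            # student: softer temperature
loss = -(t * np.log(s + 1e-9)).sum()           # cross-entropy H(t, s)

Wt = 0.996 * Wt + 0.004 * Ws                   # EMA teacher update (no grads)
center = 0.9 * center + 0.1 * (x_view1 @ Wt)   # running center update
print(loss > 0)  # True
```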
https://arxiv.org/html/2405.18886v1 Compressing Large Language Models using Low Rank and Low Precision Decomposition #Paper
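The paper's theme, approximating a weight matrix with low-rank factors stored at low precision, can be illustrated with a toy NumPy sketch: truncated SVD plus crude uniform quantization of the factors. This is illustrative only, not the paper's actual algorithm.

```python
import numpy as np

# Low-rank + low-precision toy compression of a weight matrix W.
rng = np.random.default_rng(0)
W = rng.normal(size=(32, 32))
U, S, Vt = np.linalg.svd(W, full_matrices=False)
r = 8
L, R = U[:, :r] * S[:r], Vt[:r]        # rank-r factors of W

def quantize(M, step=0.05):            # crude uniform quantizer (illustrative)
    return np.round(M / step) * step

W_hat = quantize(L) @ quantize(R)      # compressed reconstruction
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(W_hat.shape, err < 1.0)
```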
https://github.com/staghado/vit.cpp Inference Vision Transformer (ViT) in plain C/C++ with ggml
https://github.com/ggerganov/ggml Tensor library for machine learning with a low-level cross-platform implementation
#Frameworks
https://arxiv.org/abs/2412.11768 No More Adam: Learning Rate Scaling at Initialization is All You Need
https://github.com/AnonymousAlethiometer/SGD_SaI/
#Paper #Frameworks
SGD-SaI is a simple yet effective enhancement to stochastic gradient descent with momentum that questions the necessity of adaptive gradient methods for training deep neural networks.
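The base optimizer SGD-SaI builds on is plain SGD with momentum; the paper's twist is scaling learning rates once at initialization from gradient statistics. The sketch below represents that as a single precomputed `scale` factor, an assumption for illustration, not the exact SGD-SaI rule.

```python
import numpy as np

# SGD with momentum on a toy quadratic f(w) = ||w||^2, with an illustrative
# init-time learning-rate scale derived from the first gradient's magnitude.
rng = np.random.default_rng(0)
w0 = rng.normal(size=4)
w, v = w0.copy(), np.zeros(4)
lr, momentum = 0.1, 0.9

g0 = 2 * w0                                   # gradient at initialization
scale = 1.0 / (np.linalg.norm(g0) + 1e-8)     # fixed after this point

for _ in range(50):
    g = 2 * w                                 # gradient of ||w||^2
    v = momentum * v + g                      # momentum buffer
    w = w - lr * scale * v                    # scaled SGD-momentum step

print(np.linalg.norm(w) < np.linalg.norm(w0))  # True: the loss decreased
```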
https://www.samarkhanna.com/ExPLoRA/ Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts
#Paper #Framework
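The low-rank update at the heart of LoRA-style methods like ExPLoRA is easy to sketch: freeze the pretrained weight W and learn only a rank-r correction B @ A. A toy NumPy version (not ExPLoRA's actual extended-pre-training procedure):

```python
import numpy as np

# LoRA-style parameter-efficient layer: effective weight = W + B @ A,
# where only the small factors A and B are trained.
rng = np.random.default_rng(0)
d, r = 6, 2
W = rng.normal(size=(d, d))          # frozen pretrained weight (d*d params)
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection (r*d params)
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def forward(x):
    return x @ (W + B @ A).T         # low-rank delta added to frozen weight

x = rng.normal(size=d)
print(np.allclose(forward(x), x @ W.T))  # True: B = 0, so no change at init
```

Zero-initializing B is the standard trick: at the start of fine-tuning the adapted model is exactly the pretrained one.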
https://medium.com/version-1/the-rise-of-large-action-models-lams-how-ai-can-understand-and-execute-human-intentions-f59c8e78bc09
The Rise of Large Action Models (LAMs): How AI Can Understand and Execute Human Intentions
A hot topic in artificial intelligence (AI) is Large Action Models, also referred to as Large Agentic Models or LAMs…
https://arxiv.org/pdf/2411.07975
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
#Paper
Finally, multimodality on both input and output!
https://github.com/trent-b/iterative-stratification scikit-learn-compatible cross-validators and train/test splitting with stratification for multilabel data. #library
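A hedged sketch of the library's scikit-learn-style interface: `MultilabelStratifiedKFold` is used exactly like `KFold`, except the label matrix passed to `split` actually influences the folds. The fallback to plain (unstratified) `KFold` is just to keep the snippet runnable without the package.

```python
import numpy as np

# Multilabel stratified K-fold splitting with a scikit-learn-compatible API.
try:
    from iterstrat.ml_stratifiers import MultilabelStratifiedKFold as KFoldCV
except ImportError:
    from sklearn.model_selection import KFold as KFoldCV  # no stratification

X = np.arange(20).reshape(20, 1)
Y = np.random.default_rng(0).integers(0, 2, size=(20, 3))  # multilabel targets

cv = KFoldCV(n_splits=4, shuffle=True, random_state=0)
folds = [test_idx for _, test_idx in cv.split(X, Y)]

covered = sorted(int(i) for i in np.concatenate(folds))
print(covered == list(range(20)))  # True: every sample lands in one test fold
```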