https://github.com/yzhao062/pyod - outlier/anomaly detection, many methods in one library #Frameworks #Library
GitHub - yzhao062/pyod: A Python Library for Outlier and Anomaly Detection, Integrating Classical and Deep Learning Techniques
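A minimal sketch of PyOD's unified detector API (the Isolation Forest detector and the toy data below are illustrative choices, not from the post):

import numpy as np
from pyod.models.iforest import IForest

rng = np.random.default_rng(42)
X_train = np.vstack([rng.normal(size=(200, 2)), [[6, 6], [7, 7]]])  # mostly inliers plus two obvious outliers

clf = IForest(contamination=0.01)   # every PyOD detector exposes the same fit/predict interface
clf.fit(X_train)
print(clf.labels_[:5])              # 0 = inlier, 1 = outlier (training data)
print(clf.decision_scores_[:5])     # raw outlier scores (training data)
print(clf.predict([[0.0, 0.0], [8.0, 8.0]]))  # labels for new points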
https://faker.readthedocs.io/en/stable/index.html
https://sdv.dev/
https://gretel.ai/synthetics
Synthetic Data Generators! #Frameworks #Library
The Synthetic Data Vault. Put synthetic data to work!
The Synthetic Data Vault (SDV) enables end users to easily generate synthetic data for different data modalities, including single table, relational and time series data.
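A minimal sketch of fully synthetic record generation with Faker, the simplest of the generators linked above (the chosen fields are illustrative):

from faker import Faker

fake = Faker()
for _ in range(3):
    # each call produces a fresh fake record; providers exist for names, addresses, dates, etc.
    print(fake.name(), fake.email(), fake.city())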
https://medmnist.com/
MedMNIST: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification
#Dataset
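A short usage sketch, assuming the medmnist pip package and its INFO registry (the dataset choice is illustrative):

import medmnist
from medmnist import INFO

info = INFO['pathmnist']                                   # one of the 2D subsets
DataClass = getattr(medmnist, info['python_class'])
train_dataset = DataClass(split='train', download=True)    # downloads the .npz file on first use
print(train_dataset)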
https://arxiv.org/pdf/2208.07339.pdf
https://huggingface.co/blog/hf-bitsandbytes-integration
#Performance
A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using transformers, accelerate and bitsandbytes
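The blog post boils down to loading a model with 8-bit weights via the bitsandbytes integration; a minimal sketch (the model name is a placeholder, and a CUDA GPU plus the bitsandbytes package are assumed):

from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-1b7"   # placeholder; any supported causal LM works
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True)  # LLM.int8() quantization

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))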
High-Performance Large-Scale Image Recognition Without Normalization
https://arxiv.org/pdf/2102.06171.pdf #Paper
https://tf-explain.readthedocs.io/en/latest/index.html
tf-explain offers interpretability methods for Tensorflow 2.0 to ease neural network’s understanding.
#Frameworks
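A hedged sketch of one of its methods (Grad-CAM) on a stock Keras model; the model, the random input, and the class index are illustrative assumptions:

import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

model = tf.keras.applications.MobileNetV2(weights="imagenet")     # any Keras CNN works
images = np.random.rand(1, 224, 224, 3).astype("float32")         # stand-in for real images

explainer = GradCAM()
grid = explainer.explain((images, None), model, class_index=281)  # 281 = "tabby cat" in ImageNet
explainer.save(grid, ".", "grad_cam.png")                         # writes the heatmap overlay to disk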
#Tips Efficient Training of Large Models on Multiple GPUs - main concepts (from https://huggingface.co/docs/transformers/perf_train_gpu_many); a minimal DataParallel sketch follows the framework links below:
DataParallel (DP) - the same setup is replicated multiple times, and each replica is fed a slice of the data. The processing is done in parallel, and all setups are synchronized at the end of each training step.
TensorParallel (TP) - each tensor is split into multiple chunks, so instead of the whole tensor residing on a single GPU, each shard resides on its designated GPU. During processing each shard is processed separately and in parallel on different GPUs, and the results are synced at the end of the step. This is what one may call horizontal parallelism, as the splitting happens at the horizontal level.
PipelineParallel (PP) - the model is split vertically (at the layer level) across multiple GPUs, so that only one or a few layers of the model are placed on a single GPU. Each GPU processes a different stage of the pipeline in parallel, working on a small chunk of the batch.
Zero Redundancy Optimizer (ZeRO) - also shards the tensors, somewhat similarly to TP, except the whole tensor is reconstructed in time for a forward or backward computation, so the model doesn't need to be modified. It also supports various offloading techniques to compensate for limited GPU memory.
Sharded DDP - another name for the foundational ZeRO concept as used by various other implementations of ZeRO.
#Frameworks :
https://www.deepspeed.ai/
https://fairscale.readthedocs.io/en/latest/
https://github.com/tunib-ai/oslo
https://github.com/microsoft/varuna
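A minimal sketch of the first concept (DP) in plain PyTorch; the toy model and batch size are illustrative, and the frameworks above implement the more advanced schemes (TP, PP, ZeRO):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # replicate the module on every visible GPU
if torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(64, 512)
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)   # the batch of 64 is split across replicas and the outputs are gathered back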
[2302.14045] Language Is Not All You Need: Aligning Perception with Language Models
https://arxiv.org/abs/2302.14045
#Paper New generation of multimodal LLMs
https://arxiv.org/abs/2207.06881 #Paper Recurrent Memory Transformer - scaling the transformer architecture to long sequences.
https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action
New model that translates vision and language into action, based on an LLM
RT-2: New model translates vision and language into action
Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for...
https://www.anyscale.com/blog/continuous-batching-llm-inference
LLM inference acceleration #Frameworks
Achieve 23x LLM Inference Throughput & Reduce p50 Latency
In this blog, we discuss continuous batching, a critical systems-level optimization that improves both throughput and latency under load for LLMs.
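Continuous batching is a serving-side optimization; a minimal usage sketch with vLLM, one of the engines discussed in the post (the model name is a placeholder):

from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                        # placeholder model
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
prompts = ["Explain continuous batching in one sentence.",
           "What is p50 latency?"]
for output in llm.generate(prompts, params):                # requests are batched continuously under the hood
    print(output.outputs[0].text)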
Goodbye databases, it’s time to embrace Vector Databases!
The AI revolution is reshaping industries, promising remarkable innovations while introducing new challenges. In this transformative…
https://codemaker2016.medium.com/goodbye-databases-its-time-to-embrace-vector-databases-0ffa7879980e
#Tips
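The core idea behind a vector database, sketched with plain NumPy (random vectors stand in for real embeddings): store embeddings and answer queries by nearest-neighbour search.

import numpy as np

docs = ["how to train a model", "vector databases explained", "cooking pasta at home"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(docs), 8))                      # stand-in for real embedding vectors
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

query = rng.normal(size=8)
query /= np.linalg.norm(query)

scores = embeddings @ query                                       # cosine similarity against every stored vector
print(docs[int(np.argmax(scores))])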