Here are 4 awesome articles to learn #ObjectDetection from scratch:
• Understanding and Building an Object Detection Model from Scratch in #Python - https://bit.ly/2ErXMVK
• Part 1: A Step-by-Step Introduction to the Basic Object Detection #Algorithms - https://bit.ly/2V4nqp8
• Part 2: A Practical Implementation of the Faster R-CNN Algorithm for Object Detection - https://bit.ly/2Ugrjdx
• Part 3: A Practical Guide to Object Detection using the Popular YOLO Framework - https://bit.ly/2uq7n9y
✴️ @AI_Python_EN
Protecting your #DeepLearning models and algorithms from cyber attacks will be a key area to focus on.
Placing them in public cloud environments may severely limit your ability to protect them.
You need to be prepared to defend them.
What are these adversarial attacks?
1. l2-norm attacks: the attacker aims to minimize the squared error between the adversarial and original images. These attacks typically add only a very small amount of noise to the image.
2. l∞-norm attacks: perhaps the simplest class of attacks, which aim to limit or minimize the maximum amount by which any pixel is perturbed while still achieving the adversary's goal.
3. l0-norm attacks: these attacks minimize the number of pixels modified in the image.
Below is an example of an l2-norm attack where the image on the left is classified as a jeep but the one on the right as a minivan.
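The three norms above can be computed directly from the perturbation. A minimal sketch (the single-pixel perturbation here is illustrative, not a real attack):

```python
import numpy as np

# Stand-in "images" with values in [0, 1].
rng = np.random.default_rng(0)
original = rng.random((32, 32, 3))

# Illustrative perturbation: change a single pixel channel by 0.01.
perturbation = np.zeros_like(original)
perturbation[0, 0, 0] = 0.01
adversarial = original + perturbation

delta = (adversarial - original).ravel()
l2_norm = np.sqrt(np.sum(delta ** 2))   # what l2-norm attacks minimize
linf_norm = np.max(np.abs(delta))       # what l∞-norm attacks bound
l0_norm = np.count_nonzero(delta)       # what l0-norm attacks minimize

print(l2_norm, linf_norm, l0_norm)
```

An l∞ attack like FGSM keeps `linf_norm` below a budget ε, while an l0 attack like the one-pixel attack keeps `l0_norm` as small as possible.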
#cyberattacks #algorithms #models #deeplearning
✴️ @AI_Python_EN
Important Machine Learning algorithms and their Hyperparameters
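A hedged sketch of the idea (not the original chart): a few common scikit-learn estimators and the hyperparameters most often tuned for each.

```python
# Common scikit-learn algorithms and their most frequently tuned
# hyperparameters (parameter names match the scikit-learn API).
common_hyperparameters = {
    "LogisticRegression": ["C", "penalty", "solver"],
    "KNeighborsClassifier": ["n_neighbors", "weights", "metric"],
    "SVC": ["C", "kernel", "gamma"],
    "RandomForestClassifier": ["n_estimators", "max_depth", "max_features"],
    "GradientBoostingClassifier": ["learning_rate", "n_estimators", "max_depth"],
}

for algo, params in common_hyperparameters.items():
    print(f"{algo}: {', '.join(params)}")
```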
#machinelearning #datascience #statistics #algorithms
✴️ @AI_Python_EN
Not everyone knows it, but my #book has its own GitHub repository where all the #Python code used to build the illustrations is gathered.
So, while reading the book, you can actually run the described #algorithms, play with hyperparameters and #datasets, and generate your own versions of the illustrations.
https://github.com/aburkov/theMLbook
✴️ @AI_Python_EN