Not Rocket Science
Technical Blog about Deep Learning by a Practitioner
Channel created
Hi, my name is Olga Chernytska and I am a Data Scientist. Soon it will be 5 years since I started my career, and to celebrate this little anniversary I am launching a technical blog about Deep Learning.

🚀Not Rocket Science🚀
https://notrocketscience.blog
Recently I finished the tutorial on “2D Hand Pose Estimation”. If this task has seemed complicated to you before, it’s time to change that.

Gentle introduction to 2D Hand Pose Estimation: Approach Explained 👇
https://notrocketscience.blog/gentle-introduction-to-2d-hand-pose-estimation-approach-explained/
And here is the second part. This time you’ll learn how to train a hand pose estimator in PyTorch. Experience with PyTorch is not required - I tried to make it introductory. So if you want to get comfortable with this popular deep learning framework, isn’t now the perfect time? 🙂

Gentle introduction to 2D Hand Pose Estimation: Let’s Code It! 👇
https://notrocketscience.blog/gentle-introduction-to-2d-hand-pose-estimation-lets-code-it/
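
To give you a taste of what the coding part is about: one common formulation of 2D hand pose estimation is per-keypoint heatmap regression. Below is a minimal sketch of that idea in PyTorch - a toy encoder-decoder with made-up sizes, not the tutorial’s exact model, so check the post for the real thing.

```python
import torch
import torch.nn as nn

# Minimal sketch: predict one heatmap per keypoint (21 hand keypoints assumed).
# This is NOT the tutorial's exact architecture, just the general idea.
class TinyHeatmapNet(nn.Module):
    def __init__(self, n_keypoints=21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_keypoints, 4, stride=2, padding=1),
            nn.Sigmoid(),  # heatmap values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyHeatmapNet()
images = torch.randn(8, 3, 128, 128)        # batch of RGB hand crops
target = torch.rand(8, 21, 128, 128)        # Gaussian heatmaps around ground-truth keypoints
loss = nn.MSELoss()(model(images), target)  # regress predicted heatmaps to the targets
loss.backward()
```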
Also sharing the GitHub repo that I created for the Hand Pose Estimation tutorial. You’ll find training and inference scripts there (yeah, in PyTorch). Feel free to use it as a starting point for your own research or a commercial project.

https://github.com/OlgaChernytska/2D-Hand-Pose-Estimation-RGB
Hi there, have a couple of blog updates:

1. I have changed the blog positioning to “About Deep Learning by a Practitioner” - previously it was “About Deep Learning for Women”. I want this blog to be more content-oriented, without such a big focus on gender.

2. If you want to leave a comment on the website, you no longer need Facebook/Email authentication. Now there is just a simple comment form, so I hope more people will join the discussions.
Data Augmentation is one of the most important topics in Deep Learning, so here comes a complete guide for Computer Vision. If you are not using data augmentation, or are not sure you’re using it correctly, my latest post is for you.

Complete Guide to Data Augmentation for Computer Vision 👇
https://notrocketscience.blog/complete-guide-to-data-augmentation-for-computer-vision/
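
As a tiny teaser of what the guide covers, a common setup is to apply random transforms to training data only and keep the validation/test pipeline deterministic. Here is a minimal sketch with torchvision (a hypothetical transform list, not taken from the guide):

```python
import torchvision.transforms as T

# Hypothetical set of augmentations for a classification task;
# random transforms belong in the TRAIN pipeline only.
train_transform = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Validation/test pipeline stays deterministic.
val_transform = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```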
My post on Data Augmentation was just republished on Towards Data Science. It’s my third post there. That’s good news.

However, my posts are not getting many reads right away, even though Towards Data Science has 600k followers. Dozens of good posts are published there every day, and mine may simply get lost among them. That’s bad news.

Blogging is a long-term project, so we’ll see…

https://towardsdatascience.com/complete-guide-to-data-augmentation-for-computer-vision-1abe4063ad07
My new tutorial is on visualization: I review a great Plotly feature.

How to create an interactive 3D chart and share it easily with anyone 👇
https://notrocketscience.blog/how-to-create-an-interactive-3d-chart-and-share-it-easily-with-anyone/
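
For a quick taste, here is a minimal example of the kind of chart the post is about - an interactive 3D scatter exported to a standalone HTML file that anyone can open in a browser (toy data, not the code from the post):

```python
import numpy as np
import plotly.graph_objects as go

# Toy 3D scatter just for illustration.
rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 500))

fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z, mode="markers",
                                   marker=dict(size=3, color=z, colorscale="Viridis"))])
fig.update_layout(title="Interactive 3D scatter")

# The "share with anyone" part: write a self-contained HTML file
# that opens in any browser, no Python required on the other end.
fig.write_html("my_3d_chart.html")
```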
Doing image augmentation for Segmentation or Object Detection tasks is not that easy. Unfortunately, native PyTorch and TensorFlow augmenters do not support simultaneous transforms for an image and its labels (mask, bounding box). If you are tired of writing your own transforms, the Albumentations library is for you.

Overview of Albumentations: Open-source library for advanced image augmentations👇
https://notrocketscience.blog/overview-of-albumentations-open-source-library-for-advanced-image-augmentations/
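
Here is a minimal sketch of what that looks like in practice: a single Albumentations pipeline applied to an image together with its mask and bounding boxes (toy data, not code from the post):

```python
import numpy as np
import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.2),
        A.ShiftScaleRotate(p=0.5),
    ],
    # Tell the pipeline how bounding boxes are encoded so they get transformed too.
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

image = np.zeros((256, 256, 3), dtype=np.uint8)   # toy image
mask = np.zeros((256, 256), dtype=np.uint8)       # toy segmentation mask
bboxes = [[30, 40, 120, 150]]                     # [x_min, y_min, x_max, y_max]

out = transform(image=image, mask=mask, bboxes=bboxes, class_labels=["defect"])
augmented_image, augmented_mask, augmented_bboxes = out["image"], out["mask"], out["bboxes"]
```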
My Overview of Albumentations was added to Towards Data Science hands-on tutorials 🎉

It’s kind of funny, because I didn’t really like how that post came out. Remember me being a bit upset when I published it?

Lesson learned: you can never objectively evaluate your own work, so judging yourself before getting any external feedback just makes no sense :)
Being able to reproduce the latest scientific papers is an extremely valuable skill for a Data Scientist. And it’s a great (and more advanced) way to deepen your knowledge of Machine Learning.

My recent post is on how to learn to reproduce Deep Learning papers. We will cover:

- How to choose your first paper, so your learning will be smooth and stress-free;
- What the typical paper structure is and where the important information is located;
- Step-by-step instructions on how to reproduce a paper if you’re a beginner;
- Where to find help if you get stuck.

I prefer to include coding in my tutorials. For those who want to start practicing right away, I show how to reproduce a fundamental paper on Image Super-Resolution. If you’d like to follow this part, you should have some experience with CNNs.

Learn To Reproduce Papers: Beginner’s Guide👇
https://notrocketscience.blog/learn-to-reproduce-papers-beginners-guide/
Super-Resolution is the task of taking a low-resolution (small, poor-quality) image and “cleverly” upscaling it to a high-resolution (large, good-quality) image.

GAN-based approaches are widely used for the Super-Resolution task and show the best restoration quality. However, they are rather advanced.

For those who want to start learning Super-Resolution, I recommend reviewing CNN-based approaches first. They have lower restoration quality but are much simpler.

Fast Super-Resolution CNN (FSRCNN) is an example of a CNN-based approach. I showed how to reproduce the original paper, and explained all the details, in the latest post - “Learn To Reproduce Papers: Beginner’s Guide”.

I’ve put the scripts for the data loader, model, training, and inference into the GitHub repository. Check it here:

💻Code Implementation of Fast Super-Resolution CNN
https://github.com/OlgaChernytska/Super-Resolution-with-FSRCNN
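
For reference, here is a compact sketch of the FSRCNN architecture as described in the original paper (feature extraction → shrinking → mapping → expanding → deconvolution). The repository may differ in details, so treat this as an illustration:

```python
import torch
import torch.nn as nn

# Sketch of FSRCNN (Dong et al., 2016) with d=56, s=12, m=4 as in the paper.
# The repo linked above may differ in details; this follows the paper's description.
class FSRCNN(nn.Module):
    def __init__(self, scale=3, d=56, s=12, m=4):
        super().__init__()
        self.feature_extraction = nn.Sequential(nn.Conv2d(1, d, 5, padding=2), nn.PReLU(d))
        self.shrinking = nn.Sequential(nn.Conv2d(d, s, 1), nn.PReLU(s))
        mapping = []
        for _ in range(m):  # non-linear mapping on the low-dimensional features
            mapping += [nn.Conv2d(s, s, 3, padding=1), nn.PReLU(s)]
        self.mapping = nn.Sequential(*mapping)
        self.expanding = nn.Sequential(nn.Conv2d(s, d, 1), nn.PReLU(d))
        # Learned upsampling: the only layer that depends on the scale factor.
        self.deconv = nn.ConvTranspose2d(d, 1, 9, stride=scale,
                                         padding=4, output_padding=scale - 1)

    def forward(self, x):
        x = self.feature_extraction(x)
        x = self.shrinking(x)
        x = self.mapping(x)
        x = self.expanding(x)
        return self.deconv(x)

lr = torch.randn(1, 1, 32, 32)   # low-resolution luminance channel
sr = FSRCNN(scale=3)(lr)         # -> (1, 1, 96, 96)
```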
Word Embeddings are the most fundamental concept in Deep Natural Language Processing. And word2vec is one of the earliest algorithms used to train them.

Word2vec is quite old, and there are more recent alternatives. However, it is a good concept for beginners, or for those who want to practice implementing papers.

My latest post is about word2vec. I skip all the intuition and high-level overview and go straight to implementation details. In particular:

- We’ll start with a detailed model architecture overview.
- Then go through data preparation steps.
- I’ll show how to implement word2vec from scratch with PyTorch - model architecture, data loaders, training flow, etc.
- And finally, we’ll use word embeddings to find similar words and word clusters within the text corpus.

And we’ll find out that “King – Man + Woman = Queen” is not that easy to reproduce.

Word2vec with PyTorch: Implementing Original Paper👇
https://notrocketscience.blog/word2vec-with-pytorch-implementing-original-paper/
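
To show how small the core model actually is, here is a minimal skip-gram sketch in PyTorch - center word in, scores over the whole vocabulary out. The sizes are made up, and the post’s full pipeline (data preparation, loaders, training flow) goes much further:

```python
import torch
import torch.nn as nn

# Minimal skip-gram word2vec: predict context words from the center word.
# Vocabulary size and dimensions are made up for illustration.
VOCAB_SIZE, EMB_DIM = 5000, 300

class SkipGram(nn.Module):
    def __init__(self, vocab_size, emb_dim):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, emb_dim)   # what we keep after training
        self.output = nn.Linear(emb_dim, vocab_size)          # scores over the whole vocab

    def forward(self, center_word_ids):
        return self.output(self.embeddings(center_word_ids))

model = SkipGram(VOCAB_SIZE, EMB_DIM)
center = torch.randint(0, VOCAB_SIZE, (64,))    # batch of center word ids
context = torch.randint(0, VOCAB_SIZE, (64,))   # one sampled context word per center word
loss = nn.CrossEntropyLoss()(model(center), context)
loss.backward()

# After training, the embedding matrix is the result:
# cosine similarity between its rows gives "similar words".
word_vectors = model.embeddings.weight.detach()
```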
🇺🇦News for my Ukrainian readers🇺🇦

On Thursday I am giving a lecture - “How to get a job in IT”.

This lecture is in Russian, focuses on the Ukrainian IT market, and is aimed primarily at junior-level specialists (not only Data Scientists). I’ll share tips & tricks on where to look for vacancies, how to write a good CV, and how to prepare for technical interviews.

When: November 18th, 19:00
Where: Zoom
Cost: Free of charge
Organizers: Kyiv School of Economics Student Club

Registration link👇
My recent lecture “How to get a job in IT” is now available on YouTube.

It would be useful for junior-level IT specialists who want to work in Ukrainian companies. The lecture is in Russian.
After a little break, a new post is coming…

Despite being considered black-box algorithms, neural networks are actually quite explainable when you know where to look😉

Over the last few weeks, I have been working on my pet project - Visual Inspection with Computer Vision. I was curious about:

- How to build a model to classify images into “Good” / “Anomaly” classes, depending on whether an item in the image has a defect or not.

- But more importantly, how to explain why the model made this particular decision.

Look at the image attached. These are real predictions on the test set produced by my model. The model was trained only on binary labels (“Good” / “Anomaly”), but in inference mode it is able to return bounding boxes of the defects.

If you’d like to know how I did it - come and read my new post!

Explainable Defect Detection using Convolutional Neural Networks: Case Study👇
https://notrocketscience.blog/explainable-defect-detection-using-convolutional-neural-networks-case-study/
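
As a hint at how localization can come out of a purely binary classifier: one standard trick is Class Activation Mapping (Zhou et al., 2016), where the classifier’s final-layer weights turn feature maps into a heatmap. The sketch below is a simplified illustration of that idea with a toy backbone - the exact approach I used is explained in the post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of Class Activation Mapping: one standard way to turn a classifier trained
# only on "Good"/"Anomaly" labels into a defect localizer. Simplified illustration,
# not necessarily the post's exact method.
class CamClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(                 # toy conv backbone
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(128, n_classes)            # classifier on pooled features

    def forward(self, x):
        fmap = self.features(x)                        # (B, 128, H, W)
        pooled = F.adaptive_avg_pool2d(fmap, 1).flatten(1)
        logits = self.fc(pooled)
        # CAM: weight each feature map by the "Anomaly" class weights of the fc layer.
        cam = torch.einsum("c,bchw->bhw", self.fc.weight[1], fmap)
        return logits, cam                             # threshold the CAM to get defect boxes

model = CamClassifier()
logits, cam = model(torch.randn(4, 3, 224, 224))
```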
I am open-sourcing the code for my Visual Inspection pet project, so you can read it, run it, or use it for any of your research and commercial tasks.

In case you’d like a detailed explanation of the code - I am planning to publish a second part of the tutorial covering exactly that. Stay tuned!

💻Visual Inspection with Computer Vision
https://github.com/OlgaChernytska/Visual-Inspection