Bedilbek Khamidov - Observer
Observations of a Tech who looks at different perspectives to better document every piece of work
Found an interesting tool for automatically generating API client libraries from an OpenAPI Spec (v2, v3):
https://openapi-generator.tech.

The most important thing: the number of supported languages for client library generation is quite large. Some of the mainstream languages from the list are shown below:
- Python
- Go
- C++
- Kotlin
- Java
- PHP
- Rust
- Ruby
- Dart
- Node.js/Javascript
- Typescript
- Swift

And that is not all: it can also generate server stubs, documentation, and configuration automatically for many languages/frameworks.
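To give a feeling for what you get with the Python target: the generated package typically exposes a Configuration, an ApiClient, and one API class per tag, but the exact package, class, and operation names depend on your spec and the generator options, so treat the snippet below as a rough sketch rather than the real API of any particular client.

import openapi_client  # default package name of the python generator; configurable
from openapi_client.api.default_api import DefaultApi

configuration = openapi_client.Configuration(host="https://api.example.com")  # placeholder host
with openapi_client.ApiClient(configuration) as api_client:
    api = DefaultApi(api_client)
    # operation methods are derived from the operationIds/paths in your spec
    pets = api.list_pets()  # hypothetical operation, depends on the spec
    print(pets)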
In brightest day, in blackest night, no evil shall escape my sight. Let those who worship evil's might, beware my power, Github Issue's light.

It was a usual day. I just wanted to install numpy on a Jetson Nano device. pip suggested and installed numpy 1.19.5. I said okay and entered the Python shell to test numpy. See what happened in the picture.

A fucking scary Illegal instruction (core dumped) scenario. It reminded me of my university years, when I faced this issue while programming in assembly or C/C++. Tough times, when you are dumb enough to access a memory segment you are not allowed to.
I went deeper and tried to find the cause with the gdb debugger, but soon realized that resolving this issue was beyond my microscopic capabilities, because it came from the libopenblas library.

Thank god, there is GitHub. There was already a closed issue about this problem, and it had already been resolved.

Workaround
1. Export the environment variable OPENBLAS_CORETYPE=ARMV8 (see the note below)
2. Downgrade numpy to 1.19.4
3. Upgrade numpy to 1.20.x
4. Compile from source on the failing hardware
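A note on option 1: OpenBLAS reads OPENBLAS_CORETYPE when it is loaded, so the variable has to be set before numpy is imported. Besides exporting it in the shell, you can also set it from Python itself; a minimal sketch:

import os
os.environ["OPENBLAS_CORETYPE"] = "ARMV8"  # must be set before numpy/OpenBLAS is loaded

import numpy as np
print(np.linalg.norm(np.arange(3.0)))  # the kind of call that used to die with "Illegal instruction"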

Conclusion
Don't be too self-confident, because you may face the strangest problems in everyday routines.
This is a workaround for a problem I face most of the time when I run CUDA-powered apps on Windows with the WSL 2 beta.

I guess the reason is that for the beta Microsoft offers its own version of the NVIDIA driver with a built-in CUDA library, so some of the shared libraries may end up duplicated with the host ones and create an incompatibility issue.


Also, for those who are new to this topic: Windows finally introduced WSL 2 integration with CUDA, and Linux users can now get at least some use out of their video card in their WSL-based Linux environments.
Check it out here

#ldconfig #libcuda #wsl #wsl2 #windows #cuda #so
I recalled one moment from my life when I unintentionally created a bug in a system. It was related to the Value object vs Entity problem in DDD (Domain Driven Design).

To give an example from life, let me bring up my past experience with Lebazar (a delivery service). When I ordered the first time, I set the address to my home. Then for my second order I had to send some groceries to my office, so I changed my previous address to "Улица Тамарахоним".
Magic happened and my first order's address was also changed to my office's address. (You can see the screenshot in the next post)

The address inside an order should be a Value object, not an Entity. An Entity has an identifier that gives it an identity, so it may have relationships. A Value object, on the other hand, serves a supplemental purpose, has no identity, and can't have any relationships. This means Entities may change over time, whereas Value objects must not: they should always be recreated, even if that creates some redundancy.
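A minimal sketch of that distinction in Python (class and field names are made up for illustration):

from dataclasses import dataclass

@dataclass(frozen=True)
class Address:          # Value object: immutable, no identity, compared by its values
    street: str
    city: str

@dataclass
class Order:            # Entity: has an identity (id) and mutable state
    id: int
    address: Address

home = Address("Home street", "Tashkent")
office = Address("Улица Тамарахоним", "Tashkent")

first_order = Order(id=1, address=home)
second_order = Order(id=2, address=home)
# Changing the delivery address means giving the order a *new* Address,
# never mutating a shared one, so the first order's history stays intact.
second_order.address = office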

This was a huge bug in Lebazar from my point of view, and it was creating biased data that could literally turn into trash for their future big data analysis.

Of course, I warned them about it; maybe they fixed it, maybe not, I don't know.

#bug #advice #ddd #valueobject #entity #subjective #opinion
The 15.04.2020 order's address is the same as the 08.10.2020 order's address.
I had to build a Django integration with the payme.uz gateway for one project. After finally completing it, I could literally talk for hours about how bad their documentation was. Some of the worst-written documentation ever for a payment gateway:
1. No versioning
2. Some attributes in the response are already deprecated and removed (when tested with Postman), but this is not documented
3. Response attribute types are not documented and only an example response is given; the developer needs to figure out what each attribute's type is (testing with Postman may help)
4. New attributes are added to the response (when tested with Postman), but they are not documented
5. Error codes are not documented in a single place; they are scattered throughout the document (maybe Postman can help)

So if you want to build your own payme.uz integration, you'd better use Postman rather than rely on their documentation. If you still want to take a look at their docs: https://developer.help.paycom.uz/ru
Good thing to know about PEP8.

Sometimes you think you wrote a nice line of code, but your linter (static code analyzer) thinks there is something wrong with it. Maybe there is a typo, an undefined reference, an unused variable, an unused import, and so on. But you know you have a reason not to care about that warning. So you just put a # noqa comment at the end of that line and that's it: your linter will ignore that line.

For example, I always use it when I need to import a module that is not used directly but has to be imported for first-time initialization. If I don't mark it with # noqa, my IDE will unintentionally remove that import when imports are optimized and the code is reformatted.

The screenshot above is a snippet from a sample Django app where signal handlers need to be imported for first-time registration.
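For reference, a minimal sketch of that pattern (the app and module names are placeholders):

# myapp/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        # Imported only for its side effect of registering the signal handlers;
        # the noqa comment keeps the linter and the IDE's import optimizer from removing it.
        from myapp import signals  # noqa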

#pep8 #python #noqa
I had a special case of dependency hell with CUDA.

I needed ffmpeg with better parallelization for my video editing pipelines, so I built it with CUDA support. Then, for some other reason, I had to rebuild OpenCV against this ffmpeg, and there I got stuck for hours. It turned out I was building ffmpeg incorrectly, and I eventually found the docs for the proper configuration.

So, if you want to build ffmpeg with CUDA support, make it available to other shared libraries, and build OpenCV against that ffmpeg configuration, here is the tutorial:

Installing ffmpeg
1. Start by following the instructions in NVIDIA's docs up to the Configure section
2. ./configure --enable-nonfree --enable-cuda-nvcc --enable-libnpp --enable-pic --extra-ldexeflags=-pie --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64
3. Continue with the instructions.

Installing OpenCV
1. Start by following the instructions in OpenCV's docs up to the Configure section
2. Add -DWITH_FFMPEG=ON -DWITH_CUDA=ON to cmake options
* -DWITH_CUDA=ON is optional; add it only if you want to build OpenCV with CUDA support.
3. Change the -Wl, value of the CMAKE_SHARED_LINKER_FLAGS variable to -Wl,-Bsymbolic, inside the OpenCVFindLibsPerf.cmake file
4. Continue with the instructions.
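Once everything is installed, you can quickly confirm from Python that the build actually picked up ffmpeg (and CUDA, if you enabled it):

import cv2

info = cv2.getBuildInformation()
# the Video I/O section of the build info should report FFMPEG: YES
print([line.strip() for line in info.splitlines() if "FFMPEG" in line])
# > 0 only when the CUDA modules were built and a GPU is visible
print(cv2.cuda.getCudaEnabledDeviceCount())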

#ffmpeg #opencv #custom #build #cuda #pic #pie #linking
Another story about cuda.

When you are building an application powered by CUDA, you need to be careful and find the answers to the following questions to better handle the situation:
1. What cuda toolkit versions are installed?
2. What cuda toolkit version is currently being used?
3. Is the current version of the cuda toolkit being used correctly?
4. What is the CUDA compute capability of the device? *
5. What GPU driver version is installed? *
6. Which cuda-toolkit versions support the device's architecture? *


And the final question: DO YOU REALLY NEED CUDA AND ALL OF ITS HELL?

Let's go through the questions one by one and find ways to answer them (a small Python script consolidating checks 1-3 is sketched after this list):

1. Commonly, cuda-toolkit binaries and libraries are installed inside /usr/local/, so you can execute the following command: ls /usr/local/
If nothing is installed, do not hurry to install anything; wait until questions 4 and 5 are answered.

2. Commonly, the currently used cuda-toolkit path is a soft link located at /usr/local/cuda, so you can execute the following command: ls -la /usr/local/
If an inappropriate version is linked, unlink it and link the one you want.

3.1 Check whether the environment variable $LD_LIBRARY_PATH includes the cuda-toolkit libraries. If, after executing echo $LD_LIBRARY_PATH, at least the default cuda-toolkit library path (/usr/local/cuda/lib64) is there, then it is okay. Otherwise, set this environment variable according to the answer to question 1.
3.2 Check whether the environment variable $PATH includes the cuda-toolkit binaries. If the default cuda-toolkit binary path (/usr/local/cuda/bin) is included, then you are fine. Otherwise, set this environment variable according to the answer to question 1.

4. Check out this guide to get compute capability information about the device.
The compute capability will help you find the best cuda-toolkit version to install. But the cuda-toolkit also depends on the CUDA driver installed, so the 5th question needs to be answered.

5. Execute nvidia-smi. If it works, then you have the CUDA driver installed and you can get the driver version. Check out this guide, keeping your compute capability in mind, to find the best cuda-toolkit for you and the ones that will work on your machine. However, if the above command fails to work, don't worry: you need to install the CUDA driver first, so again check the previous guide to find the best CUDA driver and cuda-toolkit according to your compute capability.

6. So you have installed all of the above drivers and libraries, but you still have problems with the wrong architecture during builds? Then you are in the most dangerous and complex zone, welcome! You probably need to check this guide to understand the different gencodes and architectures that are available for your cuda-toolkit.

If you still have problems, sorry, maybe I cannot help you; maybe you need to revisit what I have already written, or submit your question to the NVIDIA forums, Stack Overflow, or somewhere else so that someone can help you.
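As promised, here is a small Python script that consolidates checks 1-3 (the paths are the common defaults described above):

import os
from pathlib import Path

# 1. Which cuda-toolkit versions are installed under /usr/local?
print([p.name for p in Path("/usr/local").glob("cuda*")])

# 2. Which version does the /usr/local/cuda soft link currently point to?
cuda = Path("/usr/local/cuda")
print(os.readlink(cuda) if cuda.is_symlink() else "no /usr/local/cuda symlink")

# 3. Are the default toolkit paths visible to the dynamic linker and the shell?
print("/usr/local/cuda/lib64" in os.environ.get("LD_LIBRARY_PATH", ""))
print("/usr/local/cuda/bin" in os.environ.get("PATH", ""))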

#gpu #cuda #toolkit #driver #gencode #architecture
NVIDIA's cuSPARSE, cuSOLVER, and cuBLAS, plus MAGMA, are beasts for solving non-linear problems, matrix operations, and mostly linear algebra stuff. They are powered by NVIDIA drivers and parallel execution units, so you only benefit if you have a capable machine. How to install, build, or use them is another story for another time.

If anyone has this kind of problem that can be optimized with parallelization, or needs large-scale matrix computations, then here is a great cookbook to start with. The following book helped me a lot in solving one of my problems at work. Hope it will help someone else too.

p.s. All examples are written in C/C++.
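If you want to get a quick feel for these libraries from Python before diving into C/C++, CuPy (my suggestion, not mentioned in the book) wraps cuBLAS/cuSOLVER/cuSPARSE behind a numpy-like API; a tiny sketch, assuming CuPy and a working CUDA setup are installed:

import cupy as cp  # dense solves are dispatched to cuSOLVER/cuBLAS on the GPU

a = cp.random.rand(2000, 2000)
b = cp.random.rand(2000)
x = cp.linalg.solve(a, b)      # solved entirely on the GPU
print(cp.allclose(a @ x, b))   # sanity check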

#book #cookbook #cusolver #cublas #cusparse #magma #cuda #nvidia #matrix #computation

source: https://developer.nvidia.com/sites/default/files/akamai/cuda/files/Misc/mygpu.pdf
You have a dynamic system that you want to predict with a linear model, and you know there is some noise or uncertainty in it. A Kalman filter can be a solution to your problem: it is an optimal linear estimator for finding the unknown variables of your dynamic system.

Here is a great video playlist explaining what it is and how to use it.
https://youtube.com/playlist?list=PLX2gX-ftPVXU3oUFNATxGXY90AULiqnWT
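To make the idea concrete, here is a minimal 1-D sketch of the predict/update cycle (a random-walk model with made-up noise values, not taken from the playlist):

import random

true_value = 5.0  # toy data: noisy measurements of a constant
measurements = [true_value + random.gauss(0, 0.7) for _ in range(100)]

x_est, p_est = 0.0, 1.0  # initial state estimate and its variance
q, r = 1e-5, 0.49        # process and measurement noise variances (assumed)

for z in measurements:
    # predict: with a random-walk model the state stays put, uncertainty grows
    x_pred, p_pred = x_est, p_est + q
    # update: the Kalman gain weighs the measurement against the prediction
    k = p_pred / (p_pred + r)
    x_est = x_pred + k * (z - x_pred)
    p_est = (1 - k) * p_pred

print(x_est)  # should end up close to 5.0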

#kalman #kalmanfilter #signals #modeling #estimator
Finally (hopefully) the time has come. Now we do not have to go to each utility service separately to prove what we have or haven't used. I remember times when people had unbelievable 7-figure debts (in Uzbek sums) for usage of gas, water, etc. Just because they were a little unaware and didn't have the opportunity to measure, inspect, and report their usage on time (online), they had to accept the unfairness. Now everyone can become more aware and conscious of their actions and prove them with the help of the internet. Below I am writing all the necessary details about the proper way to handle most utility issues:

1. Online payment systems to regularly check and pay utility bills
https://payme.uz/
https://click.uz/
https://apelsin.uz/


2. Reporting usage, checking usage statistics, reporting incidents

2.1 Water usage:
- http://cabinet.uzst.uz/
- @uzst_bot - telegram bot

2.2 Hot water usage
- http://cabinet.teploenergo.uz/
- @tashteploenergo_bot - telegram bot

2.3 Trash and garbage collection
- https://maxsustrans.uz/
- https://cleancity.uz/

2.4 Electricity usage
- https://mu.het.uz/

2.5 Gas usage
- http://cabinet.hududgaz.uz/
- @hgt_abonent_bot - telegram bot

To access the above services, most of the time you just need to know your account number with each service and the identity credentials of the person the household belongs to; if that does not help, just call your regional utility service and ask for the credentials.


3. Installing and removing utility meters; Periodically verifying and inspecting utility meters; Sealing utility meters; Registering for a new account.
https://my.gov.uz/


4. Utility meter legislation (inspection periodicity; price for inspection, installation, removal; duration of inspection)
https://www.lex.uz/ru/docs/4481469

The above document has everything you need to know for when some "smarty-pants" inspector comes to your house and claims an unbelievable amount of money.

#utility #utilities #meter #water #gas #electricity #trash #hotwater #cabinet #reporting #report #measuring #measure #inspection