Algo Vision
Computer Vision - Algorithm
for commercial questions: @mlenginer
This video was generated with AI.
OpenAI has introduced its new text-to-video AI.
Have you heard of CMake?
Leave your thoughts in the comments.
INTERNATIONAL TECHNOLOGY COMPETITION TEKNOFEST 2024

Registration for TEKNOFEST 2024, the international engineering and technology competition in which young technologists and innovators from more than 120 countries take part, is open until February 20.

Choose one of Teknofest's 46 tracks, form your team, and compete on behalf of Uzbekistan.

All you are asked to submit is the core idea of your project.

In the later stages of the competition, carry out your project with the material support provided by the T3 Foundation and compete in the final.

Total prize pool: 1 million USD
Total material support for participants: 1.7 million USD

Enter TEKNOFEST 2024 with your own idea, design it, put it into practice, and win a large prize!!!

Register to participate

@teknofest_uz
References in C#.
They turned out quite interesting and unusual.
They give C# better control over memory and a way to cut down
on unnecessary copies.
C and C++ (especially C) are considered memory-unsafe.
That is a big part of why these languages are so fast.
At the same time, this is both a strength and a weakness.
Having a GC is, on one hand, very convenient:
languages such as Java, Python, Go, C#, and others have a system
that cleans up memory automatically. That is very important for backend/web work,
but what about graphics?
When a window is being rendered at a very high frame rate,
the GC heap fills up with garbage, so programs in these languages freeze for a moment
every so often and then carry on.
In other words, until the GC finishes cleaning memory, the program cannot continue.
But if the developer is willing to take this problem onto their own shoulders,
they can get quite good performance.
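In Python, for example, one common partial workaround is exactly that: take the pauses into your own hands by turning off automatic garbage collection inside a hot loop and collecting at a moment you choose. A minimal sketch (render_frames is a hypothetical stand-in for a render loop; note that gc.disable() only suspends the cyclic collector, reference counting still runs):

```python
import gc

def render_frames(n_frames):
    """Hypothetical render loop that defers cyclic GC to a safe point."""
    gc.disable()  # suspend automatic cyclic collection during the loop
    try:
        for frame in range(n_frames):
            pass  # ... per-frame work here ...
    finally:
        gc.enable()
        gc.collect()  # pay the collection cost once, at a safe point

render_frames(100)
print(gc.isenabled())  # True
```

This does not eliminate the cost, it only moves it to a point where a pause is acceptable.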

C++ has a very elegant solution to C's problem:
smart pointers.
Even so, a single mistake on your side
can completely break a program or a project.
But that happens in perhaps 0.5 cases out of 100,
because the very first thing C++ instills in its learners is logic.
It forces you to think; otherwise you simply cannot master the language.

Rust, on the other hand, has abandoned raw * pointers entirely,
and with a new kind of move semantics
it has solved this problem completely.

In short, whatever the White House says goes 😄
https://www.whitehouse.gov/oncd/briefing-room/2024/02/26/press-release-technical-report/
It is effectively telling us: legally speaking, give up these languages, or else ....

PS: I suspect the White House is struggling to read and reverse-engineer programs written in C.
Adding AI to your project is not as complicated as you think.
In Python it is just 5 lines of code.
With Yolov8 from ultralytics, a single-stage detector from the YOLO
family of neural networks, adding AI is very easy.
Don't believe it?
Then try it yourself.
Just install the library with the pip package manager (pip3 on Linux):

pip install ultralytics
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
ans = model.predict(source=0, show=True, imgsz=320, conf=0.7)

...and run this code.

By creating a simple free account on the roboflow site,
you can of course adapt these models to your own data,
and get full information about any item or object in an image.
Naturally, without a GPU this runs somewhat slower,
but there are plenty of ways to get around that too!!!
If no limits are placed on AI, it is only natural that it will be used as a weapon in the future.
Telling true information apart from false will become very hard.
The LLM models being developed recently have the ability to learn on their own.
This is not just a legend; it is a fact: they can learn much like a human does,
which means they can independently learn from video and from people.
If an AI is built for some particular purpose, in the future it may also produce results no one anticipated.
We can get the location of a given item or object in an image or frame.
This too is done in 5-6 lines:

from ultralytics import YOLO

model = YOLO("yolov8s.pt")
# yolov8s can detect objects of several different classes
ans = model.predict(source=".../person.jpeg", show=True, imgsz=320, conf=0.7)
for obj in ans:
    box = obj.boxes
    print(box)
If we look at the result:
cls: tensor([0., 0., 0., 0., 0., 0., 0.])
conf: tensor([0.8909, 0.8682, 0.8674, 0.8622, 0.8439, 0.8392, 0.7159])
data: tensor([[1.6254e+02, 2.2389e+01, 2.5266e+02, 1.6701e+02, 8.9091e-01, 0.0000e+00],
[2.3503e+02, 3.1486e+01, 2.9971e+02, 1.6686e+02, 8.6820e-01, 0.0000e+00],
[2.1997e+01, 5.3749e+01, 7.4538e+01, 1.6752e+02, 8.6741e-01, 0.0000e+00],
[1.1284e+02, 3.2849e+01, 1.6784e+02, 1.6768e+02, 8.6221e-01, 0.0000e+00],
[6.3885e+01, 4.3812e+01, 1.1679e+02, 1.6726e+02, 8.4389e-01, 0.0000e+00],
[3.2759e-02, 5.2869e+00, 4.6813e+01, 1.6724e+02, 8.3916e-01, 0.0000e+00],
[1.5617e+02, 1.0563e+01, 1.9749e+02, 9.0878e+01, 7.1594e-01, 0.0000e+00]])
id: None
is_track: False
orig_shape: (168, 300)
shape: torch.Size([7, 6])
xywh: tensor([[207.5979, 94.6973, 90.1146, 144.6161],
[267.3671, 99.1744, 64.6808, 135.3771],
[ 48.2676, 110.6329, 52.5402, 113.7673],
[140.3382, 100.2657, 55.0010, 134.8336],
[ 90.3380, 105.5342, 52.9057, 123.4454],
[ 23.4231, 86.2625, 46.7806, 161.9512],
[176.8280, 50.7202, 41.3200, 80.3153]])
xywhn: tensor([[0.6920, 0.5637, 0.3004, 0.8608],
[0.8912, 0.5903, 0.2156, 0.8058],
[0.1609, 0.6585, 0.1751, 0.6772],
[0.4678, 0.5968, 0.1833, 0.8026],
[0.3011, 0.6282, 0.1764, 0.7348],
[0.0781, 0.5135, 0.1559, 0.9640],
[0.5894, 0.3019, 0.1377, 0.4781]])
xyxy: tensor([[1.6254e+02, 2.2389e+01, 2.5266e+02, 1.6701e+02],
[2.3503e+02, 3.1486e+01, 2.9971e+02, 1.6686e+02],
[2.1997e+01, 5.3749e+01, 7.4538e+01, 1.6752e+02],
[1.1284e+02, 3.2849e+01, 1.6784e+02, 1.6768e+02],
[6.3885e+01, 4.3812e+01, 1.1679e+02, 1.6726e+02],
[3.2759e-02, 5.2869e+00, 4.6813e+01, 1.6724e+02],
[1.5617e+02, 1.0563e+01, 1.9749e+02, 9.0878e+01]])
xyxyn: tensor([[5.4180e-01, 1.3327e-01, 8.4218e-01, 9.9408e-01],
[7.8342e-01, 1.8742e-01, 9.9902e-01, 9.9323e-01],
[7.3325e-02, 3.1994e-01, 2.4846e-01, 9.9712e-01],
[3.7613e-01, 1.9553e-01, 5.5946e-01, 9.9811e-01],
[2.1295e-01, 2.6078e-01, 3.8930e-01, 9.9558e-01],
[1.0920e-04, 3.1470e-02, 1.5604e-01, 9.9546e-01],
[5.2056e-01, 6.2872e-02, 6.5829e-01, 5.4094e-01]])

It gives output along these lines.
The exact result type depends on the framework being used (PyTorch, ....).
So cls, the first tensor, holds each detection's class (0 is 'person'; the general class list is
names: {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}
)

Next, conf is the list of confidence scores, one per detected object (an image can contain several objects).
Then comes each object's location: xywh gives the coordinates of the box center plus w (width) and h (height).
We can also get the box as a rectangle via xyxy, i.e. its top-left and bottom-right corners (the xywhn/xyxyn variants are the same values normalized to 0-1).
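As a quick sketch of how the two formats relate (xyxy_to_xywh is just an illustrative helper, not an ultralytics function; the numbers are the first box from the xyxy output above):

```python
# Convert a box from xyxy (top-left and bottom-right corners) to
# xywh (center x, center y, width, height), matching the values
# printed in the output above.
def xyxy_to_xywh(x1, y1, x2, y2):
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

# First box of the xyxy tensor:
cx, cy, w, h = xyxy_to_xywh(162.54, 22.389, 252.66, 167.01)
print(round(cx, 2), round(cy, 2), round(w, 2), round(h, 2))
```

Compare the printed values with the first row of the xywh tensor above; dividing them by the image width/height gives the xywhn row.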
Overall result
Every developer should learn AI at least at a minimal level.
It is the finishing touch on your project,
because humanity is climbing to exactly this kind of new evolutionary stage.

Looking at the image of people above, can you find the pair standing closest to each other?
I'll be waiting in the comments!!!
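As a hint, here is a minimal sketch of one way to answer that: take the box centers from the xywh output above (copied here, rounded) and find the pair with the smallest Euclidean distance:

```python
import math

# Box centers (cx, cy) of the 7 detected people, taken from the
# first two columns of the xywh tensor printed above.
centers = [(207.60, 94.70), (267.37, 99.17), (48.27, 110.63),
           (140.34, 100.27), (90.34, 105.53), (23.42, 86.26),
           (176.83, 50.72)]

# Brute-force all pairs and keep the one with the minimum distance.
dist, i, j = min(
    (math.dist(a, b), i, j)
    for i, a in enumerate(centers)
    for j, b in enumerate(centers[i + 1:], start=i + 1)
)
print(i, j, round(dist, 1))  # → 2 5 34.8
```

Comparing centers ignores box size; for a real-world distance you would also need the camera geometry, but for this question pixel distance is enough.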
Forwarded from Sardorbek Saminov
Ramadan is coming..

To put it in programming terms: it is time to pay attention not only to the bugs in our code but also to the bugs in ourselves.
Yes, exactly: Allah is offering us Premium for free.. To hold onto it, a little patience, sincerity, and hope are enough. And before you know it, a life free of bugs, just like a newborn baby... Don't let Premium slip away! because there may not be a next one.

I wish you patience through the Taraweeh prayers!!!
Have you heard of pose estimation?
Sometimes, to pick out the important information in a frame, you have to place keypoints. The object you are tracking can appear from different angles and in different positions; for example, our hands and fingers are not always in the same place or the same pose. That is where pose estimation is used.
Pose models are likewise trained with ML or deep learning.
This too can be done in 5-6 lines:

from ultralytics import YOLO

model = YOLO('yolov8s-pose.pt')

predict = model.predict(source='/home/azmiddin/Projects/watchlist/bus.jpg', show=True, conf=0.6)
for ans in predict:
    for keys in ans.keypoints:
        print(keys.xy)

Here xy gives us the coordinates of the keypoints themselves.
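For reference, the 17 rows per person correspond, as far as I know, to the standard COCO keypoint order that YOLOv8-pose models are trained on, with (0, 0) marking a point the model could not find. A small sketch pairing those names with the first person's coordinates from the output below:

```python
# Standard COCO keypoint order (assumption: yolov8-pose follows it).
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# First person's (x, y) pairs, copied (rounded) from the printed tensor.
person = [(340.57, 148.33), (350.69, 140.24), (332.67, 136.68),
          (0.0, 0.0), (314.99, 136.29), (370.01, 198.17),
          (284.86, 192.81), (386.58, 262.90), (270.72, 264.43),
          (370.62, 270.78), (319.81, 262.19), (358.57, 326.06),
          (303.64, 325.06), (0.0, 0.0), (0.0, 0.0),
          (0.0, 0.0), (0.0, 0.0)]

# Print only the keypoints the model actually detected.
for name, (x, y) in zip(COCO_KEYPOINTS, person):
    if (x, y) != (0.0, 0.0):
        print(f"{name}: ({x:.1f}, {y:.1f})")
```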
tensor([[[340.5744, 148.3254],
[350.6854, 140.2402],
[332.6736, 136.6814],
[ 0.0000, 0.0000],
[314.9926, 136.2933],
[370.0140, 198.1728],
[284.8586, 192.8096],
[386.5782, 262.8971],
[270.7247, 264.4329],
[370.6186, 270.7802],
[319.8119, 262.1946],
[358.5667, 326.0563],
[303.6407, 325.0616],
[ 0.0000, 0.0000],
[ 0.0000, 0.0000],
[ 0.0000, 0.0000],
[ 0.0000, 0.0000]]])
tensor([[[446.1829, 116.5201],
[454.5997, 104.8722],
[441.8440, 107.2018],
[489.7914, 101.1642],
[ 0.0000, 0.0000],
[536.9339, 157.8492],
[443.6078, 158.5508],
[538.3223, 228.8252],
[430.7207, 235.1595],
[473.5414, 197.9666],
[430.5384, 262.4939],
[512.8117, 335.2951],
[445.9020, 331.1200],
[ 0.0000, 0.0000],
...
Object tracking means counting particular objects in real time and determining their state.
It is most often used in settings like buses ....,
for example when you need to count how many people get on and off.
Object tracking is directly tied to detection (identifying the objects).
import cv2
from ultralytics import YOLO

# Load the model
model = YOLO('yolov8n.pt')
# Read frames with OpenCV
video_path = "test.mp4"
cap = cv2.VideoCapture(video_path)

while cap.isOpened():
    success, frame = cap.read()
    if success:
        # Resize only after a successful read; frame is None on failure
        frame = cv2.resize(frame, (416, 416))
        results = model.track(frame, persist=True, conf=0.5, iou=0.5)
        annotated_frame = results[0].plot()
        cv2.imshow("Tracking", annotated_frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
Algo Vision pinned «Recently, I joined the OpenCV Computer Vision community. OpenCV is an open-source project, and we have started development on OpenCV version 5. Please vote to include C++17 as part of the C++ language standard for OpenCV. https://github.com/opencv/openc…»