Python | Machine Learning | Coding | R
List of our channels:
https://t.me/addlist/8_rRW2scgfRhOTc0

Discover powerful insights with Python, Machine Learning, Coding, and R: your essential toolkit for data-driven solutions and smart algorithms.

Help and ads: @hussein_sheikho

https://telega.io/?r=nikapsOH
πŸ”­ Daily Useful Scripts

Daily.py is a repository that provides a collection of ready-to-use Python scripts for automating common daily tasks.

git clone https://github.com/Chamepp/Daily.py.git

β–ͺ Github: https://github.com/Chamepp/Daily.py
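
As a rough illustration of the kind of daily-task script the repository collects (this example is hypothetical and not taken from Daily.py itself), the snippet below tidies a folder by sorting files into subfolders named after their extensions:

import shutil
from pathlib import Path

# Hypothetical example: move each file into a subfolder named after its
# extension (e.g. report.pdf -> pdf/report.pdf).
folder = Path.home() / "Downloads"  # example path, adjust as needed

for item in folder.iterdir():
    if item.is_file() and item.suffix:
        target_dir = folder / item.suffix.lstrip(".").lower()
        target_dir.mkdir(exist_ok=True)
        shutil.move(str(item), str(target_dir / item.name))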

https://t.me/CodeProgrammer
Introduction to Python

Learn the fundamental concepts that will help Python beginners get started. These tutorials focus on the absolute essentials you need to know about Python.

What You’ll Learn:
β€’ Installing a Python environment
β€’ The basics of the Python language

https://realpython.com/learning-paths/python3-introduction/
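
As a small taste of those essentials, here is a minimal, self-contained snippet (variables, a loop, a function, and a generator expression) that runs on any Python 3 installation:

# A few Python fundamentals in one runnable snippet.
def greet(name: str) -> str:
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

languages = ["Python", "R"]           # a list of strings
for lang in languages:                # a for loop
    print(greet(lang))                # calling a function

total = sum(n * n for n in range(5))  # a generator expression
print("Sum of squares 0..4:", total)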

https://t.me/CodeProgrammer
Flask by Example

You’re going to start building a Flask app that calculates word-frequency pairs based on the text from a given URL. This is a full-stack tutorial covering a number of web development techniques. Jump right in and discover the basics of Python web development with the Flask microframework.

https://realpython.com/learning-paths/flask-by-example/
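
Purely as a hedged sketch of the core idea (not the tutorial's code), a single Flask endpoint that fetches a URL and returns its most frequent words could look like this, assuming flask, requests, and beautifulsoup4 are installed:

import re
from collections import Counter

import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/count")
def count_words():
    # Sketch only: the full tutorial layers on many more techniques.
    url = request.args.get("url")
    if not url:
        return jsonify({"error": "missing 'url' parameter"}), 400

    # Fetch the page and strip the HTML down to plain text
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text()

    # Tokenize and return the ten most common words
    words = re.findall(r"[a-z']+", text.lower())
    return jsonify(Counter(words).most_common(10))

if __name__ == "__main__":
    app.run(debug=True)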

https://t.me/CodeProgrammer
πŸ‘β€πŸ—¨ Running YOLOv7 algorithm on your webcam using Ikomia API

from ikomia.dataprocess.workflow import Workflow
from ikomia.utils import ik
from ikomia.utils.displayIO import display
import cv2

stream = cv2.VideoCapture(0)

# Init the workflow
wf = Workflow()

# Add color conversion
cvt = wf.add_task(ik.ocv_color_conversion(code=str(cv2.COLOR_BGR2RGB)), auto_connect=True)

# Add YOLOv7 detection
yolo = wf.add_task(ik.infer_yolo_v7(conf_thres="0.7"), auto_connect=True)

while True:
    ret, frame = stream.read()

    # Test if streaming is OK
    if not ret:
        continue

    # Run workflow on image
    wf.run_on(frame)

    # Display results from "yolo"
    display(
        yolo.get_image_with_graphics(),
        title="Object Detection - press 'q' to quit",
        viewer="opencv"
    )

    # Press 'q' to quit the streaming process
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# After the loop release the stream object
stream.release()

# Destroy all windows
cv2.destroyAllWindows()
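
To try this locally you need a webcam plus the Ikomia API package; as far as we know it is published on PyPI under the name ikomia (OpenCV is listed explicitly here for the cv2 calls):

pip install ikomia opencv-python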


https://t.me/CodeProgrammer
πŸ–₯ Generate API docs in under a minute in Django
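
The post itself gives no code; as one possible way to do this (an assumption on our part, not necessarily the approach the post refers to), Django REST Framework together with drf-spectacular can serve an OpenAPI schema and Swagger UI with only a few lines of configuration:

# settings.py (assumes djangorestframework and drf-spectacular are installed)
INSTALLED_APPS = [
    # ...
    "rest_framework",
    "drf_spectacular",
]
REST_FRAMEWORK = {
    "DEFAULT_SCHEMA_CLASS": "drf_spectacular.openapi.AutoSchema",
}

# urls.py
from django.urls import path
from drf_spectacular.views import SpectacularAPIView, SpectacularSwaggerView

urlpatterns = [
    path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
    path("api/docs/", SpectacularSwaggerView.as_view(url_name="schema"), name="swagger-ui"),
]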

https://t.me/CodeProgrammer
πŸ‘Savant: Supercharged Computer Vision and Video Analytics Framework on DeepStream

git clone https://github.com/insight-platform/Savant.git

cd Savant/samples/peoplenet_detector

git lfs pull


β–ͺ Github: https://github.com/insight-platform/Savant

https://t.me/CodeProgrammer
πŸ–₯ Convert PDF to docx using Python

β–ͺ Github: https://github.com/dothinking/pdf2docx
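
Based on the project's documented API, a minimal conversion looks like this (file names are placeholders; install with pip install pdf2docx):

from pdf2docx import Converter

pdf_file = "sample.pdf"    # placeholder input path
docx_file = "sample.docx"  # placeholder output path

cv = Converter(pdf_file)
cv.convert(docx_file)  # converts all pages by default
cv.close()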

https://t.me/CodeProgrammer

Please give our posts more reactions
ML_cheatsheets.pdf
6.5 MB
Machine Learning cheatsheet (very important)

https://t.me/CodeProgrammer

Please give our posts more reactions
βœ‹ Hand gesture recognition

Full Source Code πŸ‘‡πŸ‘‡πŸ‘‡πŸ‘‡

import cv2
import mediapipe as mp

# Initialize MediaPipe Hands module
mp_hands = mp.solutions.hands
hands = mp_hands.Hands()

# Initialize MediaPipe Drawing module for drawing landmarks
mp_drawing = mp.solutions.drawing_utils

# Open a video capture object (0 for the default camera)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()

    if not ret:
        continue

    # Convert the frame to RGB format
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Process the frame to detect hands
    results = hands.process(frame_rgb)

    # Check if hands are detected
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            # Draw landmarks on the frame
            mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)

    # Display the frame with hand landmarks
    cv2.imshow('Hand Recognition', frame)

    # Exit when 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the video capture object and close the OpenCV windows
cap.release()
cv2.destroyAllWindows()
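
To run this example locally, both dependencies are available on PyPI (a webcam is required):

pip install mediapipe opencv-python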


https://t.me/CodeProgrammer

Please give our posts more reactions