Data Science Jupyter Notebooks
11.8K subscribers
290 photos
43 videos
9 files
858 links
Explore the world of Data Science through Jupyter Notebooks: insights, tutorials, and tools to boost your data journey. Code, analyze, and visualize smarter with every post.
🎯 Trackers Library is Officially Released! 🚀

If you're working in computer vision and object tracking, this one's for you!

💡 Trackers is a powerful open-source library with support for a wide range of detection models and tracking algorithms:

✅ Plug-and-play compatibility with detection models from:
Roboflow Inference, Hugging Face Transformers, Ultralytics, MMDetection, and more!

✅ Tracking algorithms supported:
SORT, DeepSORT, and advanced trackers like StrongSORT, BoT-SORT, ByteTrack, OC-SORT - with even more coming soon!

🧩 Released under the permissive Apache 2.0 license - free for everyone to use and contribute to.
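
The idea is plug-and-play: wire any supported detector to any tracker in a few lines. Below is a minimal sketch of what that loop could look like on top of the supervision video utilities - the trackers-side names (SORTTracker, update) are assumptions based on this announcement, so treat the docs and quick-start notebooks linked below as the authoritative reference.

# Minimal sketch, not official sample code: SORTTracker / update() are
# assumed names - check the Trackers docs and notebooks for the real API.
import supervision as sv
from trackers import SORTTracker          # assumed import path
from ultralytics import YOLO

tracker = SORTTracker()                   # assumed tracker class
model = YOLO("yolo11n.pt")                # any supported detector
annotator = sv.LabelAnnotator()

def callback(frame, _):
    # Detect, update tracks, and draw the track IDs on the frame
    detections = sv.Detections.from_ultralytics(model(frame)[0])
    detections = tracker.update(detections)       # assumed update signature
    labels = [f"id {tid}" for tid in detections.tracker_id]
    return annotator.annotate(frame.copy(), detections, labels=labels)

sv.process_video(
    source_path="input.mp4",
    target_path="tracked.mp4",
    callback=callback,
)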

๐Ÿ‘ Huge thanks to Piotr Skalski for co-developing this library, and to Raif Olson and Onuralp SEZER for their outstanding contributions!

๐Ÿ“Œ Links:
๐Ÿ”— GitHub
๐Ÿ”— Docs


๐Ÿ“š Quick-start notebooks for SORT and DeepSORT are linked ๐Ÿ‘‡๐Ÿป
https://www.linkedin.com/posts/skalskip92_trackers-library-is-out-plugandplay-activity-7321128111503253504-3U6-?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEXwhVcBcv2n3wq8JzEai3TfWmKLRLTefYo


#ComputerVision #ObjectTracking #OpenSource #DeepLearning #AI


📡 By: https://t.me/DataScienceN
๐Ÿ‘4โค1๐Ÿ”ฅ1
🚀 The new HQ-SAM (High-Quality Segment Anything Model) has just been added to the Hugging Face Transformers library!

This is an enhanced version of the original SAM (Segment Anything Model) introduced by Meta in 2023. HQ-SAM significantly improves the segmentation of fine and detailed objects while preserving all the powerful features of SAM, including prompt-based interaction, fast inference, and strong zero-shot performance. That means you can easily switch to HQ-SAM wherever you used SAM!

The improvements come from just a few additional learnable parameters. The authors collected a high-quality dataset with 44,000 fine-grained masks from various sources, and impressively trained the model in just 4 hours using 8 GPUs, all while keeping the core SAM weights frozen.

The newly introduced parameters include:

* A High-Quality Token
* A Global-Local Feature Fusion mechanism

This work was presented at NeurIPS 2023 and still holds state-of-the-art performance in zero-shot segmentation on the SGinW benchmark.
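
Since HQ-SAM is described as a drop-in replacement, usage should mirror the existing SAM API in Transformers. The sketch below uses the standard SAM classes and checkpoint; the exact HQ-SAM class and checkpoint names aren't spelled out in this post, so take them from the linked documentation and swap them in.

# Prompt-based segmentation with the standard SAM API in Transformers.
# HQ-SAM is advertised as a drop-in replacement, so the same flow should
# apply once you substitute the HQ-SAM class/checkpoint from the docs.
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base")

image = Image.open("example.jpg").convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) point prompt on the target object

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted masks back to the original image resolution
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape, outputs.iou_scores)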

📄 Documentation: https://lnkd.in/e5iDT6Tf
🧠 Model Access: https://lnkd.in/ehS6ZUyv
💻 Source Code: https://lnkd.in/eg5qiKC2



#ArtificialIntelligence #ComputerVision #Transformers #Segmentation #DeepLearning #PretrainedModels #ResearchAndDevelopment #AdvancedModels #ImageAnalysis #HQ_SAM #SegmentAnything #SAMmodel #ZeroShotSegmentation #NeurIPS2023 #AIresearch #FoundationModels #OpenSourceAI #SOTA

🌟 https://t.me/DataScienceN
โค2๐Ÿ‘2๐Ÿ”ฅ1
🔥 Powerful Combo: Ultralytics YOLO11 + Sony Semicon | AITRIOS (Global) Platform + Raspberry Pi
We've recently updated our Sony IMX model export to fully support YOLO11n detection models! This means you can now seamlessly run YOLO11n models directly on Raspberry Pi AI Cameras powered by the Sony IMX500 sensor, making it even easier to develop advanced Edge AI applications. 💡
To test this new export workflow, I trained a model on the VisDrone dataset and exported it using the following command:
👉
yolo export model=<path_to_drone_model> format=imx data=VisDrone.yaml
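
For reference, the same export should also be possible from the Python API. A quick sketch assuming the standard Ultralytics export interface (the weights filename is a placeholder for your own VisDrone-trained model):

# Sketch only: equivalent export via the Python API (weights path is a placeholder)
from ultralytics import YOLO

model = YOLO("yolo11n_visdrone.pt")                 # your trained YOLO11n weights
model.export(format="imx", data="VisDrone.yaml")    # produces an IMX500-ready artifact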
🎥 The video below shows the result of this process!

🔍 Benchmark results for YOLO11n on IMX500:
✅ Inference Time: 62.50 ms
✅ mAP50-95 (B): 0.644

📌 Want to learn more about YOLO11 and Sony IMX500? Check it out here ➡️
https://docs.ultralytics.com/integrations/sony-imx500/

#EdgeAI #YOLO11 #SonyIMX500 #AITRIOS #ObjectDetection #RaspberryPiAI #ComputerVision #DeepLearning #OnDeviceAI #ModelDeployment

🌟 https://t.me/DataScienceN
๐Ÿ‘1๐Ÿ”ฅ1
💃 GENMO: Generalist Human Motion by NVIDIA 💃

NVIDIA introduces GENMO, a unified generalist model for human motion that seamlessly combines motion estimation and generation within a single framework. GENMO supports conditioning on videos, 2D keypoints, text, music, and 3D keyframes, enabling highly versatile motion understanding and synthesis.

Currently, no official code release is available.

Review:
https://t.ly/Q5T_Y

Paper:
https://lnkd.in/ds36BY49

Project Page:
https://lnkd.in/dAYHhuFU

#NVIDIA #GENMO #HumanMotion #DeepLearning #AI #ComputerVision #MotionGeneration #MachineLearning #MultimodalAI #3DReconstruction


โœ‰๏ธ Our Telegram channels: https://t.me/addlist/0f6vfFbEMdAwODBk

📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
10 GitHub repos to build a career in AI engineering:

(100% free step-by-step roadmap)

1๏ธโƒฃ ML for Beginners by Microsoft

A 12-week project-based curriculum that teaches classical ML using Scikit-learn on real-world datasets.

Includes quizzes, lessons, and hands-on projects, with some videos.

GitHub repo โ†’ https://lnkd.in/dCxStbYv

2๏ธโƒฃ AI for Beginners by Microsoft

This repo covers neural networks, NLP, CV, transformers, ethics & more. There are hands-on labs in PyTorch & TensorFlow using Jupyter.

Beginner-friendly, project-based, and full of real-world apps.

GitHub repo โ†’ https://lnkd.in/dwS5Jk9E

3๏ธโƒฃ Neural Networks: Zero to Hero

Now that youโ€™ve grasped the foundations of AI/ML, itโ€™s time to dive deeper.

This repo by Andrej Karpathy builds modern deep learning systems from scratch, including GPTs.

GitHub repo โ†’ https://lnkd.in/dXAQWucq

4๏ธโƒฃ DL Paper Implementations

So far, you have learned the fundamentals of AI, ML, and DL. Now study how the best architectures work.

This repo covers well-documented PyTorch implementations of 60+ research papers on Transformers, GANs, Diffusion models, etc.

GitHub repo โ†’ https://lnkd.in/dTrtDrvs

5๏ธโƒฃ Made With ML

Now itโ€™s time to learn how to go from notebooks to production.

Made With ML teaches you how to design, develop, deploy, and iterate on real-world ML systems using MLOps, CI/CD, and best practices.

GitHub repo โ†’ https://lnkd.in/dYyjjBGb

6๏ธโƒฃ Hands-on LLMs

- You've built neural nets.
- You've explored GPTs and LLMs.

Now apply them. This is a visually rich repo that covers everything about LLMs, like tokenization, fine-tuning, RAG, etc.

GitHub repo โ†’ https://lnkd.in/dh2FwYFe

7๏ธโƒฃ Advanced RAG Techniques

Hands-on LLMs will give you a good grasp of RAG systems. Now learn advanced RAG techniques.

This repo covers 30+ methods to make RAG systems faster, smarter, and accurate, like HyDE, GraphRAG, etc.

GitHub repo โ†’ https://lnkd.in/dBKxtX-D

8๏ธโƒฃ AI Agents for Beginners by Microsoft

After diving into LLMs and mastering RAG, learn how to build AI agents.

This hands-on course covers building AI agents using frameworks like AutoGen.

GitHub repo โ†’ https://lnkd.in/dbFeuznE

9๏ธโƒฃ Agents Towards Production

The above course will teach what AI agents are. Next, learn how to ship them.

This is a practical playbook for building agents covering memory, orchestration, deployment, security & more.

GitHub repo โ†’ https://lnkd.in/dcwmamSb

🔟 AI Engineering Hub

To truly master LLMs, RAG, and AI agents, you need projects.

This covers 70+ real-world examples, tutorials, and agent apps you can build, adapt, and ship.

GitHub repo → https://lnkd.in/geMYm3b6

#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers


โœ‰๏ธ Our Telegram channels: https://t.me/addlist/0f6vfFbEMdAwODBk

📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
🔥 Trending Repository: Machine-Learning-Tutorials

📝 Description: machine learning and deep learning tutorials, articles and other resources

🔗 Repository URL: https://github.com/ujjwalkarn/Machine-Learning-Tutorials

🌐 Website: http://ujjwalkarn.github.io/Machine-Learning-Tutorials

📖 Readme: https://github.com/ujjwalkarn/Machine-Learning-Tutorials#readme

📊 Statistics:
🌟 Stars: 16.6K
👀 Watchers: 797
🍴 Forks: 3.9K

💻 Programming Languages: Not available

🏷️ Related Topics:
#list #machine_learning #awesome #deep_neural_networks #deep_learning #neural_network #neural_networks #awesome_list #machinelearning #deeplearning #deep_learning_tutorial


==================================
🧠 By: https://t.me/DataScienceN
โค2
🔥 Trending Repository: datascience

📝 Description: This repository is a compilation of free resources for learning Data Science.

🔗 Repository URL: https://github.com/geekywrites/datascience

🌐 Website: https://twitter.com/geekywrites

📖 Readme: https://github.com/geekywrites/datascience#readme

📊 Statistics:
🌟 Stars: 5.1K
👀 Watchers: 381
🍴 Forks: 529

💻 Programming Languages: Not available

🏷️ Related Topics:
#data_science #machine_learning #natural_language_processing #computer_vision #machine_learning_algorithms #artificial_intelligence #neural_networks #deeplearning #datascienceproject


==================================
🧠 By: https://t.me/DataScienceN
Forwarded from Machine Learning
✨ Detecting COVID-19 in X-ray images with Keras, TensorFlow, and Deep Learning ✨

📖 In this tutorial, you will learn how to automatically detect COVID-19 in a hand-created X-ray image dataset using Keras, TensorFlow, and Deep Learning. Like most people in the world right now, I'm genuinely concerned about COVID-19. I find myself constantly…
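
The teaser stops before the code, but the typical recipe for a task like this is transfer learning on a small two-class X-ray dataset. Here is a minimal Keras sketch under that assumption; the directory layout, backbone, and hyperparameters are illustrative and not taken from the tutorial itself.

# Illustrative sketch only: a two-class (covid / normal) X-ray classifier
# via transfer learning in Keras. Dataset layout and hyperparameters are
# assumptions, not the tutorial's exact setup.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)

# Expects dataset/covid/*.png and dataset/normal/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/",
    image_size=IMG_SIZE,
    batch_size=8,
    label_mode="binary",
)

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False  # keep the ImageNet features frozen

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)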

๐Ÿท๏ธ #DeepLearning #KerasandTensorFlow #MedicalComputerVision #Tutorials
โค1
Forwarded from Machine Learning
In Python, building AI-powered Telegram bots unlocks massive potential for image generation, processing, and automation. Master this to create viral tools and ace full-stack interviews! 🤖

# Basic Bot Setup - The foundation (PTB v20+ Async)
from telegram.ext import Application, CommandHandler, MessageHandler, filters

async def start(update, context):
    # Reply to /start with the list of available commands
    await update.message.reply_text(
        "✨ AI Image Bot Active!\n"
        "/generate - Create images from text\n"
        "/enhance - Improve photo quality\n"
        "/help - Full command list"
    )

app = Application.builder().token("YOUR_BOT_TOKEN").build()
app.add_handler(CommandHandler("start", start))
app.run_polling()


# Image Generation - DALL-E Integration (legacy OpenAI SDK, openai<1.0)
import os

import openai
from telegram import Update
from telegram.ext import ContextTypes

openai.api_key = os.getenv("OPENAI_API_KEY")

async def generate(update: Update, context: ContextTypes.DEFAULT_TYPE):
    # /generate <prompt> - create an image from the text after the command
    if not context.args:
        await update.message.reply_text("❌ Usage: /generate cute robot astronaut")
        return

    prompt = " ".join(context.args)
    try:
        # openai.Image.create is the pre-1.0 SDK call; newer SDKs use client.images.generate
        response = openai.Image.create(
            prompt=prompt,
            n=1,
            size="1024x1024"
        )
        await update.message.reply_photo(
            photo=response['data'][0]['url'],
            caption=f"🎨 Generated: *{prompt}*",
            parse_mode="Markdown"
        )
    except Exception as e:
        await update.message.reply_text(f"🔥 Error: {str(e)}")

# Register this handler before app.run_polling() in the setup above
app.add_handler(CommandHandler("generate", generate))


Learn more: https://hackmd.io/@husseinsheikho/building-AI-powered-Telegram-bots

#Python #TelegramBot #AI #ImageGeneration #StableDiffusion #OpenAI #MachineLearning #CodingInterview #FullStack #Chatbots #DeepLearning #ComputerVision #Programming #TechJobs #DeveloperTips #CareerGrowth #CloudComputing #Docker #APIs #Python3 #Productivity #TechTips


https://t.me/DataScienceM 🦾