Kaggle Data Hub
archive.zip.001
2 GB
img2txt
archive.zip.008
1.1 GB
✉️ Our Telegram channels: https://t.me/addlist/0f6vfFbEMdAwODBk
📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Forwarded from Machine Learning with Python
🔥 Accelerate Your IT Career with FREE Certification Kits!
🚀 Get Hired Faster—Zero Cost!
Grab expert guides, labs, and courses for AWS, Azure, AI, Python, Cyber Security, and beyond—100% FREE, no hidden fees!
✅ CLICK your field👇
✅ DOWNLOAD & dominate your goals!
🔗 AWS + Azure Cloud Mastery: https://bit.ly/44S0dNS
🔗 AI & Machine Learning Starter Kit: https://bit.ly/3FrKw5H
🔗 Python, Excel, Cyber Security Courses: https://bit.ly/4mFrA4g
📘 FREE Career Hack: IT Success Roadmap E-book ➔ https://bit.ly/3Z6JS49
🚨 Limited Time! Act FAST!
📱 Join Our IT Study Group: https://bit.ly/43piMq8
💬 1-on-1 Exam Help: https://wa.link/sbpp0m
Your dream job won’t wait—GRAB YOUR RESOURCES NOW! 💻✨
FitBit dataset
Fitness tracker data from smart watch device usage
About Dataset:
This Kaggle data set contains personal fitness tracker data from thirty Fitbit users who consented to the submission of their personal tracker data, including minute-level output for physical activity, heart rate, and sleep monitoring. It includes information about daily activity, steps, and heart rate that can be used to explore users' habits.
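A minimal pandas sketch for a first look at the daily-activity table (the file name dailyActivity_merged.csv and the column names Id, ActivityDate, TotalSteps, VeryActiveMinutes, and SedentaryMinutes are assumptions about the archive layout; adjust them to your copy):

```python
import pandas as pd

# Assumed file and column names -- check the extracted archive for the actual ones.
daily = pd.read_csv("dailyActivity_merged.csv", parse_dates=["ActivityDate"])

# Per-user habits: average daily steps and activity minutes.
summary = (
    daily.groupby("Id")[["TotalSteps", "VeryActiveMinutes", "SedentaryMinutes"]]
    .mean()
    .sort_values("TotalSteps", ascending=False)
)
print(summary.head())
```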
ASL Alphabet
Image data set for alphabets in the American Sign Language
About
The data set is a collection of images of the letters of the American Sign Language alphabet, separated into 29 folders, each representing a class.
Content
The training data set contains 87,000 images, each 200x200 pixels, across 29 classes: 26 for the letters A-Z and 3 for SPACE, DELETE, and NOTHING.
These three extra classes are particularly helpful for real-time applications and classification.
The test data set contains a mere 29 images, to encourage the use of real-world test images.
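Since the training images are organized as one folder per class, a minimal Keras loading sketch might look like the following (the asl_alphabet_train directory name is an assumption; point it at the extracted training folder):

```python
import tensorflow as tf

# Assumed directory name -- adjust to wherever the 29 class folders were extracted.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_alphabet_train",
    image_size=(200, 200),  # the images are 200x200 pixels
    batch_size=32,
    label_mode="int",
)
print(train_ds.class_names)  # should list all 29 classes (A-Z, SPACE, DELETE, NOTHING)
```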
ASL Alphabet.zip
1 GB
ASL Alphabet
Forwarded from Machine Learning with Python
This channel is for programmers, coders, and software engineers.
0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages
✅ https://t.me/addlist/8_rRW2scgfRhOTc0
✅ https://t.me/Codeprogrammer
Fundus Image Dataset for Vessel Segmentation
High-resolution manually annotated fundus images for vessel segmentation
About Dataset
The FIVES (Fundus Image Vessel Segmentation) dataset comprises 800 high-resolution color fundus photographs manually annotated at the pixel level for retinal vessel segmentation. The images represent a wide range of ages (4–83 years) and include various ocular conditions such as diabetic retinopathy, age-related macular degeneration, and glaucoma. The annotations were standardized via expert crowdsourcing. Each image was further assessed for three quality aspects (illumination and color distortion, blur, and low contrast) using published automatic algorithms. This dataset is currently the largest publicly available collection for retinal vessel segmentation and is designed to facilitate the development and evaluation of AI-based segmentation models.
The dataset supports automated analysis of retinal vasculature for ophthalmological and systemic disease assessment and contributes significantly to advancing the field of AI in medical imaging.
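A minimal sketch for pairing each fundus photograph with its pixel-level vessel mask and reporting the labelled vessel fraction (the Original/GroundTruth folder names and the .png extension are assumptions about the archive layout):

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Assumed folder layout -- check the extracted archive for the actual structure.
image_dir = Path("FIVES/train/Original")
mask_dir = Path("FIVES/train/GroundTruth")

for img_path in sorted(image_dir.glob("*.png")):
    mask = np.array(Image.open(mask_dir / img_path.name).convert("L")) > 0  # binary vessel map
    print(img_path.name, f"vessel fraction: {mask.mean():.3%}")
```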
Fundus Image Dataset for Vessel Segmentation.zip
1.6 GB
Fundus Image Dataset for Vessel Segmentation
Forwarded from Machine Learning with Python
🙏💸 500$ FOR THE FIRST 500 WHO JOIN THE CHANNEL! 🙏💸
Join our channel today for free! Tomorrow it will cost 500$!
https://t.me/+Y4vkzbTTshVhYTQ1
You can join at this link! 👆👇
https://t.me/+Y4vkzbTTshVhYTQ1
Pavement Dataset
Synthetic Dataset on Road Pavements (Educational Purposes)
🏗 Pavement Condition Monitoring and Maintenance Prediction
📘 Scenario
You are a data analyst for a city engineering office tasked with identifying which road segments require urgent maintenance. The office has collected inspection data on various roads, including surface conditions, traffic volume, and environmental factors.
Your goal is to analyze this data and build a binary classification model to predict whether a given road segment needs maintenance, based on pavement and environmental indicators.
🔍 Target Variable: Needs_Maintenance
This binary label indicates whether the road segment requires immediate maintenance, defined by the following rule:
Needs_Maintenance = 1
Needs_Maintenance = 0 otherwise
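A minimal scikit-learn sketch for the binary classification task described above (the CSV file name pavement_dataset.csv is an assumption; the target column Needs_Maintenance is taken from the description):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Assumed file name -- replace with the CSV found inside archive.zip.
df = pd.read_csv("pavement_dataset.csv")
X = pd.get_dummies(df.drop(columns=["Needs_Maintenance"]))  # one-hot encode categorical indicators
y = df["Needs_Maintenance"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```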
archive.zip
19.9 MB
Pavement Dataset
🟢 Name Of Dataset: BAH (Behavioural Ambivalence/Hesitancy)
🟢 Description Of Dataset:
Recognizing complex emotions linked to ambivalence and hesitancy (A/H) can play a critical role in the personalization and effectiveness of digital behaviour change interventions. These subtle and conflicting emotions are manifested by a discord between multiple modalities, such as facial and vocal expressions and body language. Although experts can be trained to identify A/H, integrating them into digital interventions is costly and less effective. Automatic learning systems provide a cost-effective alternative that can adapt to individual users and operate seamlessly within real-time, resource-limited environments. However, no datasets were previously available for designing ML models to recognize A/H.

This paper introduces the first Behavioural Ambivalence/Hesitancy (BAH) dataset, collected for subject-based multimodal recognition of A/H in videos. It contains videos from 224 participants captured across 9 provinces in Canada, spanning different ages and ethnicities. Participants were recruited through a web platform to answer 7 questions, some of which were designed to elicit A/H, while recording themselves via webcam and microphone. BAH amounts to 1,118 videos with a total duration of 8.26 hours, of which 1.5 hours contain A/H. A behavioural team annotated timestamp segments indicating where A/H occurs and provided frame- and video-level annotations with the A/H cues (a minimal sketch of expanding such segments into per-frame labels follows this post). Video transcripts with their timestamps are also included, along with cropped and aligned faces for each frame and a variety of participant metadata. Additionally, the paper provides preliminary benchmarking results for baseline models on BAH at frame- and video-level recognition in mono- and multi-modal setups, as well as results for zero-shot prediction and for personalization using unsupervised domain adaptation. The limited performance of the baseline models highlights the challenges of recognizing A/H in real-world videos. The data, code, and pretrained weights are available.
🟢 Official Homepage: https://github.com/sbelharbi/bah-dataset
🟢 Number of articles that used this dataset: 1
🟢 Dataset Loaders:
Not found
🟢 Articles related to the dataset:
📝 BAH Dataset for Ambivalence/Hesitancy Recognition in Videos for Behavioural Change
==================================
🔴 For more data science resources:
✓ https://t.me/DataScienceT
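Since the release provides timestamp segments alongside frame-level labels, a common preprocessing step is to expand each annotated (start, end) segment into per-frame binary labels. The sketch below is a generic illustration of that step; the segment format, clip duration, and frame rate are assumptions, since the actual annotation file layout is documented in the release rather than here.

```python
import numpy as np

def frame_labels(segments, duration_s, fps=25.0):
    """Expand annotated (start_s, end_s) A/H segments into per-frame 0/1 labels.

    Assumptions: segments are (start, end) pairs in seconds and fps matches
    the video; read both from the actual BAH annotation files.
    """
    n_frames = int(round(duration_s * fps))
    labels = np.zeros(n_frames, dtype=np.int64)
    for start_s, end_s in segments:
        lo = max(int(np.floor(start_s * fps)), 0)
        hi = min(int(np.ceil(end_s * fps)), n_frames)
        labels[lo:hi] = 1
    return labels

# Example: a 10-second clip with one A/H segment between 2.5 s and 4.0 s.
print(frame_labels([(2.5, 4.0)], duration_s=10.0).sum())  # 38 frames labelled A/H
```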
🟢 Name Of Dataset: ITDD (Industrial Textile Defect Detection)
🟢 Description Of Dataset:
The Industrial Textile Defect Detection (ITDD) dataset includes 1,885 industrial textile images grouped into 4 categories: cotton fabric, dyed fabric, hemp fabric, and plaid fabric. The images were collected from the industrial production sites of WEIQIAO Textile. ITDD is an upgraded version of WFDD that reorganizes three of the original classes and adds one new class.
🟢 Official Homepage: https://github.com/cqylunlun/CRAS?tab=readme-ov-file#dataset-release
🟢 Number of articles that used this dataset: 1
🟢 Dataset Loaders:
Not found
🟢 Articles related to the dataset:
📝 Center-aware Residual Anomaly Synthesis for Multi-class Industrial Anomaly Detection
==================================
🔴 For more data science resources:
✓ https://t.me/DataScienceT