Epython Lab
Welcome to Epython Lab, where you can find learning resources, one-on-one training on machine learning, business analytics, and Python, and solutions to business problems.

Forwarded from Epython Lab
As a data scientist, you spend 70-80 percent of your time on data cleansing. If you are given data that contains special characters you need to remove, what methods do you use to remove them?
https://youtu.be/qL7lX5lCfgw
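One common approach is a regex whitelist: keep only the characters you want and drop everything else. A minimal sketch (the sample string is illustrative, not from the video):

```python
import re

def remove_special_chars(text: str) -> str:
    """Keep only ASCII letters, digits and whitespace; drop everything else."""
    return re.sub(r"[^A-Za-z0-9\s]", "", text)

print(remove_special_chars("Price: $1,200.50 (discounted!)"))
# → Price 120050 discounted
```

If the special characters act as separators, replace them with a space instead of deleting them, then collapse repeated whitespace.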
👍3
Day 1: Introduction to the Challenge
📢 Day 1/100: The Journey Begins!
I'm embarking on a 100-day challenge to share insights, progress, and lessons learned as I build a data-driven credit scoring model tailored for Buy-Now-Pay-Later (BNPL) services in Ethiopia's fintech space. 🚀

Why this topic? BNPL is reshaping financial inclusion, and robust credit scoring is the backbone of sustainable lending. Follow along as I explore data, algorithms, and strategies to make this happen!

#Fintech #DataScience #CreditScoring #BNPL #FinancialInclusion #Ethiopia #100DaysChallenge
3👍3
📢Day 9/100: Feature Engineering Deep Dive
Feature engineering is where raw data turns into actionable insights! 🛠
In my credit scoring project, key features include:
1️⃣ Recency, Frequency, Monetary (RFM): Critical for understanding customer behavior.
2️⃣ Fraud indicators: High-value transactions flagged based on outlier analysis.
3️⃣ Categorical encodings: Using Weight of Evidence (WoE) to transform qualitative data like product categories.
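The RFM and WoE features above can be sketched in pandas. The toy data, column names, and snapshot date below are assumptions for illustration, not the project's real schema:

```python
import numpy as np
import pandas as pd

# Toy transaction data (hypothetical columns).
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "amount":      [100, 250, 40, 60, 900],
    "date": pd.to_datetime(["2024-01-05", "2024-03-01", "2024-02-10",
                            "2024-02-20", "2024-03-10"]),
})
snapshot = pd.Timestamp("2024-04-01")  # reference date for recency

# RFM: days since last purchase, number of purchases, total spend.
rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)

def woe(df: pd.DataFrame, feature: str, target: str) -> pd.Series:
    """Weight of Evidence per category: ln(% of goods / % of bads).

    In practice, add smoothing so categories with zero goods or bads
    don't produce infinite values.
    """
    g = df.groupby(feature)[target].agg(["sum", "count"])
    bad = g["sum"]                # target = 1 (e.g. default)
    good = g["count"] - g["sum"]  # target = 0
    return np.log((good / good.sum()) / (bad / bad.sum()))

labels = pd.DataFrame({"category": ["A", "A", "A", "B", "B"],
                       "default":  [0, 0, 1, 0, 1]})
print(woe(labels, "category", "default"))
```

The WoE-encoded categories then feed into the scoring model as a single numeric column per feature.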
💡 Takeaway: Good features are the foundation of any successful model. They ensure the patterns we observe are meaningful and actionable.
💡 Discussion point: What’s your go-to method for handling highly skewed data in financial datasets?
#FeatureEngineering #DataScience #CreditScoring #FintechEthiopia
👍3
📢Day 17/100: From Data to Insights

My journey started with collecting and cleaning data from Telegram channels, a hub for Ethiopian e-commerce.

Key steps:
1️⃣ Scraping Telegram messages to capture product details.
2️⃣ Preprocessing Amharic text to handle non-text characters and normalize content.
3️⃣ Tokenizing text for labeling.

💡 Takeaway: High-quality data preparation is the backbone of effective machine learning models.

#DataScience #AmharicNLP #FintechEthiopia
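Step 2️⃣ above can be sketched with a character whitelist over the Ethiopic Unicode block plus a small normalization map. The homophone map below is an illustrative subset only, not the project's actual normalization table:

```python
import re

# Ethiopic Unicode block (U+1200-U+137F).
ETHIOPIC = r"\u1200-\u137F"

# Illustrative subset of Amharic homophone normalization: characters
# pronounced the same are collapsed to one canonical form.
HOMOPHONES = {"ሃ": "ሀ", "ኀ": "ሀ", "ኃ": "ሀ", "ሐ": "ሀ", "ሓ": "ሀ"}

def normalize_amharic(text: str) -> str:
    for src, dst in HOMOPHONES.items():
        text = text.replace(src, dst)
    # Keep Ethiopic letters, digits and whitespace; replace the rest
    # (emoji, punctuation, Latin noise) with a space.
    text = re.sub(rf"[^{ETHIOPIC}0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize_amharic("ሰላም! ዋጋ 500 ብር 🔥"))
# → ሰላም ዋጋ 500 ብር
```

Normalized text like this can then go to a tokenizer for labeling.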