Article 28: Advanced Boosting - XGBoost, LightGBM, and CatBoost
In this article, we look at the three most popular boosting libraries. They all use the gradient boosting framework but improve it with clever engineering and math.
1. XGBoost (Extreme Gradient Boosting)
XGBoost is the most famous of the three. It is called "Extreme" because it is engineered for speed and performance.
The key ideas:
• Regularization (L1 & L2) - Unlike basic GBM, XGBoost adds L1 and L2 penalty terms to its objective to penalize complex models. This helps prevent overfitting.
• Second-Order Derivatives - It uses a second-order Taylor expansion of the loss function, so each split is evaluated more accurately. This makes optimization much faster than standard first-order methods.
• Pruning - It uses a depth-first approach: it grows the tree to its maximum depth and then prunes away branches that do not add enough gain.
• Parallel Processing - It uses the computer's hardware efficiently (for example, multi-threaded split finding) to build trees faster.
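To make the second-order idea concrete, here is a tiny hand-rolled sketch (not XGBoost's actual code) of the two formulas XGBoost derives from the Taylor expansion: the optimal leaf weight and the gain of a split, both with an L2 penalty `lam`.

```python
# Sketch of XGBoost's second-order leaf math (simplified; the real library
# also handles L1 regularization, min_child_weight, missing values, etc.).

def leaf_weight(grads, hess, lam=1.0):
    """Optimal leaf weight w* = -G / (H + lambda), where G and H are
    the sums of first and second derivatives of the loss in the leaf."""
    G, H = sum(grads), sum(hess)
    return -G / (H + lam)

def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """Gain of a split = 0.5 * (score(L) + score(R) - score(parent)) - gamma."""
    def score(G, H):
        return G * G / (H + lam)
    GL, HL = sum(g_left), sum(h_left)
    GR, HR = sum(g_right), sum(h_right)
    return 0.5 * (score(GL, HL) + score(GR, HR) - score(GL + GR, HL + HR)) - gamma

# For squared error, g_i = prediction - target and h_i = 1 for every row.
grads = [0.5, -1.2, 0.3, -0.8]
hess = [1.0, 1.0, 1.0, 1.0]
print(leaf_weight(grads, hess, lam=1.0))  # a larger lam shrinks the weight
```

The `gamma` term is the price of adding a leaf: if the gain of a split is below `gamma`, pruning removes it.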
2. LightGBM (Light Gradient Boosting Machine)
LightGBM was created by Microsoft. It is designed to use less memory and is very fast on huge datasets.
How it works:
• GOSS (Gradient-based One-Side Sampling) - It keeps all data points with large gradients (large errors) and randomly samples only a small fraction of the points with small gradients. This reduces the amount of data it needs to process.
• Leaf-Wise Growth - Standard implementations grow trees level-wise (layer by layer). LightGBM grows leaf-wise: it picks the leaf whose split reduces the loss the most and splits it. This often gives higher accuracy but can overfit if not tuned carefully.
• EFB (Exclusive Feature Bundling) - It bundles mutually exclusive features into one to reduce the dimensionality of the data with almost no loss of information.
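A rough sketch of the GOSS idea in plain Python (the function name and rates are illustrative, not LightGBM's API): keep the rows with the biggest gradients, sample a few of the rest, and up-weight the sampled rows so the gradient sums stay unbiased.

```python
import random

def goss_sample(grads, top_rate=0.2, other_rate=0.1, seed=0):
    """GOSS sketch (hypothetical helper): keep the top_rate fraction of
    rows by |gradient|, sample other_rate of the remaining rows, and
    up-weight the sampled rows by (1 - top_rate) / other_rate."""
    rng = random.Random(seed)
    order = sorted(range(len(grads)), key=lambda i: abs(grads[i]), reverse=True)
    n_top = int(len(grads) * top_rate)
    top, rest = order[:n_top], order[n_top:]
    n_other = int(len(grads) * other_rate)
    sampled = rng.sample(rest, n_other)
    weight = (1 - top_rate) / other_rate  # compensates for the rows we dropped
    return [(i, 1.0) for i in top] + [(i, weight) for i in sampled]

grads = [0.05, -2.0, 0.1, 1.5, -0.02, 0.3, -0.9, 0.01, 0.4, -0.07]
subset = goss_sample(grads)  # 2 large-gradient rows + 1 re-weighted small one
```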
3. CatBoost (Categorical Boosting)
CatBoost was created by Yandex. It is the best choice when your data has many categorical features (like "Country" or "Color").
What makes it unique?
• Native Categorical Support - You do not need to one-hot encode manually. CatBoost handles categories internally using ordered target statistics, part of its Ordered Boosting scheme.
• Symmetric Trees - It builds perfectly balanced (oblivious) trees. This makes the model very fast at prediction time (inference).
• Overfitting Protection - Ordered Boosting prevents target leakage, which keeps the model stable even on small datasets.
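To see how CatBoost avoids target leakage, here is a simplified sketch of ordered target statistics: each row's category is encoded using only the rows that came before it, so a row's own target never leaks into its encoding. (This is a rough illustration, not CatBoost's exact algorithm.)

```python
def ordered_target_stats(categories, targets, prior=0.5):
    """Encode each category as a smoothed mean of EARLIER targets only."""
    counts, sums, encoded = {}, {}, []
    for cat, y in zip(categories, targets):
        n = counts.get(cat, 0)
        s = sums.get(cat, 0.0)
        encoded.append((s + prior) / (n + 1))  # uses only preceding rows
        counts[cat] = n + 1                    # now add the current row
        sums[cat] = s + y
    return encoded

cats = ["red", "blue", "red", "red", "blue"]
ys = [1, 0, 1, 0, 1]
print(ordered_target_stats(cats, ys))
```

Note how the first "red" row gets only the prior: at that point nothing is known about "red", exactly as if the rows arrived one at a time.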
Summary
Advanced boosting libraries take gradient boosting to the next level. XGBoost is the great all-rounder with strong mathematics. LightGBM is the fastest on massive datasets. CatBoost is the go-to tool for categorical data. In the next article (Article 29), we will enter Phase 8: Reinforcement Learning (RL), where we learn how agents learn through rewards and penalties!
✍️ @TheInfinityAI
Forwarded from Computer Science and Programming
6 Components of Context Engineering
Context engineering is the practice of optimizing how information flows to AI models, comprising six core components: prompting techniques (few-shot, chain-of-thought), query augmentation (rewriting, expansion, decomposition), long-term memory (vector/graph databases for episodic, semantic, and procedural memory), short-term memory (conversation history management), knowledge base retrieval (RAG pipelines with pre-retrieval, retrieval, and augmentation layers), and tools/agents (single and multi-agent architectures, MCPs). While model selection and prompts contribute only 25% to output quality, the remaining 75% comes from properly engineering these context components to deliver the right information at the right time in the right format.
Article 29: Reinforcement Learning Fundamentals - The Agent's Journey
Reinforcement Learning is like training a dog. If the dog does a good thing, we give it a treat (Reward). If the dog does something bad, we do not give a treat (Penalty). Over time, the dog learns to do the things that get the most treats.
1. The Key Players in Reinforcement Learning (RL)
To understand RL, we must know these five main terms:
• The Agent - This is the AI or the learner that makes decisions.
• The Environment - This is the world where the agent lives and acts. For example, in a video game, the "game world" is the environment.
• State - This is the current situation of the agent. It is like a snapshot of the environment at a specific time.
• Action - This is what the agent chooses to do (like move left, jump, stay still).
• Reward - This is the feedback from the environment: a positive reward for a good action and a negative reward (penalty) for a bad action.
2. The Reinforcement Learning Interaction Loop
The agent and environment talk to each other in a continuous loop.
• The agent observes the current State.
• The agent takes an Action.
• The environment changes to a New State.
• The environment gives a Reward to the agent.
• The agent uses the reward to learn if the action was good or bad.
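The loop above can be sketched in a few lines of Python. The environment here is a toy of my own invention (guess a hidden number), purely to show the observe-act-reward cycle:

```python
import random

def environment_step(state, action, goal=7):
    """Toy environment: the new state is the chosen number, and the
    reward is a 'treat' only when the agent hits the hidden goal."""
    next_state = action
    reward = 1 if action == goal else -1
    return next_state, reward

def random_agent(state, actions=range(10)):
    """No learning yet: this agent just acts, to drive the loop."""
    return random.choice(list(actions))

state = 0
total_reward = 0
for _ in range(5):                       # observe -> act -> new state -> reward
    action = random_agent(state)
    state, reward = environment_step(state, action)
    total_reward += reward               # the agent would learn from this
```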
3. Markov Decision Process (MDP)
The mathematical foundation of Reinforcement Learning is the Markov Decision Process (MDP). MDP assumes that the future depends only on the current state and action. It does not matter how the agent arrived at the current state. We call this the Markov Property.
The math components:
Policy (π) - This is the agent's strategy. It is a map that tells the agent which action to take in each state.
Value Function (V) - This is the total reward the agent expects to get in the long term, starting from a specific state.
Discount Factor (γ) - This is a number between 0 and 1. It tells the agent how much to care about future rewards compared to immediate rewards.
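A quick sketch of how the discount factor works: the return is the sum of rewards, each multiplied by γ raised to how many steps away it is.

```python
def discounted_return(rewards, gamma=0.9):
    """Total discounted reward G = r0 + gamma*r1 + gamma^2*r2 + ..."""
    g = 0.0
    for t, r in enumerate(rewards):
        g += (gamma ** t) * r
    return g

# gamma = 0  -> only the immediate reward matters;
# gamma near 1 -> future rewards matter almost as much as the present.
print(discounted_return([1, 1, 1], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```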
4. Exploration vs. Exploitation
This is the biggest challenge in RL. The agent must balance two things:
Exploitation - The agent uses what it already knows to get a reward. (Example: Going to your favourite restaurant because you know the food is good).
Exploration - The agent tries something new to see if it gives a better reward. (Example: Trying a new restaurant to see if it is better than your favourite one).
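The standard way to balance the two is the epsilon-greedy rule: explore with a small probability ε, exploit otherwise. A minimal sketch:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon pick a random action (exploration),
    otherwise pick the best-known action (exploitation)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

q = [0.2, 0.8, 0.5]                    # current estimates for three restaurants
action = epsilon_greedy(q, epsilon=0.1)  # usually restaurant 1, sometimes a new one
```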
5. Why is Reinforcement Learning important?
Reinforcement Learning is the technology behind:
• Self-driving cars (learning how to drive safely).
• Game AI (like AlphaGo, which beat the world champion).
• Robotics (teaching robots to walk or pick up items).
Summary
Reinforcement Learning is about learning from interaction. An Agent takes Actions in an Environment to maximize its total Reward. The MDP provides the mathematical framework for this process. The agent must always balance Exploration (trying new things) and Exploitation (using known facts).
In the next article (Article 30), we will discuss Q-Learning and Deep Q-Networks (DQN). Get ready to learn how agents use a cheat sheet to make decisions!
✍️ @TheInfinityAI
Forwarded from Data Science & Machine Learning
Data Science Roadmap
Start Here
• What is Data Science & Why It Matters?
• Roles (Data Analyst, Data Scientist, ML Engineer)
• Setting Up Environment (Python, Jupyter Notebook)
Python for Data Science
• Python Basics (Variables, Loops, Functions)
• NumPy for Numerical Computing
• Pandas for Data Analysis
Data Cleaning & Preparation
• Handling Missing Values
• Data Transformation
• Feature Engineering
Exploratory Data Analysis (EDA)
• Descriptive Statistics
• Data Visualization (Matplotlib, Seaborn)
• Finding Patterns & Insights
Statistics & Probability
• Mean, Median, Mode, Variance
• Probability Basics
• Hypothesis Testing
Machine Learning Basics
• Supervised Learning (Regression, Classification)
• Unsupervised Learning (Clustering)
• Model Evaluation (Accuracy, Precision, Recall)
Machine Learning Algorithms
• Linear Regression
• Decision Trees & Random Forest
• K-Means Clustering
Model Building & Deployment
• Train-Test Split
• Cross Validation
• Deploy Models (Flask / FastAPI)
Big Data & Tools
• SQL for Data Handling
• Introduction to Big Data (Hadoop, Spark)
• Version Control (Git & GitHub)
Practice Projects
• House Price Prediction
• Customer Segmentation
• Sales Forecasting Model
Move to Next Level
• Deep Learning (Neural Networks, TensorFlow, PyTorch)
• NLP (Text Analysis, Chatbots)
• MLOps & Model Optimization
Data Science Resources: https://whatsapp.com/channel/0029VaxbzNFCxoAmYgiGTL3Z
React "❤️" for more!
Forwarded from Computer Science and Programming
Video.js v10 Beta: Hello, World (again)
Video.js v10.0.0 beta is a ground-up rewrite merging Video.js, Plyr, Vidstack, and Media Chrome into a single modern framework. Key highlights include an 88% reduction in default bundle size (66% even without ABR), a new composable streaming engine called SPF that enables much smaller adaptive bitrate bundles, first-class React and TypeScript support, unstyled UI primitives inspired by Radix/Base UI, and a shadcn-style skin ejection system. The architecture is fully composable: unused features are tree-shaken out. Three presets ship with the beta: video, audio, and background video. New skins were designed by Plyr's creator Sam Potts. GA is targeted for mid-2026, with migration guides for Video.js v8, Plyr, Vidstack, and Media Chrome planned before then.
Article 30: Q-Learning and DQN - The Agent's Brain
In a simple environment, an agent can remember the best action for every situation. But in a complex environment, the agent needs a brain, or at least a cheat sheet, to help it choose.
1. Q-Learning (The Cheat Sheet)
Q-Learning is a Value-Based algorithm. The Q stands for Quality. It measures how good an action is for a specific state.
The Q-Table:
Imagine a table that lists every possible state and every possible action.
Rows - These represent the States.
Columns - These represent the Actions.
Cells - These store the Q-Value.
When the agent is in a state, it looks at the Q-Table. It picks the action with the highest Q-Value.
2. The Math Background (The Bellman Equation)
Now let's see how the agent fills in the Q-Table. It uses the Bellman Equation. Every time the agent takes an action and gets a reward, it updates the table with this logic: the new Q-value is the old value plus a small update based on the immediate reward and the best future reward.
Q(s, a) ← Q(s, a) + α [R + γ max_a' Q(s', a') - Q(s, a)]
α (Learning Rate) tells the agent how much to trust new information, and γ (Discount Factor) tells the agent how much to value future rewards.
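Here is the Bellman update as runnable Python, using a tiny Q-table (the states, actions, and reward are chosen only for illustration):

```python
def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One Bellman update:
    Q(s,a) <- Q(s,a) + alpha * (R + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next])           # best future value from the next state
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

# Tiny Q-table: 2 states x 2 actions, all zeros at the start.
Q = [[0.0, 0.0], [0.0, 0.0]]
q_update(Q, s=0, a=1, reward=1.0, s_next=1)
print(Q[0][1])  # the reward has started to flow into the table
```

Repeating this update over many episodes propagates rewards backwards through the table, which is exactly how the cheat sheet gets filled in.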
3. The Problem (The Curse of Dimensionality)
A Q-Table works well for simple games like Tic-Tac-Toe. But what about a video game with millions of pixels? If a game has 1 million possible states, the table becomes too big for the computer's memory. This is why we need a smarter way to store Q-Values.
4. Deep Q-Networks (DQN)
In DQN, we throw away the big table and replace it with a neural network.
How it works:
The agent gives the current State (like an image) as input to the Neural Network.
The Neural Network does not give a single answer. It predicts the Q-Values for all possible actions at once.
The agent picks the action with the highest predicted Q-Value.
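A sketch of that flow, with `q_network` standing in for the trained network (the states and values here are made up for illustration; a real DQN computes them with learned weights):

```python
def q_network(state):
    """Stand-in for a trained network: maps a state to one Q-value
    per action. A real DQN would run a forward pass here."""
    table = {"start": [0.1, 0.9, 0.3], "cliff": [-1.0, 0.2, 0.0]}
    return table[state]

def choose_action(state):
    q_values = q_network(state)  # Q-values for ALL actions in one pass
    return max(range(len(q_values)), key=lambda a: q_values[a])

print(choose_action("start"))  # picks the action with the highest Q-value
```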
5. Making DQN Stable
Learning with a neural network in RL is often unstable. To fix this, DQN uses two techniques.
Experience Replay - The agent stores its past experiences (State, Action, Reward, Next State) in a memory buffer. Instead of learning only from the current step, it trains on random samples from this memory. This prevents the agent from forgetting old lessons.
Target Network - DQN uses two identical neural networks. One network makes the predictions, and the second (target) network calculates the training goal. We update the target network only once in a while. This keeps the learning steady.
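A minimal experience replay buffer can be sketched with a deque and `random.sample`:

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience replay sketch: store (state, action, reward, next_state)
    tuples and train on random minibatches instead of the latest step."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences fall out

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for step in range(50):
    buf.add(step, step % 4, 1.0, step + 1)
batch = buf.sample(8)   # a random mix of old and new experiences
```

Sampling randomly breaks the correlation between consecutive steps, which is a big part of why DQN training is stable.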
Summary
Q-Learning uses a Q-Table to store the value of actions in different states. When the situation is too complex for a table, we use Deep Q-Networks (DQN). DQN uses a neural network to predict values and uses Experience Replay to keep the learning stable.
In the next article (Article 31), we discuss Policy Gradients and Actor-Critic methods, where the agent learns a strategy directly instead of just looking at values!
✍️ @TheInfinityAI
What is MCP?
The Model Context Protocol (MCP) is an open standard developed by Anthropic, and it is transforming how AI interacts with the world. If you have ever felt that AI models are isolated, limited by what they can see and do, MCP is designed to fix exactly that. Before MCP, connecting an AI model to a new tool or database was difficult: developers had to write unique custom code for every single integration. MCP moves the industry from siloed agents to a standardized system where everything can talk to everything else.
The Architecture of MCP
MCP works through a simple system that manages how information moves:
Host - The environment where AI lives, such as Claude Desktop or VS Code. It serves as the entry point for the user.
Client - Acts as the connector. It manages the connection and sends specific requests to the servers.
Server - The provider that exposes tools and data to the AI. Developers build these servers to give the AI specific powers, like searching the web or reading a database.
This communication happens using JSON-RPC 2.0. It is a standard message format that ensures everyone is speaking the same language.
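For illustration, here is what such a JSON-RPC 2.0 message looks like, built in Python. The `web_search` tool name and its arguments are invented for this example; the exact message fields are defined by the MCP specification, so treat this as a sketch of the shape, not the authoritative format.

```python
import json

# A hypothetical MCP-style tool call expressed as a JSON-RPC 2.0 request.
request = {
    "jsonrpc": "2.0",          # protocol version, always "2.0"
    "id": 1,                   # lets the client match the response
    "method": "tools/call",    # what the client wants the server to do
    "params": {
        "name": "web_search",  # invented tool name for this example
        "arguments": {"query": "MCP specification"},
    },
}
print(json.dumps(request, indent=2))
```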
The Three Core Building Blocks
An MCP server offers three main primitives, or functions, to an AI agent:
Tools - Executable functions that allow the AI to take action. For example, a tool might allow the AI to search the web or run a piece of code.
Resources - Read-only data sources that provide context. This includes documents, database records, or file contents that the AI can read but not change.
Prompts - Reusable templates that help the AI complete tasks consistently. They guide the AI in handling specific requests.
Real-World Applications
MCP is not just a concept; it is already being used in several ways:
AI Coding Assistants - Tools like Claude Code and Cursor.
Data Analysis - AI can connect to live databases to run SQL queries and analyze information securely.
Workflow Automation - AI agents can read your calendar, manage tasks and send emails through MCP servers.
RAG Systems - Agents can retrieve documents from PDFs and knowledge bases to provide more accurate answers.
By standardizing how AI interacts with the world, MCP makes AI more powerful, more secure and much easier for developers to build.
✍️ @TheInfinityAI
In today's fast-changing online media world, yt-dlp remains the leading open-source tool for downloading video and audio. Even as streaming platforms add stronger protections, this command-line tool is still trusted by developers.
The most significant evolution is how yt-dlp handles platform security. To bypass modern bot-detection and encrypted signatures, yt-dlp now supports external JavaScript runtimes. While it remains a Python-based tool, it uses engines like Deno or Node.js to solve complex server-side puzzles in real-time. This helps it access high-quality content, including 4K and 8K videos, which many basic tools cannot download.
yt-dlp is not just a downloader. It helps users keep control over their content. Since online media can be removed or locked behind paywalls at any time, yt-dlp allows users to save and manage their own copies. For users who are comfortable with terminal tools, it remains a powerful tool between online content and local storage.
✍️ @TheInfinityAI
Article 31: Policy Gradients and Actor-Critic - The Direct Strategy
Sometimes, calculating the value of every action is too hard. Instead, the agent learns the policy directly. We call this policy-based learning.
1. Policy Gradients (Learning the Probabilities)
In Q-Learning, we pick the action with the highest number. In Policy Gradients, a neural network outputs a probability distribution over actions.
The workflow:
The network says there is a 70% chance that jumping is good and a 30% chance that running is good.
The agent picks an action based on these percentages.
If the action leads to a high reward, the network increases the probability of that action in the future.
If the action leads to a penalty, it decreases the probability.
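The workflow above can be sketched with a softmax policy and a REINFORCE-style update. This is a deliberately simplified, single-step version (a real implementation updates network weights via backpropagation, not raw logits):

```python
import math

def softmax(logits):
    """Turn raw preferences (logits) into action probabilities."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def policy_update(logits, action, reward, lr=0.5):
    """Nudge the chosen action's preference up or down in proportion
    to the reward signal (REINFORCE-style sketch)."""
    probs = softmax(logits)
    for a in range(len(logits)):
        grad = (1.0 if a == action else 0.0) - probs[a]  # d log pi / d logit
        logits[a] += lr * reward * grad
    return logits

logits = [0.0, 0.0]                          # start: 50% jump, 50% run
policy_update(logits, action=0, reward=1.0)  # jumping paid off
print(softmax(logits))                       # jumping is now more likely
```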
2. The Actor-Critic Architecture
Policy Gradients can be noisy and slow. To fix this, we combine Value-Based and Policy-Based methods. This is the Actor-Critic model.
Think of it as a movie set:
The Actor - This is a neural network that learns the Policy. It decides which Action to take.
The Critic - This is a second neural network that learns the Value. It watches the Actor and critiques the action: it tells the Actor whether the action was better or worse than expected.
3. Advantage Function (A)
The critic uses a special math tool called the advantage function to help the Actor.
A(s, a) = Q(s, a) - V(s)
V(s) - This is the average reward we expect from this state.
Q(s, a) - This is the actual reward we got from a specific action.
Result - If A is positive, the action was better than average. The actor learns to do it more. If A is negative, the actor learns to do it less.
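In code, the advantage check is just a subtraction (the numbers here are invented for illustration):

```python
def advantage(q_value, v_value):
    """A(s, a) = Q(s, a) - V(s): how much better this action was
    than the average action from the same state."""
    return q_value - v_value

# The Critic expected 5 points from this state on average (V),
# but the chosen action actually earned 8 (Q): a good surprise.
adv = advantage(q_value=8.0, v_value=5.0)
print(adv)  # positive, so the Actor should do this action more often
```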
Summary
Policy Gradients teach the agent a strategy (probabilities) directly. Actor-Critic models use two brains; one to act (Actor) and one to give feedback (Critic). The Advantage Function helps the agent understand if an action is better than the average choice.
In the next article (Article 32), we enter Phase 9: Deep Learning, where we study the secrets of Neural Networks and Backpropagation!
Which library is used for basic plotting in Python?
Anonymous Quiz
• NumPy - 25%
• Pandas - 27%
• Matplotlib - 42%
• TensorFlow - 6%
Data Breach Alert ⚠️
Sri Lanka's Ministry of Public Administration, Home Affairs, Provincial Councils and Local Government website (pubad.gov.lk), or the systems behind it, has reportedly been attacked, and the personal data of several thousand government employees is said to have leaked onto the internet. Dark web intelligence sources report that a threat actor named wh6am is attempting to sell this data on dark web forums.
The data reported to be in the leak includes:
• Full names (first and last, initials)
• Email addresses (personal and work)
• Phone numbers (mobile, office, home)
• Physical addresses (home and work)
• National ID numbers (NIC)
• Job titles and designations
• Employer names and department details
• Usernames and hashed passwords
• User registration dates and last activity timestamps
• Internal government circulars and service minutes (PDF files)
That is a lot of data. All of this is still unconfirmed, but several sources report the same thing. The data is reportedly being offered on the dark web for around 200 US dollars. Government employees should be alert to possible attacks built on this data (targeted phishing, identity theft, social engineering).
Sources
• Source 01
• Source 02
• Source 03
✍️ @TheInfinityAI
Which activation function is most commonly used in hidden layers of deep neural networks?
Anonymous Quiz
• Sigmoid - 22%
• Tanh - 20%
• ReLU - 30%
• Softmax - 28%