🤖🧠 AI Projects: A Comprehensive Showcase of Machine Learning, Deep Learning and Generative AI
🗓️ 27 Oct 2025
📰 AI News & Trends
Artificial Intelligence (AI) is transforming industries across the globe, driving innovation through automation, data-driven insights and intelligent decision-making. Whether it's predicting house prices, detecting diseases or building conversational chatbots, AI is at the core of modern digital solutions. The AI Project Gallery by Hema Kalyan Murapaka is an exceptional GitHub repository that curates a wide ...
#AI #MachineLearning #DeepLearning #GenerativeAI #ArtificialIntelligence #GitHub
In Python, image processing unlocks powerful capabilities for computer vision, data augmentation, and automation. Master these techniques to excel in ML engineering interviews and real-world applications! 🖼️
More details: https://hackmd.io/@husseinsheikho/imageprocessing
#Python #ImageProcessing #ComputerVision #Pillow #OpenCV #MachineLearning #CodingInterview #DataScience #Programming #TechJobs #DeveloperTips #AI #DeepLearning #CloudComputing #Docker #BackendDevelopment #SoftwareEngineering #CareerGrowth #TechTips #Python3
# PIL/Pillow Basics - The essential image library
from PIL import Image
# Open and display image
img = Image.open("input.jpg")
img.show()
# Convert formats
img.save("output.png")
img.convert("L").save("grayscale.jpg") # RGB to grayscale
# Basic transformations
img.rotate(90).save("rotated.jpg")
img.resize((300, 300)).save("resized.jpg")
img.transpose(Image.FLIP_LEFT_RIGHT).save("mirrored.jpg")
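Beyond the basics above, cropping, filtering, and thumbnailing come up constantly; a minimal sketch (using an in-memory image in place of "input.jpg" so it runs anywhere):

```python
from PIL import Image, ImageFilter

# Stand-in for Image.open("input.jpg") so the snippet is self-contained
img = Image.new("RGB", (400, 300), "white")

# Crop a region: box is (left, upper, right, lower)
cropped = img.crop((50, 50, 250, 200))

# Apply a Gaussian blur to the cropped region
blurred = cropped.filter(ImageFilter.GaussianBlur(radius=2))

# thumbnail() resizes in place and preserves aspect ratio
thumb = img.copy()
thumb.thumbnail((128, 128))
print(thumb.size)  # (128, 96): 400x300 scaled to fit inside 128x128
```

Note that `resize` forces the exact size you ask for, while `thumbnail` only shrinks and keeps proportions.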
🤖🧠 Free for 1 Year: ChatGPT Go's Big Move in India
🗓️ 28 Oct 2025
📰 AI News & Trends
On 28 October 2025, OpenAI announced that its mid-tier subscription plan, ChatGPT Go, will be available free for one full year in India starting from 4 November. (www.ndtv.com) What is ChatGPT Go? What's the deal? Why this matters. Things to check / caveats. What should users do? Broader implications. This move by OpenAI indicates ...
#ChatGPTGo #OpenAI #India #FreeAccess #ArtificialIntelligence #TechNews
Gemini will be joining us here on the channel, posting useful things for you 🚀
Are you ready?
💡 Building a Simple Convolutional Neural Network (CNN)
Constructing a basic Convolutional Neural Network (CNN) is a fundamental step in deep learning for image processing. Using TensorFlow's Keras API, we can define a network with convolutional, pooling, and dense layers to classify images. This example sets up a simple CNN to recognize handwritten digits from the MNIST dataset.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
import numpy as np
# 1. Load and preprocess the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Reshape images for CNN: (batch_size, height, width, channels)
# MNIST images are 28x28 grayscale, so channels = 1
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255
# 2. Define the CNN architecture
model = models.Sequential()
# First Convolutional Block
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
# Second Convolutional Block
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
# Flatten the 3D output to 1D for the Dense layers
model.add(layers.Flatten())
# Dense (fully connected) layers
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax')) # Output layer for 10 classes (digits 0-9)
# 3. Compile the model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Print a summary of the model layers
model.summary()
# 4. Train the model (uncomment to run training)
# print("\nTraining the model...")
# model.fit(train_images, train_labels, epochs=5, batch_size=64, validation_split=0.1)
# 5. Evaluate the model (uncomment to run evaluation)
# print("\nEvaluating the model...")
# test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
# print(f"Test accuracy: {test_acc:.4f}")
Code explanation: This script defines a simple CNN using Keras. It loads and normalizes MNIST images. The Sequential model adds Conv2D layers for feature extraction, MaxPooling2D for downsampling, a Flatten layer to transition to 1D, and Dense layers for classification. The model is then compiled with an optimizer, loss function, and metrics, and a summary of its architecture is printed. Training and evaluation steps are included as commented-out examples.
#Python #DeepLearning #CNN #Keras #TensorFlow
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
Matplotlib_cheatsheet.pdf
3.1 MB
Main features of Matplotlib:
#doc #cheatsheet #PythonTips
Matplotlib Cheatsheet (https://t.me/CodeProgrammer)
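The cheatsheet aside, the core Matplotlib workflow (figure, axes, plot, labels, legend, save) fits in a few lines; a minimal sketch using the non-interactive Agg backend:

```python
import matplotlib
matplotlib.use("Agg")  # render to file without needing a display
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, np.sin(x), label="sin(x)")
ax.plot(x, np.cos(x), linestyle="--", label="cos(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Trig functions")
ax.legend()
fig.savefig("trig.png", dpi=100)
print("saved trig.png")
```

The object-oriented style (`fig, ax = plt.subplots()`) shown here scales better to multi-panel figures than the implicit `plt.plot(...)` interface.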
Perfect for those who want to level up from print('Hello') to advanced projects.
1. 30-Days-Of-Python - a 30-day Python challenge covering the basics of the language.
2. Python Basics - simple and clear Python basics for beginners.
3. Learn Python - a topic-based guide with examples and code.
4. Python Guide - best practices, tools, and advanced topics.
5. Learn Python 3 - an easy-to-understand guide to Python 3 with practice.
6. Python Programming Exercises - 100+ Python exercises.
7. Coding Problems - algorithmic problems, perfect for interview prep.
8. Project-Based-Learning - learn Python through real projects.
9. Projects - ideas for practical projects and skill improvement.
10. 100-Days-Of-ML-Code - a step-by-step guide to Machine Learning in Python.
11. TheAlgorithms/Python - a huge collection of algorithms in Python.
12. Amazing-Python-Scripts - useful scripts from automation to advanced utilities.
13. Geekcomputers/Python - a collection of practical scripts: networking, files, automation.
14. Materials - code, exercises, and projects from Real Python.
15. Awesome Python - a top list of the best frameworks and libraries.
16. 30-Seconds-of-Python - short snippets for quick solutions.
17. Python Reference - life hacks, tutorials, and useful scripts.
#python #doc #github #soft
Don't forget to subscribe to the Premium channel; we are your guide to the future.
A huge warehouse of books and courses
https://t.me/+r_Tcx2c-oVU1OWNi
Telegram
Data Science Premium (Books & Courses)
Access to thousands of valuable resources, including essential books and courses:
Paid books
Paid courses from Coursera and Udemy
Paid projects
Forwarded from Python | Machine Learning | Coding | R
These channels are for Programmers, Coders, and Software Engineers.
0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages
https://t.me/addlist/8_rRW2scgfRhOTc0
https://t.me/Codeprogrammer
🤖🧠 Reflex: Build Full-Stack Web Apps in Pure Python - Fast, Flexible and Powerful
🗓️ 29 Oct 2025
📰 AI News & Trends
Building modern web applications has traditionally required mastering multiple languages and frameworks from JavaScript for the frontend to Python, Java or Node.js for the backend. For many developers, switching between different technologies can slow down productivity and increase complexity. Reflex eliminates that problem. It is an innovative open-source full-stack web framework that allows developers to ...
#Reflex #FullStack #WebDevelopment #Python #OpenSource #WebApps
🤖🧠 MLOps Basics: A Complete Guide to Building, Deploying and Monitoring Machine Learning Models
🗓️ 30 Oct 2025
📰 AI News & Trends
Machine Learning models are powerful but building them is only half the story. The true challenge lies in deploying, scaling and maintaining these models in production environments, a process that requires collaboration between data scientists, developers and operations teams. This is where MLOps (Machine Learning Operations) comes in. MLOps combines the principles of DevOps ...
#MLOps #MachineLearning #DevOps #ModelDeployment #DataScience #ProductionAI
🤖🧠 MiniMax-M2: The Open-Source Revolution Powering Coding and Agentic Intelligence
🗓️ 30 Oct 2025
📰 AI News & Trends
Artificial intelligence is evolving faster than ever, but not every innovation needs to be enormous to make an impact. MiniMax-M2, the latest release from MiniMax-AI, demonstrates that efficiency and power can coexist within a streamlined framework. MiniMax-M2 is an open-source Mixture of Experts (MoE) model designed for coding tasks, multi-agent collaboration and automation workflows. With ...
#MiniMaxM2 #OpenSource #MachineLearning #CodingAI #AgenticIntelligence #MixtureOfExperts
💡 NumPy Tip: Efficient Filtering with Boolean Masks
Avoid slow Python loops for filtering data. Instead, create a "mask" array of True/False values based on a condition. Applying this mask to your original array instantly selects only the elements where the mask is True, which is significantly faster.
import numpy as np
# Create an array of data
data = np.array([10, 55, 8, 92, 43, 77, 15])
# Create a boolean mask for values greater than 50
high_values_mask = data > 50
# Use the mask to select elements
filtered_data = data[high_values_mask]
print(filtered_data)
# Output: [55 92 77]
Code explanation: A NumPy array data is created. Then, a boolean array high_values_mask is generated, which is True for every element in data greater than 50. This mask is used as an index to efficiently extract and print only those matching elements from the original array.
#Python #NumPy #DataScience #CodingTips #Programming
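Masks can also be combined with element-wise boolean operators (note the required parentheses), and np.where gives a conditional select-or-replace in one call:

```python
import numpy as np

data = np.array([10, 55, 8, 92, 43, 77, 15])

# Combine conditions: & is AND, | is OR, ~ is NOT (parentheses are required)
mid_range = data[(data > 20) & (data < 80)]
print(mid_range)  # [55 43 77]

# np.where: keep values > 50, replace the rest with 0
capped = np.where(data > 50, data, 0)
print(capped)  # [ 0 55  0 92  0 77  0]
```

Use `&`/`|` rather than Python's `and`/`or`, which raise an error on arrays because an array has no single truth value.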
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
💡 Python F-Strings Cheatsheet
F-strings (formatted string literals) provide a concise and powerful way to embed expressions inside string literals for formatting. Just prefix the string with an f or F.
1. Basic Variable and Expression Embedding
name = "Alice"
quantity = 5
print(f"Hello, {name}. You have {quantity * 2} items in your cart.")
# Output: Hello, Alice. You have 10 items in your cart.
• Place variables or expressions directly inside curly braces {}. Python evaluates the expression and inserts the result into the string.
2. Number Formatting
Control the appearance of numbers, such as padding with zeros or setting decimal precision.
pi_value = 3.14159
order_id = 42
print(f"Pi: {pi_value:.2f}")
print(f"Order ID: {order_id:04d}")
# Output:
# Pi: 3.14
# Order ID: 0042
• :.2f formats the float to have exactly two decimal places.
• :04d formats the integer to be at least 4 digits long, padding with leading zeros if necessary.
3. Alignment and Padding
Align text within a specified width, which is useful for creating tables or neatly formatted output.
item = "Docs"
print(f"|{item:<10}|") # Left-aligned
print(f"|{item:^10}|") # Center-aligned
print(f"|{item:>10}|") # Right-aligned
# Output:
# |Docs |
# | Docs |
# | Docs|
• Use < for left, ^ for center, and > for right alignment, followed by the total width.
4. Date and Time Formatting
Directly format datetime objects within an f-string.
from datetime import datetime
now = datetime.now()
print(f"Current time: {now:%Y-%m-%d %H:%M}")
# Output: Current time: 2023-10-27 14:30
• Use a colon : followed by standard strftime formatting codes to display dates and times as you wish.
#Python #Programming #CodingTips #FStrings #PythonTips
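Two more tricks worth a mention: the = specifier (Python 3.8+) echoes the expression itself, and format specs can contain nested braces, letting width and precision come from variables:

```python
value = 42.12345
width = 10
precision = 3

# Self-documenting expressions (Python 3.8+): prints both name and value
print(f"{value=}")  # value=42.12345

# Nested format spec: width and precision are themselves variables
print(f"{value:{width}.{precision}f}")  # prints "    42.123" (padded to width 10)
```

The `=` form is especially handy for quick debugging, replacing the classic `print("value", value)` pattern.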
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
💡 Keras: Building Neural Networks Simply
Keras is a high-level deep learning API, now part of TensorFlow, designed for fast and easy experimentation. This guide covers the fundamental workflow: defining, compiling, training, and using a neural network model.
from tensorflow import keras
from tensorflow.keras import layers
# Define a Sequential model
model = keras.Sequential([
# Input layer with 64 neurons, expecting flat input data
layers.Dense(64, activation="relu", input_shape=(784,)),
# A hidden layer with 32 neurons
layers.Dense(32, activation="relu"),
# Output layer with 10 neurons for 10-class classification
layers.Dense(10, activation="softmax")
])
model.summary()
• Model Definition: keras.Sequential creates a simple, layer-by-layer model.
• layers.Dense is a standard fully-connected layer. The first layer must specify the input_shape.
• activation functions like "relu" introduce non-linearity, while "softmax" is used on the output layer for multi-class classification to produce probabilities.
# (Continuing from the previous step)
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
print("Model compiled successfully.")
• Compilation: .compile() configures the model for training.
• optimizer is the algorithm used to update the model's weights (e.g., 'adam' is a popular choice).
• loss is the function the model tries to minimize during training. sparse_categorical_crossentropy is common for integer-based classification labels.
• metrics are used to monitor the training and testing steps. Here, we track accuracy.
import numpy as np
# Create dummy training data
x_train = np.random.random((1000, 784))
y_train = np.random.randint(10, size=(1000,))
# Train the model
history = model.fit(
x_train,
y_train,
epochs=5,
batch_size=32,
verbose=0 # Hides the progress bar for a cleaner output
)
print(f"Training complete. Final accuracy: {history.history['accuracy'][-1]:.4f}")
# Output (will vary):
# Training complete. Final accuracy: 0.4570
• Training: The .fit() method trains the model on your data.
• x_train and y_train are your input features and target labels.
• epochs defines how many times the model will see the entire dataset.
• batch_size is the number of samples processed before the model is updated.
# Create a single dummy sample to test
x_test = np.random.random((1, 784))
# Get the model's prediction
predictions = model.predict(x_test)
predicted_class = np.argmax(predictions[0])
print(f"Predicted class: {predicted_class}")
print(f"Confidence scores: {predictions[0].round(2)}")
# Output (will vary):
# Predicted class: 3
# Confidence scores: [0.09 0.1 0.1 0.12 0.1 0.09 0.11 0.1 0.09 0.1 ]
• Prediction: .predict() is used to make predictions on new, unseen data.
• For a classification model with a softmax output, this returns an array of probabilities for each class.
• np.argmax() is used to find the index (the class) with the highest probability score.
#Keras #TensorFlow #DeepLearning #MachineLearning #Python
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
💡 Python Dictionaries
Python dictionaries are a fundamental data structure used to store data as key-value pairs. They are mutable (can be changed), dynamic, and since Python 3.7, they maintain the order of insertion. Keys must be unique and of an immutable type (like strings or numbers), while values can be of any type.
1. Creating and Accessing Dictionaries
# Creating a dictionary
student = {
"name": "Alex",
"age": 21,
"courses": ["Math", "CompSci"]
}
# Accessing values
print(f"Name: {student['name']}")
print(f"Age: {student.get('age')}")
# Safe access for a non-existent key
print(f"Major: {student.get('major', 'Not specified')}")
# --- Sample Output ---
# Name: Alex
# Age: 21
# Major: Not specified
• A dictionary is created using curly braces {} with key: value pairs.
• student['name'] accesses the value using its key. This will raise a KeyError if the key doesn't exist.
• student.get('age') is a safer way to access a value, returning None if the key is not found.
• .get() can also take a second argument as a default value to return if the key is missing.
2. Modifying a Dictionary
user_profile = {
"username": "coder_01",
"level": 5
}
# Add a new key-value pair
user_profile["email"] = "coder@example.com"
print(f"After adding: {user_profile}")
# Update an existing value
user_profile["level"] = 6
print(f"After updating: {user_profile}")
# Remove a key-value pair
del user_profile["email"]
print(f"After deleting: {user_profile}")
# --- Sample Output ---
# After adding: {'username': 'coder_01', 'level': 5, 'email': 'coder@example.com'}
# After updating: {'username': 'coder_01', 'level': 6, 'email': 'coder@example.com'}
# After deleting: {'username': 'coder_01', 'level': 6}
• A new key-value pair is added using simple assignment dict[new_key] = new_value.
• The value of an existing key is updated by assigning a new value to it.
• The del keyword completely removes a key-value pair from the dictionary.
3. Looping Through Dictionaries
inventory = {
"apples": 430,
"bananas": 312,
"oranges": 525
}
# Loop through keys
print("--- Keys ---")
for item in inventory.keys():
print(item)
# Loop through values
print("\n--- Values ---")
for quantity in inventory.values():
print(quantity)
# Loop through key-value pairs
print("\n--- Items ---")
for item, quantity in inventory.items():
print(f"{item}: {quantity}")
# --- Sample Output ---
# --- Keys ---
# apples
# bananas
# oranges
#
# --- Values ---
# 430
# 312
# 525
#
# --- Items ---
# apples: 430
# bananas: 312
# oranges: 525
• .keys() returns a view object of all keys, which can be looped over.
• .values() returns a view object of all values.
• .items() returns a view object of key-value tuple pairs, allowing you to easily access both in each loop iteration.
#Python #DataStructures #Dictionaries #Programming #PythonBasics
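Beyond the basics above, two features worth adding to the toolkit are dictionary comprehensions and the merge operator | (Python 3.9+):

```python
# Dictionary comprehension: build a dict from any iterable
squares = {n: n * n for n in range(1, 6)}
print(squares)  # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

# Merge with | (Python 3.9+); the right-hand side wins on duplicate keys
defaults = {"theme": "light", "lang": "en"}
overrides = {"lang": "de"}
merged = defaults | overrides
print(merged)  # {'theme': 'light', 'lang': 'de'}
```

On older Python versions, `{**defaults, **overrides}` produces the same merged result.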
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
Forwarded from Data Science Machine Learning Data Analysis
📖 Build Your Own ChatGPT-like Chatbot with Java and Python
🏷️ Category: ARTIFICIAL INTELLIGENCE
🗓️ Date: 2024-05-30 | ⏱️ Read time: 33 min read
Creating a custom LLM inference infrastructure from scratch
#NLP #Lesson #SentimentAnalysis #MachineLearning
Building an NLP Model from Scratch: Sentiment Analysis
This lesson will guide you through creating a complete Natural Language Processing (NLP) project. We will build a sentiment analysis classifier that can determine if a piece of text is positive or negative.
---
Step 1: Setup and Data Preparation
First, we need to import the necessary libraries and prepare our dataset. For simplicity, we'll use a small, hard-coded list of sentences. In a real-world project, you would load this data from a file (e.g., a CSV).
#Python #DataPreparation
# Imports and Data
import re
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report
import nltk
from nltk.corpus import stopwords
# You may need to download stopwords for the first time
# nltk.download('stopwords')
# Sample Data (In a real project, load this from a file)
texts = [
"I love this movie, it's fantastic!",
"This was a terrible film.",
"The acting was superb and the plot was great.",
"I would not recommend this to anyone.",
"It was an okay movie, not the best but enjoyable.",
"Absolutely brilliant, a must-see!",
"A complete waste of time and money.",
"The story was compelling and engaging."
]
# Labels: 1 for Positive, 0 for Negative
labels = [1, 0, 1, 0, 1, 1, 0, 1]
---
Step 2: Text Preprocessing
Computers don't understand words, so we must clean and process our text data first. This involves making text lowercase, removing punctuation, and filtering out common "stop words" (like 'the', 'a', 'is') that don't add much meaning.
#TextPreprocessing #DataCleaning
# Text Preprocessing Function
stop_words = set(stopwords.words('english'))
def preprocess_text(text):
# Make text lowercase
text = text.lower()
# Remove punctuation
text = re.sub(r'[^\w\s]', '', text)
# Tokenize and remove stopwords
tokens = text.split()
filtered_tokens = [word for word in tokens if word not in stop_words]
return " ".join(filtered_tokens)
# Apply preprocessing to our dataset
processed_texts = [preprocess_text(text) for text in texts]
print("--- Original vs. Processed ---")
for i in range(3):
print(f"Original: {texts[i]}")
print(f"Processed: {processed_texts[i]}\n")
---
Step 3: Splitting the Data
We must split our data into a training set (to teach the model) and a testing set (to evaluate its performance on unseen data).
#MachineLearning #TrainTestSplit
# Splitting data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
processed_texts,
labels,
test_size=0.25, # Use 25% of data for testing
random_state=42 # for reproducibility
)
print(f"Training samples: {len(X_train)}")
print(f"Testing samples: {len(X_test)}")
---
Step 4: Feature Extraction (Vectorization)
We need to convert our cleaned text into a numerical format. We'll use TF-IDF (Term Frequency-Inverse Document Frequency). This technique converts text into vectors of numbers, giving more weight to words that are important to a document but not common across all documents.
#FeatureEngineering #TFIDF #Vectorization
# Initialize the TF-IDF Vectorizer
vectorizer = TfidfVectorizer()
# Fit the vectorizer on the training data and transform it
X_train_tfidf = vectorizer.fit_transform(X_train)
# Only transform the test data using the already-fitted vectorizer
X_test_tfidf = vectorizer.transform(X_test)
print("Shape of training data vectors:", X_train_tfidf.shape)
print("Shape of testing data vectors:", X_test_tfidf.shape)
---
Step 5: Training the NLP Model
Now we can train a machine learning model. Multinomial Naive Bayes is a simple yet powerful algorithm that works very well for text classification tasks.
#ModelTraining #NaiveBayes
# Initialize and train the Naive Bayes classifier
model = MultinomialNB()
model.fit(X_train_tfidf, y_train)
print("Model training complete.")
---
Step 6: Making Predictions and Evaluating the Model
With our model trained, let's use it to make predictions on our unseen test data and see how well it performs.
#Evaluation #ModelPerformance #Prediction
# Make predictions on the test set
y_pred = model.predict(X_test_tfidf)
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Model Accuracy: {accuracy * 100:.2f}%\n")
# Display a detailed classification report
print("Classification Report:")
print(classification_report(y_test, y_pred, target_names=['Negative', 'Positive']))
---
Step 7: Discussion of Results
#Results #Discussion
Our model achieved 100% accuracy on this very small test set.
Accuracy: This is the percentage of correct predictions. 100% is perfect, but this is expected on such a tiny, clean dataset. In the real world, an accuracy of 85-95% is often considered very good.
Precision: Of all the times the model predicted "Positive", what percentage were actually positive?
Recall: Of all the actual "Positive" texts, what percentage did the model correctly identify?
F1-Score: A weighted average of Precision and Recall.
Limitations: Our dataset is extremely small. A real model would need thousands of examples to be reliable and generalize well to new, unseen text.
---
Step 8: Testing the Model on New Sentences
Let's see how our complete pipeline works on brand new text.
#RealWorldNLP #Inference
# Function to predict sentiment of a new sentence
def predict_sentiment(sentence):
# 1. Preprocess the text
processed_sentence = preprocess_text(sentence)
# 2. Vectorize the text using the SAME vectorizer
vectorized_sentence = vectorizer.transform([processed_sentence])
# 3. Make a prediction
prediction = model.predict(vectorized_sentence)
# 4. Return the result
return "Positive" if prediction[0] == 1 else "Negative"
# Test with new sentences
new_sentence_1 = "The movie was absolutely amazing!"
new_sentence_2 = "I was very bored and did not like it."
print(f"'{new_sentence_1}' -> Sentiment: {predict_sentiment(new_sentence_1)}")
print(f"'{new_sentence_2}' -> Sentiment: {predict_sentiment(new_sentence_2)}")
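As a design note, the vectorize-then-classify steps above are commonly bundled into a scikit-learn Pipeline, which guarantees the vectorizer is only ever fitted on training data. A minimal sketch with hypothetical toy data (the tiny corpus here is illustrative only):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# One object that preprocesses and classifies in a single fit/predict call
sentiment_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", MultinomialNB()),
])

# Hypothetical toy data; a real model needs far more examples
texts = ["great movie", "terrible film", "loved every minute", "hated it"]
labels = [1, 0, 1, 0]

sentiment_pipeline.fit(texts, labels)
pred = sentiment_pipeline.predict(["a truly great film"])
print(pred)
```

Calling `fit` on the pipeline fits the vectorizer and the classifier together, so the manual `vectorizer.transform` bookkeeping from Steps 4-6 disappears.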
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
Nature papers: $2000
Q1 and Q2 papers: $1000
Q3 and Q4 papers: $500
Doctoral thesis (complete): $700
M.S. thesis: $300
Paper simulation: $200
Contact me:
https://t.me/m/-nTmpj5vYzNk