Python | Machine Learning | Coding | R
Help and ads: @hussein_sheikho

Discover powerful insights with Python, Machine Learning, Coding, and R: your essential toolkit for data-driven solutions and smart algorithms.

List of our channels:
https://t.me/addlist/8_rRW2scgfRhOTc0

A new interactive sentiment visualization project has been developed, featuring a dynamic smiley face that reflects sentiment analysis results in real time. Using a natural language processing model, the system evaluates input text and adjusts the smiley face expression accordingly:

🙂 Positive sentiment

☹️ Negative sentiment

The visualization offers an intuitive and engaging way to observe sentiment dynamics as they happen.
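
For a rough sense of the mechanism, here is a minimal sketch that maps text to a smiley using NLTK's VADER analyzer. This is an illustrative assumption, not the project's actual model; the real implementation is in the repository linked below.

# Minimal sentiment-to-smiley sketch (illustrative; not the project's code)
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# nltk.download('vader_lexicon')  # first run only

analyzer = SentimentIntensityAnalyzer()

def sentiment_smiley(text):
    # VADER's compound score ranges from -1 (negative) to +1 (positive)
    score = analyzer.polarity_scores(text)["compound"]
    return "🙂" if score >= 0 else "☹️"

print(sentiment_smiley("I really enjoyed this!"))  # 🙂
print(sentiment_smiley("This was dreadful."))      # ☹️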

🔗 GitHub: https://lnkd.in/e_gk3hfe
📰 Article: https://lnkd.in/e_baNJd2

#AI #SentimentAnalysis #DataVisualization #InteractiveDesign #NLP #MachineLearning #Python #GitHubProjects #TowardsDataScience

🔗 Our Telegram channels: https://t.me/addlist/0f6vfFbEMdAwODBk

📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
#NLP #Lesson #SentimentAnalysis #MachineLearning

Building an NLP Model from Scratch: Sentiment Analysis

This lesson will guide you through creating a complete Natural Language Processing (NLP) project. We will build a sentiment analysis classifier that can determine if a piece of text is positive or negative.

---

Step 1: Setup and Data Preparation

First, we need to import the necessary libraries and prepare our dataset. For simplicity, we'll use a small, hard-coded list of sentences; in a real-world project, you would load this data from a file such as a CSV (a loading sketch follows the code below).

#Python #DataPreparation

# Imports and Data
import re
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report
import nltk
from nltk.corpus import stopwords

# You may need to download stopwords for the first time
# nltk.download('stopwords')

# Sample Data (In a real project, load this from a file)
texts = [
    "I love this movie, it's fantastic!",
    "This was a terrible film.",
    "The acting was superb and the plot was great.",
    "I would not recommend this to anyone.",
    "It was an okay movie, not the best but enjoyable.",
    "Absolutely brilliant, a must-see!",
    "A complete waste of time and money.",
    "The story was compelling and engaging."
]
# Labels: 1 for Positive, 0 for Negative
labels = [1, 0, 1, 0, 1, 1, 0, 1]
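
As mentioned above, a real project would load this data from a file. Here is a self-contained sketch using pandas; the file name reviews.csv and its text/label columns are hypothetical.

# Loading the dataset from a CSV (hypothetical file layout)
import csv
import pandas as pd

# Write a tiny demo CSV so this snippet runs on its own
# (in practice the file would already exist)
with open("reviews.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label"])
    writer.writerow(["I love this movie, it's fantastic!", 1])
    writer.writerow(["This was a terrible film.", 0])

df = pd.read_csv("reviews.csv")
texts = df["text"].tolist()
labels = df["label"].tolist()
print(df.head())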


---

Step 2: Text Preprocessing

Computers don't understand words, so we must clean and process our text data first. This involves making text lowercase, removing punctuation, and filtering out common "stop words" (like 'the', 'a', 'is') that don't add much meaning.

#TextPreprocessing #DataCleaning

# Text Preprocessing Function
stop_words = set(stopwords.words('english'))

def preprocess_text(text):
    # Make text lowercase
    text = text.lower()
    # Remove punctuation
    text = re.sub(r'[^\w\s]', '', text)
    # Tokenize and remove stopwords
    tokens = text.split()
    filtered_tokens = [word for word in tokens if word not in stop_words]
    return " ".join(filtered_tokens)

# Apply preprocessing to our dataset
processed_texts = [preprocess_text(text) for text in texts]
print("--- Original vs. Processed ---")
for i in range(3):
    print(f"Original: {texts[i]}")
    print(f"Processed: {processed_texts[i]}\n")


---

Step 3: Splitting the Data

We must split our data into a training set (to teach the model) and a testing set (to evaluate its performance on unseen data).

#MachineLearning #TrainTestSplit

# Splitting data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    processed_texts,
    labels,
    test_size=0.25,  # use 25% of the data for testing
    random_state=42  # for reproducibility
)

print(f"Training samples: {len(X_train)}")
print(f"Testing samples: {len(X_test)}")


---

Step 4: Feature Extraction (Vectorization)

We need to convert our cleaned text into a numerical format. We'll use TF-IDF (Term Frequency-Inverse Document Frequency). This technique converts text into vectors of numbers, giving more weight to words that are important to a document but not common across all documents.

#FeatureEngineering #TFIDF #Vectorization
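
A sketch of how this step is typically implemented, using the TfidfVectorizer imported in Step 1 and the split from Step 3. Note that the vectorizer is fit on the training data only, so no vocabulary or IDF statistics leak from the test set.

# Feature Extraction with TF-IDF
vectorizer = TfidfVectorizer()

# Fit on training data only, then transform both splits;
# fitting on the test set would leak information into training
X_train_tfidf = vectorizer.fit_transform(X_train)
X_test_tfidf = vectorizer.transform(X_test)

print(f"TF-IDF matrix shape (train): {X_train_tfidf.shape}")
print(f"Vocabulary size: {len(vectorizer.vocabulary_)}")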