In Python, image processing unlocks powerful capabilities for computer vision, data augmentation, and automation—master these techniques to excel in ML engineering interviews and real-world applications! 🖼
# PIL/Pillow Basics - The essential image library
from PIL import Image
# Open and display image
img = Image.open("input.jpg")
img.show()
# Convert formats
img.save("output.png")
img.convert("L").save("grayscale.jpg") # RGB to grayscale
# Basic transformations
img.rotate(90).save("rotated.jpg")
img.resize((300, 300)).save("resized.jpg")
img.transpose(Image.FLIP_LEFT_RIGHT).save("mirrored.jpg")
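Pillow also converts cleanly to and from NumPy arrays, which is how images usually enter OpenCV or ML pipelines. A minimal sketch, assuming input.jpg exists and numpy is installed:
from PIL import Image
import numpy as np
# Crop a 200x200 box (left, upper, right, lower) from the top-left corner
img = Image.open("input.jpg")
img.crop((0, 0, 200, 200)).save("cropped.jpg")
# PIL image -> NumPy array (height x width x channels, uint8 for RGB)
arr = np.array(img)
print(arr.shape, arr.dtype)
# NumPy array -> PIL image, e.g. after array-level processing
Image.fromarray(arr).save("roundtrip.jpg")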
more explain: https://hackmd.io/@husseinsheikho/imageprocessing
#Python #ImageProcessing #ComputerVision #Pillow #OpenCV #MachineLearning #CodingInterview #DataScience #Programming #TechJobs #DeveloperTips #AI #DeepLearning #CloudComputing #Docker #BackendDevelopment #SoftwareEngineering #CareerGrowth #TechTips #Python3
💡 Building a Simple Convolutional Neural Network (CNN)
Constructing a basic Convolutional Neural Network (CNN) is a fundamental step in deep learning for image processing. Using TensorFlow's Keras API, we can define a network with convolutional, pooling, and dense layers to classify images. This example sets up a simple CNN to recognize handwritten digits from the MNIST dataset.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
import numpy as np
# 1. Load and preprocess the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Reshape images for CNN: (batch_size, height, width, channels)
# MNIST images are 28x28 grayscale, so channels = 1
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255
# 2. Define the CNN architecture
model = models.Sequential()
# First Convolutional Block
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
# Second Convolutional Block
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
# Flatten the 3D output to 1D for the Dense layers
model.add(layers.Flatten())
# Dense (fully connected) layers
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax')) # Output layer for 10 classes (digits 0-9)
# 3. Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Print a summary of the model layers
model.summary()
# 4. Train the model (uncomment to run training)
# print("\nTraining the model...")
# model.fit(train_images, train_labels, epochs=5, batch_size=64, validation_split=0.1)
# 5. Evaluate the model (uncomment to run evaluation)
# print("\nEvaluating the model...")
# test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
# print(f"Test accuracy: {test_acc:.4f}")
Code explanation: This script defines a simple CNN using Keras. It loads and normalizes MNIST images. The Sequential model adds Conv2D layers for feature extraction, MaxPooling2D for downsampling, a Flatten layer to transition to 1D, and Dense layers for classification. The model is then compiled with an optimizer, loss function, and metrics, and a summary of its architecture is printed. Training and evaluation steps are included as commented-out examples.
#Python #DeepLearning #CNN #Keras #TensorFlow
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
💡 Keras: Building Neural Networks Simply
Keras is a high-level deep learning API, now part of TensorFlow, designed for fast and easy experimentation. This guide covers the fundamental workflow: defining, compiling, training, and using a neural network model.
from tensorflow import keras
from tensorflow.keras import layers
# Define a Sequential model
model = keras.Sequential([
    # First hidden layer with 64 neurons; expects flat 784-dimensional input vectors
    layers.Dense(64, activation="relu", input_shape=(784,)),
    # A hidden layer with 32 neurons
    layers.Dense(32, activation="relu"),
    # Output layer with 10 neurons for 10-class classification
    layers.Dense(10, activation="softmax")
])
model.summary()
• Model Definition:
• keras.Sequential creates a simple, layer-by-layer model.
• layers.Dense is a standard fully-connected layer. The first layer must specify the input_shape.
• activation functions like "relu" introduce non-linearity, while "softmax" is used on the output layer for multi-class classification to produce probabilities.
# (Continuing from the previous step)
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
print("Model compiled successfully.")
• Compilation:
• .compile() configures the model for training.
• optimizer is the algorithm used to update the model's weights (e.g., 'adam' is a popular choice).
• loss is the function the model tries to minimize during training. sparse_categorical_crossentropy is common for integer-based classification labels.
• metrics are used to monitor the training and testing steps. Here, we track accuracy.
import numpy as np
# Create dummy training data
x_train = np.random.random((1000, 784))
y_train = np.random.randint(10, size=(1000,))
# Train the model
history = model.fit(
    x_train,
    y_train,
    epochs=5,
    batch_size=32,
    verbose=0  # Hides the progress bar for a cleaner output
)
print(f"Training complete. Final accuracy: {history.history['accuracy'][-1]:.4f}")
# Output (will vary):
# Training complete. Final accuracy: 0.4570
• Training:
• The .fit() method trains the model on your data.
• x_train and y_train are your input features and target labels.
• epochs defines how many times the model will see the entire dataset.
• batch_size is the number of samples processed before the model is updated.
# Create a single dummy sample to test
x_test = np.random.random((1, 784))
# Get the model's prediction
predictions = model.predict(x_test)
predicted_class = np.argmax(predictions[0])
print(f"Predicted class: {predicted_class}")
print(f"Confidence scores: {predictions[0].round(2)}")
# Output (will vary):
# Predicted class: 3
# Confidence scores: [0.09 0.1 0.1 0.12 0.1 0.09 0.11 0.1 0.09 0.1 ]
• Prediction:
• .predict() is used to make predictions on new, unseen data.
• For a classification model with a softmax output, this returns an array of probabilities for each class.
• np.argmax() is used to find the index (the class) with the highest probability score.
#Keras #TensorFlow #DeepLearning #MachineLearning #Python
━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨