Python | Machine Learning | Coding | R

Top 100 Data Analysis Commands & Functions

#DataAnalysis #Pandas #DataLoading #Inspection

Part 1: Pandas - Data Loading & Inspection

#1. pd.read_csv()
Reads a comma-separated values (csv) file into a Pandas DataFrame.

import pandas as pd
from io import StringIO

csv_data = "col1,col2,col3\n1,a,True\n2,b,False"
df = pd.read_csv(StringIO(csv_data))
print(df)

col1 col2   col3
0 1 a True
1 2 b False


#2. df.head()
Returns the first n rows of the DataFrame (default is 5).

import pandas as pd
df = pd.DataFrame({'A': range(10), 'B': list('abcdefghij')})
print(df.head(3))

A  B
0 0 a
1 1 b
2 2 c


#3. df.tail()
Returns the last n rows of the DataFrame (default is 5).

import pandas as pd
df = pd.DataFrame({'A': range(10), 'B': list('abcdefghij')})
print(df.tail(3))

A  B
7 7 h
8 8 i
9 9 j


#4. df.info()
Prints a concise summary of a DataFrame, including data types and non-null values.

import pandas as pd
import numpy as np
df = pd.DataFrame({'A': [1, 2, np.nan], 'B': ['x', 'y', 'z']})
df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   A       2 non-null      float64
 1   B       3 non-null      object
dtypes: float64(1), object(1)
memory usage: 176.0+ bytes


#5. df.describe()
Generates descriptive statistics for numerical columns.

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3, 4, 5]})
print(df.describe())

A
count 5.000000
mean 3.000000
std 1.581139
min 1.000000
25% 2.000000
50% 3.000000
75% 4.000000
max 5.000000


#6. df.shape
Returns a tuple representing the dimensionality (rows, columns) of the DataFrame.

import pandas as pd
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]})
print(df.shape)

(2, 3)


#7. df.columns
Returns the column labels of the DataFrame.

import pandas as pd
df = pd.DataFrame({'Name': ['Alice'], 'Age': [30]})
print(df.columns)

Index(['Name', 'Age'], dtype='object')


#8. df.dtypes
Returns the data types of each column.

import pandas as pd
df = pd.DataFrame({'A': [1, 2], 'B': [1.1, 2.2], 'C': ['x', 'y']})
print(df.dtypes)

A      int64
B float64
C object
dtype: object


#9. df['col'].value_counts()
Returns a Series containing counts of unique values in a column.

import pandas as pd
df = pd.DataFrame({'Fruit': ['Apple', 'Banana', 'Apple', 'Orange', 'Banana', 'Apple']})
print(df['Fruit'].value_counts())

Apple     3
Banana 2
Orange 1
Name: Fruit, dtype: int64


#10. df['col'].unique()
Returns an array of the unique values in a column.

import pandas as pd
df = pd.DataFrame({'Fruit': ['Apple', 'Banana', 'Apple', 'Orange']})
print(df['Fruit'].unique())

['Apple' 'Banana' 'Orange']


#11. df['col'].nunique()
Returns the number of unique values in a column.

import pandas as pd
df = pd.DataFrame({'Fruit': ['Apple', 'Banana', 'Apple', 'Orange']})
print(df['Fruit'].nunique())

3


#12. df.isnull()
Returns a DataFrame of boolean values indicating missing values.

import pandas as pd
import numpy as np
df = pd.DataFrame({'A': [1, np.nan], 'B': [np.nan, 'x']})
print(df.isnull())

A      B
0 False True
1 True False


#13. df.isnull().sum()
Returns the number of missing values in each column.

import pandas as pd
import numpy as np
df = pd.DataFrame({'A': [1, np.nan, 3, np.nan], 'B': [5, 6, 7, 8]})
print(df.isnull().sum())

A    2
B 0
dtype: int64


#14. df.to_csv()
Writes the DataFrame to a comma-separated values (csv) file.

import pandas as pd
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
csv_output = df.to_csv(index=False)
print(csv_output)

A,B
1,3
2,4


#15. df.copy()
Creates a deep copy of a DataFrame.

import pandas as pd
df1 = pd.DataFrame({'A': [1]})
df2 = df1.copy()
df2.loc[0, 'A'] = 99
print(f"Original df1:\n{df1}")
print(f"Copied df2:\n{df2}")

Original df1:
A
0 1
Copied df2:
A
0 99

---
#DataAnalysis #Pandas #Selection #Indexing

Part 2: Pandas - Data Selection & Indexing

#16. df['col']
Selects a single column as a Series.

import pandas as pd
df = pd.DataFrame({'Name': ['Alice', 'Bob'], 'Age': [30, 25]})
print(df['Name'])

0    Alice
1 Bob
Name: Name, dtype: object


#17. df[['col1', 'col2']]
Selects multiple columns as a new DataFrame.

import pandas as pd
df = pd.DataFrame({'Name': ['Alice'], 'Age': [30], 'City': ['New York']})
print(df[['Name', 'City']])

Name       City
0 Alice New York


#18. df.loc[]
Accesses a group of rows and columns by label(s) or a boolean array.

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3]}, index=['x', 'y', 'z'])
print(df.loc['y'])

A    2
Name: y, dtype: int64
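
As the description notes, .loc also accepts a row/column label pair or a boolean array; a small sketch of both forms on a toy DataFrame:

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['x', 'y', 'z'])
print(df.loc['y', 'B'])     # single value by row/column label
print(df.loc[df['A'] > 1])  # rows selected with a boolean array

5
   A  B
y  2  5
z  3  6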


#19. df.iloc[]
Accesses a group of rows and columns by integer position(s).

import pandas as pd
df = pd.DataFrame({'A': [10, 20, 30]})
print(df.iloc[1])

A    20
Name: 1, dtype: int64


#20. df[df['col'] > value]
Selects rows based on a boolean condition (boolean indexing).

import pandas as pd
df = pd.DataFrame({'Age': [22, 35, 18, 40]})
print(df[df['Age'] > 30])

Age
1 35
3 40


#21. df.set_index()
Sets the DataFrame index using existing columns.

import pandas as pd
df = pd.DataFrame({'Country': ['USA', 'UK'], 'Code': [1, 44]})
df_indexed = df.set_index('Country')
print(df_indexed)

Code
Country
USA 1
UK 44


#22. df.reset_index()
Resets the index of the DataFrame and uses the default integer index.

import pandas as pd
df = pd.DataFrame({'Code': [1, 44]}, index=['USA', 'UK'])
df_reset = df.reset_index()
print(df_reset)

index  Code
0 USA 1
1 UK 44


#23. df.at[]
Accesses a single value by row/column label pair. Faster than .loc.

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3]}, index=['x', 'y', 'z'])
print(df.at['y', 'A'])

2


#24. df.iat[]
Accesses a single value by row/column integer position. Faster than .iloc.

import pandas as pd
df = pd.DataFrame({'A': [10, 20, 30]})
print(df.iat[1, 0])

20


#25. df.sample()
Returns a random sample of items from an axis of the object.

import pandas as pd
df = pd.DataFrame({'A': range(10)})
print(df.sample(n=3))

A
8 8
2 2
5 5
(Note: Output rows will be random)

---
#DataAnalysis #Pandas #DataCleaning #Manipulation

Part 3: Pandas - Data Cleaning & Manipulation

#26. df.dropna()
Removes rows (or columns) that contain missing values.

import pandas as pd
import numpy as np
df = pd.DataFrame({'A': [1, np.nan, 3]})
print(df.dropna())

A
0 1.0
2 3.0


#27. df.fillna()
Fills missing (NA/NaN) values using a specified method.

import pandas as pd
import numpy as np
df = pd.DataFrame({'A': [1, np.nan, 3]})
print(df.fillna(0))

A
0 1.0
1 0.0
2 3.0


#28. df.astype()
Casts a pandas object to a specified dtype.

import pandas as pd
df = pd.DataFrame({'A': [1.1, 2.7, 3.5]})
df['A'] = df['A'].astype(int)
print(df)

A
0 1
1 2
2 3


#29. df.rename()
Alters axes labels.

import pandas as pd
df = pd.DataFrame({'a': [1], 'b': [2]})
df_renamed = df.rename(columns={'a': 'A', 'b': 'B'})
print(df_renamed)

A  B
0 1 2


#30. df.drop()
Drops specified labels from rows or columns.

import pandas as pd
df = pd.DataFrame({'A': [1], 'B': [2], 'C': [3]})
df_dropped = df.drop(columns=['B'])
print(df_dropped)

A  C
0 1 3


#31. pd.to_datetime()
Converts argument to datetime.

import pandas as pd
s = pd.Series(['2023-01-01', '2023-01-02'])
dt_s = pd.to_datetime(s)
print(dt_s)

0   2023-01-01
1 2023-01-02
dtype: datetime64[ns]


#32. df.apply()
Applies a function along an axis of the DataFrame.

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3]})
df['B'] = df['A'].apply(lambda x: x * 2)
print(df)

A  B
0 1 2
1 2 4
2 3 6
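
The description mentions applying a function along an axis of the DataFrame, while the example above applies it to a single column (a Series). A small sketch of the DataFrame form, where axis selects columns vs. rows:

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
print(df.apply(sum))          # one result per column
print(df.apply(sum, axis=1))  # one result per row

A     6
B    15
dtype: int64
0    5
1    7
2    9
dtype: int64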


#33. df['col'].map()
Maps values of a Series according to an input mapping or function.

import pandas as pd
df = pd.DataFrame({'Gender': ['M', 'F', 'M']})
df['Gender_Full'] = df['Gender'].map({'M': 'Male', 'F': 'Female'})
print(df)

Gender Gender_Full
0 M Male
1 F Female
2 M Male


#34. df.replace()
Replaces values given in to_replace with value.

import pandas as pd
df = pd.DataFrame({'Score': [10, -99, 15, -99]})
df_replaced = df.replace(-99, 0)
print(df_replaced)

Score
0 10
1 0
2 15
3 0


#35. df.duplicated()
Returns a boolean Series denoting duplicate rows.

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 1], 'B': ['a', 'b', 'a']})
print(df.duplicated())

0    False
1 False
2 True
dtype: bool


#36. df.drop_duplicates()
Returns a DataFrame with duplicate rows removed.

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 1], 'B': ['a', 'b', 'a']})
print(df.drop_duplicates())

A  B
0 1 a
1 2 b


#37. df.sort_values()
Sorts by the values along either axis.

import pandas as pd
df = pd.DataFrame({'Age': [25, 22, 30]})
print(df.sort_values(by='Age'))

Age
1 22
0 25
2 30


#38. df.sort_index()
Sorts object by labels (along an axis).

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3]}, index=[10, 5, 8])
print(df.sort_index())

A
5 2
8 3
10 1


#39. pd.cut()
Bins values into discrete intervals.

import pandas as pd
ages = pd.Series([22, 35, 58, 8, 42])
age_bins = pd.cut(ages, bins=[0, 18, 35, 60], labels=['Child', 'Adult', 'Senior'])
print(age_bins)

0     Adult
1 Adult
2 Senior
3 Child
4 Senior
dtype: category
Categories (3, object): ['Child' < 'Adult' < 'Senior']


#40. pd.qcut()
Quantile-based discretization function (bins into equal-sized groups).

import pandas as pd
data = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
quartiles = pd.qcut(data, 4, labels=False)
print(quartiles)

0    0
1 0
2 0
3 1
4 1
5 2
6 2
7 3
8 3
9 3
dtype: int64


#41. s.str.contains()
Tests if a pattern or regex is contained within a string of a Series.

import pandas as pd
s = pd.Series(['apple', 'banana', 'apricot'])
print(s[s.str.contains('ap')])

0      apple
2 apricot
dtype: object


#42. s.str.split()
Splits strings around a given separator/delimiter.

import pandas as pd
s = pd.Series(['a_b', 'c_d'])
print(s.str.split('_', expand=True))

0  1
0 a b
1 c d


#43. s.str.lower()
Converts strings in the Series to lowercase.

import pandas as pd
s = pd.Series(['HELLO', 'World'])
print(s.str.lower())

0    hello
1 world
dtype: object


#44. s.str.strip()
Removes leading and trailing whitespace.

import pandas as pd
s = pd.Series([' hello ', ' world '])
print(s.str.strip())

0    hello
1 world
dtype: object


#45. s.dt.year
Extracts the year from a datetime Series.

import pandas as pd
s = pd.to_datetime(pd.Series(['2023-01-01', '2024-05-10']))
print(s.dt.year)

0    2023
1 2024
dtype: int64

---
#DataAnalysis #Pandas #Grouping #Aggregation

Part 4: Pandas - Grouping & Aggregation

#46. df.groupby()
Groups a DataFrame using a mapper or by a Series of columns.

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B'], 'Points': [10, 8, 12, 6]})
grouped = df.groupby('Team')
print(grouped)

<pandas.core.groupby.generic.DataFrameGroupBy object at 0x...>
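
The GroupBy object itself computes nothing until an aggregation is applied; iterating over it yields (group name, sub-DataFrame) pairs, as in this small sketch:

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B'], 'Points': [10, 8, 12, 6]})
for name, group in df.groupby('Team'):
    print(name, group['Points'].tolist())

A [10, 12]
B [8, 6]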


#47. groupby.agg()
Aggregates using one or more operations over the specified axis.

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B'], 'Points': [10, 8, 12, 6]})
agg_df = df.groupby('Team').agg(['mean', 'sum'])
print(agg_df)

     Points    
       mean sum
Team           
A      11.0  22
B       7.0  14


#48. groupby.size()
Computes group sizes.

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B', 'A']})
print(df.groupby('Team').size())

Team
A 3
B 2
dtype: int64


#49. groupby.count()
Computes the count of non-NA cells for each group.

import pandas as pd
import numpy as np
df = pd.DataFrame({'Team': ['A', 'B', 'A'], 'Score': [1, np.nan, 3]})
print(df.groupby('Team').count())

Score
Team
A 2
B 0


#50. groupby.mean()
Computes the mean of group values.

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B'], 'Points': [10, 8, 12, 6]})
print(df.groupby('Team').mean())

      Points
Team        
A       11.0
B        7.0


#51. groupby.sum()
Computes the sum of group values.

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B'], 'Points': [10, 8, 12, 6]})
print(df.groupby('Team').sum())

Points
Team
A 22
B 14


#52. groupby.min()
Computes the minimum of group values.

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B'], 'Points': [10, 8, 12, 6]})
print(df.groupby('Team').min())

Points
Team
A 10
B 6


#53. groupby.max()
Computes the maximum of group values.

import pandas as pd
df = pd.DataFrame({'Team': ['A', 'B', 'A', 'B'], 'Points': [10, 8, 12, 6]})
print(df.groupby('Team').max())

Points
Team
A 12
B 8


#54. df.pivot_table()
Creates a spreadsheet-style pivot table as a DataFrame.

import pandas as pd
df = pd.DataFrame({'A': ['foo', 'foo', 'bar'], 'B': ['one', 'two', 'one'], 'C': [1, 2, 3]})
pivot = df.pivot_table(values='C', index='A', columns='B')
print(pivot)

B    one  two
A
bar 3.0 NaN
foo 1.0 2.0


#55. pd.crosstab()
Computes a cross-tabulation of two (or more) factors.

import pandas as pd
df = pd.DataFrame({'A': ['foo', 'foo', 'bar'], 'B': ['one', 'two', 'one']})
crosstab = pd.crosstab(df.A, df.B)
print(crosstab)

B    one  two
A
bar 1 0
foo 1 1

---
#DataAnalysis #Pandas #Merging #Joining

Part 5: Pandas - Merging & Concatenating

#56. pd.merge()
Merges DataFrame or named Series objects with a database-style join.

import pandas as pd
df1 = pd.DataFrame({'key': ['A', 'B'], 'val1': [1, 2]})
df2 = pd.DataFrame({'key': ['A', 'B'], 'val2': [3, 4]})
merged = pd.merge(df1, df2, on='key')
print(merged)

key  val1  val2
0 A 1 3
1 B 2 4
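
By default pd.merge() performs an inner join; the how parameter switches to left, right, or outer joins. A small sketch with partially overlapping keys (unmatched rows are filled with NaN):

import pandas as pd
df1 = pd.DataFrame({'key': ['A', 'B', 'C'], 'val1': [1, 2, 3]})
df2 = pd.DataFrame({'key': ['A', 'B'], 'val2': [4, 5]})
print(pd.merge(df1, df2, on='key', how='left'))

  key  val1  val2
0   A     1   4.0
1   B     2   5.0
2   C     3   NaN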


#57. pd.concat()
Concatenates pandas objects along a particular axis.

import pandas as pd
df1 = pd.DataFrame({'A': [1, 2]})
df2 = pd.DataFrame({'A': [3, 4]})
concatenated = pd.concat([df1, df2])
print(concatenated)

A
0 1
1 2
0 3
1 4


#58. df.join()
Joins columns with other DataFrame(s) on index or on a key column.

import pandas as pd
df1 = pd.DataFrame({'val1': [1, 2]}, index=['A', 'B'])
df2 = pd.DataFrame({'val2': [3, 4]}, index=['A', 'B'])
joined = df1.join(df2)
print(joined)

val1  val2
A 1 3
B 2 4


#59. pd.get_dummies()
Converts categorical variable into dummy/indicator variables (one-hot encoding).

import pandas as pd
s = pd.Series(list('abca'))
dummies = pd.get_dummies(s)
print(dummies)

a  b  c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0


#60. df.nlargest()
Returns the first n rows ordered by columns in descending order.

import pandas as pd
df = pd.DataFrame({'population': [100, 500, 200, 800]})
print(df.nlargest(2, 'population'))

population
3 800
1 500

---
#DataAnalysis #NumPy #Arrays

Part 6: NumPy - Array Creation & Manipulation

#61. np.array()
Creates a NumPy ndarray.

import numpy as np
arr = np.array([1, 2, 3])
print(arr)

[1 2 3]


#62. np.arange()
Returns an array with evenly spaced values within a given interval.

import numpy as np
arr = np.arange(0, 5)
print(arr)

[0 1 2 3 4]


#63. np.linspace()
Returns an array with evenly spaced numbers over a specified interval.

import numpy as np
arr = np.linspace(0, 10, 5)
print(arr)

[ 0.   2.5  5.   7.5 10. ]


#64. np.zeros()
Returns a new array of a given shape and type, filled with zeros.

import numpy as np
arr = np.zeros((2, 3))
print(arr)

[[0. 0. 0.]
[0. 0. 0.]]


#65. np.ones()
Returns a new array of a given shape and type, filled with ones.

import numpy as np
arr = np.ones((2, 3))
print(arr)

[[1. 1. 1.]
[1. 1. 1.]]


#66. np.random.rand()
Creates an array of the given shape and populates it with random samples from a uniform distribution over [0, 1).

import numpy as np
arr = np.random.rand(2, 2)
print(arr)

[[0.13949386 0.2921446 ]
[0.52273283 0.77122228]]
(Note: Output values will be random)


#67. arr.reshape()
Gives a new shape to an array without changing its data.

import numpy as np
arr = np.arange(6)
reshaped_arr = arr.reshape((2, 3))
print(reshaped_arr)

[[0 1 2]
[3 4 5]]


#68. np.concatenate()
Joins a sequence of arrays along an existing axis.

import numpy as np
a = np.array([[1, 2]])
b = np.array([[3, 4]])
print(np.concatenate((a, b), axis=0))

[[1 2]
[3 4]]


#69. np.vstack()
Stacks arrays in sequence vertically (row wise).

import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(np.vstack((a, b)))

[[1 2]
[3 4]]


#70. np.hstack()
Stacks arrays in sequence horizontally (column wise).

import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(np.hstack((a, b)))

[1 2 3 4]

---
#DataAnalysis #NumPy #Math #Statistics

Part 7: NumPy - Mathematical & Statistical Functions

#71. np.mean()
Computes the arithmetic mean along the specified axis.

import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(np.mean(arr))

3.0


#72. np.median()
Computes the median along the specified axis.

import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(np.median(arr))

3.0


#73. np.std()
Computes the standard deviation along the specified axis.

import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(np.std(arr))

1.4142135623730951


#74. np.sum()
Sums array elements over a given axis.

import numpy as np
arr = np.array([[1, 2], [3, 4]])
print(np.sum(arr))

10


#75. np.min()
Returns the minimum of an array or minimum along an axis.

import numpy as np
arr = np.array([5, 2, 8, 1])
print(np.min(arr))

1


#76. np.max()
Returns the maximum of an array or maximum along an axis.

import numpy as np
arr = np.array([5, 2, 8, 1])
print(np.max(arr))

8


#77. np.sqrt()
Returns the non-negative square-root of an array, element-wise.

import numpy as np
arr = np.array([4, 9, 16])
print(np.sqrt(arr))

[2. 3. 4.]


#78. np.log()
Calculates the natural logarithm, element-wise.

import numpy as np
arr = np.array([1, np.e, np.e**2])
print(np.log(arr))

[0. 1. 2.]


#79. np.dot()
Calculates the dot product of two arrays.

import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(np.dot(a, b))

11


#80. np.where()
Returns elements chosen from x or y depending on a condition.

import numpy as np
arr = np.array([10, 5, 20, 15])
print(np.where(arr > 12, 'High', 'Low'))

['Low' 'Low' 'High' 'High']
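
Several of the NumPy functions above (np.mean, np.sum, np.min, np.max, np.std) take an axis argument that the one-dimensional examples do not show; a small sketch on a 2x2 array:

import numpy as np
arr = np.array([[1, 2], [3, 4]])
print(np.sum(arr, axis=0))   # column sums
print(np.mean(arr, axis=1))  # row means

[4 6]
[1.5 3.5]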

---
#DataAnalysis #Matplotlib #Seaborn #Visualization

Part 8: Matplotlib & Seaborn - Data Visualization

#81. plt.plot()
Plots y versus x as lines and/or markers.

import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4], [1, 4, 9, 16])
# In a real script, you would call plt.show()
print("Output: A figure window opens displaying a line plot.")

Output: A figure window opens displaying a line plot.


#82. plt.scatter()
Makes a scatter plot of y vs. x with varying marker size and/or color.

import matplotlib.pyplot as plt
plt.scatter([1, 2, 3, 4], [1, 4, 9, 16])
print("Output: A figure window opens displaying a scatter plot.")

Output: A figure window opens displaying a scatter plot.


#83. plt.hist()
Computes and draws the histogram of x.

import matplotlib.pyplot as plt
import numpy as np
data = np.random.randn(1000)
plt.hist(data, bins=30)
print("Output: A figure window opens displaying a histogram.")

Output: A figure window opens displaying a histogram.


#84. plt.bar()
Makes a bar plot.

import matplotlib.pyplot as plt
plt.bar(['A', 'B', 'C'], [10, 15, 7])
print("Output: A figure window opens displaying a bar chart.")

Output: A figure window opens displaying a bar chart.


#85. plt.boxplot()
Makes a box and whisker plot.

import matplotlib.pyplot as plt
import numpy as np
data = [np.random.normal(0, std, 100) for std in range(1, 4)]
plt.boxplot(data)
print("Output: A figure window opens displaying a box plot.")

Output: A figure window opens displaying a box plot.


#86. sns.heatmap()
Plots rectangular data as a color-encoded matrix.

import seaborn as sns
import numpy as np
data = np.random.rand(10, 12)
sns.heatmap(data)
print("Output: A figure window opens displaying a heatmap.")

Output: A figure window opens displaying a heatmap.


#87. sns.pairplot()
Plots pairwise relationships in a dataset.

import seaborn as sns
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100, 4), columns=['A', 'B', 'C', 'D'])
# sns.pairplot(df) # This line would generate the plot
print("Output: A figure grid opens showing scatterplots for each pair of variables.")

Output: A figure grid opens showing scatterplots for each pair of variables.


#88. sns.countplot()
Shows the counts of observations in each categorical bin using bars.

import seaborn as sns
import pandas as pd
df = pd.DataFrame({'category': ['A', 'B', 'A', 'C', 'A', 'B']})
sns.countplot(x='category', data=df)
print("Output: A figure window opens showing a count plot.")

Output: A figure window opens showing a count plot.


#89. sns.jointplot()
Draws a plot of two variables with bivariate and univariate graphs.

import seaborn as sns
import pandas as pd
import numpy as np
df = pd.DataFrame({'x': range(50), 'y': np.arange(50) + np.random.randn(50)})
# sns.jointplot(x='x', y='y', data=df) # This line would generate the plot
print("Output: A figure shows a scatter plot with histograms for each axis.")

Output: A figure shows a scatter plot with histograms for each axis.


#90. plt.show()
Displays all open figures.

import matplotlib.pyplot as plt
plt.plot([1, 2, 3])
# plt.show() # In a script, this is essential to see the plot.
print("Executes the command to render and display the plot.")

Executes the command to render and display the plot.

---
#DataAnalysis #ScikitLearn #Modeling #Preprocessing

Part 9: Scikit-learn - Modeling & Preprocessing

#91. train_test_split()
Splits arrays or matrices into random train and test subsets.

from sklearn.model_selection import train_test_split
import numpy as np
X, y = np.arange(10).reshape((5, 2)), range(5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
print(f"X_train shape: {X_train.shape}")
print(f"X_test shape: {X_test.shape}")

X_train shape: (3, 2)
X_test shape: (2, 2)


#92. StandardScaler()
Standardizes features by removing the mean and scaling to unit variance.

from sklearn.preprocessing import StandardScaler
data = [[0, 0], [0, 0], [1, 1], [1, 1]]
scaler = StandardScaler()
print(scaler.fit_transform(data))

[[-1. -1.]
[-1. -1.]
[ 1. 1.]
[ 1. 1.]]


#93. MinMaxScaler()
Transforms features by scaling each feature to a given range, typically [0, 1].

from sklearn.preprocessing import MinMaxScaler
data = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]]
scaler = MinMaxScaler()
print(scaler.fit_transform(data))

[[0.   0.  ]
[0.25 0.25]
[0.5 0.5 ]
[1. 1. ]]


#94. LabelEncoder()
Encodes target labels with values between 0 and n_classes-1.

from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
encoded = le.fit_transform(['paris', 'tokyo', 'paris'])
print(encoded)

[0 1 0]


#95. OneHotEncoder()
Encodes categorical features as a one-hot numeric array.

from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
X = [['Male'], ['Female'], ['Female']]
print(enc.fit_transform(X).toarray())

[[0. 1.]
[1. 0.]
[1. 0.]]


#96. LinearRegression()
Ordinary least squares Linear Regression model.

from sklearn.linear_model import LinearRegression
X = [[0], [1], [2]]
y = [0, 1, 2]
reg = LinearRegression().fit(X, y)
print(f"Coefficient: {reg.coef_[0]}")

Coefficient: 1.0


#97. LogisticRegression()
Implements Logistic Regression for classification.

from sklearn.linear_model import LogisticRegression
X = [[-1], [0], [1], [2]]
y = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y)
print(f"Prediction for [[-2]]: {clf.predict([[-2]])}")

Prediction for [[-2]]: [0]


#98. KMeans()
K-Means clustering algorithm.

from sklearn.cluster import KMeans
X = [[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]
kmeans = KMeans(n_clusters=2, n_init='auto').fit(X)
print(kmeans.labels_)

[0 0 0 1 1 1]
(Note: Cluster labels may be flipped, e.g., [1 1 1 0 0 0])


#99. accuracy_score()
Calculates the accuracy classification score.

from sklearn.metrics import accuracy_score
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
print(accuracy_score(y_true, y_pred))

0.75


#100. confusion_matrix()
Computes a confusion matrix to evaluate the accuracy of a classification.

from sklearn.metrics import confusion_matrix
y_true = [0, 1, 0, 1]
y_pred = [1, 1, 0, 1]
print(confusion_matrix(y_true, y_pred))

[[1 1]
[0 2]]
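
Putting the scikit-learn pieces above together, here is a minimal end-to-end sketch on a small synthetic dataset (the split is random, so the reported accuracy may vary):

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import numpy as np

X = np.arange(20).reshape(-1, 1)   # one feature
y = (X.ravel() > 10).astype(int)   # binary target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)  # fit on the training split only
X_test_s = scaler.transform(X_test)        # reuse the training statistics

clf = LogisticRegression().fit(X_train_s, y_train)
print(accuracy_score(y_test, clf.predict(X_test_s)))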


━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨
❀6πŸ†’4πŸ‘2
Unlock premium learning without spending a dime! ⭐️ @DataScienceC is the first Telegram channel dishing out free Udemy coupons dailyβ€”grab courses on data science, coding, AI, and beyond. Join the revolution and boost your skills for free today! πŸ“•

What topic are you itching to learn next? 😊
https://t.me/DataScienceC 🌟
Please open Telegram to view this post
VIEW IN TELEGRAM
❀2
πŸ’‘ Top 50 Pillow Operations for Image Processing
πŸ‘‡πŸ‘‡πŸ‘‡πŸ‘‡πŸ‘‡
Please open Telegram to view this post
VIEW IN TELEGRAM
❀1
πŸ’‘ Top 50 Pillow Operations for Image Processing

I. File & Basic Operations

β€’ Open an image file.
from PIL import Image
img = Image.open("image.jpg")

β€’ Save an image.
img.save("new_image.png")

β€’ Display an image (opens in default viewer).
img.show()

β€’ Create a new blank image.
new_img = Image.new("RGB", (200, 100), "blue")

β€’ Get image format (e.g., 'JPEG').
print(img.format)

β€’ Get image dimensions as a (width, height) tuple.
width, height = img.size

β€’ Get pixel format (e.g., 'RGB', 'L' for grayscale).
print(img.mode)

β€’ Convert image mode.
grayscale_img = img.convert("L")

β€’ Get a pixel's color value at (x, y).
r, g, b = img.getpixel((10, 20))

β€’ Set a pixel's color value at (x, y).
img.putpixel((10, 20), (255, 0, 0))


II. Cropping, Resizing & Pasting

β€’ Crop a rectangular region.
box = (100, 100, 400, 400)
cropped_img = img.crop(box)

β€’ Resize an image to an exact size.
resized_img = img.resize((200, 200))

β€’ Create a thumbnail (maintains aspect ratio).
img.thumbnail((128, 128))

β€’ Paste one image onto another.
img.paste(another_img, (50, 50))


III. Rotation & Transformation

β€’ Rotate an image (counter-clockwise).
rotated_img = img.rotate(45, expand=True)

β€’ Flip an image horizontally.
flipped_img = img.transpose(Image.FLIP_LEFT_RIGHT)

β€’ Flip an image vertically.
flipped_img = img.transpose(Image.FLIP_TOP_BOTTOM)

β€’ Rotate by 90, 180, or 270 degrees.
img_90 = img.transpose(Image.ROTATE_90)

β€’ Apply an affine transformation.
transformed = img.transform(img.size, Image.AFFINE, (1, 0.5, 0, 0, 1, 0))


IV. ImageOps Module Helpers

β€’ Invert image colors.
from PIL import ImageOps
inverted_img = ImageOps.invert(img)

β€’ Flip an image horizontally (mirror).
mirrored_img = ImageOps.mirror(img)

β€’ Flip an image vertically.
flipped_v_img = ImageOps.flip(img)

β€’ Convert to grayscale.
grayscale = ImageOps.grayscale(img)

β€’ Colorize a grayscale image.
colorized = ImageOps.colorize(grayscale, black="blue", white="yellow")

β€’ Reduce the number of bits for each color channel.
posterized = ImageOps.posterize(img, 4)

β€’ Auto-adjust image contrast.
adjusted_img = ImageOps.autocontrast(img)

β€’ Equalize the image histogram.
equalized_img = ImageOps.equalize(img)

β€’ Add a border to an image.
bordered = ImageOps.expand(img, border=10, fill='black')


V. Color & Pixel Operations

β€’ Split image into individual bands (e.g., R, G, B).
r, g, b = img.split()

β€’ Merge bands back into an image.
merged_img = Image.merge("RGB", (r, g, b))

β€’ Apply a function to each pixel.
brighter_img = img.point(lambda i: i * 1.2)

β€’ Get a list of colors used in the image.
colors = img.getcolors(maxcolors=256)

β€’ Blend two images with alpha compositing.
# Both images must be in RGBA mode
blended = Image.alpha_composite(img1_rgba, img2_rgba)
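
• Example (a self-contained sketch): create two RGBA images and composite them.
from PIL import Image
img1_rgba = Image.new("RGBA", (100, 100), (255, 0, 0, 255))  # opaque red
img2_rgba = Image.new("RGBA", (100, 100), (0, 0, 255, 128))  # semi-transparent blue
blended = Image.alpha_composite(img1_rgba, img2_rgba)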


VI. Filters (ImageFilter)

• Apply a simple blur filter.
from PIL import ImageFilter
blurred_img = img.filter(ImageFilter.BLUR)

β€’ Apply a box blur with a given radius.
box_blur = img.filter(ImageFilter.BoxBlur(5))

β€’ Apply a Gaussian blur.
gaussian_blur = img.filter(ImageFilter.GaussianBlur(radius=2))

β€’ Sharpen the image.
sharpened = img.filter(ImageFilter.SHARPEN)

β€’ Find edges.
edges = img.filter(ImageFilter.FIND_EDGES)

β€’ Enhance edges.
edge_enhanced = img.filter(ImageFilter.EDGE_ENHANCE)

β€’ Emboss the image.
embossed = img.filter(ImageFilter.EMBOSS)

β€’ Find contours.
contours = img.filter(ImageFilter.CONTOUR)


VII. Image Enhancement (ImageEnhance)

β€’ Adjust color saturation.
from PIL import ImageEnhance
enhancer = ImageEnhance.Color(img)
vibrant_img = enhancer.enhance(2.0)

β€’ Adjust brightness.
enhancer = ImageEnhance.Brightness(img)
bright_img = enhancer.enhance(1.5)

β€’ Adjust contrast.
enhancer = ImageEnhance.Contrast(img)
contrast_img = enhancer.enhance(1.5)

β€’ Adjust sharpness.
enhancer = ImageEnhance.Sharpness(img)
sharp_img = enhancer.enhance(2.0)


VIII. Drawing (ImageDraw & ImageFont)

β€’ Draw text on an image.
from PIL import ImageDraw, ImageFont
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("arial.ttf", 36)
draw.text((10, 10), "Hello", font=font, fill="red")

β€’ Draw a line.
draw.line((0, 0, 100, 200), fill="blue", width=3)

β€’ Draw a rectangle (outline).
draw.rectangle([10, 10, 90, 60], outline="green", width=2)

β€’ Draw a filled ellipse.
draw.ellipse([100, 100, 180, 150], fill="yellow")

β€’ Draw a polygon.
draw.polygon([(10,10), (20,50), (60,10)], fill="purple")
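
IX. Putting It Together

• A minimal end-to-end sketch combining several operations above (assumes "image.jpg" exists, as in the earlier examples).
from PIL import Image, ImageFilter, ImageOps
img = Image.open("image.jpg")
img.thumbnail((400, 400))                 # shrink in place, keeping the aspect ratio
gray = ImageOps.grayscale(img)            # convert to grayscale
sharp = gray.filter(ImageFilter.SHARPEN)  # sharpen
sharp.save("processed_image.png")         # write the result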


#Python #Pillow #ImageProcessing #PIL #CheatSheet

━━━━━━━━━━━━━━━
By: @CodeProgrammer ✨

Core Python Cheatsheet.pdf
173.3 KB
Python is a high-level, interpreted programming language known for its simplicity, readability, and versatility. It was first released in 1991 by Guido van Rossum and has since become one of the most popular programming languages in the world.

Python's syntax emphasizes readability, with code written in a clear and concise manner using whitespace and indentation to define blocks of code. It is an interpreted language, meaning that code is executed line by line rather than compiled into machine code. This makes it easy to write and test code quickly, without worrying about the details of low-level hardware.

Python is a general-purpose language: it can be used for a wide variety of applications, from web development to scientific computing to artificial intelligence and machine learning. Its simplicity and ease of use make it a popular choice for beginners, while its power and flexibility make it a favorite of experienced developers.

Python's standard library contains a wide range of modules and packages, covering everything from basic data types and control structures to advanced data manipulation and visualization. In addition, countless third-party packages are available through Python's package manager, pip, allowing developers to easily extend Python's capabilities to suit their needs.

Overall, Python's combination of simplicity, power, and flexibility makes it an ideal language for a wide range of applications and skill levels.


https://t.me/CodeProgrammer ⚑️