Github Top Repositories
Top GitHub repositories in one place 🚀
Explore the best projects in programming, AI, data science, and more.
# In __init__, call the setup method
self.setup_pos_ui()
self.current_sale_items = {}  # Dictionary to store {drug_id: {data, quantity}}

def setup_pos_ui(self):
    main_layout = QHBoxLayout()

    # Left side: Sale and Barcode input
    left_layout = QVBoxLayout()

    barcode_group = QGroupBox("Scan Barcode")
    barcode_layout = QVBoxLayout()
    self.barcode_input = QLineEdit()
    self.barcode_input.setPlaceholderText("Scan or type barcode and press Enter...")
    self.barcode_input.returnPressed.connect(self.add_item_to_sale)
    barcode_layout.addWidget(self.barcode_input)
    barcode_group.setLayout(barcode_layout)

    self.sales_table = QTableWidget()
    self.sales_table.setColumnCount(5)
    self.sales_table.setHorizontalHeaderLabels(['ID', 'Name', 'Quantity', 'Unit Price', 'Total Price'])

    left_layout.addWidget(barcode_group)
    left_layout.addWidget(self.sales_table)

    # Right side: Totals and Actions
    right_layout = QVBoxLayout()

    total_group = QGroupBox("Sale Summary")
    total_form = QFormLayout()
    self.total_amount_label = QLabel("0.00")
    total_form.addRow("Total Amount:", self.total_amount_label)
    total_group.setLayout(total_form)

    complete_sale_btn = QPushButton("Complete Sale")
    complete_sale_btn.clicked.connect(self.complete_sale)
    clear_sale_btn = QPushButton("Clear Sale")
    clear_sale_btn.clicked.connect(self.clear_sale)

    right_layout.addWidget(total_group)
    right_layout.addWidget(complete_sale_btn)
    right_layout.addWidget(clear_sale_btn)
    right_layout.addStretch()

    main_layout.addLayout(left_layout, stretch=3)   # Left side takes 3/4 of the space
    main_layout.addLayout(right_layout, stretch=1)  # Right side takes 1/4

    self.pos_tab.setLayout(main_layout)

#Hashtags: #PointOfSale #BarcodeScanner #UIUX #PyQt5

---

#Step 4: Implementing the Sales Logic

This is the core logic that connects the barcode input to the sales table and the database. When a barcode is entered, we find the drug, add it to the current sale, and update the UI. The "Complete Sale" button will finalize the transaction by updating the database.

Add these methods to the PharmacyApp class:
def add_item_to_sale(self):
    barcode = self.barcode_input.text()
    if not barcode:
        return

    drug = db.find_drug_by_barcode(barcode)

    if not drug:
        QMessageBox.warning(self, "Not Found", "No drug found with this barcode.")
        self.barcode_input.clear()
        return

    drug_id = drug[0]

    if drug[3] <= 0:  # Check stock quantity
        QMessageBox.warning(self, "Out of Stock", f"{drug[1]} is out of stock.")
        self.barcode_input.clear()
        return

    if drug_id in self.current_sale_items:
        # Item already in sale, increment quantity
        self.current_sale_items[drug_id]['quantity'] += 1
    else:
        # Add new item to sale
        self.current_sale_items[drug_id] = {
            'data': drug,
            'quantity': 1
        }

    self.update_sales_table()
    self.barcode_input.clear()

def update_sales_table(self):
    self.sales_table.setRowCount(len(self.current_sale_items))
    total_sale_amount = 0.0

    for row, item in enumerate(self.current_sale_items.values()):
        drug_data = item['data']
        quantity = item['quantity']
        unit_price = drug_data[4]
        total_price = quantity * unit_price

        self.sales_table.setItem(row, 0, QTableWidgetItem(str(drug_data[0])))  # ID
        self.sales_table.setItem(row, 1, QTableWidgetItem(drug_data[1]))       # Name
        self.sales_table.setItem(row, 2, QTableWidgetItem(str(quantity)))
        self.sales_table.setItem(row, 3, QTableWidgetItem(f"{unit_price:.2f}"))
        self.sales_table.setItem(row, 4, QTableWidgetItem(f"{total_price:.2f}"))

        total_sale_amount += total_price

    self.total_amount_label.setText(f"{total_sale_amount:.2f}")

def complete_sale(self):
    if not self.current_sale_items:
        return

    for drug_id, item in self.current_sale_items.items():
        db.update_drug_quantity(drug_id, item['quantity'])

    QMessageBox.information(self, "Success", f"Sale completed. Total: {self.total_amount_label.text()}")

    self.clear_sale()
    self.load_inventory_data()  # Refresh inventory tab to show new quantities

def clear_sale(self):
    self.current_sale_items.clear()
    self.update_sales_table()

#Hashtags: #BusinessLogic #PointOfSale #PythonCode #Transaction

---

#Step 5: Results and Discussion

With all the code in place, you have a fully functional pharmacy management system.

How to Use It:
• Run the main.py script.
• Go to the "Inventory Management" tab and add a few drugs with unique barcodes.
• Go to the "Point of Sale" tab. The cursor will be in the barcode input field.
• Type the barcode of a drug you added and press Enter. The drug will appear in the sales table.
• Scan the same barcode again. The quantity for that drug in the sales table will increase to 2.
• Click "Complete Sale". A success message will appear and the sales table will clear.
• Switch back to the "Inventory Management" tab. You will see that the quantities of the sold drugs have decreased accordingly.
âĪ2
Discussion and Potential Improvements:
• Real Barcode Scanner: This application works directly with a USB barcode scanner. A scanner acts as a keyboard: when it scans a code, it types the digits and sends an "Enter" keystroke, which triggers our returnPressed signal.
• Data Integrity: We added a basic stock check (quantity > 0). A more robust system would verify that the quantity in the cart does not exceed the quantity in stock before allowing the sale to complete.
• Features for a Real Pharmacy: A production-level system would need many more features: prescription management, patient records, batch tracking for recalls, advanced reporting (e.g., top-selling drugs, low-stock alerts), user accounts with different permission levels, and receipt printing.
• Database: SQLite is perfect for a single-user, standalone application. For a pharmacy with multiple terminals, a client-server database such as PostgreSQL or MySQL would be necessary.
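The stricter stock check mentioned above can be sketched as a small helper (a minimal sketch; find_oversold_items is a hypothetical name, and the drug tuple layout with index 1 as the name and index 3 as the stock quantity follows the tutorial's columns):

```python
def find_oversold_items(sale_items):
    """Return the names of cart items whose quantity exceeds the stock on hand.

    Hypothetical helper: sale_items has the same shape as
    self.current_sale_items, i.e. {drug_id: {'data': drug_row, 'quantity': n}},
    where drug_row[1] is the name and drug_row[3] is the stock quantity.
    """
    oversold = []
    for item in sale_items.values():
        drug = item['data']
        if item['quantity'] > drug[3]:
            oversold.append(drug[1])
    return oversold
```

Calling this at the top of complete_sale and warning the user when the list is non-empty would prevent overselling.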

This project provides a solid foundation, demonstrating how to integrate hardware (like a barcode scanner) with a database-backed desktop application to solve a real-world business problem.

#ProjectComplete #SoftwareEngineering #PythonGUI #HealthTech

━━━━━━━━━━━━━━━
By: @DataScienceN ✨
âĪ3
🔥 Trending Repository: nano-vllm

📝 Description: Nano vLLM

🔗 Repository URL: https://github.com/GeeeekExplorer/nano-vllm

📖 Readme: https://github.com/GeeeekExplorer/nano-vllm#readme

📊 Statistics:
🌟 Stars: 7.4K stars
👀 Watchers: 62
🍴 Forks: 949 forks

💻 Programming Languages: Python

🏷️ Related Topics:
#nlp #deep_learning #inference #pytorch #transformer #llm


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: glow

📝 Description: Render markdown on the CLI, with pizzazz! 💅🏻

🔗 Repository URL: https://github.com/charmbracelet/glow

📖 Readme: https://github.com/charmbracelet/glow#readme

📊 Statistics:
🌟 Stars: 19.9K stars
👀 Watchers: 75
🍴 Forks: 480 forks

💻 Programming Languages: Go - Dockerfile

🏷️ Related Topics:
#markdown #cli #hacktoberfest #excitement


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: hacker-scripts

📝 Description: Based on a true story

🔗 Repository URL: https://github.com/NARKOZ/hacker-scripts

📖 Readme: https://github.com/NARKOZ/hacker-scripts#readme

📊 Statistics:
🌟 Stars: 49K stars
👀 Watchers: 2.1K
🍴 Forks: 6.7K forks

💻 Programming Languages: JavaScript - Python - Java - Perl - Kotlin - Clojure

🏷️ Related Topics: Not available

==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: moon-dev-ai-agents

📝 Description: autonomous ai agents for trading in python

🔗 Repository URL: https://github.com/moondevonyt/moon-dev-ai-agents

🌐 Website: https://algotradecamp.com

📖 Readme: https://github.com/moondevonyt/moon-dev-ai-agents#readme

📊 Statistics:
🌟 Stars: 2.2K stars
👀 Watchers: 100
🍴 Forks: 1.1K forks

💻 Programming Languages: Python - HTML

🏷️ Related Topics: Not available

==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: agenticSeek

📝 Description: Fully Local Manus AI. No APIs, no $200 monthly bills. Enjoy an autonomous agent that thinks, browses the web, and codes for the sole cost of electricity. 🔔 Official updates only via twitter @Martin993886460 (beware of fake accounts)

🔗 Repository URL: https://github.com/Fosowl/agenticSeek

🌐 Website: http://agenticseek.tech

📖 Readme: https://github.com/Fosowl/agenticSeek#readme

📊 Statistics:
🌟 Stars: 22.4K stars
👀 Watchers: 132
🍴 Forks: 2.4K forks

💻 Programming Languages: Python - JavaScript - CSS - Shell - Batchfile - HTML - Dockerfile

🏷️ Related Topics:
#ai #agents #autonomous_agents #voice_assistant #llm #llm_agents #agentic_ai #deepseek_r1


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: LinkSwift

📝 Description: A JavaScript-based tool for obtaining direct download links for cloud-drive files. Modified from the "Cloud Drive Direct Link Download Assistant" userscript; supports eight major cloud drives: Baidu Netdisk / Aliyun Drive / China Mobile Cloud / Tianyi Cloud / Xunlei Cloud / Quark Netdisk / UC Netdisk / 123 Cloud

🔗 Repository URL: https://github.com/hmjz100/LinkSwift

🌐 Website: https://github.com/hmjz100/LinkSwift/raw/main/%EF%BC%88%E6%94%B9%EF%BC%89%E7%BD%91%E7%9B%98%E7%9B%B4%E9%93%BE%E4%B8%8B%E8%BD%BD%E5%8A%A9%E6%89%8B.user.js

📖 Readme: https://github.com/hmjz100/LinkSwift#readme

📊 Statistics:
🌟 Stars: 7.9K stars
👀 Watchers: 26
🍴 Forks: 371 forks

💻 Programming Languages: JavaScript

🏷️ Related Topics:
#userscript #tampermonkey #aria2 #baidu #baiduyun #tampermonkey_script #baidunetdisk #tampermonkey_userscript #baidu_netdisk #motrix #aliyun_drive #123pan #189_cloud #139_cloud #xunlei_netdisk #quark_netdisk #ali_netdisk #yidong_netdisk #tianyi_netdisk #uc_netdisk


==================================
🧠 By: https://t.me/DataScienceM
Forwarded from Kaggle Data Hub
Unlock premium learning without spending a dime! ⭐️ @DataScienceC is the first Telegram channel dishing out free Udemy coupons daily: grab courses on data science, coding, AI, and beyond. Join the revolution and boost your skills for free today! 📕

What topic are you itching to learn next? 😊
https://t.me/DataScienceC 🌟
🔥 Trending Repository: pytorch

📝 Description: Tensors and Dynamic neural networks in Python with strong GPU acceleration

🔗 Repository URL: https://github.com/pytorch/pytorch

🌐 Website: https://pytorch.org

📖 Readme: https://github.com/pytorch/pytorch#readme

📊 Statistics:
🌟 Stars: 94.5K stars
👀 Watchers: 1.8K
🍴 Forks: 25.8K forks

💻 Programming Languages: Python - C++ - Cuda - C - Objective-C++ - CMake

🏷️ Related Topics:
#python #machine_learning #deep_learning #neural_network #gpu #numpy #autograd #tensor


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: LocalAI

📝 Description: 🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P and decentralized inference

🔗 Repository URL: https://github.com/mudler/LocalAI

🌐 Website: https://localai.io

📖 Readme: https://github.com/mudler/LocalAI#readme

📊 Statistics:
🌟 Stars: 36.4K stars
👀 Watchers: 241
🍴 Forks: 2.9K forks

💻 Programming Languages: Go - HTML - Python - JavaScript - Shell - C++

🏷️ Related Topics:
#api #ai #mcp #decentralized #text_generation #distributed #tts #image_generation #llama #object_detection #mamba #libp2p #gemma #mistral #audio_generation #llm #stable_diffusion #rwkv #musicgen #rerank


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: PageIndex

📝 Description: 📄🧠 PageIndex: Document Index for Reasoning-based RAG

🔗 Repository URL: https://github.com/VectifyAI/PageIndex

🌐 Website: https://pageindex.ai

📖 Readme: https://github.com/VectifyAI/PageIndex#readme

📊 Statistics:
🌟 Stars: 3.1K stars
👀 Watchers: 24
🍴 Forks: 243 forks

💻 Programming Languages: Python - Jupyter Notebook

🏷️ Related Topics:
#ai #retrieval #reasoning #rag #llm


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: opentui

📝 Description: OpenTUI is a library for building terminal user interfaces (TUIs)

🔗 Repository URL: https://github.com/sst/opentui

🌐 Website: https://opentui.com

📖 Readme: https://github.com/sst/opentui#readme

📊 Statistics:
🌟 Stars: 3.3K stars
👀 Watchers: 19
🍴 Forks: 122 forks

💻 Programming Languages: TypeScript - Zig - Go - Tree-sitter Query - Shell - Vue

🏷️ Related Topics: Not available

==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: awesome-rl-for-cybersecurity

📝 Description: A curated list of resources dedicated to reinforcement learning applied to cyber security.

🔗 Repository URL: https://github.com/Kim-Hammar/awesome-rl-for-cybersecurity

📖 Readme: https://github.com/Kim-Hammar/awesome-rl-for-cybersecurity#readme

📊 Statistics:
🌟 Stars: 948 stars
👀 Watchers: 32
🍴 Forks: 137 forks

💻 Programming Languages: Not available

🏷️ Related Topics: Not available

==================================
🧠 By: https://t.me/DataScienceN
âĪ1
🔥 Trending Repository: How-To-Secure-A-Linux-Server

📝 Description: An evolving how-to guide for securing a Linux server.

🔗 Repository URL: https://github.com/imthenachoman/How-To-Secure-A-Linux-Server

📖 Readme: https://github.com/imthenachoman/How-To-Secure-A-Linux-Server#readme

📊 Statistics:
🌟 Stars: 20.5K stars
👀 Watchers: 339
🍴 Forks: 1.3K forks

💻 Programming Languages: Not available

🏷️ Related Topics:
#linux #security #server #hardening #security_hardening #linux_server #cc_by_sa #hardening_steps


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: edgevpn

📝 Description: ⛵ The immutable, decentralized, statically built p2p VPN without any central server and automatic discovery! Create decentralized introspectable tunnels over p2p with shared tokens

🔗 Repository URL: https://github.com/mudler/edgevpn

🌐 Website: https://mudler.github.io/edgevpn

📖 Readme: https://github.com/mudler/edgevpn#readme

📊 Statistics:
🌟 Stars: 1.3K stars
👀 Watchers: 22
🍴 Forks: 149 forks

💻 Programming Languages: Go - HTML

🏷️ Related Topics:
#kubernetes #tunnel #golang #networking #mesh_networks #ipfs #nat #blockchain #p2p #vpn #mesh #golang_library #libp2p #cloudvpn #ipfs_blockchain #holepunch #p2pvpn


==================================
🧠 By: https://t.me/DataScienceM
🔥 Trending Repository: cs-self-learning

📝 Description: A computer science self-study guide

🔗 Repository URL: https://github.com/PKUFlyingPig/cs-self-learning

🌐 Website: https://csdiy.wiki

📖 Readme: https://github.com/PKUFlyingPig/cs-self-learning#readme

📊 Statistics:
🌟 Stars: 68.5K stars
👀 Watchers: 341
🍴 Forks: 7.7K forks

💻 Programming Languages: HTML

🏷️ Related Topics: Not available

==================================
🧠 By: https://t.me/DataScienceM
âĪ1
💡 Top 70 Web Scraping Operations in Python

I. Making HTTP Requests (requests)

• Import the library.
import requests

• Make a GET request to a URL.
response = requests.get('http://example.com')

• Check the response status code (200 is OK).
print(response.status_code)

• Access the raw HTML content (as bytes).
html_bytes = response.content

• Access the HTML content (as a string).
html_text = response.text

• Access response headers.
print(response.headers)

• Send a custom User-Agent header.
headers = {'User-Agent': 'My Cool Scraper 1.0'}
response = requests.get('http://example.com', headers=headers)

• Pass URL parameters in a request.
params = {'q': 'python scraping'}
response = requests.get('https://www.google.com/search', params=params)

• Make a POST request with form data.
payload = {'key1': 'value1', 'key2': 'value2'}
response = requests.post('http://httpbin.org/post', data=payload)

• Handle potential request errors.
try:
    response = requests.get('http://example.com', timeout=5)
    response.raise_for_status()  # Raise an exception for bad status codes
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")


II. Parsing HTML with BeautifulSoup (Setup & Navigation)

• Import the library.
from bs4 import BeautifulSoup

• Create a BeautifulSoup object from HTML text.
soup = BeautifulSoup(html_text, 'html.parser')

• Prettify the parsed HTML for readability.
print(soup.prettify())

• Access a tag directly by name (gets the first one).
title_tag = soup.title

• Navigate to a tag's parent.
title_parent = soup.title.parent

• Get an iterable of a tag's children.
for child in soup.head.children:
    print(child.name)

• Get the next sibling tag.
first_p = soup.find('p')
next_p = first_p.find_next_sibling('p')

• Get the previous sibling tag.
second_p = soup.find_all('p')[1]
prev_p = second_p.find_previous_sibling('p')


III. Finding Elements with BeautifulSoup

• Find the first occurrence of a tag.
first_link = soup.find('a')

• Find all occurrences of a tag.
all_links = soup.find_all('a')

• Find tags by their CSS class.
articles = soup.find_all('div', class_='article-content')

• Find a tag by its ID.
main_content = soup.find(id='main-container')

• Find tags by other attributes.
images = soup.find_all('img', attrs={'data-src': True})

• Find using a list of multiple tags.
headings = soup.find_all(['h1', 'h2', 'h3'])

• Find using a regular expression.
import re
links_with_blog = soup.find_all('a', href=re.compile(r'blog'))

• Find using a custom function.
# Finds tags with a 'class' but no 'id'
tags = soup.find_all(lambda tag: tag.has_attr('class') and not tag.has_attr('id'))

• Limit the number of results.
first_five_links = soup.find_all('a', limit=5)

• Use CSS Selectors to find one element.
footer = soup.select_one('#footer > p')

• Use CSS Selectors to find all matching elements.
article_links = soup.select('div.article a')

• Select direct children using a CSS selector.
nav_items = soup.select('ul.nav > li')


IV. Extracting Data with BeautifulSoup

• Get the text content from a tag.
title_text = soup.title.get_text()

• Get stripped text content.
link_text = soup.find('a').get_text(strip=True)

• Get all text from the entire document.
all_text = soup.get_text()

• Get an attribute's value (like a URL).
link_url = soup.find('a')['href']

• Get the tag's name.
tag_name = soup.find('h1').name

• Get all attributes of a tag as a dictionary.
attrs_dict = soup.find('img').attrs


V. Parsing with lxml and XPath

• Import the library.
from lxml import html

• Parse HTML content with lxml.
tree = html.fromstring(response.content)

• Select elements using an XPath expression.
# Selects all <a> tags inside <div> tags with class 'nav'
links = tree.xpath('//div[@class="nav"]/a')

• Select text content directly with XPath.
# Gets the text of all <h1> tags
h1_texts = tree.xpath('//h1/text()')

• Select an attribute value with XPath.
# Gets all href attributes from <a> tags
hrefs = tree.xpath('//a/@href')


VI. Handling Dynamic Content (Selenium)

• Import the webdriver.
from selenium import webdriver

• Initialize a browser driver.
driver = webdriver.Chrome()  # Requires chromedriver

• Navigate to a webpage.
driver.get('http://example.com')

• Find an element by its ID.
element = driver.find_element('id', 'my-element-id')

• Find elements by CSS Selector.
elements = driver.find_elements('css selector', 'div.item')

• Find an element by XPath.
button = driver.find_element('xpath', '//button[@type="submit"]')

• Click a button.
button.click()

• Enter text into an input field.
search_box = driver.find_element('name', 'q')
search_box.send_keys('Python Selenium')

• Wait for an element to become visible.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "myDynamicElement"))
)

• Get the page source after JavaScript has executed.
dynamic_html = driver.page_source

• Close the browser window.
driver.quit()


VII. Common Tasks & Best Practices

• Handle pagination by finding the "Next" link.
next_page_url = soup.find('a', string='Next')['href']

• Save data to a CSV file.
import csv
with open('data.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['Title', 'Link'])
    # writer.writerow([title, url]) in a loop

• Save data to CSV using pandas.
import pandas as pd
df = pd.DataFrame(data, columns=['Title', 'Link'])
df.to_csv('data.csv', index=False)

• Use a proxy with requests.
proxies = {'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:1080'}
requests.get('http://example.com', proxies=proxies)

• Pause between requests to be polite.
import time
time.sleep(2)  # Pause for 2 seconds

• Handle JSON data from an API.
json_response = requests.get('https://api.example.com/data').json()

• Download a file (like an image).
img_url = 'http://example.com/image.jpg'
img_data = requests.get(img_url).content
with open('image.jpg', 'wb') as handler:
    handler.write(img_data)

• Parse a sitemap.xml to find all URLs.
# Fetch the sitemap.xml file and parse it like any other XML/HTML to extract <loc> tags.
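The sitemap step above can be sketched with the standard library (a minimal sketch; the inline XML and example.com URLs are placeholders, and a real scrape would fetch the XML with requests first):

```python
import xml.etree.ElementTree as ET

# Toy sitemap content; in practice this would come from
# requests.get('http://example.com/sitemap.xml').text
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/page1</loc></url>
  <url><loc>http://example.com/page2</loc></url>
</urlset>"""

# <loc> tags live in the standard sitemap namespace
ns = {'sm': 'http://www.sitemaps.org/schemas/sitemap/0.9'}
root = ET.fromstring(sitemap_xml)
urls = [loc.text for loc in root.findall('.//sm:loc', ns)]
```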


VIII. Advanced Frameworks (Scrapy)

• Create a Scrapy spider (conceptual command).
scrapy genspider example example.com

• Define a parse method to process the response.
# In your spider class:
def parse(self, response):
    # parsing logic here
    pass

• Extract data using Scrapy's CSS selectors.
titles = response.css('h1::text').getall()

• Extract data using Scrapy's XPath selectors.
links = response.xpath('//a/@href').getall()

• Yield a dictionary of scraped data.
yield {'title': response.css('title::text').get()}

• Follow a link to parse the next page.
next_page = response.css('li.next a::attr(href)').get()
if next_page is not None:
    yield response.follow(next_page, callback=self.parse)

• Run a spider from the command line.
scrapy crawl example -o output.json

• Pass arguments to a spider.
scrapy crawl example -a category=books
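Inside the spider, each -a key=value pair is passed to the spider's __init__, where Scrapy's base class stores it as an attribute. A rough plain-Python illustration of that mechanism (ExampleSpider here is a stand-in class, not a real Scrapy spider):

```python
class ExampleSpider:
    """Stand-in showing how -a arguments surface inside a spider."""
    name = 'example'

    def __init__(self, **kwargs):
        # scrapy.Spider.__init__ does essentially this with -a key=value pairs
        self.__dict__.update(kwargs)

# Mirrors: scrapy crawl example -a category=books
spider = ExampleSpider(category='books')
```

Inside parse you would then read the value as self.category.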

• Create a Scrapy Item for structured data.
import scrapy
class ProductItem(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field()

• Use an Item Loader to populate Items.
from scrapy.loader import ItemLoader
loader = ItemLoader(item=ProductItem(), response=response)
loader.add_css('name', 'h1.product-name::text')


#Python #WebScraping #BeautifulSoup #Selenium #Requests

━━━━━━━━━━━━━━━
By: @DataScienceN ✨
âĪ3