Forwarded from DLeX: AI Python (Hamid Shahsavani)
py compile.pdf
473.1 KB
Article title: How to compile your Python source, and what are its benefits?
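As a quick taste of the article's topic, here is a minimal sketch of compiling a Python source file to bytecode with the standard-library `py_compile` module (the example file and its contents are invented for illustration):

```python
import os
import py_compile
import tempfile

# Create a small stand-in module to compile (hypothetical example file)
src = os.path.join(tempfile.mkdtemp(), "example.py")
with open(src, "w") as f:
    f.write("print('hello')\n")

# Compile it; the resulting .pyc lands under __pycache__/ and its path is returned
pyc_path = py_compile.compile(src)
print(pyc_path)
```

One benefit mentioned by such articles: importing a precompiled module skips the parse/compile step, and `compileall.compile_dir()` can precompile a whole tree at install time.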
#python
#pdf
#hamid_shahsavani
#compile
@ai_python
Forwarded from از نگاه احسان
This bot was built with python-telegram-bot and BeautifulSoup. You can find the BeautifulSoup tutorial in the Python for Penetration Testing course:
https://goo.gl/X9Ho4O
#python_for_pentesting
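The course above covers BeautifulSoup in depth; as a hedged sketch of the idea (the HTML snippet here is made up), extracting links from a page looks like this:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# A made-up HTML snippet standing in for a fetched page
html = """
<html><body>
  <a href="https://example.com/a">First</a>
  <a href="https://example.com/b">Second</a>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
links = [a["href"] for a in soup.find_all("a")]
print(links)
```

In a real bot you would feed BeautifulSoup the response body from an HTTP request instead of a literal string.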
Forwarded from Tech C**P (Alireza Hos.)
Make your
Django application blazing fast by following a few tips:

1- Use a separate media server:
Django deliberately doesn’t serve media for you, and it’s designed that way to save you from yourself. If you try to serve media from the same Apache instance that’s serving Django, you’re going to absolutely kill performance. Apache reuses processes between requests, so once a process caches all the code and libraries for Django, those stick around in memory. If you aren’t using that process to service a Django request, all that memory overhead is wasted.
So, set up all your media to be served by a different web server entirely. Ideally, this is a physically separate machine running a high-performance web server like lighttpd or tux. If you can’t afford a separate machine, at least run the media server as a separate process on the same machine.
For more information on how to separate static folder:
- https://docs.djangoproject.com/en/dev/howto/static-files/#howto-static-files
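Following the docs above, the relevant settings look roughly like this (the host name and paths are hypothetical placeholders):

```python
# settings.py (sketch) — serve static/media from a separate host, not from Django
STATIC_URL = "/static/"
STATIC_ROOT = "/var/www/static/"          # target for `manage.py collectstatic`

MEDIA_URL = "https://media.example.com/"  # hypothetical dedicated media server
MEDIA_ROOT = "/var/www/media/"
```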
2- Use a separate database server:
If you can afford it, stick your database server on a separate machine, too. All too often Apache and PostgreSQL (or MySQL or whatever) compete for system resources in a bad way. A separate DB server — ideally one with lots of RAM and fast (10k or better) drives — will seriously improve the number of hits you can dish out.
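Pointing Django at a dedicated database machine is just a settings change; a sketch (the name, credentials, and host address are placeholders):

```python
# settings.py (sketch) — HOST is the separate DB machine, not localhost
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",        # placeholder database name
        "USER": "myuser",      # placeholder credentials
        "PASSWORD": "secret",
        "HOST": "10.0.0.5",    # hypothetical address of the dedicated DB server
        "PORT": "5432",
    }
}
```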
3- Turn off KeepAlive:
I don’t totally understand how KeepAlive works, but turning it off on our Django servers increased performance by something like 50%. Of course, don’t do this if the same server is also serving media… but you’re not doing that, right?
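In Apache this is a one-line directive (the config file location varies by distribution, e.g. httpd.conf or apache2.conf):

```apache
# Disable persistent HTTP connections on the Django app server
KeepAlive Off
```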
4- Use memcached:
Although Django has support for a number of cache backends, none of them perform even half as well as memcached does. If you find yourself needing the cache, do yourself a favor and don’t even play around with the other backends; go straight for memcached.
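A minimal cache configuration along those lines, assuming memcached is running on its default port and the `pymemcache` client library is installed:

```python
# settings.py (sketch) — the pymemcache-backed memcached backend (Django 3.2+)
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",  # hypothetical memcached address
    }
}
```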
#python #django #memcached
Forwarded from Full Stack's Broadcast (Dom)
Forwarded from DLeX: AI Python (Amir)
📌 LocalPilot: Use GitHub Copilot locally on your MacBook with one click!
Using this tool, you can run and use GitHub Copilot locally on a Mac.
🔗 https://github.com/danielgross/localpilot
#CoPilot #Github
#ToolBox #Python
@ai_python