UNDERCODE COMMUNITY
πŸ¦‘ Undercode Cyber World!
@UndercodeCommunity
Speed
Another major benefit of tmpfs is its lightning speed. Because a typical tmpfs file system resides entirely in RAM, reads and writes are almost instantaneous. Even if some swap is used, performance is still excellent; when more free VM resources become available, those parts of the tmpfs file system are moved back to RAM. Letting the VM subsystem move parts of a tmpfs file system to swap is actually good for performance, because it allows the VM subsystem to free up space for processes that need RAM. Together with its ability to resize dynamically, this gives the operating system much better overall performance and flexibility than a traditional RAM disk.
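As a quick, rough way to see this speed for yourself, you can write a file into an existing tmpfs mount such as /dev/shm and let dd report the throughput. A minimal sketch, assuming /dev/shm is mounted as tmpfs and is writable (true on most modern Linux systems):

```shell
# Write 16 MB of zeroes into tmpfs; dd reports the throughput when it finishes.
# The write lands in RAM (or swap), never on a regular disk.
dd if=/dev/zero of=/dev/shm/tmpfs_bench bs=1M count=16

# Clean up the test file
rm /dev/shm/tmpfs_bench
```

Comparing the reported rate with the same dd run against a file on disk makes the difference obvious.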

No persistence
This may not sound like a positive feature: tmpfs data is not preserved across a reboot, because virtual memory is volatile by nature. You can probably guess one of the reasons tmpfs is called "tmpfs", right? However, this can actually be a good thing: it makes tmpfs an excellent file system for storing data you don't need to keep, such as temporary files (the kind found in /tmp) and certain parts of the /var file system tree.

Use tmpfs
To use tmpfs, all you need is a 2.4-series kernel with the "Virtual memory file system support (formerly shm fs)" option enabled; this option lives in the "File systems" section of the kernel configuration. Once you have a tmpfs-enabled kernel, you can start mounting tmpfs file systems. In fact, it is a good idea to enable tmpfs in all your 2.4 kernels, whether or not you plan to use it, because kernel tmpfs support is required for POSIX shared memory. (System V shared memory, in contrast, works without tmpfs in the kernel.) Note that you do not need to mount a tmpfs file system for POSIX shared memory to work; you only need tmpfs support in the kernel. POSIX shared memory is not used much yet, but that situation is likely to change over time.
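A quick way to check whether the kernel you are currently running was built with this support is to look at the file system types it has registered (this assumes /proc is mounted, which holds on virtually every Linux system):

```shell
# If tmpfs support is compiled in, "tmpfs" (or "shm" on some older
# 2.4 kernels) shows up among the registered file system types
grep -E 'tmpfs|shm' /proc/filesystems
```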

Avoid low VM situations
The fact that tmpfs grows and shrinks on demand raises a puzzling question: what happens if your tmpfs file system grows to the point where it exhausts all of your virtual memory, and you have no RAM or swap left? Generally speaking, this situation is ugly. With a 2.4.4 kernel, the kernel would lock up immediately. With a 2.4.6 kernel, the VM subsystem has been fixed in many ways, and while exhausting VM is still not a pleasant experience, things no longer fall apart completely. Once a 2.4.6 kernel reaches the point where no more VM can be allocated, you obviously will not be able to write any new data to the tmpfs file system. Beyond that, other processes on the system will be unable to allocate much memory; typically the system becomes extremely slow and almost unresponsive, so the steps the superuser needs to take to relieve the low-VM condition become very difficult, or unusually time-consuming.

In addition, the kernel has a built-in last-ditch defense: when no memory is available, it finds the process consuming the most VM resources and terminates it. Unfortunately, this "kill a process" solution usually backfires when tmpfs growth is what exhausted the VM. Here is why. tmpfs itself cannot (and should not) be killed, since it is part of the kernel rather than a user process, and there is no easy way for the kernel to find out which process is filling up the tmpfs file system. So the kernel mistakenly attacks the biggest VM-consuming process it can find, which is usually your X server if you happen to be running one. Your X server dies, but the root cause of the low-VM condition (tmpfs) is not addressed. Ick.
Low VM: Solution
Fortunately, tmpfs allows you to specify a maximum upper limit on the file system's size when it is mounted or remounted. Actually, as of kernel 2.4.6 and util-linux 2.11g, these parameters can only be set at mount time, not on remount, but we can expect remount support in the near future. The optimal upper limit depends on the resources and usage patterns of your particular Linux box; the idea is to prevent a fully used tmpfs file system from exhausting all of your virtual memory. A good way to find a suitable upper limit is to use top to monitor your system's swap usage during peak load. Then make sure the tmpfs limit you specify is slightly less than the sum of free swap and free RAM during those peak usage times.
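As a rough sketch of that sizing rule, the free RAM and free swap figures can be pulled straight from /proc/meminfo and combined. The 80% headroom factor below is an arbitrary illustrative choice, not a figure from this article, and the numbers should really be sampled at peak load as described above:

```shell
# Sum free RAM and free swap (both reported in kB in /proc/meminfo)
free_kb=$(awk '/^MemFree:/ {m=$2} /^SwapFree:/ {s=$2} END {print m+s}' /proc/meminfo)

# Stay comfortably below the total, so a full tmpfs can never exhaust all VM
cap_mb=$(( free_kb * 80 / 100 / 1024 ))

echo "suggested tmpfs size cap: ${cap_mb}m"
```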

It is easy to create a tmpfs file system with maximum capacity. To create a new tmpfs file system with a maximum size of 32 MB, type:
# mount tmpfs /dev/shm -t tmpfs -o size=32m

This time, instead of mounting the tmpfs file system at /mnt/tmpfs, we created it at /dev/shm, which happens to be the "official" mount point for tmpfs file systems. If you happen to be using devfs, you will find that this directory has already been created for you.

Also, if we wanted to limit the file system to 512 KB or 1 GB, we could specify size=512k or size=1g, respectively. In addition to limiting size, we can limit the number of inodes (file system objects) with the nr_inodes=x parameter. With nr_inodes, x can be a plain integer, or can be followed by a k, m, or g to specify thousands, millions, or billions (!) of inodes.
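Both limits can be combined in a single -o option; the figures here are arbitrary illustrations:

```shell
# (as root) cap the file system at 1 GB of data and 10,000 inodes
mount tmpfs /dev/shm -t tmpfs -o size=1g,nr_inodes=10k
```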

Moreover, if you want to add the equivalent of the above mount command to /etc/fstab, it would look like this:



tmpfs   /dev/shm   tmpfs   size=32m   0 0


Mounting onto existing mount points
Back in the 2.2 kernel days, trying to mount anything at a mount point where something was already mounted produced an error. However, the rewritten kernel mount code makes using a mount point multiple times a non-issue. Here is an example scenario: suppose we have an existing file system mounted at /tmp, and we decide to start using tmpfs for /tmp storage. In the old days, your only option would have been to unmount /tmp and remount your new tmpfs /tmp file system in its place, as follows:



# umount /tmp

# mount tmpfs /tmp -t tmpfs -o size=64m

However, this solution may not work for you. There may be a number of running processes with open files in /tmp; if so, you will encounter the following error when trying to unmount /tmp:

umount: /tmp: device is busy


However, with a recent 2.4 kernel, you can mount your new /tmp file system without getting a "device is busy" error:



# mount tmpfs /tmp -t tmpfs -o size=64m


With a single command, your new tmpfs /tmp file system is mounted at /tmp, on top of the already-mounted partition, which can no longer be directly accessed. However, although you cannot get at the original /tmp, any process that still has open files on the original file system can continue to access them. And if you unmount the tmpfs-based /tmp, your original /tmp file system reappears. In fact, you can mount any number of file systems on the same mount point; the mount point acts like a stack: unmount the current file system, and the most recently mounted file system beneath it reappears.

Bind mounts
Using bind mounts, we can mount all, or even part, of an already-mounted file system to another location, and have the file system accessible from both mount points at the same time. For example, you can use a bind mount to mount your existing root file system to /home/drobbins/nifty, as follows:



# mount --bind / /home/drobbins/nifty
Now, if you look inside /home/drobbins/nifty, you will see your root file system (/home/drobbins/nifty/etc, /home/drobbins/nifty/opt, and so on). And if you modify a file on your root file system, the change appears in /home/drobbins/nifty as well. This is because they are one and the same file system; the kernel simply maps the file system to two different mount points for us. Note that when you bind-mount a file system elsewhere, any file systems mounted at mount points inside the bind-mounted file system do not come along with it. In other words, if you have /usr on a separate file system, the bind mount we performed above leaves /home/drobbins/nifty/usr empty. You will need an additional bind mount so that the contents of /usr can also be browsed at /home/drobbins/nifty/usr:



# mount --bind /usr /home/drobbins/nifty/usr

Bind-mounting parts of file systems
Bind mounts make even more nifty things possible. Suppose you have a tmpfs file system mounted at its traditional location, /dev/shm, and you decide to start using tmpfs for /tmp, which currently lives on your root file system. Although you could mount a new tmpfs file system at /tmp (which is possible), you may also decide to have your new /tmp share the currently mounted /dev/shm file system. However, while you could bind-mount /dev/shm itself onto /tmp, your /dev/shm also contains some directories that you do not want to appear in /tmp. So, what do you do? How about this:



# mkdir /dev/shm/tmp

# chmod 1777 /dev/shm/tmp

# mount --bind /dev/shm/tmp /tmp


In this example, we first create a /dev/shm/tmp directory and give it 1777 permissions, the proper permissions for /tmp. Now that our directory is ready, we can mount /dev/shm/tmp, and only /dev/shm/tmp, at /tmp. So while /tmp/foo maps to /dev/shm/tmp/foo, there is no way for you to reach /dev/shm/bar from /tmp.
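If you want this arrangement to survive a reboot, the two mounts can go in /etc/fstab. Note that the mkdir and chmod of /dev/shm/tmp still have to happen before the bind mount (for example from a boot script), and the bind syntax below assumes a reasonably recent util-linux:

```
tmpfs          /dev/shm   tmpfs   size=32m   0 0
/dev/shm/tmp   /tmp       none    bind       0 0
```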

As you can see, bind mounts are extremely powerful and make it easy to modify your file system layout without any hassle. The next article will cover devfs; for now, you may want to look at the following references.

References

* Read Daniel's previous articles in this series, where he introduces the benefits of journalling and ReiserFS and shows how to set up a rock-solid ReiserFS system on Linux 2.4.
* Linux Weekly News is a good resource for keeping up with the latest kernel developments.
* util-linux collects various important Linux utilities, including mount and umount. You may want to upgrade to the latest available version so that you can use the mount --bind syntax (instead of mount -o bind).
* Because tmpfs and bind mounts are relatively new, most of these kernel features are not yet documented. The best way to learn about them is to study the relevant parts of the Linux kernel sources.
* The Namesys page is the place to learn more about ReiserFS.
* The ReiserFS mailing list is a great resource for digging deeper into current ReiserFS information. Be sure to also look at the ReiserFS mailing list archives.
* Juan I. Santos Florido's review of journal file systems in Linux Gazette offers an in-depth explanation of the metadata differences between UFS, ext2, and ReiserFS, among other things.
* Jedi's ReiserFS/qmail tuning page contains a lot of useful information for qmail users. Also take a look at ReiserSMTP, Jedi's collection of qmail components that deliver strong qmail performance.
* Read Steve Best's JFS overview on developerWorks.

written by undercode
▁ β–‚ β–„ ο½•π•Ÿπ”»β’Ίπ«Δ†π”¬π““β“” β–„ β–‚
πŸ¦‘ Speed optimization: use the virtual memory (VM) file system and bind mounts FULL WRITTEN BY UNDERCODE

πŸ¦‘ First steps toward anonymity: configuration examples for a proxy server and router in a LAN FULL BY UNDERCODE
t.me/undercodeTesting

The rapid development of network technology has given enterprise LANs more and more ways to connect to the Internet and share resources. In most cases, a DDN leased line, with its stable performance and good expandability, has become the commonly used method. A DDN connection is simple in terms of hardware: only a router and a proxy server are required. But for many network administrators, the system configuration is the harder problem. Taking Cisco routers as an example, the author introduces several proven configuration methods for colleagues' reference:
I. Configuration for accessing Internet resources directly through the router
1. General idea and equipment connection
In general, an internal LAN uses addresses reserved for private use on the Internet:
10.0.0.0/8: 10.0.0.0 ~ 10.255.255.255
172.16.0.0/12: 172.16.0.0 ~ 172.31.255.255
192.168.0.0/16: 192.168.0.0 ~ 192.168.255.255
Under normal circumstances, when internal workstations access the outside directly via routing, their packets are filtered because the workstations use addresses reserved on the Internet, so Internet resources are unreachable. The solution is to use the NAT (Network Address Translation) function provided by the router's operating system to translate the internal network's private addresses into legal Internet addresses, so that users without legal IP addresses can reach the external Internet through NAT. The advantages are that no proxy server is needed, which reduces investment; legal IP addresses are conserved; and the security of the internal network is improved.
NAT comes in two modes: single mode and global mode.
In single mode, as the name suggests, NAT maps many local LAN hosts to a single Internet address. To the external Internet, all hosts on the LAN appear as one Internet user, while internally the hosts keep using their local addresses.
In global mode, the router maps local LAN hosts onto a range of Internet addresses (an IP address pool). When a local host connects to a host on the Internet, an IP address from the pool is automatically assigned to it; when the connection ends, the dynamically assigned address is released and can be used by other local hosts.
Taking my organization's network environment as an example, the configuration method and process are listed below for your reference.
Our organization uses a China Unicom optical cable (V.35) to connect to the Internet. The router is a Cisco 2610, and the LAN uses an Intel 550 100M switch. Unicom provided us with the following four IP addresses:
211.90.137.25 (255.255.255.252) for the WAN port of the local router
211.90.137.26 (255.255.255.252) for the port at the other end (Unicom)
211.90.139.41 (255.255.255.252) for our own use
211.90.139.42 (255.255.255.252) for our own use
2. Router configuration
(1) Schematic diagram of network connection:
Note: All workstations in the organization connect to the switch, and the router also connects to the internal switch through its Ethernet port. The router's Ethernet port uses an internal private address, and the two valid IP addresses assigned by Unicom are used at the two ends of the fiber. In this connection mode, once NAT is configured on the router, all workstations in the organization can access the Internet. On each workstation, you only need to set the gateway to the router's Ethernet port (192.168.0.3) to get Internet access; no proxy needs to be configured, and two valid IP addresses are saved for your own use (for example, to set up your own web and e-mail servers). There is a drawback, though: you cannot benefit from the cache service a proxy server provides to improve access speed. This scheme is therefore suitable for sites with a small number of workstations; for sites with many workstations, the following two methods can be used. The specific router configuration is as follows:
(2) Router configuration
en
config t
ip nat pool c2610 211.90.139.41 211.90.139.42 netmask 255.255.255.252
(define an address pool, c2610, containing the two spare legal IP addresses used during NAT translation)
int e0/0
ip address 192.168.0.3 255.255.255.0
ip nat inside
exit
(set the Ethernet port's IP address and mark it as the port connected to the internal network)
interface s0/0
ip address 211.90.137.25 255.255.255.252
ip nat outside
exit
(set the WAN port's IP address and mark it as the port connected to the external network)
ip route 0.0.0.0 0.0.0.0 211.90.137.26
(set the default route)
access-list 2 permit 192.168.0.0 0.0.0.255
(create the access control list)
! Dynamic NAT
!
ip nat inside source list 2 pool c2610 overload
(enable dynamic address translation)
line console 0
exec-timeout 0 0
!
line vty 0 4
end
wr
(save the configuration)
3. Workstation configuration
Workstations need a static IP address. In the TCP/IP properties, set the address, set the gateway to 192.168.0.3 (the IP address of the router's Ethernet port), and set DNS to the address provided by your access provider. Browsers and other Internet tools need no special settings.
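On a Linux workstation the same settings would look roughly like this (the interface name eth0 and the host address .10 are assumptions for illustration, and the nameserver line is a placeholder for whatever address your provider gives you):

```shell
# (as root) give the workstation a static address on the LAN
ifconfig eth0 192.168.0.10 netmask 255.255.255.0 up

# point the default route at the router's Ethernet port
route add default gw 192.168.0.3

# use the provider's DNS server (1.2.3.4 is a placeholder)
echo "nameserver 1.2.3.4" > /etc/resolv.conf
```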
II. Configuration for accessing Internet resources through a proxy server
1. General idea and equipment connection
The advantage of using a proxy server to access Internet resources is that the cache service provided by the proxy can be used to improve Internet access speed and efficiency, which suits sites with more workstations. The disadvantages are that a computer must be provided as the proxy server, which increases the investment cost; that it requires two more legal IP addresses than the first method; and that network security is lower.
To access the Internet with this scheme, connect the equipment as follows:
Install two network cards in the proxy server. One connects to the internal network and is given an internal private address; the other connects to the router's Ethernet port and is given the legal address assigned by Unicom (211.90.139.42), with its gateway set to 211.90.139.41 (the router's Ethernet port).
The router's Ethernet port is likewise given the legal IP address assigned by Unicom (211.90.139.41).
After connecting the equipment, install the proxy software on the proxy server, configure the proxy on each workstation, and the workstations can access the Internet.
2. Router configuration
(1) Network connection diagram:
Description: In the figure above, all computers in the organization communicate through the switch with the internal network card (192.168.0.4) of the proxy server, and then, under the control of the proxy software, access the Internet through the router.
(2) Router configuration
en
config t
int e0/0
ip address 211.90.139.41 255.255.255.252
exit
(set the Ethernet port's IP address)
interface s0/0
ip address 211.90.137.25 255.255.255.252
exit
(set the WAN port's IP address)
ip route 0.0.0.0 0.0.0.0 211.90.137.26
ip routing
(set the default route and enable IP routing)
end
wr
(save the configuration)
3. Proxy server settings
The proxy server must have two network cards installed. One connects to the internal LAN, with its IP address set to an internal private address (e.g. 192.168.0.4, netmask 255.255.255.0) and no gateway. The other connects to the router, with the legal address assigned by Unicom (211.90.139.42, netmask 255.255.255.252) and its gateway set to 211.90.139.41 (the router's Ethernet port).
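On a Linux proxy host, those two cards would be configured roughly as follows (the interface names eth0/eth1 are assumptions for illustration):

```shell
# (as root) card 1: internal LAN side, no gateway
ifconfig eth0 192.168.0.4 netmask 255.255.255.0 up

# card 2: side facing the router, using the legal address from Unicom
ifconfig eth1 211.90.139.42 netmask 255.255.255.252 up

# default route via the router's Ethernet port
route add default gw 211.90.139.41
```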
After setting up the network cards as above, install a proxy software package (e.g. MS Proxy Server 2.0, WinGate, etc.; for the installation and debugging of the proxy software, please refer to other materials).
4. Workstation setup
(1) Internet Explorer settings: Tools menu -> Internet Options -> Connections -> LAN Settings -> Use a proxy server -> Address: 192.168.0.4, Port: 80 -> OK
(2) For other software, please refer to that software's instructions.
III. Configuration with direct access and proxy access coexisting
1. General idea and equipment connection
Either of the two methods described above achieves Internet access, but each has advantages along with certain disadvantages, and their advantages are complementary. How can the strengths of both be combined? The third method is a solution that gives you the best of both worlds: it integrates the advantages of methods one and two, saving IP addresses while improving Internet access efficiency through the cache provided by the proxy server.
To access the Internet with this scheme, connect the equipment as follows:
Install two network cards in the proxy server and connect both to the switch. Give both cards internal private addresses, but from different networks (that is, with different network portions of the IP address): one card (card 1) communicates with the internal network, the other (card 2) communicates with the router. Otherwise the proxy cannot work.
Do not install the NetBEUI protocol on the proxy server; install only TCP/IP. (Note: this step is essential, otherwise the redundant connections between the proxy server and the switch, and the NetBIOS computer-name conflict on the proxy server, will interfere with normal communication.)
The router's Ethernet port is also given an internal private address, on the same network as card 2 (that is, the network portion of its IP address is the same as card 2's).
2. Router settings
(1) Network connection diagram
(2) Router configuration
en
config t
ip nat pool c2610 211.90.139.41 211.90.139.42 netmask 255.255.255.252
(define an address pool, c2610, containing the two spare legal IP addresses used for NAT translation)
int e0/0
ip address 192.168.1.1 255.255.255.0
ip nat inside
exit
(set the Ethernet port's IP address and mark it as the port connected to the internal network)
interface s0/0
ip address 211.90.137.25 255.255.255.252
ip nat outside
exit
(set the WAN port's IP address and mark it as the port connected to the external network)
ip route 0.0.0.0 0.0.0.0 211.90.137.26
(set the default route)
access-list 2 permit 192.168.1.0 0.0.0.255
(create the access control list for the network attached to the Ethernet port)
! Dynamic NAT
!
ip nat inside source list 2 pool c2610 overload
(enable dynamic address translation)
line console 0
exec-timeout 0 0
!
line vty 0 4
end
wr
(save the configuration)
3. Proxy server settings
Two network cards are installed in the proxy server, both connected to the switch. Card 1 is given the IP address 192.168.0.4 with no gateway; card 2 is given 192.168.1.2, with its gateway set to 192.168.1.1 (the router's Ethernet port).
After setting up the network cards as above, install a proxy software package. (For example: MS Proxy Server 2.0, WinGate, etc.; please refer to other materials for the installation and debugging of the proxy software.)
Note: when installing the proxy software (taking MS Proxy 2.0 as an example), when specifying the LAT table, the address range 192.168.0.0-192.168.255.255 should be excluded, otherwise the proxy will not work properly.
4. Workstation settings
Under this configuration, a workstation can access the Internet either through the proxy or directly through the gateway.
If it accesses the Internet only through the proxy, the settings are exactly the same as in method II.
If it accesses the Internet only through the gateway, the workstation must have a static IP address of the form 192.168.1.x, on the same network segment as the router's Ethernet port; the gateway should be set to 192.168.1.1, and DNS to the address provided by the access provider.
If you want the two methods to coexist, configure two static IP addresses in TCP/IP, 192.168.0.x and 192.168.1.x, set the gateway to 192.168.1.1, and set DNS to the address provided by the access provider. In use, you simply enable or disable the proxy settings in the browser and other software to switch between proxy and gateway access.


πŸ¦‘ How to set up new firewall features BY UNDERCODE:
t.me/undercodeTesting

#echo stream tcp nowait root internal
#echo dgram udp wait root internal
#discard stream tcp nowait root internal
#discard dgram udp wait root internal
#daytime stream tcp nowait root internal
#daytime dgram udp wait root internal
#chargen stream tcp nowait root internal
#chargen dgram udp wait root internal
# FTP firewall gateway
ftp-gw stream tcp nowait.400 root /usr/local/etc/ftp-gw ftp-gw
# Telnet firewall gateway
telnet stream tcp nowait root /usr/local/etc/tn-gw /usr/local/etc/tn-gw
# local telnet services
telnet-a stream tcp nowait root /usr/local/etc/netacl in.telnetd
# Gopher firewall gateway
gopher stream tcp nowait.400 root /usr/local/etc/http-gw /usr/local/etc/http-gw
# WWW firewall gateway
http stream tcp nowait.400 root /usr/local/etc/http-gw /usr/local/etc/http-gw
# SSL firewall gateway
ssl-gw stream tcp nowait root /usr/local/etc/ssl-gw ssl-gw
# NetNews firewall proxy (using plug-gw)
nntp stream tcp nowait root /usr/local/etc/plug-gw plug-gw nntp
#nntp stream tcp nowait root /usr/sbin/tcpd in.nntpd
# SMTP (email) firewall gateway
#smtp stream tcp nowait root /usr/local/etc/smap smap
#
# Shell, login, exec and talk are BSD protocols
#
#shell stream tcp nowait root /usr/sbin/tcpd in.rshd
#login stream tcp nowait root /usr/sbin/tcpd in.rlogind
#exec stream tcp nowait root /usr/sbin/tcpd in.rexecd
#talk dgram udp wait root /usr/sbin/tcpd in.talkd
#ntalk dgram udp wait root /usr/sbin/tcpd in.ntalkd
#dtalk stream tcp wait nobody /usr/sbin/tcpd in.dtalkd
#
# Pop and imap mail services
#
#pop-2 stream tcp nowait root /usr/sbin/tcpd ipop2d
#pop-3 stream tcp nowait root /usr/sbin/tcpd ipop3d
#imap stream tcp nowait root /usr/sbin/tcpd imapd
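After editing /etc/inetd.conf to enable or disable these gateway services, inetd must be told to re-read the file. A common way, assuming a SysV-style system where inetd writes its PID to /var/run/inetd.pid:

```shell
# (as root) make inetd re-read /etc/inetd.conf
kill -HUP "$(cat /var/run/inetd.pid)"
```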


πŸ¦‘ GOOD STEPS TOWARD ANONYMITY FOR BEGINNERS FULL BY UNDERCODE:
Proxy server configuration and application

1.1 What is a proxy server
In a TCP/IP network, the traditional communication process is this: the client requests data from the server, and the server responds to the request by transferring the data to the client. With a proxy server introduced, the process becomes: the client initiates a request intended for the server, and the request goes to the proxy server instead. The proxy server analyzes the request and first checks whether the requested data is in its own cache. If it is, the proxy sends it to the client; if not, the proxy requests the data from the server on the client's behalf. When the server responds, the proxy server passes the response data on to the client while keeping a copy in its own cache. That way, when a client requests the same data again, the proxy server can transmit it directly, with no need to send a request to the server.

1.2 Features of a proxy server
Generally speaking, a proxy server has the following features:
1. Increasing access speed through caching
With the rapid development of the Internet, network bandwidth is increasingly scarce. Therefore, to improve access speed, many ISPs provide proxy servers, speeding up network access through the proxy's cache function. Generally speaking, most proxy servers support HTTP caching, and some also support FTP caching. When choosing a proxy server, for most organizations the HTTP cache function alone is sufficient.
Caching can be active or passive. With passive caching, the proxy server caches data only when a client requests it. If the data has expired and a client requests the same data again, the proxy must issue a new request to the server and cache the fresh response as it relays it to the client. With active caching, the proxy server continuously checks the data in its cache; as soon as an item expires, the proxy proactively issues a new request to refresh it, which greatly shortens the response time when a client next requests that data. Note also that most proxy servers will not cache responses that carry authentication information.
2. Providing a way for private-IP hosts to access the Internet
IP addresses are a non-renewable and valuable resource. If you have only a limited number of IP addresses but need to provide Internet access for the whole organization, you can do so by using a proxy server.
3. Improving network security
This one is very clear: if all internal users access the Internet through a proxy server, the proxy server becomes the only channel onto the Internet; conversely, the proxy server is also the Internet's only channel into the internal network. Unless you set up a reverse proxy, only the proxy server is visible to hosts on the Internet, not your whole intranet, which greatly improves network security.
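The passive-cache behaviour described under feature 1 can be sketched in a few lines of shell. Here fetch_origin is a stand-in for the real upstream request, and the md5 of the URL serves as the cache key; all the names are illustrative, not taken from any real proxy:

```shell
CACHE_DIR=$(mktemp -d)

# Stand-in for the upstream server; records each time it is actually contacted
fetch_origin() {
    echo x >> "$CACHE_DIR/.origin_hits"
    echo "payload-for-$1"
}

# Passive cache: serve from cache when possible, otherwise fetch and keep a copy
proxy_get() {
    key=$(printf '%s' "$1" | md5sum | cut -d' ' -f1)
    if [ -f "$CACHE_DIR/$key" ]; then
        cat "$CACHE_DIR/$key"                 # cache hit: no upstream traffic
    else
        fetch_origin "$1" > "$CACHE_DIR/$key" # cache miss: fetch and store a copy
        cat "$CACHE_DIR/$key"                 # ... then relay it to the client
    fi
}

proxy_get http://www.yourdomain.com/page      # first request goes upstream
proxy_get http://www.yourdomain.com/page      # second request is served from the cache
echo "upstream requests: $(wc -l < "$CACHE_DIR/.origin_hits")"   # prints "upstream requests: 1"
```

A real proxy would add expiry checks (the "data expires" case above) before trusting a cache hit.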
1.3 Classification and characteristics of proxy servers
Proxy servers are usually classified by implementation mechanism into circuit-level proxies, application-level proxies, intelligent circuit-level proxies, and so on. Here, I would like to take another perspective and divide proxy servers into traditional proxies and transparent proxies.
I think it is necessary to understand the difference between the two. Only when you really understand the internal mechanism can you work methodically when you run into problems, instead of being confused and at a loss for a solution. Therefore, we will explain through concrete examples below. The structure of this chapter draws on the IPCHAINS-HOWTO written by Paul Russell, and the following examples also come from that document. The biggest thing I gained from reading it is a clear understanding of the means by which the internal network reaches the external network and the external network reaches the internal network. Of course, "internal network" here means an internal network using private IP addresses.
Our examples are based on the following assumptions:
Your domain name is sample.com, and your intranet (192.168.1.*) users access the Internet through the proxy server proxy.sample.com (external interface eth0: 1.2.3.4; internal interface eth1: 192.168.1.1). In other words, the proxy server is the only machine connected directly to both the Internet and the intranet, and it runs some kind of proxy server software (such as Squid). One client on the intranet is client.sample.com (192.168.1.100).

+------------------------+
| Intranet (192.168.1.*) |  eth1  +-------+  eth0      DDN
|                        +--------| proxy |<=============> Internet
| client: 192.168.1.100  |        +-------+
+------------------------+

eth0: 1.2.3.4
eth1: 192.168.1.1


1.3.1 Traditional proxy
Based on the assumptions above, we do the following:
1. The proxy service software is bound to the 8080 port of the proxy server.
2. The client browser is configured to use port 8080 of the proxy server.
3. The client does not need to configure DNS.
4. DNS needs to be configured on the proxy server.
5. The client does not need to configure the default route.

When we open a web request in the client browser, such as " http://www.yourdomain.com ", the following events occur in sequence:
1. The client uses a port (such as 1025) to connect to port 8080 of the proxy server and requests the web page " http://www.yourdomain.com ".
2. The proxy server requests "www.yourdomain.com" from the DNS to obtain the corresponding IP address 202.99.11.120. Then, the proxy server uses a certain port (such as 1037) to initiate a web connection request to port 80 of the IP address to request a web page.
3. After receiving the response web page, the proxy server transmits the data to the client.
4. The client browser displays the page.

From the perspective of www.yourdomain.com, the connection is established between port 1037 of 1.2.3.4 and port 80 of 202.99.11.120. From the client's perspective, the connection is established between port 1025 of 192.168.1.100 and port 8080 of 1.2.3.4.

1.3.2 Transparent Proxy
Transparent proxy means that clients do not need to know the existence of the proxy server.
On the basis of the above, we do the following work:
1. Configure the transparent proxy server software to run on port 8080 of the proxy server.
2. Configure the proxy server to redirect all connections to port 80 to port 8080.
3. Configure the client browser to connect directly to the Internet.
4. Configure DNS on the client.
5. Configure the default gateway of the client as 192.168.1.1.
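Step 2 above (redirecting all port-80 connections to port 8080) was done on 2.2-series kernels with ipchains, the tool described in the IPCHAINS-HOWTO this chapter draws on. The rule below is only a sketch; the interface name, intranet range, and ports follow this chapter's example, and the exact syntax depends on your kernel and firewall tool:

```
# Redirect web traffic arriving from the intranet on eth1 to Squid's port 8080
ipchains -A input -i eth1 -p tcp -s 192.168.1.0/24 -d 0/0 80 -j REDIRECT 8080
```

With this rule in place, the client's packets destined for port 80 never leave the proxy machine unmodified; the kernel hands them to whatever is listening on 8080.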
When we open a web request in the client browser, such as " http://www.yourdomain.com ", the following events occur in sequence:
1. The client requests "www.yourdomain.com" from DNS and obtains the corresponding IP address 202.99.11.120. Then, the client uses a certain port (such as 1066) to initiate a web connection request to port 80 of the IP address to request a web page.
2. When the request packet passes through the transparent proxy server, it is redirected to the proxy server's binding port 8080. Therefore, the transparent proxy server uses a certain port (such as 1088) to initiate a web connection request to port 80 of 202.99.11.120 to request a web page.
3. After receiving the response web page, the proxy server transmits the data to the client.
4. The client browser displays the page.

From the perspective of www.yourdomain.com, the connection is established between port 1088 of 1.2.3.4 and port 80 of 202.99.11.120. From the client's perspective, the connection is established between port 1066 of 192.168.1.100 and port 80 of 202.99.11.120.

The above is the difference between the traditional proxy server and the transparent proxy server.

Section 2 Comparison of various proxy servers
There are many proxy server packages for Linux; I checked www.freshmeat.com (a famous Linux software site) and found more than sixty. However, only Apache, Socks, Squid, and a few others have been widely used in practice and proved to be high-performance proxy software. Let's compare them in turn:

2.1 Apache
Apache is the most widely used HTTP server in the world, thanks to its powerful features, high efficiency, security, and speed. Starting from version 1.1.x, Apache has included a proxy module. However, Apache's performance as a proxy server offers no clear advantage, so it is not recommended for this role.


2.2 Socks
Socks is a network proxy protocol that allows clients to gain full access to the Internet through a Socks server. Socks establishes a secure proxy data channel between the server and the client. From the client's perspective, Socks is transparent; from the server's perspective, Socks is the client. The client does not need direct access to the Internet (that is, it can use a private IP address), because the Socks server can redirect the client's connection requests to the Internet. In addition, the Socks server can authenticate user connection requests, allowing only legitimate users to establish proxy connections, and likewise can prevent unauthorized Internet users from accessing the internal network. For these reasons Socks is often used as a firewall.
Common browsers such as Netscape and IE can use Socks directly, and the client software that comes with Socks5 can enable Internet applications without native Socks support to use Socks as well.
For more information, please refer to the official Socks website http://www.socks.nec.com .
2.3 Squid
For web users, Squid is a high-performance proxy cache server. Squid supports the FTP, gopher, and HTTP protocols. Unlike ordinary proxy caching software, Squid handles all client requests with a single, non-blocking, I/O-driven process.
Squid caches hot objects in memory, caches DNS lookup results, and also supports non-blocking DNS lookups and negative caching of failed requests. Squid supports SSL and access control. By using ICP (the lightweight Internet Cache Protocol), Squid servers can be arranged in cascading arrays of proxies, maximizing bandwidth savings.
Squid consists of the main server program squid, a DNS lookup program dnsserver, several optional programs for rewriting requests and performing authentication, and some management tools. When squid starts, it spawns a predetermined number of dnsserver processes, each of which performs a single DNS lookup; this greatly reduces the time the server spends waiting for DNS queries.
2.4 Making a choice
As can be seen from the comparison above, Apache's main function is that of a web server, with the proxy function being just one of its modules; Socks is powerful but not flexible. So we recommend that you focus on using Squid. In the following chapters, we will learn about Squid's exciting features and its installation and configuration.

Section 3 Installing the proxy server Squid

3.1 Obtaining the software
You can get the software in the following ways:
1. Download it from the official Squid site http://www.squid-cache.org ;
2. Get it from your Linux distribution.
Generally speaking, Squid packages come in two forms: one is the source code, which must be compiled after downloading; the other is a pre-built package, such as the rpm packages used by RedHat, which can be used right after installation. Below we discuss the installation of both kinds of packages.

3.2 Installing the software
We take the latest stable version, squid-2.3.STABLEx, as an example.
3.2.1 Installation of rpm package
1. Enter /mnt/cdrom/RedHat/RPMS.
2. Execute rpm -ivh squid-2.2.STABLE4-8.i386.rpm.
Of course, we can also install the software in the process of starting to install the system.

3.2.2 Installation of source code package
1. Download squid-2.3.STABLE2-src.tar.gz from http://www.squid-cache.org .
2. Copy the file to the /usr/local directory.
3. Unpack the file: tar xvzf squid-2.3.STABLE2-src.tar.gz.
4. Unpacking creates a new directory squid-2.3.STABLE2 under /usr/local. For convenience, rename the directory to squid: mv squid-2.3.STABLE2 squid.
5. Enter the directory: cd squid.
6. Execute ./configure. You can specify the installation directory with ./configure --prefix=/directory/you/want .
The default installation directory is /usr/local/squid.
7. Execute make all
8. Execute make install
9. After the installation is complete, the executable file of Squid is in the bin subdirectory of the installation directory, and the configuration file is in the etc subdirectory.

Section 4 Basic Squid configuration -- getting the proxy server running
Because of RedHat's various advantages (ease of use, stability, and so on), it has more users worldwide than other distributions, so the following instructions are based mainly on Squid-2.2.STABLE4-8 under RedHat 6.1. In my experience, this version of Squid is more stable than others. The earlier version 1.1.22 is also quite stable, but it is lacking in features and flexibility.
Squid has one main configuration file, squid.conf. In the RedHat environment, all Squid configuration files are located in the /etc/squid subdirectory.
4.1 Common configuration options
Because the default configuration file has problems, we must first modify it in order to get Squid up and running.
Let's look at the structure of the squid.conf file and some commonly used options.
The squid.conf configuration file can be divided into thirteen parts:
1. NETWORK OPTIONS (network options)
2. OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM (options affecting the neighbor selection algorithm)
3. OPTIONS WHICH AFFECT THE CACHE SIZE (options defining the cache size)
4. LOGFILE PATHNAMES AND CACHE DIRECTORIES (log file paths and cache directories)
5. OPTIONS FOR EXTERNAL SUPPORT PROGRAMS (external support program options)
6. OPTIONS FOR TUNING THE CACHE (cache tuning options)
7. TIMEOUTS (timeouts)
8. ACCESS CONTROLS (access controls)
9. ADMINISTRATIVE PARAMETERS (administrative parameters)
10. OPTIONS FOR THE CACHE REGISTRATION SERVICE (cache registration service options)
11. HTTPD-ACCELERATOR OPTIONS (HTTPD accelerator options)
12. MISCELLANEOUS (miscellaneous)
13. DELAY POOL PARAMETERS (delay pool parameters)
Although Squid's configuration file is very large, if you are only providing proxy services for a small or medium-sized network and only plan to use one server, you need to modify just a few options. These common options are:

1. http_port
Description: Defines the port on which Squid listens for HTTP client connection requests. The default is 3128; if the HTTPD accelerator mode is used, it is 80. You can specify multiple ports, but all of them must appear on one line.
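As a small sketch (the extra port number is just an example), a squid.conf line enabling the default port plus a second one would look like:

```
# All listening ports go on one http_port line
http_port 3128 8080
```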

2. cache_mem (bytes)
Description: This option specifies the ideal amount of memory Squid may use. This memory is used to store the following objects:
In-Transit objects (objects currently being transferred)
Hot Objects (objects frequently accessed by users)
Negative-Cached objects (negatively cached objects, i.e. failed requests)
Note that this is not a hard limit on the memory Squid uses; the option only defines one aspect of Squid's memory use, so the memory actually consumed by Squid may exceed this value. The default is 8 MB.

3. cache_dir Directory-Name Mbytes Level-1 Level-2
Description: Specifies the size and directory structure of the swap space Squid uses to store objects. Multiple cache_dir lines can define multiple such swap spaces, and these can be distributed over different disk partitions. "Directory-Name" is the top directory of the swap space; if you want to use an entire disk as swap space, you can mount the disk on that directory. The default value is /var/spool/squid. "Mbytes" defines the total amount of available space.
Note that the Squid process must have read and write permission on the directory. "Level-1" is the number of first-level subdirectories created under the top directory; the default is 16. Similarly, "Level-2" is the number of second-level subdirectories created under each first-level subdirectory; the default is 256. Why define so many subdirectories? If there are too few subdirectories, the number of files stored in each one grows significantly, which greatly increases file system lookup times and drastically reduces overall system performance. To keep the number of files per directory small, we must increase the number of directories. A single level of subdirectories would still leave too many files per directory, so a two-level structure is used.
So, how do you determine the number of subdirectories your system needs? We can estimate it with the following formula.
Known quantities:
DS = total available swap space (in KB) / number of swap spaces
OS = average size of each object = 20 KB
NO = average number of objects stored in each second-level subdirectory = 256

Unknowns:
L1 = number of first-level subdirectories
L2 = number of second-level subdirectories

Formula:
L1 x L2 = DS / OS / NO
Note that this equation is indeterminate and can have multiple solutions.
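The estimate above is easy to script. The following sketch plugs in this section's sample constants (20 KB objects, 256 objects per second-level subdirectory); the function name is ours, not part of Squid:

```python
# Estimate the product L1 * L2 required for a Squid cache_dir.
def subdir_product(ds_kb, os_kb=20, no=256):
    """ds_kb: swap space per cache_dir in KB. Returns required L1 * L2."""
    return ds_kb / os_kb / no

# Example: 3,500,000 KB of swap space per cache_dir (as in section 4.2)
required = subdir_product(3_500_000)
print(round(required))  # 684
```

Any pair of values L1 and L2 whose product is at least this number will do.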

4. acl
Description: Defines an access control list.
The definition syntax is:
acl aclname acltype string1 ...
acl aclname acltype "file" ...
When a file is used, its format is one entry per line.
acltype can be one of src, dst, srcdomain, dstdomain, url_pattern, urlpath_pattern, time, port, proto, method, browser, or user.
The instructions are as follows:
src indicates the source address. It can be specified as follows:
acl aclname src ip-address/netmask ... (client IP address)
acl aclname src addr1-addr2/netmask ... (address range)
dst indicates the destination address. The syntax is:
acl aclname dst ip-address/netmask ... (that is, the IP address of the server requested by the client)
srcdomain indicates the domain the client belongs to. The syntax is:
acl aclname srcdomain foo.com ... Squid determines it by a reverse DNS lookup of the client IP.
dstdomain indicates the domain the requested server belongs to. The syntax is:
acl aclname dstdomain foo.com ... determined from the URL requested by the client.
Note that if the user gives the server's IP instead of the full domain name, Squid performs a reverse DNS lookup to determine the full domain name, and records "none" if the lookup fails.
time indicates the access time. The syntax is as follows:
acl aclname time [day-abbrevs] [h1:m1-h2:m2]
day-abbrevs:
S-Sunday
M-Monday
T-Tuesday
W-Wednesday
H-Thursday
F-Friday
A-Saturday
h1:m1 must be less than h2:m2, expressed as [hh:mm-hh:mm].
port specifies the access port. Multiple ports can be specified, for example:
acl aclname port 80 70 21 ...
acl aclname port 0-1024 ... (a port range)
proto specifies the protocol used. Multiple protocols can be specified:
acl aclname proto HTTP FTP ...
method specifies the request method, for example:
acl aclname method GET POST ...
5. http_access
Description: Allows or denies access by a certain class of users according to the access control lists.
If a request matches no rule, the default is the opposite of the last rule: for example, if the last rule is an allow, the default is to deny. Therefore, the last rule should usually be "deny all" or "allow all" to avoid security risks.
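A minimal sketch of the acl/http_access pairing described above, using the sample intranet 192.168.1.0/24 from section 1.3 (the name our_net is ours; the catch-all "all" acl is assumed to exist, as it does in the stock configuration file):

```
# Define the intranet, allow it, and explicitly deny everyone else
acl our_net src 192.168.1.0/255.255.255.0
http_access allow our_net
http_access deny all
```

Putting the explicit deny last means no request can fall through to the implicit default.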

4.2 An application example
Hypothetical scenario: a company uses Squid as its proxy server. The proxy server hardware is PII450/256M/8.4G, the company uses the IP segment 1.2.3.0/24, and wants to use 8080 as the proxy port.
The corresponding Squid configuration options are:
1. http_port
http_port 8080

2. cache_mem
Idea: Since the server only provides proxy services, this value can be set relatively large.
cache_mem 194M

3. cache_dir Directory-Name Mbytes Level-1 Level-2
Idea: The hard disk is 8.4G; you should plan well when installing the system and divide the available space among the file systems. In this example, we can partition it like this:
/cache1  3.5G
/cache2  3.5G
/var     400M
swap     127M
/        the remaining portion
Also, during installation we try not to install unnecessary packages. This saves space while improving the security and stability of the system. Now let's calculate the number of first-level and second-level subdirectories required.
Known quantities:
DS = total available swap space (in KB) / number of swap spaces = 7G / 2 = 3500000 KB
OS = average size of each object = 20 KB
NO = average number of objects stored in each second-level subdirectory = 256

Unknowns:
L1 = number of first-level subdirectories
L2 = number of second-level subdirectories

Formula:
L1 x L2 = DS / OS / NO = 3500000 / 20 / 256 = 684
We take:
L1 = 16
L2 = 43
Therefore, our cache_dir statements are:
cache_dir /cache1 3500M 16 43
cache_dir /cache2 3500M 16 43
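As a quick sanity check of the chosen values (a sketch; any pair whose product is at least 684 would also work):

```python
# Verify the chosen subdirectory counts cover the estimated requirement.
DS = 7_000_000 / 2   # KB of swap space per cache_dir
OS = 20              # average object size in KB
NO = 256             # average objects per second-level subdirectory

required = DS / OS / NO      # about 683.6
l1, l2 = 16, 43
assert l1 * l2 >= required   # 688 >= 684, so 16 x 43 is sufficient
print(l1 * l2)               # 688
```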

4. acl
Idea: Define the acl by src. Since the company uses the 1.2.3.0/24 segment:
acl allow_ip src 1.2.3.0/255.255.255.0

5. http_access
http_access allow allow_ip
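Putting the pieces of this example together, the relevant squid.conf lines would look roughly like this (a sketch for the hypothetical 1.2.3.0/24 network above; the final "deny all" follows the advice in 4.1 and assumes the stock catch-all "all" acl):

```
http_port 8080
cache_mem 194M
cache_dir /cache1 3500M 16 43
cache_dir /cache2 3500M 16 43
acl allow_ip src 1.2.3.0/255.255.255.0
http_access allow allow_ip
http_access deny all
```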

4.3 Starting and stopping Squid
After configuring and saving squid.conf, you can start Squid with the following command:
squid
Or use the RedHat startup script:
/etc/rc.d/init.d/squid start
Similarly, you can use the following scripts to stop or restart Squid:
/etc/rc.d/init.d/squid stop
/etc/rc.d/init.d/squid restart

Section 5 Configuring Squid according to your needs -- advanced topics

5.1 Other configuration options
Before putting Squid to a number of advanced uses, it is necessary to get a comprehensive understanding of the other useful configuration options. Let's discuss these options by category; options for certain special applications will be covered when we discuss those applications.

5.1.1 Network options

1. tcp_incoming_address
tcp_outgoing_address
udp_incoming_address
udp_outgoing_address
Description:
tcp_incoming_address specifies the IP address bound for listening to connections from clients or other Squid proxy servers;
tcp_outgoing_address specifies the IP address used to initiate connections to remote servers or other Squid proxy servers;
udp_incoming_address specifies the IP address of the ICP socket used to receive packets from other Squid proxy servers;
udp_outgoing_address specifies the IP address of the ICP socket used to send packets to other Squid proxy servers.
By default, no IP address is bound. The bound address can be specified as an IP address or as a full domain name.

5.1.2 Swap space options
1. cache_swap_low (percent, 0-100)
cache_swap_high (percent, 0-100)
Description: Squid uses a large amount of swap space to store objects. After a certain period of time the swap space fills up, so objects must also be removed periodically according to certain criteria. Squid uses the so-called least recently used (LRU) algorithm for this job. When the used swap space reaches cache_swap_high, Squid ranks each object by its LRU value and removes those below a certain level. Removal continues until the used space drops to cache_swap_low. Both values are expressed as percentages. If your swap space is large, it is recommended to narrow the gap between these two values, because one percentage point may then represent hundreds of megabytes, and clearing that much at once would inevitably affect Squid's performance. The defaults are:
cache_swap_low 90
cache_swap_high 95
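For a large cache such as the 7 GB example in section 4.2, where one percent is roughly 70 MB, a narrower gap might be sketched as follows (sample values, not defaults):

```
# Narrow watermarks so each cleanup pass moves less data on a large cache
cache_swap_low 94
cache_swap_high 96
```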

2.maximum_object_size
Description: Objects larger than this value are not stored. If you want to increase access speed, lower this value; if you want to maximize bandwidth savings and reduce costs, raise it. The unit is KB, and the default is:
maximum_object_size 4096 KB

5.1.3 Log options
1. cache_access_log
Description: Specifies the full path (directory and file name) of the client request log. Requests may be HTTP requests from ordinary users or ICP requests from neighbors. The default is:
cache_access_log /var/log/squid/access.log
If you do not need this log, you can disable it with: cache_access_log none

2. cache_store_log
Description: Specifies the full path (directory and file name) of the object storage log. It records which objects were written to the swap space and which were removed from it. The default path is:
cache_store_log /var/log/squid/store.log
If you do not need this log, you can disable it with: cache_store_log none

3. cache_log
Description: Specifies the full path (directory and file name) of Squid's general information log.
The default path is: cache_log /var/log/squid/cache.log

4. cache_swap_log
Description: This option specifies the full path (directory and file name) of each swap space's "swap.log". The log file contains metadata for the objects stored in the swap space. Normally the system automatically stores this file in the first top-level directory defined by cache_dir, but you can specify another path. If you have defined multiple cache_dir entries, the corresponding log files may look like this:
cache_swap_log.00
cache_swap_log.01
cache_swap_log.02
The numeric extensions correspond to the multiple cache_dir entries.
Note that it is best not to delete these log files, or Squid will not work properly.

5. pid_filename
Description: Specifies the full path (directory and file name) of the file recording the Squid process ID. The default path is:
pid_filename /var/run/squid.pid
If you do not need the file, you can disable it with:
pid_filename none

6. debug_options
Description: Controls how much information is recorded in the logs, in two dimensions: the section controls which aspects are recorded, and the level controls the level of detail for each aspect. The recommended (and default) setting is: debug_options ALL,1
which records every aspect at the lowest level of detail (1).