NGINX TCP and UDP Load Balancing

NGINX Plus and NGINX Open Source can proxy and load balance TCP (Transmission Control Protocol) traffic. TCP is the protocol for many popular applications and services, such as MySQL, LDAP, and RTMP.

Similarly, they can proxy and load balance UDP traffic. User Datagram Protocol (UDP) is the protocol for many popular non-transactional applications, such as DNS, syslog, and RADIUS.

Configure Reverse Proxy

First, we need to configure a reverse proxy so that NGINX Open Source or NGINX Plus can forward TCP connections or UDP datagrams from clients to an upstream group or a proxied server.

In the NGINX configuration file, perform the following steps:

1. Create a top-level stream { } block.

stream {  

    # ...  

}  

2. Define one or more server { } configuration blocks for each virtual server in the top-level stream { } context.

3. Within the server { } configuration block, include the listen directive for each server to define the IP address, and/or port on which the server listens.

For UDP traffic, also add the udp parameter. Because TCP is the default protocol for the stream context, there is no tcp parameter to the listen directive:

stream {  

  

    server {  

        listen 12345;  

        # ...  

    }  

  

    server {  

        listen 53 udp;  

        # ...  

    }  

    # ...  

}  

4. Add the proxy_pass directive to define the proxied server or upstream group to which the server forwards traffic:

stream {  

  

    server {  

        listen     12345;  

        # TCP traffic will be forwarded to the "stream_backend" upstream group  

        proxy_pass stream_backend;  

    }  

  

    server {  

        listen     12346;  

        # TCP traffic will be forwarded to the specified server  

        proxy_pass backend.example.com:12346;  

    }  

  

    server {  

        listen     53 udp;  

        # UDP traffic will be forwarded to the "dns_servers" upstream group  

        proxy_pass dns_servers;  

    }  

    # ...  

}  

5. If the proxy server has several network interfaces, you can optionally configure NGINX to use a particular source IP address when connecting to an upstream server.

Add the proxy_bind directive with the IP address of the appropriate network interface:

stream {  

    # ...  

    server {  

        listen     127.0.0.1:12345;  

        proxy_pass backend.example.com:12345;  

        proxy_bind 127.0.0.1:12345;  

    }  

}  

6. Optionally, we can tune the size of the two in-memory buffers where NGINX stores data from the client and upstream connections. If there is a small volume of data, the buffers can be reduced, which may save memory resources.

If there is a large volume of data, the buffer size can be increased to reduce the number of socket read/write operations. As soon as data is received on one connection, NGINX reads it and forwards it over the other connection. To control the buffers, use the proxy_buffer_size directive:

stream {  

    # ...  

    server {  

        listen            127.0.0.1:12345;  

        proxy_pass        backend.example.com:12345;  

        proxy_buffer_size 16k;  

    }  

}  

Configuring TCP or UDP Load Balancing

To configure the TCP or UDP load balancing:

1. First, create a group of servers, or an upstream group, whose traffic will be load balanced. Define one or more upstream { } configuration blocks in the top-level stream { } context and set the name of each upstream group, for example, stream_backend for TCP servers and dns_servers for UDP servers:

stream {  

  

    upstream stream_backend {  

        # ...  

    }  

  

    upstream dns_servers {  

        # ...  

    }  

  

    # ...  

}  

2. Populate the upstream group with upstream servers. Within the upstream { } block, include a server directive for each upstream server, specifying its hostname or IP address and a required port number:

stream {  

  

    upstream stream_backend {  

        server backend1.example.com:12345;  

        server backend2.example.com:12345;  

        server backend3.example.com:12346;  

        # ...  

    }  

  

    upstream dns_servers {  

        server 192.168.136.130:53;  

        server 192.168.136.131:53;  

        # ...  

    }  

  

    # ...  

}  

3. Configure the load balancing method used by the upstream group. You can specify one of the following methods:

Round Robin: By default, NGINX uses the Round Robin algorithm to load balance traffic, directing it sequentially to the servers in the configured upstream group. Because it is the default method, there is no round-robin directive; simply create an upstream { } configuration block in the top-level stream { } context and add server directives.

Least Connections: Nginx selects the server with the least number of currently active connections.
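As an illustrative sketch (not NGINX's actual implementation; the server names and connection counts below are hypothetical), least-connections selection boils down to:

```python
def pick_least_conn(active_conns):
    """Return the server with the fewest active connections."""
    return min(active_conns, key=active_conns.get)

# Hypothetical snapshot of active connections per upstream server:
conns = {
    "backend1.example.com:12345": 7,
    "backend2.example.com:12345": 2,
    "backend3.example.com:12346": 5,
}
print(pick_least_conn(conns))  # backend2.example.com:12345
```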

Least Time: This method is for NGINX Plus only. NGINX selects the server with the lowest average latency and the least number of active connections. How the latency is measured depends on the parameter:

connect - Time to connect to the upstream server

first_byte - Time to receive the first byte of data

last_byte - Time to receive the full response from the server.

upstream stream_backend {  

    least_time first_byte;  

    server backend1.example.com:12345;  

    server backend2.example.com:12345;  

    server backend3.example.com:12346;  

}  

Hash: NGINX selects the server based on a user-defined key, for example, the source IP address ($remote_addr):

upstream stream_backend {  

    hash $remote_addr;  

    server backend1.example.com:12345;  

    server backend2.example.com:12345;  

    server backend3.example.com:12346;  

}  
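The idea behind the hash method can be sketched as follows. This is an illustrative modulo hash, not the hash function NGINX itself uses (and hash ... consistent uses ketama consistent hashing instead); the hostnames are hypothetical:

```python
import zlib

SERVERS = [
    "backend1.example.com:12345",
    "backend2.example.com:12345",
    "backend3.example.com:12346",
]

def pick_by_hash(key, servers):
    # Plain modulo hashing over a CRC32 of the key; NGINX's own hash
    # function differs, this only illustrates the principle.
    return servers[zlib.crc32(key.encode()) % len(servers)]

# The same client IP ($remote_addr) always maps to the same server:
assert pick_by_hash("203.0.113.7", SERVERS) == pick_by_hash("203.0.113.7", SERVERS)
```

The practical effect is session affinity: all connections from one client IP land on one backend, at the cost of uneven load if client IPs are skewed.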

Random: Each connection is passed to a randomly selected server. If the two parameter is specified, NGINX first randomly selects two servers taking into account server weights, and then chooses one of them using the specified method:

least_conn - The least number of active connections

least_time=first_byte - The least average time to receive the first byte of data from the server (NGINX Plus)

least_time=last_byte - The least average time to receive the full response from the server (NGINX Plus)

upstream stream_backend {  

    random two least_time=last_byte;  

    server backend1.example.com:12345;  

    server backend2.example.com:12345;  

    server backend3.example.com:12346;  

    server backend4.example.com:12346;  

}  
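The two parameter implements the "power of two choices" strategy, which can be sketched as follows (illustrative only; NGINX additionally weighs the random sampling by server weight, which is omitted here):

```python
import random

def pick_random_two(active_conns):
    # Pick two distinct servers at random, then keep the one with fewer
    # active connections ("power of two choices").
    a, b = random.sample(list(active_conns), 2)
    return a if active_conns[a] <= active_conns[b] else b

# Hypothetical snapshot of active connections per upstream server:
conns = {
    "backend1.example.com:12345": 4,
    "backend2.example.com:12345": 1,
    "backend3.example.com:12346": 9,
}
print(pick_random_two(conns))  # never the busiest server
```

Sampling two servers and keeping the better one avoids the coordination cost of a global least-connections scan while still steering traffic away from overloaded backends.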

4. Optionally, for each upstream server specify server-specific parameters, including the maximum number of connections, server weight, and so on:

upstream stream_backend {  

    hash   $remote_addr consistent;  

    server backend1.example.com:12345 weight=5;  

    server backend2.example.com:12345;  

    server backend3.example.com:12346 max_conns=3;  

}  

upstream dns_servers {  

    least_conn;  

    server 192.168.136.130:53;  

    server 192.168.136.131:53;  

    # ...  

}  
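The weight parameter biases selection toward heavier servers. For weighted round robin, NGINX is known to use a smooth weighted scheme; the following is a sketch of that idea, not NGINX's code, with hypothetical hostnames:

```python
def smooth_wrr(weights, n):
    """Smooth weighted round robin: each turn, add every server's weight
    to its running score, pick the highest score, then subtract the total
    weight from the winner. Over sum(weights) turns, each server is
    chosen exactly `weight` times, without long bursts to one server."""
    current = {server: 0 for server in weights}
    total = sum(weights.values())
    order = []
    for _ in range(n):
        for server, weight in weights.items():
            current[server] += weight
        winner = max(current, key=current.get)
        current[winner] -= total
        order.append(winner)
    return order

# weight=5 for backend1 vs the default weight=1 for backend2:
print(smooth_wrr({"backend1.example.com:12345": 5,
                  "backend2.example.com:12345": 1}, 6))
```

Over every six connections, backend1 receives five and backend2 receives one, matching the 5:1 weights.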

Example of TCP and UDP Load Balancing Configuration

Let's look at an example TCP and UDP load balancing configuration:

stream {  

    upstream stream_backend {  

        least_conn;  

        server backend1.example.com:12345 weight=5;  

        server backend2.example.com:12345 max_fails=2 fail_timeout=30s;  

        server backend3.example.com:12345 max_conns=3;  

    }  

      

    upstream dns_servers {  

        least_conn;  

        server 192.168.136.130:53;  

        server 192.168.136.131:53;  

        server 192.168.136.132:53;  

    }  

      

    server {  

        listen        12345;  

        proxy_pass    stream_backend;  

        proxy_timeout 3s;  

        proxy_connect_timeout 1s;  

    }  

      

    server {  

        listen     53 udp;  

        proxy_pass dns_servers;  

    }  

      

    server {  

        listen     12346;  

        proxy_pass backend4.example.com:12346;  

    }  

}  

In the above example, all TCP and UDP proxy-related functionality is configured inside the stream block.

There are two named upstream blocks, each containing three servers that host the same content. In the server directive for each server, the server name or IP address is followed by the required port number. Connections are distributed among the servers according to the least-connections load balancing method: each connection goes to the server with the fewest active connections.
