Some time ago I needed to host some large files for public download. At first I assumed Nginx would perform well without much configuration, but in reality users complained about slow and interrupted downloads, which was quite annoying.
After digging through the Nginx docs, I found some changes that fix these problems and deliver high throughput. Here are the tweaks I made to nginx.conf:
- Turn off sendfile. The sendfile call is known to suffer throughput degradation under high load, so disabling it helps sustain higher throughput when the server is busy. Also, when serving large files with sendfile, there is no way to control readahead.
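For reference, turning it off is a one-line change in the http (or server/location) context:

```nginx
http {
    # Serve files through the normal read()/write() path instead of
    # the sendfile() syscall, avoiding the degradation under load
    # described above.
    sendfile off;
}
```

If you would rather keep sendfile, the nginx docs also describe a `sendfile_max_chunk` directive that caps how much data a single sendfile() call may transfer, which is another way to keep one fast connection from monopolizing a worker.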
- Enable TCP nopush. tcp_nopush fills each TCP packet to its maximum size before sending it, which can help increase throughput when you're serving large files.
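The directive itself is a one-liner. One caveat worth knowing: according to the nginx documentation, tcp_nopush only takes effect when sendfile is in use, so it may be a no-op if you disabled sendfile as described above:

```nginx
http {
    # Uses TCP_CORK on Linux (TCP_NOPUSH on FreeBSD) so that nginx
    # sends full-sized packets rather than many small ones.
    tcp_nopush on;
}
```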
- Use Nginx's directio to load files. directio can improve performance by skipping a number of steps that happen in the kernel when reading files, thus speeding up throughput.
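As a sketch of how it is used: directio takes a size threshold, and files larger than that are read with direct I/O (O_DIRECT on Linux), bypassing the page cache. The `/downloads/` location below is just an illustrative path:

```nginx
location /downloads/ {
    # Files larger than 4 MB are read with O_DIRECT, bypassing the
    # OS page cache; smaller files use the normal read path.
    directio 4m;
    # Alignment for direct I/O; 512 is the default. The nginx docs
    # note that XFS on Linux usually needs 4k here.
    directio_alignment 512;
}
```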
- Enable libaio for optimal performance. libaio allows asynchronous I/O to be done in the kernel, which results in faster reads and writes. However, it requires libaio to be installed and Nginx to be recompiled in order to support it. I used the following steps to recompile Nginx with aio support:
```shell
# Install libaio on RHEL/CentOS
yum install libaio libaio-devel -y

wget http://nginx.org/download/nginx-1.9.4.zip
unzip -q nginx-1.9.4.zip
cd nginx-1.9.4

# Configure Nginx according to your needs, but it should also include
# --with-file-aio in order to use libaio
./configure --with-file-aio
make
```
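Worth noting, in case recompiling against libaio isn't an option for you: nginx 1.7.11 and later can instead offload blocking reads to a thread pool (when built with `--with-threads`), which achieves asynchronous file serving without requiring direct I/O. A minimal sketch:

```nginx
location /downloads/ {
    # Alternative to libaio-based aio: hand reads off to a worker
    # thread pool. Requires nginx built with --with-threads.
    aio threads;
}
```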
The relevant part of the complete nginx.conf should look like this:

```nginx
http {
    sendfile   off;
    tcp_nopush on;
    aio        on;
    directio   512;  # required if you're using Linux and use aio
}
```
There are also some lower-level tweaks, like mounting your disks with the noatime flag and using ext4/XFS when serving files.
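The noatime tweak is a mount option in /etc/fstab; the device and mount point below are placeholders for your own setup:

```
# /etc/fstab: mount the data disk without access-time updates, so
# every download doesn't trigger a metadata write for the file read.
/dev/sdb1  /var/www/downloads  xfs  defaults,noatime  0 0
```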