NGINX + LUA = Benchmark. Who has experience?
-
The question is somewhat rhetorical ...
There is this stack:
NGINX + Lua
Lua checks for a key in Redis: if the key exists, we serve the static file ... otherwise - 404 ...
Everything is fine - everything works :)
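For context, a minimal sketch of what such a check might look like in a `content_by_lua_file` script, using the lua-resty-redis client (the file name, key scheme, and capture index here are assumptions for illustration, not my actual script):

```lua
-- Hypothetical sketch of /etc/nginx/lua/img_data.lua
-- (requires the lua-resty-redis library; names and key layout are assumed)
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(100) -- 100 ms connect/read timeout

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- look up the key; ngx.var[4] is the numbered capture from the location regex
local res, err = red:get(ngx.var[4])
if not res or res == ngx.null then
    return ngx.exit(ngx.HTTP_NOT_FOUND)
end

-- return the connection to the pool, then hand off to the static location
red:set_keepalive(10000, 100)
return ngx.exec("@imagedata")
```

`ngx.exec` does an internal redirect, so the named location then serves the file through nginx's normal static-file path (sendfile, open_file_cache, etc.).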
The question is:
Before bolting the Lua check onto NGINX, a load test with wrk showed about 370 requests/sec and about 10.5 MB/sec of traffic ...
After adding the check, it shows roughly the same: about 360 requests/sec and 10.3 MB/sec ...
I.e. my Lua script has barely changed anything performance-wise ...
The static file served in the test is about 30 KB ...
My internet speed, measured via speedtest.net, is about 90 Mbps ...
The server is in Germany (Hetzner). I ran the test from my computer in Russia ... with 8 threads and 200 connections for 15 seconds ...
wrk -t8 -c200 -d15s --latency http://example.com/file.jpg
Is this normal throughput?
What can be improved? Given that without any scripts the benchmark is almost the same (I'd even chalk the difference up to margin of error) ...
Below are the main parameters of the NGINX config:
nginx.conf :
user www-data;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    keepalive_requests 100;
    client_body_timeout 10;
    client_header_timeout 15;
    reset_timedout_connection on;
    send_timeout 2;
    types_hash_max_size 4096;
    server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    #access_log /var/log/nginx/access.log;
    #error_log /var/log/nginx/error.log;
    access_log off;
    error_log /var/log/nginx/error.log crit;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Here is the "host" part with the LUA script:
#.. standard boilerplate here ...
location ~* ^ХАЛИ_ГАЛИ_ТЫРЫМ_ПЫРЫМ$ {
    #set $images_dir "PATH_TO_FOLDER";
    # $4 comes from the location regex capture ...
    # All the Lua code lives in a separate file
    content_by_lua_file /etc/nginx/lua/img_data.lua;
    #lua_code_cache off; #dev
}

# This location is invoked from Lua on "success"; otherwise Lua returns 404
# directly via ngx.exit(ngx.HTTP_NOT_FOUND)
location @imagedata {
    try_files /$images_dir/$4 =404;
}

#... nothing else of note further down ...
Hardware:
Intel Xeon E3-1275v5
2x HDD SATA (RAID 1)
4x RAM 16384 MB DDR4 ECC
OS:
Debian 9
It seems to me that it can be better ... :)
UPD:
Right now, for example, I grabbed this picture from a Yandex server (Market):
https://avatars.mds.yandex.net/get-mpic/1883514/im...
I ran a similar test and got about 200 requests per second ... and likewise about 10 MB/s of traffic, with this picture weighing 35 KB ...
So performance-wise everything seems to check out ... if you take Yandex as the reference (though that's not certain :)) ...
Or I'm simply saturating the data transmission channel ...

Anonymous, Aug 6, 2019 -
We managed to increase performance by about 5% (a pittance, of course, but it may be unrealistic to squeeze more out of the current stack) ...
In nginx.conf add to the events section:
accept_mutex off;
In the virtual host config:
aio threads;
In this case, nginx must be compiled with the option:
--with-file-aio
More details here:
https://habr.com/en/post/260669/
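Putting both tweaks together, the relevant fragments might look like this (a sketch based on the config above; the server name and location path are placeholders, and note that `aio threads` is only available when nginx is also built with --with-threads, while --with-file-aio covers the plain `aio on` variant):

```nginx
# nginx.conf -- events section with the accept mutex disabled
events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
    accept_mutex off;
}

# virtual host -- read static files through the thread pool
server {
    listen 80;
    server_name example.com;   # placeholder

    location /images/ {        # placeholder path
        aio threads;
        sendfile on;
        tcp_nopush on;
    }
}
```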
But this won't be appropriate in every case ... read, google, dig into it ...
After each config change I ran the test several times ...
BUT, in any case, I'm still waiting for people who have run similar experiments; I'd be grateful for any other tips ...
UPD:
Compared to previous tests, it now steadily handles about 100 more requests in total over the 15 seconds, and 15-20 more per second ...
Traffic throughput is up by about 1 MB ...
UPD2:
A text file containing "hello" is served at about 2,800 requests per second ...
In general, as people wrote, it's a network issue ... for real tests the network must be excluded ...

Annabelle Hartman -
Increase the number of test threads; it's obvious the main bottleneck is network latency.
A local nginx, even on a completely dead machine, serves gigabytes of "hello world" static content and thousands of requests per second.
To see for yourself, try running the tests locally on the server.

Jackson Bird -
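A local run like that could reuse the same wrk invocation from the question, pointed at the loopback interface (this assumes wrk is installed on the server itself; the Host header and file path are placeholders for your own vhost):

```shell
# Run on the server itself, so the WAN is excluded entirely;
# the Host header routes the request to the right virtual host.
wrk -t8 -c200 -d15s --latency -H "Host: example.com" http://127.0.0.1/file.jpg
```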
You are not testing the performance of the code, but the performance of the network from your computer to the server in Germany. If you want real performance tests, you need to take the network out of the equation. The stack you describe, excluding the network, should perform only slightly worse than plain nginx, and will most likely do tens, or even hundreds, of thousands of requests per second.

Anonymous