I have a project in Laravel 5.2.
The project is mainly based on webhooks: other websites call our webhook endpoint, and I add each incoming webhook to a queue. Roughly 10,000 jobs an hour are being added to the queue.
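For context, this is roughly how each incoming webhook gets queued (the controller action and job name here are illustrative, not the exact code):

// Webhook controller action: queue the payload and return immediately,
// so the calling website isn't kept waiting while we process it.
public function receive(Request $request)
{
    $this->dispatch(new ProcessWebhook($request->all()));

    return response('', 202);
}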
I have numprocs set to 50 in my Supervisor config.
Can you please suggest how I can process the jobs in the queue much faster, so that I don't have to wait hours for a job to be processed?
Here is a screenshot of the current status of the queue:
Any help is highly appreciated.
Thank you
Supervisor Config:
[program:laravel_queue]
command=php /var/www/html/nivesh/artisan --env=production --timeout=3600 queue:listen --queue=important,urgent,high,default
autostart=true
autorestart=true
process_name=%(program_name)s_%(process_num)s
numprocs=55
stderr_logfile=/var/log/laraqueue.err.log
stdout_logfile=/var/log/laraqueue.out.log
priority=999
numprocs_start=55
startsecs=0
redirect_stderr=true
Check that you don't have remote calls to external URLs; those requests can take a long time.
Also add timing hints in various places to see which operations take long.
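For example, a quick sketch of a timing hint inside a job's handle() method (the log message and the slow step are illustrative):

$start = microtime(true);
$this->doTheSlowPart(); // an external HTTP call, a big query, etc.
\Log::info('slow part took ' . round(microtime(true) - $start, 3) . 's');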
Try to break the work into multiple smaller jobs across the queue: don't run one long task, make a chain of jobs.
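A minimal sketch of such a chain in Laravel 5.2 (the job names and the fetchPayload() helper are hypothetical): each job does one small piece of work and then dispatches the next step, so no single queue entry runs for long.

// Inside the first job's handle() method:
public function handle()
{
    $payload = $this->fetchPayload(); // step 1 only (hypothetical helper)

    // Hand the heavy part to the next job instead of doing it here.
    dispatch(new TransformPayload($payload));
}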
Queue speed is dramatically impacted by how often Laravel boots: with queue:listen, the whole framework is reloaded for every single job.
You should run the queue with the --daemon flag to avoid reloading the framework for every queue entry:
[program:laravel_queue]
command=php /var/www/html/yopify/artisan --env=production --timeout=3600 queue:work --queue=important,urgent,high,default --daemon
autostart=true
autorestart=true
process_name=%(program_name)s_%(process_num)s
numprocs=55
stderr_logfile=/var/log/laraqueue.err.log
stdout_logfile=/var/log/laraqueue.out.log
priority=999
numprocs_start=55
startsecs=0
redirect_stderr=true
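One caveat with --daemon: since the framework now stays in memory, workers won't pick up code changes until they are restarted. After each deploy, signal them to restart gracefully once their current job finishes:

php artisan queue:restart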
It's also possible to boil down your Supervisor job configuration file, as some of the parameters you use are already the default values:
[program:laravel_queue]
command=php /var/www/html/yopify/artisan --env=production --timeout=3600 queue:work --queue=important,urgent,high,default --daemon
process_name=%(program_name)s_%(process_num)s
numprocs=55
stderr_logfile=/var/log/laraqueue.err.log
stdout_logfile=/var/log/laraqueue.out.log
numprocs_start=55
startsecs=0
redirect_stderr=true
I would also recommend using the user parameter, as your job is currently running as the root user. It is probably unnecessary to run your queue with such high privileges, and I would consider it a security risk. I'd suggest setting it to the user who owns the files in /var/www/html/yopify/.
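For example (www-data is an assumption here; use whichever account actually owns the files):

user=www-data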
We had a similar issue a few months back. Here is what we did:

* Getting rid of the logging: this reduced the time spent writing logs and sped up the execution of jobs in the queue.
* Avoiding external calls: an external call takes time to fetch data, and that time also grows with the size of the data fetched. Instead, try storing the data internally.
* Using sub-queues: use a sub-queue to perform sub-tasks (see the sketch below).
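A minimal sketch of the sub-queue idea (the 'subtasks' queue name and the ResizeImages job are hypothetical): push sub-tasks onto their own named queue so the main queues stay fast.

// Inside the main job's handle(): offload a sub-task to a dedicated queue.
dispatch((new ResizeImages($payload))->onQueue('subtasks'));

Remember to add the sub-queue's name to the workers' --queue=... list, or run separate workers for it.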
My suggestion: try switching to Redis, since it's easy to track job status there and you can run some quick queries against the Redis server with redis-cli.
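For example, assuming Laravel's default Redis queue key layout (each queue is a list named queues:<name>):

redis-cli LLEN queues:important        # number of pending jobs on the "important" queue
redis-cli LRANGE queues:important 0 0  # peek at the first pending job's payload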