I have a WebSocket service. Strangely, it fails with the error "too many open files", even though I have already configured the system limits:
/etc/security/limits.conf
* soft nofile 65000
* hard nofile 65000
/etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000
ulimit -n
# output: 65000
So I think my system configuration is correct.
My service is managed by supervisor; is it possible that supervisor imposes its own limit?
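One thing worth checking here: supervisord has a minfds option in its [supervisord] section, which defaults to 1024. At startup, supervisord tries to raise its own RLIMIT_NOFILE to at least that value, and child processes inherit the parent's limit. A minimal sketch, assuming the Debian/Ubuntu package layout with the config at /etc/supervisor/supervisord.conf:

; /etc/supervisor/supervisord.conf
[supervisord]
minfds=65000      ; supervisord raises its own open-file limit to at least this value
minprocs=200      ; same idea for the process limit (this is the default)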
Checking a process started by supervisor:
cat /proc/815/limits
Max open files            1024                 4096                 files
Checking a process started manually:
cat /proc/900/limits
Max open files            65000                65000                files
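For reference, the same check can be run against supervisord and everything it spawned in one go (a sketch; it assumes a single supervisord instance that pgrep can find by name):

# print the nofile limit of supervisord and of every child it spawned
parent=$(pgrep -x supervisord)
for pid in $parent $(pgrep -P "$parent"); do
    echo "== PID $pid =="
    grep "Max open files" "/proc/$pid/limits"
done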
So the trigger is that the service is managed by supervisor: if I restart supervisor and then restart the child process, "max open files" is correct (65000), but it is wrong (1024) when supervisor is started automatically after a system reboot.
Maybe supervisor starts too early in the boot sequence, so the system configuration has not been applied yet by the time supervisor starts?
edit:
system: Ubuntu 12.04, 64-bit
It's not a supervisor problem: every process that is auto-started after a system reboot ignores the system configuration (max open files = 1024), but after a manual restart the limit is fine.
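That pattern suggests boot-started daemons are simply inheriting init's default limits rather than reading limits.conf. A quick way to see the defaults they inherit is to look at PID 1 itself (the 1024/4096 values shown are just what this system reports):

grep "Max open files" /proc/1/limits
# Max open files            1024                 4096                 files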
update
Maybe the problem is that /etc/security/limits.conf is applied by pam_limits only for login sessions, so anything started by init/upstart at boot never picks it up. Now the question is how to set a global nofile limit, because I don't want to add a nofile limit to every upstart script.
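As far as I know, upstart on 12.04 has no global setting for these limits; the per-job mechanism is the limit stanza. But since all of the affected services here are children of supervisor, a workaround is to raise the limit only where supervisor itself is started and let every child inherit it. A sketch, assuming supervisor is launched from an upstart job (the file name and paths below are assumptions; the Debian package may use /etc/init.d/supervisor instead, in which case adding "ulimit -n 65000" near the top of that script has the same effect):

# /etc/init/supervisor.conf  -- hypothetical upstart job for supervisord
description "supervisord"
start on runlevel [2345]
stop on runlevel [!2345]

# raise soft and hard RLIMIT_NOFILE for supervisord and everything it spawns
limit nofile 65000 65000

exec /usr/bin/supervisord --nodaemon --configuration /etc/supervisor/supervisord.conf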