Strange load numbers with lots of threads
When using LW5-based binaries, I get absurd load numbers on a
dual dual-core Opteron machine that is really mostly idle (I
believe the %id figure below is accurate):
top - 10:21:40 up 172 days, 8:33, 4 users, load average: 18.65, 20.53, 21.69
Tasks: 127 total, 1 running, 126 sleeping, 0 stopped, 0 zombie
Cpu(s): 6.0% us, 10.9% sy, 0.0% ni, 82.9% id, 0.0% wa, 0.0% hi, 0.2% si
Mem: 8251620k total, 5268472k used, 2983148k free, 206528k buffers
Swap: 2104504k total, 0k used, 2104504k free, 3503824k cached
  PID USER   PR NI  VIRT  RES SHR S %CPU %MEM    TIME+ COMMAND
10817 nftpd  15  0  969m 144m 908 S 25.3  1.8 16:42.91 nftpd
10829 nftpd  15  0  915m 122m 908 S 21.0  1.5 21:06.33 nftpd
10819 nftpd  15  0  867m 131m 908 S 15.6  1.6 12:18.42 nftpd
10825 nftpd  15  0  392m 166m 916 S  2.0  2.1  3:16.89 nftpd
26138 nftpd  15  0  615m 470m 904 S  1.7  5.8  2:03.33 nftpd
This is Linux 2.6.15.6 on Debian 3.1.
Does anyone on this list know whether this is an artifact of
running lots of pthreads (these five processes have 327 threads
between them) that is hard to avoid, or whether it is a known
kernel bug?
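For reference, this is how I arrived at the thread count: each thread shows up in the per-process Threads: field of /proc/<pid>/status, so a small script can sum them (the nftpd process name matches the top output above; substitute your own daemon name as needed):

```shell
#!/bin/sh
# Sum the thread counts of all processes named "nftpd" by reading
# the "Threads:" field from /proc/<pid>/status.
total=0
for pid in $(pgrep nftpd); do
  n=$(awk '/^Threads:/ {print $2}' "/proc/$pid/status")
  echo "pid $pid: $n threads"
  total=$((total + n))
done
echo "total: $total threads"
```

On a machine without nftpd running, the loop simply does nothing and the script reports a total of 0.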
(The problem is mostly that the high load numbers make our
sysadmin tasks a little harder: our nice little screen showing
xload for all of our servers doesn't look so nice anymore when
this particular machine appears to be close to choking ;))
--
(espen)