Date:	Tue, 20 Jul 2010 09:15:24 +0800
From:	Junchang Wang <junchangwang@...il.com>
To:	Rick Jones <rick.jones2@...com>
Cc:	Ben Hutchings <bhutchings@...arflare.com>, romieu@...zoreil.com,
	netdev@...r.kernel.org
Subject: Re: Question about way that NICs deliver packets to the kernel

On Fri, Jul 16, 2010 at 10:58:46AM -0700, Rick Jones wrote:
>>Hi Ben,
>>I added options -c -C to netperf's command line. Result is as follows:
>>              scheme 1    scheme 2    Imp.
>>Throughput:   683M        718M        5%
>>CPU usage:    47.8%       45.6%
>>
>>That really surprised me because the "top" command showed the CPU usage
>>was fluctuating between 0.5% and 1.5% rather than between 45% and 50%.
>

Hi Rick,
Very sorry for my late reply; I just recovered from my final exams. :)

>Can you tell us a bit more about the system, and which version of
>netperf you are using?  

The target machine is a Pentium Dual-Core E2200 desktop with an r8169 
gigabit NIC. (I couldn't find a better server with an old PCI slot.)

The other machine is a Nehalem-based system with an Intel 82576 NIC.

The target machine runs netserver and the Nehalem machine runs netperf.
The version of netperf is 2.4.5.

>Any chance that the CPU utilization you were
>looking at in top was just that being charged to netperf the process?

What I see on the target machine is as follows:

top - 21:37:12 up 21 min,  6 users,  load average: 0.43, 0.28, 0.19
Tasks: 152 total,   2 running, 149 sleeping,   0 stopped,   1 zombie
Cpu(s):  2.3%us,  1.5%sy,  0.1%ni, 89.5%id,  2.7%wa,  0.0%hi,  3.9%si,  0.0%
Mem:   2074064k total,   690200k used,  1383864k free,    39372k buffers
Swap:  2096476k total,        0k used,  2096476k free,   435044k cached

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND    
3916 root      20   0  2228  584  296 R 84.6  0.0   0:07.12 netserver    

It shows the CPU usage of the target machine is around 10% (89.5% idle).

The Nehalem machine's netperf report is as follows:

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.1 (192.168.2.1) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

87380  16384  16384    10.05       679.79   1.63     48.27    1.571   11.634 

It shows the CPU usage of the target machine is 48.27%.

>"Network processing" does not often get charged to the responsible
>process, so netperf reports system-wide CPU utilization on the
>assumption it is the only thing causing the CPUs to be utilized.

My understanding of your comments is:
1) Except when running in ksoftirqd, network processing cannot be correctly
   accounted, because it runs in interrupt context, which is not charged to
   any particular process. So "top" misses a lot of CPU usage in
   high-interrupt-rate network situations.
2) As mentioned in netperf's manual, netperf uses /proc/stat on Linux
   to retrieve the time spent in idle mode. In other words, it accounts for
   CPU time spent in all other modes, including hardware interrupt, software
   interrupt, etc., making its CPU usage figure more accurate in
   high-interrupt situations.
3) Since most processes on the target machine are sleeping, the CPU usage
   of network processing is actually very close to 48.27%. Right?
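If it helps, point 2 can be sketched in a few lines of Python. This is only
my reading of the approach, not netperf's actual code: sample the aggregate
"cpu" line of /proc/stat before and after the test, and treat every tick
that is not idle as utilization. The field layout follows proc(5); the two
sample lines below are made up for illustration.

```python
def parse_cpu_line(line):
    """Return (idle_ticks, total_ticks) from a /proc/stat 'cpu' line."""
    fields = [int(x) for x in line.split()[1:]]
    # Fields per proc(5): user, nice, system, idle, iowait, irq, softirq, steal
    idle = fields[3]          # only pure idle time counts as "not busy"
    return idle, sum(fields)

def cpu_utilization(before, after):
    """System-wide CPU utilization (%) between two /proc/stat samples."""
    idle0, total0 = parse_cpu_line(before)
    idle1, total1 = parse_cpu_line(after)
    busy = (total1 - total0) - (idle1 - idle0)
    return 100.0 * busy / (total1 - total0)

# Invented samples: 1000 ticks elapsed, 520 of them idle -> 48% busy,
# regardless of which process (or interrupt context) consumed the rest.
before = "cpu  100 0 200 5000 50 10 140 0"
after  = "cpu  150 0 300 5520 80 40 410 0"
print("%.2f%%" % cpu_utilization(before, after))   # -> 48.00%
```

Because hardirq and softirq ticks raise the non-idle total, this style of
measurement catches the interrupt-context work that per-process %CPU in
"top" does not attribute to anyone.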

Correct me if any of these are incorrect. Thanks.

--Junchang
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
