Date:	Tue, 07 Aug 2007 09:19:52 -0400
From:	jamal <hadi@...erus.ca>
To:	David Miller <davem@...emloft.net>
Cc:	netdev@...r.kernel.org, Robert.Olsson@...a.slu.se,
	shemminger@...ux-foundation.org, kaber@...sh.net
Subject: Re: fscked clock sources revisited

On Mon, 2007-07-30 at 22:14 -0400, jamal wrote:

> I am going to test with hpet when i get the chance

Couldn't figure out how to turn hpet on/off, so I didn't test it.
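
(Presumably the sysfs clocksource override is the way to do it, assuming
that interface is present on this kernel - untested here:

  # cat /sys/devices/system/clocksource/clocksource0/available_clocksource
  # echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource

At boot time, clocksource=hpet - or nohpet on x86-64 to mask it - should
have a similar effect.)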

> and perhaps turn off all the other sources if nothing good comes out; i
> need my numbers ;->

Here are some numbers that make the mystery even more interesting. This
is with kernel 2.6.22-rc4. Repeating with kernel 2.6.23-rc1 didn't show
anything different. I went back to test on 2.6.22-rc4 because it is the
base for my batching patches - and since those drove me to this test, I
wanted something that reduces variables when comparing with batching.

I picked UDP for this test because I can select different packet sizes.
I used iperf. The sender is a dual Opteron with tg3; the receiver is a
dual Xeon.

The default HZ is 250. Each packet size was run 3 times with each
clock source. The experiment made sure that the receiver wasn't a
bottleneck (increased socket buffer sizes, etc.).
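
For reference, a UDP run of this kind with iperf looks roughly like the
following (flag values here are illustrative, not the exact ones used;
-l sets the UDP payload size, -w the socket buffer):

  receiver# iperf -s -u -l 64 -w 512k
  sender#   iperf -c <receiver> -u -b 1000M -l 64 -w 512k -t 60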

Packet (bytes) | jiffies (1/250) |      tsc      |    acpi_pm
---------------|-----------------|---------------|---------------
      64       |  141, 145, 142  | 131, 136, 130 | 103, 104, 110
     128       |  256, 256, 256  | 274, 260, 269 | 216, 206, 220
     512       |  513, 513, 513  | 886, 886, 886 | 828, 814, 806
    1280       |  684, 684, 684  | 951, 951, 951 | 951, 951, 951

So I was wrong to declare jiffies as being good. The last batch of
experiments was based on 64-byte UDP only. Clearly, as packet size goes
up, the results get worse with jiffies.
At this point I decided to recompile the kernel with HZ=1000, and the
observations show that the jiffies results improve.
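(The HZ change is just the standard config knob, which on x86 amounts
to something like

  $ grep '^CONFIG_HZ' .config
  CONFIG_HZ_1000=y
  CONFIG_HZ=1000

followed by a rebuild.)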

Packet (bytes) | jiffies (1/1000) |      tsc      |    acpi_pm
---------------|------------------|---------------|---------------
      64       |  145, 135, 135   | 131, 137, 139 | 110, 110, 108
     128       |  257, 257, 257   | 270, 264, 250 | 218, 216, 217
     512       |  819, 776, 819   | 886, 886, 886 | 841, 824, 846
    1280       |  855, 855, 855   | 951, 950, 951 | 951, 951, 951

Still not as good as the other two at large packet sizes.
For this machine the ideal clock source would be jiffies with
HZ=1000 up to about 100 bytes, then switching to tsc. Of course I could
pick tsc, but people have dissed it so far - I probably didn't hit the
condition where it goes into deep slumber.

Any insights? This makes it hard to quantify the batching experimental
improvements, as I feel it could be architecture-dependent or, worse,
machine-dependent.

cheers,
jamal 

