Date:	Sun, 16 Sep 2007 12:14:34 -0400
From:	jamal <hadi@...erus.ca>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	Patrick McHardy <kaber@...sh.net>,
	Eric Dumazet <dada1@...mosbay.com>,
	Evgeniy Polyakov <johnpol@....mipt.ru>
Subject: [RFC][NET_SCHED] explicit hold dev tx lock


While trying to port my batching changes to this morning's net-2.6.24,
I realized this is something I had wanted to probe people on.

Challenge:
With N CPUs all sending full-throttle traffic funneled to the same
ethernet device, the device's queue lock is contended by all N CPUs
constantly, while the TX lock is only contended by at most 2 CPUs.
In the current mode of operation, after all the work of entering the
dequeue region, we may end up aborting the path if we are unable to get
the TX lock, and go back to contend for the queue lock. As N goes up,
this gets worse.
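For context, the contended path looks roughly like this (my simplified
reading of qdisc_restart() in net/sched/sch_generic.c around this era;
kernel-only code, names approximate, not a standalone program):

```c
/* Sketch of the current dequeue path.  Caller holds dev->queue_lock,
 * the lock all N CPUs fight over. */
static inline int qdisc_restart_sketch(struct net_device *dev)
{
	struct Qdisc *q = dev->qdisc;
	struct sk_buff *skb;

	if ((skb = dev_dequeue_skb(dev, q)) == NULL)
		return 0;

	/* Try the driver TX lock.  On failure, all the dequeue work is
	 * thrown away: the skb is requeued and we go back to contend
	 * for queue_lock again -- the abort this RFC wants to avoid. */
	if (!(dev->features & NETIF_F_LLTX) && !netif_tx_trylock(dev))
		return handle_dev_cpu_collision(skb, dev, q);

	/* ... hand skb to the driver ... */
	return 0;
}
```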

Testing:
I did some testing on a 4-CPU machine (2x dual core) with no IRQ
binding. I ran about 10 runs of 30M packets each from the stack with a
UDP app I wrote, intended to keep all 4 CPUs busy - and to my surprise
I found that we bail out less than 0.1% of the time. I may need a
better test case.

Changes:
I made the changes to the code path as defined in the attached patch
and noticed a slight increase (2-3%) in performance with both e1000 and
tg3; that was a relief, because I thought the spin_lock_irq (needed
because some drivers grab the TX lock in interrupts) might have
negative effects. The fact that it didn't reduce performance was a good
thing.
Note: this is the highest-end machine I've ever laid hands on, so the
result may be misleading.
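My reading of the proposed change, sketched against the same dequeue
path (again kernel-only, simplified, and not the attached patch
verbatim):

```c
	/* Hypothetical sketch: take the driver TX lock unconditionally
	 * instead of trylock-and-abort.  spin_lock_irq() is used
	 * because some drivers acquire the TX lock from their
	 * interrupt handlers. */
	if (!(dev->features & NETIF_F_LLTX)) {
		spin_lock_irq(&dev->_xmit_lock);
		dev->xmit_lock_owner = smp_processor_id();
	}

	/* No collision/requeue path any more: we always reach the
	 * driver with the skb we just dequeued. */
	ret = dev_hard_start_xmit(skb, dev);
```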
 
So - what side effects do people see in doing this? If none, I will
clean it up and submit.

cheers,
jamal

View attachment "nsqr1" of type "text/x-patch" (2162 bytes)
