Date:	Tue, 16 Sep 2008 19:31:31 -0700
From:	"Alexander Duyck" <alexander.duyck@...il.com>
To:	"Jarek Poplawski" <jarkao2@...il.com>
Cc:	"Duyck, Alexander H" <alexander.h.duyck@...el.com>,
	"Herbert Xu" <herbert@...dor.apana.org.au>,
	"David Miller" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"kaber@...sh.net" <kaber@...sh.net>
Subject: Re: [PATCH take 2] pkt_sched: Fix qdisc_watchdog() vs. dev_deactivate() race

On Tue, Sep 16, 2008 at 3:47 AM, Jarek Poplawski <jarkao2@...il.com> wrote:
> On Mon, Sep 15, 2008 at 04:44:08PM -0700, Duyck, Alexander H wrote:
> ...
>> The only thing I really prefer about my solution as opposed to the solution
>> Dave implemented was that it would mean only one dequeue instead of a peek
>> followed by a dequeue.  I figure the important thing is to push the
>> discovery that we are stopped to as early as possible in the process.
>>
>> It will probably be a few days before I have a patch with my approach ready.
>> I didn't realize how complex it would be to resolve this issue for CBQ, HTB,
>> HFSC, etc.  Also, it is starting to look like I will probably need to implement
>> another function to support this, since it seems the dequeue operations would
>> need to be split into a multiqueue-safe version and a standard version that
>> supports workarounds like those found in qdisc_peek_len() for HFSC.
>
> Actually, looking at HFSC now, I'm starting to doubt we need to
> complicate these things so much. If HFSC is OK with its simple
> hfsc_requeue(), I doubt other qdiscs need much more, and we should
> reconsider David's idea of doing the same on top, in dev_requeue_skb().
> Qdiscs like multiq would probably never use this, and the above-mentioned
> (non-mq-optimized) qdiscs could be used with multiq if needed. Then, it
> seems, it would be enough to improve multiq as a "leaf", adding the
> dedicated operations and/or flags there.
>
> Thanks,
> Jarek P.
>

I am just not convinced that the requeue approach will work very well.  I
started testing my patch today, and the CPU savings over the current
configuration were pretty significant when using just the standard prio qdisc
on a multiqueue device.
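
For context, the behavior I'm comparing against looks roughly like the sketch
below.  This is only a toy userspace model with made-up names (model_*,
txq_stopped), not the actual sch_generic.c code:

/*
 * Toy model of the current dequeue/requeue pattern, for illustration only.
 */
#include <stdbool.h>
#include <stddef.h>

struct pkt {
	struct pkt *next;
	int queue_mapping;		/* target hardware tx queue */
};

struct model_qdisc {
	struct pkt *head;		/* simple FIFO of queued packets */
};

bool txq_stopped[4];			/* pretend per-tx-queue stopped flags */

/* Always pull from the head, even if its tx queue turns out to be stopped. */
struct pkt *model_dequeue(struct model_qdisc *q)
{
	struct pkt *p = q->head;

	if (p)
		q->head = p->next;
	return p;
}

/* dev_requeue_skb()-style: put the packet back so it is dequeued first next time. */
void model_requeue(struct model_qdisc *q, struct pkt *p)
{
	p->next = q->head;
	q->head = p;
}

/*
 * The expensive case: we only learn that the target tx queue is stopped
 * after the dequeue, so the packet bounces back in via a requeue.  With
 * several tx queues, this dequeue/requeue churn is the overhead in question.
 */
int model_restart(struct model_qdisc *q)
{
	struct pkt *p = model_dequeue(q);

	if (!p)
		return 0;
	if (txq_stopped[p->queue_mapping]) {
		model_requeue(q, p);
		return 0;		/* wait for the tx queue to wake up */
	}
	/* ... hand p to the driver here ... */
	return 1;
}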

I set up a simple test running a netperf UDP_STREAM test from my test system
to one of my clients, sending 1460-byte UDP messages at line rate on an 82575
with 4 tx queues enabled.  The current dequeue/requeue approach used 2.5% CPU
when the test was run through queue 0, but if packets ended up going out one of
the other queues the CPU utilization would jump to ~12.5%.  The same test done
using my patch showed ~2.5% for every queue I tested.
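
The direction my patch takes is roughly the opposite: let the dequeue itself
notice that a tx queue is stopped, so nothing ever has to be requeued.  Again
just an illustrative sketch reusing the toy types above, not the actual patch:

/*
 * Rough illustration of a multiqueue-aware dequeue: a stopped tx queue is
 * discovered before the packet ever leaves the qdisc, so no requeue is needed.
 */
struct pkt *model_dequeue_mq(struct model_qdisc *bands, int nbands)
{
	int i;

	for (i = 0; i < nbands; i++) {
		struct pkt *p = bands[i].head;

		/* Leave packets alone if their target tx queue is stopped. */
		if (!p || txq_stopped[p->queue_mapping])
			continue;

		bands[i].head = p->next;
		return p;
	}
	return NULL;	/* nothing eligible to send right now */
}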

I will hopefully have the patch ready to submit for comments tomorrow.  I just
need to run a few tests with and without the patch applied to verify that I
didn't break any of the qdiscs and that there isn't any negative performance
impact.

Thanks,

Alex