Date:	Fri, 22 Aug 2008 09:43:42 -0400
From:	jamal <hadi@...erus.ca>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	David Miller <davem@...emloft.net>, kaber@...sh.net,
	netdev@...r.kernel.org, johannes@...solutions.net,
	linux-wireless@...r.kernel.org
Subject: Re: [PATCH 20/31]: pkt_sched: Perform bulk of qdisc destruction in
	RCU.

On Fri, 2008-22-08 at 16:56 +1000, Herbert Xu wrote:
> On Tue, Jul 22, 2008 at 01:11:41AM +0800, Herbert Xu wrote:
> > On Mon, Jul 21, 2008 at 10:08:21AM -0700, David Miller wrote:

> I haven't had a chance to do the test yet but I've just had an
> idea of how we can get the best of both worlds.
> 
> The problem with always directing traffic based on the CPU alone
> is that processes move around and we don't want to introduce packet
> reordering because of that.

Assuming multi-rx queues with configurable MSI or otherwise mapped
to a receive processor, then in the case of routing/bridging or
your-other-favorite-form-of-forwarding:
if you tie static filters to a specific CPU, that will always work,
so no reordering there.
For local traffic, I can see migration/reordering happening.
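To make the static-filter point concrete, here is a minimal sketch of what such a filter amounts to: hash the flow 4-tuple and map it to a fixed RX queue. The hash function and names here are illustrative stand-ins, not a real NIC's filter logic or the kernel's hash; the point is only that a given flow deterministically lands on one queue (and hence on whichever CPU that queue's MSI vector is bound to), so no reordering, but nothing balances load across queues.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for a static RX flow filter. */
static uint32_t flow_hash(uint32_t saddr, uint32_t daddr,
                          uint16_t sport, uint16_t dport)
{
    uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);
    h ^= h >> 16;           /* mix the halves a little */
    h *= 0x45d9f3b;         /* arbitrary odd multiplier */
    h ^= h >> 16;
    return h;
}

/* Same 4-tuple in, same queue out -- every time. */
static unsigned int rx_queue_for_flow(uint32_t saddr, uint32_t daddr,
                                      uint16_t sport, uint16_t dport,
                                      unsigned int nqueues)
{
    return flow_hash(saddr, daddr, sport, dport) % nqueues;
}
```

Because the mapping depends only on the packet headers, repeated lookups for one flow can never disagree, which is why reordering cannot happen for forwarded traffic steered this way.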

> The problem with hashing based on packet headers alone is that
> it doesn't take CPU affinity into account at all so we may end
> up with a situation where one thread out of a thread pool (e.g.,
> a web server) has n sockets which are hashed to n different
> queues.

Indeed. In the forwarding case, the problem is not reordering but
rather that all flows may end up on the same CPU, so you could end up
overloading one CPU while the other 1023 stay idle.
My earlier statement was that you could cook traffic scenarios where
all 1024 are fully utilized (the operative term being "cook") ;->

> So here's the idea, we determine the tx queue for a flow based
> on the CPU on which we saw its first packet.  Once we have decided
> on a queue we store that in a dst object (see below).  This
> ensures that all subsequent packets of that flow end up in
> the same queue so there is no reordering.  It also avoids the
> problem where traffic generated by one CPU gets scattered across
> queues.

Won't work with a static multi-rx NIC; IIRC, changing those filters is
_expensive_, so you want them to stay static.
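For reference, the idea quoted above could be sketched roughly as below. The struct and function names are made up for illustration (this is not the kernel's dst API): the first packet of a flow picks a tx queue from the CPU it is seen on, the choice is cached in a per-flow dst-like object, and later packets reuse it regardless of where the process has migrated to.

```c
#include <assert.h>

#define NO_QUEUE (-1)

/* Hypothetical per-flow cache, standing in for a field on the dst. */
struct fake_dst {
    int cached_queue;   /* NO_QUEUE until the first packet decides */
};

static int pick_queue(struct fake_dst *dst, int cur_cpu, int nqueues)
{
    if (dst->cached_queue == NO_QUEUE)
        dst->cached_queue = cur_cpu % nqueues;  /* first packet decides */
    return dst->cached_queue;   /* later packets reuse it: no reordering */
}
```

The trade-off being debated is visible here: the decision is cheap and reordering-free, but it assumes the tx side is free to pick any queue per flow, which a NIC with static, expensive-to-change RX filters does not give you.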

cheers,
jamal

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
