Message-Id: <20101130.112106.183035811.davem@davemloft.net>
Date:	Tue, 30 Nov 2010 11:21:06 -0800 (PST)
From:	David Miller <davem@...emloft.net>
To:	eric.dumazet@...il.com
Cc:	therbert@...gle.com, netdev@...r.kernel.org,
	bhutchings@...arflare.com, jesse.brandeburg@...el.com
Subject: Re: [PATCH net-next-2.6] sched: use xps information for qdisc NUMA
 affinity

From: Eric Dumazet <eric.dumazet@...il.com>
Date: Tue, 30 Nov 2010 20:07:27 +0100

[ Jesse CC:'d ]

> netdev struct itself is shared by all cpus, so there is no real choice,
> unless you know one netdev will be used by a restricted set of
> cpus/nodes... Probably very unlikely in practice.

Unfortunately Jesse has found non-trivial gains from NUMA-localizing the
netdev struct during routing tests in some configurations.

> We could change (only on NUMA setups maybe)
> 
> struct netdev_queue *_tx;
> 
> to a
> 
> struct netdev_queue **_tx;
> 
> and allocate each "struct netdev_queue" on appropriate node, but adding
> one indirection level might be overkill...
> 
> For very hot small structures (one or two cache lines), I am not sure
> it's worth the pain.

Jesse, do you think this would help the case you were testing?
