Message-ID: <1287086378.2659.26.camel@edumazet-laptop>
Date:	Thu, 14 Oct 2010 21:59:38 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Tom Herbert <therbert@...gle.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	Michael Chan <mchan@...adcom.com>,
	Eilon Greenstein <eilong@...adcom.com>,
	Christoph Hellwig <hch@....de>,
	Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [PATCH net-next] net: allocate skbs on local node

On Thursday, 14 October 2010 at 12:27 -0700, Andrew Morton wrote:

> Can we think of any hardware configuration for which the change would
> be harmful?  Something with really expensive cross-node DMA maybe?
> 

If such hardware exists (yes it does, but none close at hand...), then
the NIC IRQs should probably be handled by CPUs close to the device, or
it might be very slow. That has nothing to do with the skb allocation
layer.
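
For reference, a minimal sketch (my illustration, not part of the patch
under discussion) of how a driver can keep IRQ handling close to the
device by hinting an affinity mask derived from the device's NUMA node;
the function name and variables below are placeholders:

#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/topology.h>

static void my_nic_hint_irq_locality(struct pci_dev *pdev, unsigned int irq)
{
	/* NUMA node the NIC is attached to (NUMA_NO_NODE if unknown) */
	int node = dev_to_node(&pdev->dev);

	if (node != NUMA_NO_NODE)
		/* suggest this IRQ be serviced by CPUs on that node */
		irq_set_affinity_hint(irq, cpumask_of_node(node));
}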

I believe we should not try to correct wrong IRQ affinities with NUMA
games. The network stack will wake up threads and the scheduler will
run them on the same CPU, if possible.

If an skb stays long enough on the socket receive queue, the
application will have to fetch from the remote NUMA node again when it
reads the socket a few milliseconds later, because the cache will be
cold by then. Total: two cross-node transfers per incoming frame.

The larger the node distances, the more speedup we can expect from
this patch.
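
To make that concrete, here is a rough sketch of the idea (illustrative
form only, not the actual diff; the exact argument order and padding
are assumptions): __netdev_alloc_skb() stops forcing the allocation
onto the device's node and lets the allocator use the local node of the
CPU handling RX:

	/* before: allocate on the NIC's node, possibly remote to the RX cpu */
	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0,
			  dev_to_node(dev->dev.parent));

	/* after: allocate on the local node of the cpu running this code */
	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, NUMA_NO_NODE);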

Thanks

