Message-ID: <1393032798.15717.85.camel@deadeye.wl.decadent.org.uk>
Date:	Sat, 22 Feb 2014 01:33:18 +0000
From:	Ben Hutchings <ben@...adent.org.uk>
To:	David Miller <davem@...emloft.net>
Cc:	amirv@...lanox.com, netdev@...r.kernel.org, yevgenyp@...lanox.com,
	ogerlitz@...lanox.com
Subject: Re: [PATCH net-next V1 0/3] net/mlx4: Mellanox driver update
 01-01-2014

On Wed, 2014-02-19 at 16:50 -0500, David Miller wrote:
> From: Amir Vadai <amirv@...lanox.com>
> Date: Wed, 19 Feb 2014 14:58:01 +0200
> 
> > V0 of this patch was sent before previous net-next got closed, and
> > now we would like to resume it.
> > 
> > Yuval has reworked the affinity hint patch following Ben's comments; the
> > patch was effectively rewritten.
> > After a discussion with Yuval Mintz, the use of netif_get_num_default_rss_queues()
> > is not reverted but is now done in the right place: instead of limiting the number
> > of IRQ vectors the driver requests, it limits the number of RSS queues.
> > 
> > Patchset was applied and tested against commit: cb6e926 "ipv6:fix checkpatch
> > errors with assignment in if condition"
> 
> Influencing IRQs to be allocated on the same NUMA node as the one where
> the card resides doesn't sound like an mlx4-specific desire to me.
> 
> Other devices, both networking and non-networking, would probably like
> that as well.
> 
> Therefore doing this by hand in a specific driver doesn't seem
> appropriate at all.

Handling network traffic only on the local node can be a real win on
recent Intel processors, where DMA writes will usually go straight into
cache on the local node.  But on other architectures, and on AMD or
older Intel processors, I don't think there is such a big difference.
Also, where the system and device implement PCIe Transaction Processing
Hints (TPH), DMA writes to cache should work on all nodes (following
interrupt affinity)... in theory.
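
For illustration, here is a minimal sketch of the driver-local approach
being discussed: publishing an affinity hint that points each completion
vector at the CPUs of the adapter's NUMA node.  The function name, the
irqs[] array and the surrounding driver are assumptions for the example,
not mlx4 code.

#include <linux/interrupt.h>
#include <linux/numa.h>
#include <linux/pci.h>
#include <linux/topology.h>

/* Illustrative only: hint each completion vector towards the CPUs of
 * the NUMA node the adapter is attached to. */
static void my_set_irq_hints(struct pci_dev *pdev, int *irqs, int nvec)
{
	int node = dev_to_node(&pdev->dev);
	const struct cpumask *mask;
	int i;

	/* Fall back to all online CPUs if the device has no node. */
	mask = (node == NUMA_NO_NODE) ? cpu_online_mask : cpumask_of_node(node);

	for (i = 0; i < nvec; i++)
		irq_set_affinity_hint(irqs[i], mask);
}

Note that irq_set_affinity_hint() only publishes a hint (visible in
/proc/irq/<n>/affinity_hint); irqbalance or the administrator still makes
the final placement decision, which is part of why hard-coding such a
policy per driver is questionable.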

So this sort of policy not only shouldn't be implemented in specific
drivers, but also ought to be configurable.
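
The queue-count change mentioned in the cover letter is the other half of
this: rather than shrinking the number of IRQ vectors the driver requests,
the default RSS spread is capped.  A rough sketch, with made-up function
and parameter names (not mlx4 code):

#include <linux/kernel.h>
#include <linux/netdevice.h>

/* Illustrative only: bound the RSS ring count by the kernel's default
 * RSS queue count, leaving the MSI-X vector allocation alone. */
static int my_num_rx_rings(int hw_max_rings, int completion_vectors)
{
	int rings = min(hw_max_rings, completion_vectors);

	return min_t(int, rings, netif_get_num_default_rss_queues());
}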

Ben.

-- 
Ben Hutchings
I haven't lost my mind; it's backed up on tape somewhere.

