Message-ID: <57C9024A16AD2D4C97DC78E552063EA3080BA318@orsmsx505.amr.corp.intel.com>
Date:	Tue, 29 Jul 2008 10:23:53 -0700
From:	"Luck, Tony" <tony.luck@...el.com>
To:	"bhutchings@...arflare.com" <bhutchings@...arflare.com>
CC:	"jgarzik@...hat.com" <jgarzik@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-ia64@...r.kernel.org" <linux-ia64@...r.kernel.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"matthew@....cx" <matthew@....cx>, Robin Holt <holt@....com>
Subject: RE: [Patch] fix ia64 build failure when CONFIG_SFC=m

> CONFIG_SFC=m uses topology_core_siblings() which, for ia64, expects
> cpu_core_map to be exported.  It is not.  This patch exports the needed
> symbol.

Ben,

Before I rush to apply this (or one of the other identical patches that
I've received) ... I'd like to ponder whether this is the right thing
to do.
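
If I'm reading it right, the fix itself is a one-liner, something like
this in arch/ia64/kernel/smpboot.c (my sketch of the idea, not a quote
of your patch):

    /* cpu_core_map[] is defined here; modular users such as
     * CONFIG_SFC=m need the symbol exported. */
    EXPORT_SYMBOL(cpu_core_map);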

Looking at the code in drivers/net/sfc/efx.c, I see that you are using
this to compute the number of RX queues based on the number of CPU
packages, with the git commit comment saying:

    Using multiple cores in the same package to handle received traffic
    does not appear to provide a performance benefit.  Therefore use CPU
    topology information to count CPU packages and use that as the default
    number of RX queues and interrupts.  We rely on interrupt balancing to
    spread the interrupts across packages.
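
For anyone else following the thread, the counting logic in efx.c is
roughly this (my paraphrase, not a verbatim copy):

    /* One RX queue per CPU package: walk the online CPUs, skipping any
     * CPU whose core siblings have already been counted. */
    static int efx_wanted_rx_queues(void)
    {
            cpumask_t core_mask;
            int count, cpu;

            cpus_clear(core_mask);
            count = 0;
            for_each_online_cpu(cpu) {
                    if (!cpu_isset(cpu, core_mask)) {
                            ++count;
                            cpus_or(core_mask, core_mask,
                                    topology_core_siblings(cpu));
                    }
            }
            return count;
    }

and it is that topology_core_siblings() use which drags in the
unexported ia64 cpu_core_map.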

I have some questions on this:

1) Did you measure on FSB-based systems? ... It's kind of sad that the extra
cores are not helping, and I wonder why. Do AMD and QPI-based systems have
this same limitation?

2) Should Linux have an API that gives you a useful number of threads, rather
than making you dig through the topology data structures? At some point the
cpumask_t data structure is going to be too big to put on the kernel stack
(some numbers on that below).

3) Does interrupt balancing always do the right thing and spread your
interrupts across packages?

4) In a hotplug system, would you want to adjust the number of threads if CPUs
were added or removed? (A rough sketch of what that might look like is below.)
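
On the cpumask_t point in (2): it is a fixed-size bitmap of NR_CPUS bits,
so an on-stack mask like the core_mask in the loop above costs NR_CPUS/8
bytes. The arithmetic (the NR_CPUS value is illustrative):

    cpumask_t core_mask;  /* BITS_TO_LONGS(NR_CPUS) * sizeof(long) bytes */
    /* With NR_CPUS=4096 -- where the big-iron configs are headed --
     * that's 4096/8 = 512 bytes of kernel stack for one local variable,
     * a sizable bite out of a 4K or 8K stack. */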
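
And for (4), I'm imagining something along the lines of the generic
hotcpu notifier. Purely a sketch -- efx_recount_rx_queues() is a made-up
name for whatever would re-derive the queue count:

    #include <linux/cpu.h>
    #include <linux/notifier.h>

    static int efx_cpu_callback(struct notifier_block *nb,
                                unsigned long action, void *hcpu)
    {
            switch (action) {
            case CPU_ONLINE:
            case CPU_DEAD:
                    /* Recount packages and grow/shrink the RX queue set. */
                    efx_recount_rx_queues();
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block efx_cpu_notifier = {
            .notifier_call = efx_cpu_callback,
    };

    /* in driver init */
    register_hotcpu_notifier(&efx_cpu_notifier);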

-Tony
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
