Message-ID: <20100927220113.GD30050@sgi.com>
Date:	Mon, 27 Sep 2010 15:01:13 -0700
From:	Arthur Kepner <akepner@....com>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	linux-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to
	cpus w/in node

On Mon, Sep 27, 2010 at 10:46:02PM +0200, Thomas Gleixner wrote:
> ...
> Sigh. Why is this a x86 specific problem ?
>

It's obviously not. But we're seeing it particularly on x86
systems, so an x86-specific fix would address our problem.
 
> If we setup an irq on a node then we should set the affinity to the
> target node in general. 

OK.

> .... The round robin inside the node is really not
> a problem unless you hit:
> 
>    nr_irqs_per_node * nr_cpus_per_node > max_vectors_per_cpu
> 

No, I don't think that's true. 

The problem we're seeing is that one driver requests a large
number of interrupts, with no particular CPU affinity. Because of
the way vectors are initially assigned to CPUs (in
__assign_irq_vector()), a single CPU can have all of its vectors
consumed.

Now a second driver comes along and requests an interrupt on a
specific CPU, N. But CPU N has no free vectors, so that request
fails.
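
To make that concrete, here's a toy user-space model of the
situation (not kernel code: the CPU count, vector count, and helper
names are made up, and the real __assign_irq_vector() is far more
involved). A first-fit search that always starts at CPU 0 stands in
for the unhinted assignment described above:

/* Toy model of per-CPU vector exhaustion. Numbers and names are
 * illustrative only; x86 has roughly 200 usable vectors per CPU. */
#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS		4
#define VECTORS_PER_CPU	8

static int vectors_used[NR_CPUS];

/* "No particular CPU": a first-fit search that starts at CPU 0
 * every time, so early requests all pile onto CPU 0. */
static int assign_vector_any(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (vectors_used[cpu] < VECTORS_PER_CPU) {
			vectors_used[cpu]++;
			return cpu;
		}
	}
	return -1;
}

/* A request pinned to one CPU, as the second driver makes. */
static bool assign_vector_on(int cpu)
{
	if (vectors_used[cpu] >= VECTORS_PER_CPU)
		return false;
	vectors_used[cpu]++;
	return true;
}

int main(void)
{
	int i;

	/* Driver A: many interrupts, no affinity hint. All of them
	 * land on CPU 0 until it is full. */
	for (i = 0; i < VECTORS_PER_CPU; i++)
		assign_vector_any();

	/* Driver B: one interrupt pinned to CPU 0. It fails even
	 * though CPUs 1..3 have every vector free. */
	if (!assign_vector_on(0))
		printf("CPU 0 exhausted (%d/%d vectors in use), request fails\n",
		       vectors_used[0], VECTORS_PER_CPU);
	return 0;
}

The point is that exhausting one CPU depends on how the unhinted
requests cluster, not on the node-wide total that your inequality
describes.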

This all happens before a user-space irq balancer is available.

-- 
Arthur
