Message-ID: <20170419223205.GA29640@outlook.office365.com>
Date:   Wed, 19 Apr 2017 15:32:06 -0700
From:   Andrei Vagin <avagin@...tuozzo.com>
To:     Keith Busch <keith.busch@...el.com>
CC:     <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Xiaolong Ye <xiaolong.ye@...el.com>
Subject: Re: irq/affinity: Fix extra vecs calculation

On Wed, Apr 19, 2017 at 05:53:09PM -0400, Keith Busch wrote:
> On Wed, Apr 19, 2017 at 12:53:44PM -0700, Andrei Vagin wrote:
> > On Wed, Apr 19, 2017 at 01:03:59PM -0400, Keith Busch wrote:
> > > If it's a divide by 0 as your last link indicates, that must mean there
> > > are possible nodes that have no CPUs, and those should be skipped. If
> > > that's the case, the following should fix it, but I'm going to do some
> > > more qemu testing with various CPU topologies to confirm.
> > 
> > I printed the variables from my test host; I think this can help to
> > investigate the issue:
> > 
> > irq_create_affinity_masks:116: vecs_to_assign 0 ncpus 2 extra_vecs 2 vecs_per_node 0 affv 2 curvec 2 nodes 1
> 
> That explains a lot. This setup wants 2 "pre_vectors", but I didn't
> know that was even a thing. This should fix it:

This patch works for me.
> 
> ---
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index d052947..eb8b689 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -98,13 +98,16 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>  		int ncpus, v, vecs_to_assign, vecs_per_node;
>  
>  		/* Spread the vectors per node */
> -		vecs_per_node = (affv - curvec) / nodes;
> +		vecs_per_node = (affv - (curvec - affd->pre_vectors)) / nodes;
>  
>  		/* Get the cpus on this node which are in the mask */
>  		cpumask_and(nmsk, cpu_online_mask, cpumask_of_node(n));
> --
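
For reference, a minimal standalone sketch (not the kernel code; the names
and the printf are illustrative) that redoes the vecs_per_node arithmetic
with the values from the debug print above. It assumes, as in
kernel/irq/affinity.c, that curvec starts at affd->pre_vectors and that
extra_vecs is later derived via a division by vecs_to_assign, which is what
the subject of this thread refers to:

#include <stdio.h>

int main(void)
{
        int pre_vectors = 2;      /* the two "pre_vectors" this setup wants */
        int affv = 2;             /* vectors available for spreading */
        int curvec = pre_vectors; /* curvec starts past the pre_vectors */
        int nodes = 1;
        int ncpus = 2;

        /* Old formula: (2 - 2) / 1 = 0. Then vecs_to_assign =
         * min(vecs_per_node, ncpus) = 0, and the later division of
         * ncpus by vecs_to_assign divides by zero. */
        int vecs_per_node_old = (affv - curvec) / nodes;

        /* Patched formula: (2 - (2 - 2)) / 1 = 2. Then
         * vecs_to_assign = min(2, 2) = 2 and the division is safe. */
        int vecs_per_node_new = (affv - (curvec - pre_vectors)) / nodes;

        printf("old=%d patched=%d\n", vecs_per_node_old, vecs_per_node_new);
        return 0;
}

This prints old=0 patched=2, matching the "vecs_per_node 0" in the debug
print and showing why subtracting affd->pre_vectors from curvec is needed.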
