Date:   Wed, 19 Apr 2017 12:53:44 -0700
From:   Andrei Vagin <avagin@...tuozzo.com>
To:     Keith Busch <keith.busch@...el.com>
CC:     <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Xiaolong Ye <xiaolong.ye@...el.com>
Subject: Re: irq/affinity: Fix extra vecs calculation

On Wed, Apr 19, 2017 at 01:03:59PM -0400, Keith Busch wrote:
> On Wed, Apr 19, 2017 at 09:20:27AM -0700, Andrei Vagin wrote:
> > Hi,
> > 
> > Something is wrong with this patch. We run CRIU tests against upstream
> > kernels, and we found that a kernel with this patch can't boot.
> > 
> > https://travis-ci.org/avagin/linux/builds/223557750
> > 
> > We don't have access to console logs and I can't reproduce this issue on
> > my nodes. I tried reverting this patch and everything works as expected.
> > 
> > https://travis-ci.org/avagin/linux/builds/223594172
> > 
> > Here is another report about this patch
> > https://lkml.org/lkml/2017/4/16/344
> 
> Yikes, okay, I've made a mistake somewhere. Sorry about that, I will
> look into this ASAP.
> 
> If it's a divide by 0 as your last link indicates, that must mean there
> are possible nodes that have no CPUs, and those should be skipped. If
> that's the case, the following should fix it, but I'm going to do some
> more qemu testing with various CPU topologies to confirm.

I printed the variables on my test host; I think this can help
investigate the issue:

irq_create_affinity_masks:116: vecs_to_assign 0 ncpus 2 extra_vecs 2 vecs_per_node 0 affv 2 curvec 2 nodes 1

and here is a patch which I use to print them:

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index a073a6e..c43c85d 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -110,7 +110,10 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
                vecs_to_assign = min(vecs_per_node, ncpus);
 
                /* Account for rounding errors */
-               extra_vecs = ncpus - vecs_to_assign * (ncpus / vecs_to_assign);
+//             extra_vecs = ncpus - vecs_to_assign * (ncpus / vecs_to_assign);
+               extra_vecs = ncpus - vecs_to_assign;
+               printk("%s:%d: vecs_to_assign %d ncpus %d extra_vecs %d vecs_per_node %d affv %d curvec %d nodes %d\n",
+                       __func__, __LINE__, vecs_to_assign, ncpus, extra_vecs, vecs_per_node, affv, curvec, nodes);
 
                for (v = 0; curvec < last_affv && v < vecs_to_assign;
                     curvec++, v++) {


> 
> ---
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index d052947..80c45d0 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -105,6 +105,9 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>  
>  		/* Calculate the number of cpus per vector */
>  		ncpus = cpumask_weight(nmsk);
> +		if (!ncpus)
> +			continue;
> +
>  		vecs_to_assign = min(vecs_per_node, ncpus);
>  
>  		/* Account for rounding errors */
> --
