Open Source and information security mailing list archives
 
Date:	Wed, 1 Jan 2014 21:41:38 -0500
From:	"Chen, Gong" <gong.chen@...ux.intel.com>
To:	Prarit Bhargava <prarit@...hat.com>
Cc:	rui wang <ruiv.wang@...il.com>, Tony Luck <tony.luck@...il.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, X86-ML <x86@...nel.org>,
	Michel Lespinasse <walken@...gle.com>,
	Andi Kleen <ak@...ux.intel.com>,
	Seiji Aguchi <seiji.aguchi@....com>,
	Yang Zhang <yang.z.zhang@...el.com>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	janet.morgan@...el.com, "Yu, Fenghua" <fenghua.yu@...el.com>
Subject: Re: [PATCH] x86: Add check for number of available vectors before
 CPU down [v2]

On Tue, Dec 31, 2013 at 04:22:09PM -0500, Prarit Bhargava wrote:
> Okay, how about,
>                         if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
>                             ((!cpumask_empty(&affinity_new)) &&
>                               !cpumask_subset(&affinity_new, &online_new)) ||
>                              cpumask_empty(&affinity_new))
>                                 this_count++;
> 
I think it's good, but a little complicated. How about this:

        if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
            /* add a comment to emphasize the importance of the check order */
            (cpumask_empty(&affinity_new) ||
            !cpumask_subset(&affinity_new, &online_new)))

> I tried this with the following examples and AFAICT I get the correct result:
> 
> 1) affinity mask = online mask = 0xf.  CPU 3 (1000b) is down'd.
> 
> this_count is not incremented.
> 
> 2) affinity mask is a non-zero subset of the online mask (which IMO is
> the "typical" case).  For example, affinity_mask = 0x9, online mask = 0xf.  CPU
> 3 is again down'd.
> 
> this_count is not incremented.
> 
> 3) affinity_mask = 0x1, online mask = 0x3. (this is your example).  CPU
> 1 is going down.
> 
> this_count is incremented, as the resulting affinity mask will be 0.
> 
> 4) affinity_mask = 0x0, online mask = 0x7.  CPU 1 is going down.
> 
> this_count is incremented, as the affinity mask is 0.
> 
The 4th scenario is tricky. If you try to set such an affinity from user space,
it will fail, because the kernel verifies the mask before changing the
affinity:
int __ioapic_set_affinity(...)
{
...
        if (!cpumask_intersects(mask, cpu_online_mask))
                return -EINVAL;
...
}

So from this point of view, the affinity can't be 0. But your patch is
special, because it changes the mask by hand:
        cpu_clear(smp_processor_id(), affinity_new);

so the check is reasonable. It also makes me think a little further: we have
similar logic in fixup_irqs, but we don't protect it there. Maybe that is
because scenario 4 can't currently happen, since we stop it in advance. But
who knows; if one day that logic is used in another situation, we will
probably hit this subtle issue.

So, Prarit, I suggest writing another patch to fix this potential issue in
fixup_irqs. What do you think?

