Message-ID: <alpine.DEB.2.21.1908281337280.1869@nanos.tec.linutronix.de>
Date:   Wed, 28 Aug 2019 14:46:50 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Neil Horman <nhorman@...driver.com>
cc:     linux-kernel@...r.kernel.org, x86@...nel.org, djuran@...hat.com,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH V2] x86: Add irq spillover warning

Neil,

On Thu, 22 Aug 2019, Neil Horman wrote:

Just a few nits.

> On Intel hardware, cpus are limited in the number of irqs they can
> have affined to them (currently 240), based on section 10.5.2 of:
> https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf
>

Please do not add links to URLs which are not reliable or to sections which
will have a different number in the next version of the document. Just cite
the 3 relevant lines or transcribe them.

Aside from that, the above is not really accurate:

1) This is not restricted to Intel. All x86 CPUs of all vendors behave
   that way.

2) Each CPU has a vector space of 256. The CPU reserves 32 vectors (0..31)
   for exceptions. That leaves 224. The kernel reserves another 22 vectors
   for various purposes.

   That leaves 202 vectors for assignment to devices, and that's what this
   is about.
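
   For illustration only, the budget looks roughly like this (the 22
   kernel-reserved vectors are the figure from above and depend on the
   kernel configuration; the macro names are made up, not the kernel's):

	#define VECTOR_SPACE		256	/* 8 bit vector number */
	#define CPU_RESERVED_VECTORS	 32	/* 0..31, exceptions */
	#define KERNEL_RESERVED_VECTORS	 22	/* IPIs, timer, spurious, ... */

	/* 256 - 32 - 22 = 202 vectors usable for device interrupts per CPU */
	#define DEVICE_VECTORS	(VECTOR_SPACE - CPU_RESERVED_VECTORS - \
				 KERNEL_RESERVED_VECTORS)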

> assign_irq_vector_any_locked() will attempt to honor the affining
> request, but if the 240 vector limitation documented above is crossed, a

that means the vector space is exhausted.

> new mask will be selected that is potentially outside the requested cpu

It's not potentially outside. The point is that the requested affinity mask
has no vectors left, so it falls back to a wider cpumask and is guaranteed
to select a CPU which is not in the requested affinity mask.
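
A hypothetical sketch of that fallback (not the actual
assign_irq_vector_any_locked() code; find_free_vector() and the variable
names are made up):

	/* Try the requested affinity mask first */
	vector = find_free_vector(requested_affinity_mask, &cpu);
	if (vector < 0) {
		/*
		 * All CPUs in the requested mask have their vector space
		 * exhausted. Fall back to the wider online mask, which
		 * necessarily picks a CPU outside the requested mask.
		 */
		vector = find_free_vector(cpu_online_mask, &cpu);
	}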

> set silently.  This can lead to unexpected behavior for administrators.
> 
> Mitigate this problem by checking the affinity mask after it's been
> assigned in activate_reserved so that administrators get a logged warning
> about the change.

Please do not describe implementation details which can be seen from the
patch itself.
 
> Tested successfully by the reporter

We have a 'Tested-by:' tag for this.
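
I.e. a line in the changelog like (name and address are placeholders):

	Tested-by: Reporter Name <reporter@example.org>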
 
> Change Notes:
> V1->V2)
> 	* Moved the check for this condition to activate_reserved from
> do_IRQ, taking it out of the hot path (request by tglx@...tronix.de)

Please put change notes (and thanks for providing them) below the '---'
discard line. They are not part of the final changelog in git. They are
informative for the reviewers, but if they are in the changelog it's manual
work to remove them, while the discard section goes away automatically.
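
For illustration, the layout looks like this (content abbreviated):

	x86: Add irq spillover warning

	<final changelog text>

	Signed-off-by: ...
	---
	V1->V2: Moved the check from do_IRQ to activate_reserved

	<diffstat and diff>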

> +
> +	/*
> +	 * Check to ensure that the effective affinity mask is a subset of
> +	 * the user supplied affinity mask, and warn the user if it is not
> +	 */
> +	if (!cpumask_subset(irq_data_get_effective_affinity_mask(irqd),
> +	     irq_data_get_affinity_mask(irqd)))
> +		pr_warn("irq %d has been assigned to a cpu outside of its user affinity mask\n",

s/%d/%u/  irqd->irq is unsigned int.

So this tells what happened, but gives no hint about why. That should be:

   "irq %u: Affinity broken due to vector space exhaustion.\n"

That actually tells what happened and gives the administrator information
about why. So he knows that he tried to assign too many interrupts to a set
of CPUs.

> +			irqd->irq);

This is a multiline statement and wants curly braces around it.

> +
>  	return ret;

I fixed that all up while applying, so no need to resend.
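
FWIW, with those fixes folded in, the hunk ends up looking roughly like
this (modulo whitespace):

	/*
	 * Check whether the effective affinity mask is a subset of the
	 * user supplied affinity mask, and warn the user if it is not.
	 */
	if (!cpumask_subset(irq_data_get_effective_affinity_mask(irqd),
			    irq_data_get_affinity_mask(irqd))) {
		pr_warn("irq %u: Affinity broken due to vector space exhaustion.\n",
			irqd->irq);
	}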

Thanks,

	tglx
