Message-ID: <87ftgb4chi.fsf@nanos.tec.linutronix.de>
Date:   Sun, 19 Jan 2020 17:50:17 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Ming Lei <ming.lei@...hat.com>, linux-kernel@...r.kernel.org
Cc:     Ming Lei <ming.lei@...hat.com>, Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Peter Xu <peterx@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>
Subject: Re: [PATCH V3] sched/isolation: isolate from handling managed interrupt

Ming,

Ming Lei <ming.lei@...hat.com> writes:
>  
> +static bool hk_should_isolate(struct irq_data *data,
> +		const struct cpumask *affinity, unsigned int cpu)

Please align the first argument on the second line with the first
argument on the first line.
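
I.e.:

	static bool hk_should_isolate(struct irq_data *data,
				      const struct cpumask *affinity,
				      unsigned int cpu)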

> +{
> +	const struct cpumask *hk_mask;
> +
> +	if (!housekeeping_enabled(HK_FLAG_MANAGED_IRQ))
> +		return false;
> +
> +	if (!irqd_affinity_is_managed(data))
> +		return false;

Pointless. That's already checked at the beginning of the calling function.

> +
> +	if (!cpumask_test_cpu(cpu, affinity))
> +		return false;

Ditto.
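
Both of these are already established at the entry of
irq_restore_affinity_of_irq() before this helper is reached, roughly
(with the unrelated conditions elided):

	if (!irqd_affinity_is_managed(data) || ... ||
	    !cpumask_test_cpu(cpu, affinity))
		return;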

> +	hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
> +	if (cpumask_subset(affinity, hk_mask))
> +		return false;
> +
> +	if (cpumask_intersects(irq_data_get_effective_affinity_mask(data),
> +				hk_mask))

I really had to think twice about why this is correct. The example I
gave you is far more intuitive; it's just missing the check below.

> +		return false;
> +
> +	return cpumask_test_cpu(cpu, hk_mask);
> +}
> +
>  static void irq_restore_affinity_of_irq(struct irq_desc *desc, unsigned int cpu)
>  {
>  	struct irq_data *data = irq_desc_get_irq_data(desc);
> @@ -190,7 +216,8 @@ static void irq_restore_affinity_of_irq(struct irq_desc *desc, unsigned int cpu)
>  	 * CPU then it is already assigned to a CPU in the affinity
>  	 * mask. No point in trying to move it around.
>  	 */
> -	if (!irqd_is_single_target(data))
> +	if (!irqd_is_single_target(data) ||
> +			hk_should_isolate(data, affinity, cpu))

	if (!irqd_is_single_target(data) ||
	    hk_should_isolate(data, affinity, cpu))

Please.

>  		irq_set_affinity_locked(data, affinity, false);
>  }
>  
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 1753486b440c..a8af2ca806e2 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -18,6 +18,7 @@
>  #include <linux/sched.h>
>  #include <linux/sched/rt.h>
>  #include <linux/sched/task.h>
> +#include <linux/sched/isolation.h>
>  #include <uapi/linux/sched/types.h>
>  #include <linux/task_work.h>
>  
> @@ -217,7 +218,40 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
>  	if (!chip || !chip->irq_set_affinity)
>  		return -EINVAL;
>  
> -	ret = chip->irq_set_affinity(data, mask, force);
> +	/*
> +	 * If this is a managed interrupt and housekeeping is enabled on
> +	 * it, check whether the requested affinity mask intersects with
> +	 * a housekeeping CPU. If so, then remove the isolated CPUs from
> +	 * the mask and just keep the housekeeping CPU(s). This prevents
> +	 * the affinity setter from routing the interrupt to an isolated
> +	 * CPU, so that I/O submitted from a housekeeping CPU does not
> +	 * cause interrupts on an isolated one.
> +	 *
> +	 * If the masks do not intersect, or the intersection contains
> +	 * no online CPU(s), then keep the requested mask. The isolated
> +	 * target CPUs only receive interrupts when the I/O operation
> +	 * was submitted directly from them.
> +	 *
> +	 * If all housekeeping CPUs in the affinity mask are offline,
> +	 * the irq will be migrated off the isolated CPU when any
> +	 * housekeeping CPU in the mask comes online.
> +	 */
> +	if (irqd_affinity_is_managed(data) &&
> +			housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {

Same here.
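
I.e.:

	if (irqd_affinity_is_managed(data) &&
	    housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {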

> +		static DEFINE_RAW_SPINLOCK(prog_mask_lock);
> +		static struct cpumask prog_mask;
> +		const struct cpumask *hk_mask =
> +			housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
> +

What's wrong with writing:

		const struct cpumask *hk_mask;

		hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);

Hmm?

> +		raw_spin_lock(&prog_mask_lock);
> +		cpumask_and(&prog_mask, mask, hk_mask);
> +		if (!cpumask_intersects(&prog_mask, cpu_online_mask))
> +			cpumask_copy(&prog_mask, mask);

Why copy?

		static DEFINE_RAW_SPINLOCK(mask_lock);
		static struct cpumask tmp_mask;
		const struct cpumask *hk_mask, *mp;

		hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);

		raw_spin_lock(&mask_lock);
		cpumask_and(&tmp_mask, mask, hk_mask);
		if (cpumask_intersects(&tmp_mask, cpu_online_mask))
			mp = &tmp_mask;
		else
			mp = mask;
		....
Hmm?
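
The '....' would then just be the existing chip callback invocation
with mp instead of mask, kept inside the locked region since mp may
point at the static tmp_mask:

		/* tmp_mask is shared, so the call must happen under the lock */
		ret = chip->irq_set_affinity(data, mp, force);
		raw_spin_unlock(&mask_lock);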

Thanks,

        tglx
