Message-ID: <a782d2f3-d2f6-795f-f4b1-9462205fd581@arm.com>
Date:   Wed, 9 Dec 2020 16:47:43 +0000
From:   James Morse <james.morse@....com>
To:     Reinette Chatre <reinette.chatre@...el.com>, fenghua.yu@...el.com
Cc:     tglx@...utronix.de, bp@...en8.de, tony.luck@...el.com,
        kuo-lang.tseng@...el.com, shakeelb@...gle.com,
        valentin.schneider@....com, mingo@...hat.com, babu.moger@....com,
        hpa@...or.com, x86@...nel.org, linux-kernel@...r.kernel.org,
        stable@...r.kernel.org
Subject: Re: [PATCH 1/3] x86/resctrl: Move setting task's active CPU in a mask
 into helpers

Hi Reinette, Fenghua,

On 03/12/2020 23:25, Reinette Chatre wrote:
> From: Fenghua Yu <fenghua.yu@...el.com>
> 
> The code that sets, in a CPU mask, the CPU on which a task is running
> is moved into a couple of helpers. The new helper task_on_cpu() will
> be reused shortly.

> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 6f4ca4bea625..68db7d2dec8f 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -525,6 +525,38 @@ static void rdtgroup_remove(struct rdtgroup *rdtgrp)

> +#ifdef CONFIG_SMP

(using IS_ENABLED(CONFIG_SMP) lets the compiler check all the code in one go, then
dead-code-remove the stuff that will never happen... it's also easier on the eye!)

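e.g. the usual shape - made-up names, just to illustrate the pattern, not
code from this patch:

/* IS_ENABLED() lives in <linux/kconfig.h> (kbuild includes it everywhere) */
static void update_thing(struct thing *t)	/* hypothetical */
{
	update_common(t);			/* hypothetical */

	/*
	 * The !SMP compiler still parses and type-checks this call, then
	 * removes the branch as dead code because IS_ENABLED(CONFIG_SMP)
	 * is a compile-time constant 0 there.
	 */
	if (IS_ENABLED(CONFIG_SMP))
		update_smp_state(t);		/* hypothetical */
}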

> +/* Get the CPU if the task is on it. */
> +static bool task_on_cpu(struct task_struct *t, int *cpu)
> +{
> +	/*
> +	 * This is safe on x86 w/o barriers as the ordering of writing to
> +	 * task_cpu() and t->on_cpu is reverse to the reading here. The
> +	 * detection is inaccurate as tasks might move or schedule before
> +	 * the smp function call takes place. In such a case the function
> +	 * call is pointless, but there is no other side effect.
> +	 */

> +	if (t->on_cpu) {

kernel/sched/core.c calls out that there can be two tasks on one CPU with on_cpu set.
(grep astute)
I think that means this series will falsely match the old task for a CPU while the
scheduler is mid context-switch, and IPI it unnecessarily.

task_curr() is the helper that knows not to do this. (rough sketch of what I mean
below the quoted hunk)


> +		*cpu = task_cpu(t);
> +
> +		return true;
> +	}
> +
> +	return false;
> +}

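i.e. something like this - completely untested, just the shape I mean;
task_curr() and task_cpu() are the existing scheduler helpers:

static bool task_on_cpu(struct task_struct *t, int *cpu)
{
	/*
	 * task_curr() only matches the task the scheduler considers
	 * current on its CPU, so we never pick up the outgoing task of
	 * a context switch that is still in progress.
	 */
	if (task_curr(t)) {
		*cpu = task_cpu(t);
		return true;
	}

	return false;
}

Still racy against migration, of course, but then the worst case stays a
pointless (harmless) IPI, as the comment above says.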

Thanks,

James
