Message-ID: <a713ba13-634d-3061-933c-25a8d62eb0be@intel.com>
Date:   Tue, 6 Dec 2022 10:56:51 -0800
From:   Reinette Chatre <reinette.chatre@...el.com>
To:     Peter Newman <peternewman@...gle.com>, <fenghua.yu@...el.com>
CC:     <bp@...en8.de>, <derkling@...gle.com>, <eranian@...gle.com>,
        <hpa@...or.com>, <james.morse@....com>, <jannh@...gle.com>,
        <kpsingh@...gle.com>, <linux-kernel@...r.kernel.org>,
        <mingo@...hat.com>, <tglx@...utronix.de>, <x86@...nel.org>
Subject: Re: [PATCH v4 1/2] x86/resctrl: Update task closid/rmid with
 task_call_func()

Hi Peter,

On 11/29/2022 3:10 AM, Peter Newman wrote:
> When the user moves a running task to a new rdtgroup using the tasks
> file interface, the resulting change in CLOSID/RMID must be immediately
> propagated to the PQR_ASSOC MSR on the task's CPU.
> 
> It is possible for a task to wake up or migrate while it is being moved
> to a new group. If __rdtgroup_move_task() fails to observe that a task
> has begun running or misses that it migrated to a new CPU, the task will
> continue to use the old CLOSID or RMID until it switches in again.
> 
> __rdtgroup_move_task() assumes that if the task migrates off of its CPU
> before it can IPI the task, then the task has already observed the
> updated CLOSID/RMID. Because this is done locklessly and an x86 CPU can
> delay stores until after loads, the following incorrect scenarios are
> possible:
> 
>  1. __rdtgroup_move_task() stores the new closid and rmid in
>     the task structure after it loads task_curr() and task_cpu().

Stating how this scenario encounters the problem would help, so perhaps
something like (please feel free to change):
"If the task starts running between a reordered task_curr() check and
the CLOSID/RMID update, then it will start running with the old CLOSID/RMID
until it is switched in again, because __rdtgroup_move_task() failed to
determine that it needs to be interrupted to obtain the new CLOSID/RMID."
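
(To make sure I understand scenario 1 myself, the reordered case would look
roughly like the timeline below; this is only my illustration, not text from
the patch:

   CPU 0 (__rdtgroup_move_task())        CPU 1
   ------------------------------        ------------------------------------
   load for task_curr(t) returns false
                                         t is scheduled in,
                                         resctrl_sched_in() loads the old
                                         t->closid/t->rmid into PQR_ASSOC
   stores of the new t->closid/t->rmid
   become visible only now (reordered
   after the task_curr() load)

so no IPI is sent and t keeps running with the old CLOSID/RMID.)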

>  2. resctrl_sched_in() loads t->{closid,rmid} before the calling context
>     switch stores new task_curr() and task_cpu() values.

This scenario is not clear to me. Could you please provide more detail about it?
I was trying to follow the context_switch() flow and resctrl_sched_in() is
one of the last things done (context_switch()->switch_to()->resctrl_sched_in()).
From what I can tell rq->curr, as used by task_curr(), is set before
context_switch() is even called, and since the next task is picked from
the CPU's runqueue (and set_task_cpu() sets the task's CPU when it is moved to
a runqueue) it seems to me that the value used by task_cpu() would also
be set early (before context_switch() is called). It is thus not clear to
me how the above reordering could occur, so an example would help a lot.
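
For reference, the flow I am looking at is roughly the below (my reading of
the x86 path, so please correct me if I am missing a step):

   __schedule()
     rq->curr = next;                  /* task_curr(next) is true from here */
     context_switch(rq, prev, next, &rf)
       switch_to(prev, next, prev)
         __switch_to()
           resctrl_sched_in()          /* loads next->{closid,rmid} */

with set_task_cpu() having already run when the task was placed on the CPU's
runqueue, before any of the above.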

> 
> Use task_call_func() in __rdtgroup_move_task() to serialize updates to
> the closid and rmid fields in the task_struct with context switch.

Is there a reason for the switch from the all-caps CLOSID/RMID at the
beginning to the lowercase closid/rmid here?
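
Regarding the serialization claim itself: my understanding is that
task_call_func() pins the task around the callback, roughly like the
simplified sketch below (trimmed from my reading of kernel/sched/core.c,
so please double-check the details before relying on it):

	int task_call_func(struct task_struct *p, task_call_f func, void *arg)
	{
		struct rq_flags rf;
		struct rq *rq = NULL;
		int ret;

		raw_spin_lock_irqsave(&p->pi_lock, rf.flags);

		/* If p is runnable or running, also pin its runqueue. */
		if (READ_ONCE(p->__state) == TASK_RUNNING || p->on_rq)
			rq = __task_rq_lock(p, &rf);

		/*
		 * While these locks are held, p can neither be woken nor
		 * context switched, so the callback runs serialized against
		 * the task's scheduling.
		 */
		ret = func(p, arg);

		if (rq)
			rq_unlock(rq, &rf);
		raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);

		return ret;
	}

If that reading is right, the WRITE_ONCE()s in
update_locked_task_closid_rmid() and the task_curr() check indeed cannot
interleave with a context switch of the task.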

> Signed-off-by: Peter Newman <peternewman@...gle.com>
> Reviewed-by: James Morse <james.morse@....com>
> ---
>  arch/x86/kernel/cpu/resctrl/rdtgroup.c | 78 ++++++++++++++++----------
>  1 file changed, 47 insertions(+), 31 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index e5a48f05e787..59b7ffcd53bb 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -528,6 +528,31 @@ static void rdtgroup_remove(struct rdtgroup *rdtgrp)
>  	kfree(rdtgrp);
>  }
>  
> +static int update_locked_task_closid_rmid(struct task_struct *t, void *arg)
> +{
> +	struct rdtgroup *rdtgrp = arg;
> +
> +	/*
> +	 * Although task_call_func() serializes the writes below with the paired
> +	 * reads in resctrl_sched_in(), resctrl_sched_in() still needs
> +	 * READ_ONCE() due to rdt_move_group_tasks(), so use WRITE_ONCE() here
> +	 * to conform.
> +	 */
> +	if (rdtgrp->type == RDTCTRL_GROUP) {
> +		WRITE_ONCE(t->closid, rdtgrp->closid);
> +		WRITE_ONCE(t->rmid, rdtgrp->mon.rmid);
> +	} else if (rdtgrp->type == RDTMON_GROUP) {
> +		WRITE_ONCE(t->rmid, rdtgrp->mon.rmid);
> +	}
> +
> +	/*
> +	 * If the task is current on a CPU, the PQR_ASSOC MSR needs to be
> +	 * updated to make the resource group go into effect. If the task is not
> +	 * current, the MSR will be updated when the task is scheduled in.
> +	 */
> +	return task_curr(t);
> +}
> +
>  static void _update_task_closid_rmid(void *task)
>  {
>  	/*
> @@ -538,10 +563,24 @@ static void _update_task_closid_rmid(void *task)
>  		resctrl_sched_in();
>  }
>  
> -static void update_task_closid_rmid(struct task_struct *t)
> +static void update_task_closid_rmid(struct task_struct *t,
> +				    struct rdtgroup *rdtgrp)
>  {
> -	if (IS_ENABLED(CONFIG_SMP) && task_curr(t))
> -		smp_call_function_single(task_cpu(t), _update_task_closid_rmid, t, 1);
> +	/*
> +	 * Serialize the closid and rmid update with context switch. If
> +	 * task_call_func() indicates that the task was running during
> +	 * update_locked_task_closid_rmid(), then interrupt it.
> +	 */
> +	if (task_call_func(t, update_locked_task_closid_rmid, rdtgrp) &&
> +	    IS_ENABLED(CONFIG_SMP))
> +		/*
> +		 * If the task has migrated away from the CPU indicated by
> +		 * task_cpu() below, then it has already switched in on the
> +		 * new CPU using the updated closid and rmid and the call below
> +		 * is unnecessary, but harmless.
> +		 */
> +		smp_call_function_single(task_cpu(t),
> +					 _update_task_closid_rmid, t, 1);
>  	else
>  		_update_task_closid_rmid(t);
>  }
> @@ -557,39 +596,16 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
>  		return 0;
>  
>  	/*
> -	 * Set the task's closid/rmid before the PQR_ASSOC MSR can be
> -	 * updated by them.
> -	 *
> -	 * For ctrl_mon groups, move both closid and rmid.
>  	 * For monitor groups, can move the tasks only from
>  	 * their parent CTRL group.
>  	 */
> -
> -	if (rdtgrp->type == RDTCTRL_GROUP) {
> -		WRITE_ONCE(tsk->closid, rdtgrp->closid);
> -		WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);
> -	} else if (rdtgrp->type == RDTMON_GROUP) {
> -		if (rdtgrp->mon.parent->closid == tsk->closid) {
> -			WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);
> -		} else {
> -			rdt_last_cmd_puts("Can't move task to different control group\n");
> -			return -EINVAL;
> -		}
> +	if (rdtgrp->type == RDTMON_GROUP &&
> +	    rdtgrp->mon.parent->closid != tsk->closid) {
> +		rdt_last_cmd_puts("Can't move task to different control group\n");
> +		return -EINVAL;
>  	}
>  
> -	/*
> -	 * Ensure the task's closid and rmid are written before determining if
> -	 * the task is current that will decide if it will be interrupted.
> -	 */
> -	barrier();
> -
> -	/*
> -	 * By now, the task's closid and rmid are set. If the task is current
> -	 * on a CPU, the PQR_ASSOC MSR needs to be updated to make the resource
> -	 * group go into effect. If the task is not current, the MSR will be
> -	 * updated when the task is scheduled in.
> -	 */
> -	update_task_closid_rmid(tsk);
> +	update_task_closid_rmid(tsk, rdtgrp);
>  
>  	return 0;
>  }

The change looks good to me.

Reinette
