Message-ID: <CALPaoCj8bVo3Z1r9_Ag=6KvGuR2wzQesArwZKEDvudGPYbbwaA@mail.gmail.com>
Date: Tue, 1 Nov 2022 16:23:13 +0100
From: Peter Newman <peternewman@...gle.com>
To: Reinette Chatre <reinette.chatre@...el.com>
Cc: James Morse <james.morse@....com>, Tony Luck <tony.luck@...el.com>,
"Yu, Fenghua" <fenghua.yu@...el.com>,
"Eranian, Stephane" <eranian@...gle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Babu Moger <Babu.Moger@....com>,
Gaurang Upasani <gupasani@...gle.com>
Subject: Re: [RFD] resctrl: reassigning a running container's CTRL_MON group

Hi Reinette,

On Thu, Oct 27, 2022 at 7:36 PM Reinette Chatre
<reinette.chatre@...el.com> wrote:
> On 10/27/2022 12:56 AM, Peter Newman wrote:
> > On Wed, Oct 26, 2022 at 11:12 PM Reinette Chatre
> > <reinette.chatre@...el.com> wrote:
> >> The original concern is "the stores to t->closid and t->rmid could be
> >> reordered with the task_curr(t) and task_cpu(t) reads which follow". I can see
> >> that issue. Have you considered using the compiler barrier, barrier(), instead?
> >> From what I understand it will prevent the compiler from moving the memory accesses.
> >> This is what is currently done in __rdtgroup_move_task() and could be done here also?
> >
> > The memory system (including on x86) is allowed to reorder a store with a
> > later load, in addition to any reordering the compiler may do.
> >
> > Also, because the locations in question can be concurrently accessed by
> > another CPU, a compiler barrier would not be sufficient.
>
> This is hard. Regarding the concurrent access from another CPU, it seems
> that task_rq_lock() is available to prevent races with schedule(). Using it
> may prevent task_curr(t) from changing during this time, and thus the local
> reordering may not be a problem. I am not familiar with task_rq_lock(),
> though; surely there are many details to consider in this area.
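
To make the reordering concern above concrete, the pattern that needs to
be made safe is roughly the following (the IPI handler name is only
illustrative):

        WRITE_ONCE(t->closid, new_closid);
        WRITE_ONCE(t->rmid, new_rmid);

        /*
         * Both the compiler and the CPU (even on x86) may satisfy the
         * task_curr()/task_cpu() loads below before the stores above
         * are visible to other CPUs, so the IPI can be skipped or
         * misdirected while the old closid/rmid is still loaded.
         */
        if (task_curr(t))
                smp_call_function_single(task_cpu(t),
                                         update_closid_rmid_ipi, t, 1);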

Yes, it looks like the task's rq lock would provide the necessary
ordering. It's not feasible to ensure that the IPI arrives before the
target task migrates away, but the task would have to take the same lock
in order to migrate off of its current CPU, so that alone would ensure
that the next migration observes the updates.

The difficulty is that this lock is private to sched/, so I'd have to
propose some API.
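
For illustration, if the lock could be taken directly, the critical
section would look something like this (sketch only; the handler name is
again illustrative):

        struct rq_flags rf;
        struct rq *rq;
        bool running;
        int cpu;

        rq = task_rq_lock(t, &rf);
        WRITE_ONCE(t->closid, new_closid);
        WRITE_ONCE(t->rmid, new_rmid);
        /*
         * Safe to sample under the lock: t cannot migrate or be
         * switched in/out until we drop it, and a later migration must
         * retake the lock and will therefore observe the stores above.
         */
        running = task_curr(t);
        cpu = task_cpu(t);
        task_rq_unlock(rq, t, &rf);

        if (running)
                smp_call_function_single(cpu, update_closid_rmid_ipi,
                                         t, 1);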

It would make sense for the API to return the results of task_curr(t)
and task_cpu(t) to the caller, to avoid giving the impression that the
function would be useful for anything other than helping someone do an
smp_call_function targeting a task's CPU.
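
Something along these lines, maybe (all names hypothetical):

/*
 * Run @func on @t with its rq lock held.  Returns true and sets *@cpu
 * to the task's CPU if @t was running, so the caller can follow up
 * with an IPI; the lock guarantees that a task which wasn't running
 * picks up @func's updates on its next context switch.
 */
bool task_call_locked(struct task_struct *t,
                      void (*func)(struct task_struct *t, void *arg),
                      void *arg, int *cpu);

The resctrl side would then reduce to:

        if (task_call_locked(t, set_task_closid_rmid, &args, &cpu))
                smp_call_function_single(cpu, update_closid_rmid_ipi,
                                         t, 1);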

I'll just have to push a patch and see what people say.

-Peter