Message-ID: <70705fddfd2c7ecf4b2b06649e6a3379bf9c6916.camel@marvell.com>
Date: Sun, 4 Oct 2020 15:01:46 +0000
From: Alex Belits <abelits@...vell.com>
To: "frederic@...nel.org" <frederic@...nel.org>
CC: "mingo@...nel.org" <mingo@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"will@...nel.org" <will@...nel.org>,
Prasun Kapoor <pkapoor@...vell.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [EXT] Re: [PATCH v4 03/13] task_isolation: userspace hard isolation from kernel
On Thu, 2020-10-01 at 16:40 +0200, Frederic Weisbecker wrote:
> On Wed, Jul 22, 2020 at 02:49:49PM +0000, Alex Belits wrote:
> > +/**
> > + * task_isolation_kernel_enter() - clear low-level task isolation flag
> > + *
> > + * This should be called immediately after entering kernel.
> > + */
> > +static inline void task_isolation_kernel_enter(void)
> > +{
> > +        unsigned long flags;
> > +
> > +        /*
> > +         * This function runs on a CPU that ran isolated task.
> > +         *
> > +         * We don't want this CPU running code from the rest of kernel
> > +         * until other CPUs know that it is no longer isolated.
> > +         * When CPU is running isolated task until this point anything
> > +         * that causes an interrupt on this CPU must end up calling this
> > +         * before touching the rest of kernel. That is, this function or
> > +         * fast_task_isolation_cpu_cleanup() or stop_isolation() calling
> > +         * it. If any interrupt, including scheduling timer, arrives, it
> > +         * will still end up here early after entering kernel.
> > +         * From this point interrupts are disabled until all CPUs will see
> > +         * that this CPU is no longer running isolated task.
> > +         *
> > +         * See also fast_task_isolation_cpu_cleanup().
> > +         */
> > +        smp_rmb();
>
> I'm a bit confused about what this read memory barrier is ordering, and
> against what it pairs.
My bad -- I kept it even after the write accesses from other CPUs that it
was meant to pair with were removed, so it no longer orders anything.
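
For illustration only (the names below are made up, this is not code from
the patch): the kind of pairing the smp_rmb() was originally meant for is
the usual publish/consume pattern, where a remote CPU writes some state
and then sets a flag:

static unsigned long isol_exit_data;    /* hypothetical */
static unsigned long isol_exit_flag;    /* hypothetical */

/* Remote CPU: publish the data, then set the flag */
static void remote_publish(unsigned long value)
{
        WRITE_ONCE(isol_exit_data, value);
        smp_wmb();      /* order the data write before the flag write */
        WRITE_ONCE(isol_exit_flag, 1);
}

/* Isolated CPU, early on kernel entry */
static void local_consume(void)
{
        if (READ_ONCE(isol_exit_flag)) {
                smp_rmb();      /* pairs with the smp_wmb() above */
                pr_info("exit data %lu\n", READ_ONCE(isol_exit_data));
        }
}

Since the current version has no such remote writes left, the smp_rmb() in
task_isolation_kernel_enter() has no counterpart and can simply be dropped.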
>
> > +        if ((this_cpu_read(ll_isol_flags) & FLAG_LL_TASK_ISOLATION) == 0)
> > +                return;
> > +
> > +        local_irq_save(flags);
> > +
> > +        /* Clear low-level flags */
> > +        this_cpu_write(ll_isol_flags, 0);
> > +
> > +        /*
> > +         * If something happened that requires a barrier that would
> > +         * otherwise be called from remote CPUs by CPU kick procedure,
> > +         * this barrier runs instead of it. After this barrier, CPU
> > +         * kick procedure would see the updated ll_isol_flags, so it
> > +         * will run its own IPI to trigger a barrier.
> > +         */
> > +        smp_mb();
> > +        /*
> > +         * Synchronize instructions -- this CPU was not kicked while
> > +         * in isolated mode, so it might require synchronization.
> > +         * There might be an IPI if kick procedure happened and
> > +         * ll_isol_flags was already updated while it assembled a CPU
> > +         * mask. However if this did not happen, synchronize everything
> > +         * here.
> > +         */
> > +        instr_sync();
>
> It's the first time I meet an instruction barrier. I should get
> information
> about that but what is it ordering here?
It pairs against the barriers used in instruction cache flushing
(flush_icache_range() and such).
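
To make that concrete (the helper below is illustrative, not from the patch
or from any particular architecture): a CPU that modifies kernel text, e.g.
for static keys or ftrace, flushes the instruction cache and would normally
also kick the other CPUs so they discard instructions they may have already
fetched:

/* Sketch of the modifying side, with a hypothetical helper name */
static void publish_patched_text(void *addr, size_t len)
{
        /* make the new instructions visible to instruction fetch */
        flush_icache_range((unsigned long)addr, (unsigned long)addr + len);
        /* normally followed by an IPI to the other CPUs */
}

An isolated CPU is deliberately not kicked while it is isolated, so on
kernel entry it performs the context-synchronizing step itself -- that is
what the instr_sync() above stands for (an architecture-specific
instruction barrier, e.g. isb on arm64) -- before running any kernel text
that may have been patched in the meantime.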
> > +        local_irq_restore(flags);
> > +}
>
> Thanks.