Message-ID: <20200408100956.GA32568@willie-the-truck>
Date: Wed, 8 Apr 2020 11:09:59 +0100
From: Will Deacon <will@...nel.org>
To: Suzuki K Poulose <suzuki.poulose@....com>
Cc: mark.rutland@....com, catalin.marinas@....com,
linux-kernel@...r.kernel.org, james.morse@....com, maz@...nel.org,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 0/5] arm64: Add workaround for Cortex-A77 erratum 1542418

Hi Suzuki,

On Wed, Nov 20, 2019 at 07:18:55PM +0000, Will Deacon wrote:
> On Fri, Nov 15, 2019 at 01:14:07AM +0000, Suzuki K Poulose wrote:
> > On 11/14/2019 04:39 PM, Will Deacon wrote:
> > > addr: B foo
> > >
> > > and another CPU writes out a new function:
> > >
> > > bar:
> > > insn0
> > > ...
> > > insnN
> > >
> > > before doing any necessary maintenance and then patches the original
> > > branch to:
> > >
> > > addr: B bar
> > >
> > > The idea is that a concurrently executing CPU could mispredict the original
> > > branch to point at 'bar', fetch the instructions before they've been written
> > > out and then confirm the prediction by looking at the newly written branch
> > > instruction. Even without the prefetch-speculation-protection, that's
> > > fairly difficult to achieve in practice: you'd need to be doing something
> > > like reusing memory to hold the instructions so that the initial
> > > misprediction occurs.
> > >
> > > How does A77 stop this from occurring when the ASID is not reallocated (e.g.
> > > the example above)? Is the MOP cache flushed somehow?
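
For completeness, the "necessary maintenance" above is the usual publish
sequence on the patching CPU. A rough sketch (the helper names are what I
believe arch/arm64 provides for this, and 'bar', 'addr' etc. are
placeholders, so treat it as an illustration rather than the real
ftrace/alternatives path):

	#include <linux/string.h>
	#include <linux/types.h>
	#include <asm/cacheflush.h>
	#include <asm/insn.h>

	static void publish_new_target(void *bar, const u32 *new_insns,
				       size_t size, void *addr, u32 branch_insn)
	{
		/* 1. Write out the new function body at 'bar'. */
		memcpy(bar, new_insns, size);

		/*
		 * 2. Clean the D-cache and invalidate the I-cache for the
		 *    new code so that instruction fetch can observe it.
		 */
		flush_icache_range((unsigned long)bar,
				   (unsigned long)bar + size);

		/* 3. Only now patch the branch at 'addr' to "B bar". */
		aarch64_insn_patch_text_nosync(addr, branch_insn);
	}

The scenario being described is a CPU that has already mispredicted into
'bar' before step 2 completes and then has that prediction confirmed by
the branch written in step 3.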
> >
> > IIUC, the MOP cache is flushed on I-cache invalidation, so it is fine.
>
> Hmm, so this is interesting. Does that mean we could do a local I-cache
> invalidation in check_and_switch_context() at the same time as doing the
> local TLBI after a rollover?
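
A minimal sketch of that idea, assuming the existing rollover path in
check_and_switch_context() (arch/arm64/mm/context.c); everything except
the asm() mirrors what I believe is already there, and the asm() is only
the suggestion above, not an actual patch:

	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) {
		local_flush_tlb_all();
		/*
		 * Speculative addition: invalidate the local I-cache so
		 * that the MOP cache is flushed on this CPU as well.
		 */
		asm volatile("ic iallu; dsb nsh; isb" : : : "memory");
	}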
>
> I still don't grok the failure case, though: assuming A77 has IDC=0,
> won't you see the I-cache maintenance from userspace anyway?

Please could you explain why the userspace I-cache maintenance doesn't
resolve the issue here?
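
(By "userspace I-cache maintenance" I mean the usual sequence a JIT or
self-modifying program issues after writing out code, e.g. via the
compiler builtin; 'publish_code' below is a made-up name for
illustration:)

	#include <stddef.h>

	/*
	 * After 'len' bytes of instructions have been written into 'buf',
	 * __builtin___clear_cache() performs the DC CVAU / IC IVAU / DSB /
	 * ISB sequence over the range on arm64 (when CTR_EL0.{IDC,DIC}
	 * don't allow it to be elided).
	 */
	static void publish_code(char *buf, size_t len)
	{
		__builtin___clear_cache(buf, buf + len);
	}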

Thanks,
Will