Message-ID: <dabf5b36-d22e-e8a3-c01f-6b0f5b3be710@arm.com>
Date: Fri, 5 Apr 2019 16:17:50 +0100
From: Julien Grall <julien.grall@....com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Dave Martin <Dave.Martin@....com>,
linux-arm-kernel@...ts.infradead.org,
linux-rt-users@...r.kernel.org, catalin.marinas@....com,
will.deacon@....com, ard.biesheuvel@...aro.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] arm64/fpsimd: Don't disable softirq when touching
FPSIMD/SVE state
Hi,
On 05/04/2019 15:39, Sebastian Andrzej Siewior wrote:
> On 2019-04-05 10:02:45 [+0100], Julien Grall wrote:
>> RT folks already saw this corruption because local_bh_disable() does not
>> preempt on RT. They are carrying a patch (see "arm64: fpsimd: use
>> preemp_disable in addition to local_bh_disable()") to disable preemption
>> along with local_bh_disable().
>>
>> Alternatively, Julia suggested to introduce a per-cpu lock to protect the
>> state. I am thinking to defer this for a follow-up patch. The changes in
>> this patch should make it easier because we now have helper to mark the
>> critical section.
>
> A per-CPU lock? It has to be a raw_spinlock_t because a normal
> spin_lock() / local_lock() would allow scheduling and might be taken as
> part of the context switch or soon after.
raw_spinlock_t would not work here without disabling preemption. Otherwise you
may end up recursing on the lock and therefore deadlock. But then it raises the
question of the usefulness of the lock here.
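To make the recursion problem concrete, here is a minimal sketch of the per-CPU
lock idea with preemption disabled around it. The names fpsimd_context_lock,
get_cpu_fpsimd_context() and put_cpu_fpsimd_context() are made up for
illustration; this is not the actual follow-up patch:

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

/* Sketch only: a per-CPU lock protecting this CPU's FPSIMD/SVE state. */
static DEFINE_PER_CPU(raw_spinlock_t, fpsimd_context_lock) =
	__RAW_SPIN_LOCK_UNLOCKED(fpsimd_context_lock);

static void get_cpu_fpsimd_context(void)
{
	/*
	 * Preemption has to go off before taking the lock: the context
	 * switch path would need the same per-CPU lock to save the
	 * outgoing task's state, so being preempted while holding it
	 * leaves the switch code spinning on a lock its owner can never
	 * release, i.e. a deadlock.
	 */
	preempt_disable();
	raw_spin_lock(this_cpu_ptr(&fpsimd_context_lock));
}

static void put_cpu_fpsimd_context(void)
{
	raw_spin_unlock(this_cpu_ptr(&fpsimd_context_lock));
	preempt_enable();
}

So with preemption disabled anyway, the lock itself does not buy us much, which
is what I meant about its usefulness.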
However, I don't really understand why allowing scheduling would be a
problem here. Is it a concern that we would waste cycles trying to restore/save
a context that will be scratched as soon as we release the lock?
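For completeness, the workaround carried in the RT tree (the patch quoted
above) boils down to the pattern below around the FPSIMD/SVE critical
sections; this is a sketch of the idea, not the exact patch:

	/*
	 * On RT, local_bh_disable() no longer disables preemption, so
	 * preemption is disabled explicitly as well. Sketch only.
	 */
	local_bh_disable();
	preempt_disable();

	/* ... touch this CPU's FPSIMD/SVE registers ... */

	preempt_enable();
	local_bh_enable();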
Cheers,
--
Julien Grall