Message-ID: <20190405154259.3g3fv72miizm64hc@linutronix.de>
Date: Fri, 5 Apr 2019 17:42:59 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Julien Grall <julien.grall@....com>
Cc: Dave Martin <Dave.Martin@....com>,
linux-arm-kernel@...ts.infradead.org,
linux-rt-users@...r.kernel.org, catalin.marinas@....com,
will.deacon@....com, ard.biesheuvel@...aro.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] arm64/fpsimd: Don't disable softirq when touching
FPSIMD/SVE state
On 2019-04-05 16:17:50 [+0100], Julien Grall wrote:
> Hi,
Hi,
> > A per-CPU lock? It has to be a raw_spinlock_t because a normal
> > spin_lock() / local_lock() would allow scheduling and might be taken as
> > part of the context switch or soon after.
> raw_spinlock_t would not work here without disabling preemption.
> Otherwise you may end up recursing on the lock and therefore
> deadlocking. But then that raises the question of the usefulness of
> the lock here.
>
> However, I don't really understand why allowing scheduling would be a
> problem here. Is the concern that we will waste cycles trying to
> save/restore a context that will be scratched as soon as we release
> the lock?
If you hold the lock within the kernel thread, and every kernel thread
acquires it before doing any SIMD operations, then you are good. It
could be a sleeping lock. But what happens if you hold the lock, are
scheduled out, and a user task is about to be scheduled in? How do you
force the kernel thread out / make it give up the FPU registers?
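To make the problem concrete, here is a sketch of that scheme with a
hypothetical per-CPU sleeping lock (all names are made up for
illustration, this is not proposed code):

/* Hypothetical: a per-CPU *sleeping* lock guarding the FPU regs. */
static DEFINE_PER_CPU(struct mutex, fpu_owner_lock);

static void kthread_simd_work(void)
{
	mutex_lock(this_cpu_ptr(&fpu_owner_lock));
	/* SIMD in use; the kthread may sleep or be preempted here
	 * while still holding the lock. */
	mutex_unlock(this_cpu_ptr(&fpu_owner_lock));
}

/* If the kthread is scheduled out in the middle, the context switch
 * that wants to restore a user task's FPSIMD state on this CPU can't
 * wait for the mutex: it runs with the runqueue lock held and must
 * not sleep. */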
That preempt_disable() + local_bh_disable() might not be the prettiest
thing, but how bad is it actually?
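For reference, the shape of the critical section we are talking about
is roughly this (simplified; the real code in
arch/arm64/kernel/fpsimd.c differs in detail):

	preempt_disable();
	local_bh_disable();	/* no softirq SIMD users in between */

	/* ... save/restore FPSIMD/SVE state here ... */

	local_bh_enable();
	preempt_enable();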
Latency-wise you can't schedule(). From an RT point of view you need
to enable preemption while going from page to page because of the
possible kmap() or kmalloc() (on badly aligned src/dst) in the crypto
page-walk code.
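In the crypto code that would look roughly like this (a sketch based
on the skcipher walk API; do_simd_crypt() is made up and error
handling is trimmed):

	struct skcipher_walk walk;
	int err;

	err = skcipher_walk_virt(&walk, req, false);
	while (walk.nbytes) {
		kernel_neon_begin();	/* preemption off */
		do_simd_crypt(walk.src.virt.addr,
			      walk.dst.virt.addr, walk.nbytes);
		kernel_neon_end();	/* preemption on again */

		/* may kmap() or allocate, must run preemptible */
		err = skcipher_walk_done(&walk, 0);
	}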
If that is not good enough latency-wise you could do
kernel_fpu_resched() after a few iterations. Currently I'm trying to
make kernel_fpu_begin()/end() cheap on x86 so that it doesn't always
save/restore the FPU context. Then kernel_fpu_resched() shouldn't be
that bad.
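For reference, kernel_fpu_resched() as carried in the RT tree is
roughly (debug check dropped):

	void kernel_fpu_resched(void)
	{
		if (should_resched(PREEMPT_OFFSET)) {
			kernel_fpu_end();
			cond_resched();
			kernel_fpu_begin();
		}
	}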
> Cheers,
Sebastian