Message-ID: <CAMj1kXEiq-s8ZQQXS-ERwAkNXnsM8G0P=Pc61orw5upTqexuCA@mail.gmail.com>
Date:   Tue, 16 Feb 2021 11:35:00 +0100
From:   Ard Biesheuvel <ardb@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Dave Martin <dave.martin@....com>,
        Mark Brown <broonie@...nel.org>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Eric Biggers <ebiggers@...nel.org>,
        Will Deacon <will@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: [RFC PATCH 0/5] running kernel mode SIMD with softirqs disabled

On Tue, 16 Feb 2021 at 11:10, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Dec 18, 2020 at 06:01:01PM +0100, Ard Biesheuvel wrote:
> > [ TL;DR for the non-ARM folks on CC: disabling softirq processing when using
> >   SIMD in kernel mode could reduce complexity and improve performance, but we
> >   need to decide whether we can do this, and how much softirq processing
> >   latency we can tolerate. If we can find a satisfactory solution for this,
> >   we might do the same for x86 and 32-bit ARM as well. ]
>
> > - could we do the same on x86, now that kernel_fpu_begin/end is no longer
> >   expensive?
>
> Can't we simply save/restore the relevant register set?
>
> So something like (note amluto was wanting to add a regset argument):
>
>         <task>
>         kernel_fpu_begin(MMX)
>                 <SIRQ>
>                 kernel_fpu_begin(SSE)
>                 kernel_fpu_end();
>                 </SIRQ>
>         ...
>         kernel_fpu_end()
>
> We would have to save the MMX regs on the first SIRQ invocation of
> kernel_fpu_begin(), and then have the softirq context termination
> (</SIRQ> above) restore them.
>
> I mean, we already do much the same for the first kernel_fpu_begin(),
> which has to save the user registers; those will be restored when we
> go back to userspace.
>
> So why not do exactly the same for softirq context?
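
(For the benefit of the folks on CC who don't stare at x86, I read
that as roughly the sketch below. To be clear, this is hypothetical:
kernel_fpu_begin() takes no register-set argument today, and the
enum kfpu_regset, struct fpu_state_buf, fpu_save_regset() and the
per-CPU bookkeeping are made up for illustration.)

    #include <linux/interrupt.h>
    #include <linux/percpu.h>
    #include <linux/preempt.h>

    /* One spare buffer per CPU for the softirq case (hypothetical). */
    static DEFINE_PER_CPU(struct fpu_state_buf, sirq_fpu_buf);
    static DEFINE_PER_CPU(bool, sirq_fpu_saved);
    static DEFINE_PER_CPU(enum kfpu_regset, task_fpu_set);

    void kernel_fpu_begin(enum kfpu_regset set)
    {
            preempt_disable();

            if (!in_serving_softirq()) {
                    /* Task context: remember which regs we will dirty. */
                    __this_cpu_write(task_fpu_set, set);
            } else if (!__this_cpu_read(sirq_fpu_saved)) {
                    /*
                     * First begin() inside the softirq: stash the regs
                     * the interrupted task-context begin() declared
                     * (MMX in the example above), just like the first
                     * task-context begin() stashes the user regs.
                     */
                    fpu_save_regset(this_cpu_ptr(&sirq_fpu_buf),
                                    __this_cpu_read(task_fpu_set));
                    __this_cpu_write(sirq_fpu_saved, true);
            }
            /* ... existing save/activate logic ... */
    }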

That is what we originally had on arm64, with per-CPU buffers of the
appropriate size. This became a bit messy when SVE support was added,
because the register file is so large (32 registers of up to 2048 bits
each), and since the kernel does not use SVE itself, we want the inner
per-CPU buffer to cover only 128 bits per register. This means we
cannot allow the <SIRQ></SIRQ> region above to interrupt the outer
preserve (which is implemented entirely in software), since resuming
the outer preserve after a softirq would store content that had
already been corrupted by the inner preserve/restore. We could have
addressed this by disabling interrupts across the outer preserve, but
that caused a problem elsewhere (Dave might remember the details
better than I do). So we ended up making SIMD in task context mutually
exclusive with SIMD in softirq context, partly because that is what
32-bit ARM and x86 were already doing anyway.
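
To spell out the failure mode, in illustrative pseudo-C (this is not
the actual arm64 code; the helpers and types are made up):

    /* Hypothetical: the outer, software-only preserve of full SVE state. */
    void outer_sve_preserve(struct sve_state *st)
    {
            int i;

            for (i = 0; i < 32; i++) {
                    /*
                     * <-- a softirq taken here does its own 128-bit
                     *     preserve/restore around its SIMD use, which
                     *     leaves the upper bits of Z(i)..Z(31) corrupted;
                     *     the resumed loop below then faithfully stores
                     *     that garbage into the task's SVE state.
                     */
                    save_full_zreg(st, i);  /* hypothetical helper */
            }
    }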

But I understand that these concerns may not apply to x86 at all, so
perhaps the answer there is indeed to have an alternate buffer. And
actually, Andy L. suggested the same when I asked him about it on IRC.
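
Concretely, on the softirq exit path that would amount to something
like this (again hypothetical, mirroring the save side in the sketch
further up; fpu_restore_regset() is made up like its save counterpart):

    /* Hypothetical: run when softirq processing completes (</SIRQ>). */
    void kernel_fpu_softirq_exit(void)
    {
            if (__this_cpu_read(sirq_fpu_saved)) {
                    /* Put back the regs of the interrupted context. */
                    fpu_restore_regset(this_cpu_ptr(&sirq_fpu_buf),
                                       __this_cpu_read(task_fpu_set));
                    __this_cpu_write(sirq_fpu_saved, false);
            }
    }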
