Date:   Tue, 15 Sep 2020 13:02:10 +0300
From:   Ard Biesheuvel <ardb@...nel.org>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     Herbert Xu <herbert@...dor.apana.org.au>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux Crypto Mailing List <linux-crypto@...r.kernel.org>
Subject: Re: [PATCH] crypto: lib/chacha20poly1305 - Set SG_MITER_ATOMIC unconditionally

On Tue, 15 Sep 2020 at 12:34, Thomas Gleixner <tglx@...utronix.de> wrote:
>
> On Tue, Sep 15 2020 at 17:05, Herbert Xu wrote:
> > On Mon, Sep 14, 2020 at 11:55:53PM -0700, Linus Torvalds wrote:
> >>
> >> Maybe we could hide it behind a debug option, at least.
> >>
> >> Or, alternatively, introduce a new "debug_preempt_count" that doesn't
> >> actually disable preemption, but warns about actual sleeping
> >> operations..
> >
> > I'm more worried about existing users of kmap_atomic relying on
> > the preemption disabling semantics.  Short of someone checking
> > on every single instance (and that would include derived cases
> > such as all users of sg miter), I think the safer option is to
> > create something brand new and then migrate the existing users
> > to it.  Something like
> >
> > static inline void *kmap_atomic_ifhigh(struct page *page)
> > {
> >       if (PageHighMem(page))
> >               return kmap_atomic(page);
> >       return page_address(page);
> > }
> >
> > static inline void kunmap_atomic_ifhigh(struct page *page, void *addr)
> > {
> >       if (PageHighMem(page))
> >               kunmap_atomic(addr);
> > }
>
> Hmm, that still has the issue that the code between map and unmap must
> not sleep, and the conversion must carefully check whether anything in
> this region relies on preemption being disabled by kmap_atomic(),
> highmem or not.
>
> kmap_atomic() is at least consistent vs. preemption, the above not so
> much.
>
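For concreteness, here is a minimal sketch of a call site using the
kmap_atomic_ifhigh() helpers quoted above; the copy_to_page() wrapper
itself is hypothetical:

        #include <linux/highmem.h>      /* kmap_atomic(), PageHighMem() */
        #include <linux/string.h>       /* memcpy() */

        static void copy_to_page(struct page *page, size_t offset,
                                 const void *src, size_t len)
        {
                void *vaddr = kmap_atomic_ifhigh(page);

                /* Still must not sleep here: if the page is highmem,
                 * preemption is disabled (Thomas's objection above). */
                memcpy(vaddr + offset, src, len);

                kunmap_atomic_ifhigh(page, vaddr);
        }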

But that is really the point. I don't *want* to be forced to disable
preemption in brand new code simply because some legacy highmem API
conflates being callable from an atomic context with instantiating an
atomic context by disabling preemption for no good reason. IIUC, in
the past you would only call kmap_atomic() if you absolutely had to,
so you would never accidentally rely on its preemption-disabling
semantics. By making kmap_atomic() the preferred API even for calls
from non-atomic contexts, that line has blurred, and we no longer know
why individual kmap_atomic() occurrences exist in the first place.
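To make the conflation concrete, the generic kmap_atomic() today is
roughly the following (simplified from include/linux/highmem.h; arch
details omitted), so every caller lands in a non-preemptible section
whether or not a temporary mapping is actually needed:

        static inline void *kmap_atomic(struct page *page)
        {
                preempt_disable();      /* unconditional, even for lowmem */
                pagefault_disable();
                if (!PageHighMem(page))
                        return page_address(page);      /* direct map */
                return kmap_atomic_high(page);  /* arch highmem slot */
        }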

> I'd rather go for a preemptible/sleepable version of highmem mapping
> which is in itself consistent for both highmem and not highmem.
>
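(Shape-wise, such a primitive could look like the sketch below; the
names are hypothetical, and the existing sleepable kmap()/kunmap()
stand in for whatever cheaper per-task mechanism would actually back
it:)

        /* Hypothetical: a mapping helper that behaves the same for
         * highmem and !highmem, and never disables preemption. */
        static inline void *kmap_sleepable(struct page *page)
        {
                return kmap(page);      /* lowmem: page_address();
                                         * highmem: may sleep for a slot */
        }

        static inline void kunmap_sleepable(struct page *page)
        {
                kunmap(page);
        }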

I don't think we need to obsess over highmem, although we should
obviously take care not to regress its performance unnecessarily. What
I want to avoid is burdening a brand new subsystem with legacy highmem
baggage simply because we could not agree on how to avoid it.
