Message-ID: <878sdb5qp5.fsf@nanos.tec.linutronix.de>
Date:   Tue, 15 Sep 2020 11:34:14 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Herbert Xu <herbert@...dor.apana.org.au>,
        Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Ard Biesheuvel <ardb@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux Crypto Mailing List <linux-crypto@...r.kernel.org>
Subject: Re: [PATCH] crypto: lib/chacha20poly1305 - Set SG_MITER_ATOMIC unconditionally

On Tue, Sep 15 2020 at 17:05, Herbert Xu wrote:
> On Mon, Sep 14, 2020 at 11:55:53PM -0700, Linus Torvalds wrote:
>>
>> Maybe we could hide it behind a debug option, at least.
>> 
>> Or, alternatively, introduce a new "debug_preempt_count" that doesn't
>> actually disable preemption, but warns about actual sleeping
>> operations..
>
> I'm more worried about existing users of kmap_atomic relying on
> the preemption disabling semantics.  Short of someone checking
> on every single instance (and that would include derived cases
> such as all users of sg miter), I think the safer option is to
> create something brand new and then migrate the existing users
> to it.  Something like
>
> static inline void *kmap_atomic_ifhigh(struct page *page)
> {
> 	if (PageHighMem(page))
> 		return kmap_atomic(page);
> 	return page_address(page);
> }
>
> static inline void kunmap_atomic_ifhigh(struct page *page, void *addr)
> {
> 	if (PageHighMem(page))
> 		kunmap_atomic(addr);
> }
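
For illustration (not part of Herbert's mail; the caller and its name
are made up), a converted user of such helpers would look roughly like:

#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical caller: copy 'len' bytes out of 'page' at 'offset'.
 * Unlike with kmap_atomic(), preemption stays enabled for !HighMem
 * pages, so nothing in here may depend on preemption being disabled. */
static void copy_from_page_example(void *dst, struct page *page,
                                   size_t offset, size_t len)
{
        void *vaddr = kmap_atomic_ifhigh(page);

        memcpy(dst, vaddr + offset, len);
        kunmap_atomic_ifhigh(page, vaddr);
}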

Hmm, that still has the issue that the code between map and unmap must
not sleep, and the conversion must carefully check whether anything in
that region relies on preemption being disabled by kmap_atomic(),
independent of whether the page is actually highmem.
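
As a made-up example of the kind of hidden dependency such an audit has
to catch (frob_stats and frob_copy() are invented for illustration):

#include <linux/highmem.h>
#include <linux/percpu.h>
#include <linux/string.h>

struct frob_stats {
        unsigned long bytes;
};
static DEFINE_PER_CPU(struct frob_stats, frob_stats);

static void frob_copy(void *dst, struct page *page, size_t offset,
                      size_t len)
{
        struct frob_stats *stats;
        void *vaddr = kmap_atomic(page);        /* also disables preemption */

        /* Correct only because kmap_atomic() disabled preemption: the
         * raw per-CPU pointer stays usable because we cannot be migrated. */
        stats = this_cpu_ptr(&frob_stats);
        stats->bytes += len;

        memcpy(dst, vaddr + offset, len);
        kunmap_atomic(vaddr);
}

Blindly switching this to kmap_atomic_ifhigh() leaves preemption enabled
for !HighMem pages and silently breaks the per-CPU update.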

kmap_atomic() is at least consistent vs. preemption; the above, not so
much.

I'd rather go for a preemptible/sleepable version of the highmem mapping
which is in itself consistent for both highmem and non-highmem.
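
For contrast, something with uniformly sleepable semantics could be
sketched on top of the existing kmap()/kunmap() interface (illustrative
only; kmap_sleepable is an invented name, and kmap() already falls back
to page_address() for !HighMem pages anyway):

#include <linux/highmem.h>

/* Always preemptible and may sleep, independent of HighMem. */
static inline void *kmap_sleepable(struct page *page)
{
        if (PageHighMem(page))
                return kmap(page);      /* may sleep waiting for a slot */
        return page_address(page);
}

static inline void kunmap_sleepable(struct page *page)
{
        if (PageHighMem(page))
                kunmap(page);
}

The point is only the consistent may-sleep behaviour for both cases, not
that kmap() itself is the right tool for these hot paths.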

Thanks,

        tglx

