Message-ID: <CA+55aFz8gByWhuAgW4fER8uw=q9E=tcN6LeS3YqCOqYsOfwPxA@mail.gmail.com>
Date:	Mon, 6 Apr 2015 13:42:26 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Sasha Levin <sasha.levin@...cle.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Dave Jones <davej@...hat.com>, Michal Hocko <mhocko@...e.cz>,
	Borislav Petkov <bp@...en8.de>,
	"the arch/x86 maintainers" <x86@...nel.org>
Subject: Re: Hang on large copy_from_user with PREEMPT_NONE

On Mon, Apr 6, 2015 at 12:08 PM, Sasha Levin <sasha.levin@...cle.com> wrote:
>
> Your patch just makes it hang in memset instead:

So it's certainly a big memset (2GB or so: original count in RDX:
0x7e777000, and "%rcx << 6" is bytes left, so it has done about 85% of
it), which is certainly going to be slow, but it shouldn't *hang*. The
kernel memory should be all there and allocated, so it should be just
limited by memory speeds, and that shouldn't take anywhere near 22s.
The previous "one byte at a time" case I could easily have seen being
slow enough to trigger the watchdog, but 2GB of pre-allocated memory?
Weird. Any half-way
normal memory subsystem should write memory at tens of GB/s.
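
Just to put numbers on it, a quick back-of-the-envelope check (plain
userspace C; the %rcx value is a made-up illustration picked to match
the "about 85%" above, since the exact register dump isn't quoted
here):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t total = 0x7e777000ULL;	/* original count from %rdx: ~2.1 GB */
	uint64_t rcx   = 0x4be000ULL;	/* hypothetical %rcx value for illustration */
	uint64_t left  = rcx << 6;	/* loop does 64 bytes/iteration, so bytes left = %rcx << 6 */

	printf("total: %.2f GB\n", total / 1e9);
	printf("done:  %.0f%%\n", 100.0 * (total - left) / total);
	/* even at a modest 10 GB/s, the whole memset is ~0.2s, nowhere near 22s */
	printf("time at 10 GB/s: %.2fs\n", total / 10e9);
	return 0;
}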

So it's a bit odd that the watchdog triggers.

That said, maybe there is some virtualization thing that slows down
these things by an order of magnitude or two (for example, paging in
the host). At that point I can easily see the 2GB memset() taking a
long time.

The main (only, really) reason we zero the target kernel buffer is
security, but that matters mainly for copying structures from user
space or for the data copy of write() system calls etc. So we could
easily say that we limit the clearing to a single hugepage or
something, since anything bigger than that is going to be in vmalloc
space and the copier had *better* check the return value anyway.
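
A rough sketch of what that might look like in the fault path (the
helper name and the 2MB cap are invented for illustration; this is
not the actual usercopy fault-handling code):

#include <linux/kernel.h>	/* min() */
#include <linux/string.h>	/* memset() */

/*
 * Hypothetical: cap the security-clearing of the uncopied tail at one
 * 2MB hugepage. Anything bigger lives in vmalloc space and the caller
 * has to check the short-copy return value anyway.
 */
#define CLEAR_CAP	(2UL << 20)

static void zero_uncopied_tail(void *to, unsigned long copied,
			       unsigned long total)
{
	unsigned long left = total - copied;

	memset(to + copied, 0, min(left, CLEAR_CAP));
}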

Alternatively, we could just limit module loading size to some (fairly
arbitrary) big number.
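
Which could be as simple as an early size check in the module loader
(the limit below is an arbitrary illustration, not a tested value,
assuming the length field of struct load_info in kernel/module.c):

/*
 * Hypothetical: reject absurdly large module images up front, before
 * any allocation or copying. 64MB is arbitrary but generous.
 */
#define MODULE_MAX_SIZE	(64UL << 20)

static int check_module_size(const struct load_info *info)
{
	return info->len > MODULE_MAX_SIZE ? -EFBIG : 0;
}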

                         Linus