Date:   Thu, 10 Nov 2016 14:00:33 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     "Jason A. Donenfeld" <Jason@...c4.com>
cc:     LKML <linux-kernel@...r.kernel.org>, linux-mips@...ux-mips.org,
        linux-mm@...ck.org,
        WireGuard mailing list <wireguard@...ts.zx2c4.com>,
        k@...ka.home.kg
Subject: Re: Proposal: HAVE_SEPARATE_IRQ_STACK?

On Thu, 10 Nov 2016, Jason A. Donenfeld wrote:
> On Thu, Nov 10, 2016 at 10:03 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
> > Does the slowdown come from the kmalloc overhead or mostly from the less
> > efficient code?
> >
> > If it's mainly kmalloc, then you can preallocate the buffer once for the
> > kthread you're running in and be done with it. If it's the code, then bad
> > luck.
> 
> I fear both. GCC can optimize stack variables in ways that it cannot
> optimize various memory reads and writes.

The question is how much of it is code and how much of it is the kmalloc.
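For illustration, a minimal sketch of the "preallocate once per kthread" idea quoted above; the function and buffer names here are hypothetical, not actual WireGuard code:

	#include <linux/kthread.h>
	#include <linux/slab.h>
	#include <linux/types.h>

	#define SCRATCH_SIZE	4096	/* illustrative size of the former on-stack buffer */

	static int wg_worker(void *data)
	{
		u8 *scratch;

		/* One allocation for the lifetime of the kthread ... */
		scratch = kmalloc(SCRATCH_SIZE, GFP_KERNEL);
		if (!scratch)
			return -ENOMEM;

		while (!kthread_should_stop()) {
			/*
			 * ... instead of kmalloc()/kfree() per packet. The buffer
			 * is private to this kthread, so no locking is needed.
			 */
			process_one_batch(scratch);	/* hypothetical work function */
		}

		kfree(scratch);
		return 0;
	}

That only removes the allocator overhead, of course; whatever GCC loses by going through a pointer instead of stack slots stays lost.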
 
> Strangely, the solution that appeals to me most at the moment is to
> kmalloc (or vmalloc?) a new stack, copy over thread_info, and fiddle
> with the stack registers. I don't see any APIs, however, for a
> platform independent way of doing this. And maybe this is a horrible
> idea. But at least it'd allow me to keep my stack-based code the
> same...

Do not even think about going there. That's going to be a major
mess.

As a short-term workaround you can increase THREAD_SIZE_ORDER for now and
then fix it properly by switching to separate irq stacks.
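For reference, the stopgap amounts to bumping the order in the arch thread_info header; the value below is illustrative, not what mainline MIPS actually uses:

	/* arch/mips/include/asm/thread_info.h -- illustrative value only */
	#define THREAD_SIZE_ORDER	(3)	/* e.g. 4K pages << 3 = 32K kernel stacks */
	#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
	#define THREAD_MASK		(THREAD_SIZE - 1UL)

And a rough, conceptual sketch of what the proper fix looks like on the allocation side; the per-CPU variable and init function names are made up, and the actual switch of the stack pointer on interrupt entry still has to happen in arch-specific entry code:

	#include <linux/percpu.h>
	#include <linux/gfp.h>
	#include <linux/thread_info.h>

	static DEFINE_PER_CPU(void *, demo_irq_stack);	/* hypothetical name */

	static int __init demo_alloc_irq_stacks(void)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			void *stack = (void *)__get_free_pages(GFP_KERNEL,
							       THREAD_SIZE_ORDER);
			if (!stack)
				return -ENOMEM;
			per_cpu(demo_irq_stack, cpu) = stack;
		}
		return 0;
	}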

Thanks,

	tglx
