Date:	Mon, 21 Sep 2015 10:25:39 +0100
From:	Catalin Marinas <catalin.marinas@....com>
To:	Jungseok Lee <jungseoklee85@...il.com>
Cc:	mark.rutland@....com, will.deacon@....com,
	linux-kernel@...r.kernel.org, takahiro.akashi@...aro.org,
	James Morse <james.morse@....com>,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v2] arm64: Introduce IRQ stack

On Sat, Sep 19, 2015 at 05:44:37PM +0900, Jungseok Lee wrote:
> On Sep 19, 2015, at 12:31 AM, Catalin Marinas wrote:
> > On Fri, Sep 18, 2015 at 04:03:02PM +0100, Catalin Marinas wrote:
> >> On Fri, Sep 18, 2015 at 09:57:56PM +0900, Jungseok Lee wrote:
> >>> On Sep 18, 2015, at 1:21 AM, Catalin Marinas wrote:
> >>>> So, without any better suggestion for current_thread_info(), I'm giving
> >>>> up the idea of using SPSel == 0 in the kernel. I'll look at your patch
> >>>> in more detail. BTW, I don't think we need any count for the IRQ
> >>>> stack as we don't re-enter the same IRQ stack.
[...]
> >>> BTW, in this context, it is only meaningful to decide whether the current interrupt
> >>> is re-entrant or not. Its actual value is not important, but I could not figure
> >>> out a better implementation than this one yet. Any suggestions are welcome!
[...]
> > Another thought (it seems that x86 does something similar): we know the
> > IRQ stack is not re-entered until interrupts are enabled in
> > __do_softirq. If we enable __ARCH_HAS_DO_SOFTIRQ, we can implement an
> > arm64-specific do_softirq_own_stack() which increments a counter before
> > calling __do_softirq. The difference from your patch is that
> > irq_stack_entry only reads such a counter and doesn't need to write it.
> > 
> > Yet another idea is to reserve some space in the lower address part of
> > the stack with "stack type" information. It still requires another
> > read, so I think the x86 approach is probably better.
> 
> I've realized both hardirq and softirq should be handled on a separate stack
> in order to reduce kernel stack size, which is a principal objective of this
> patch.

The objective is to reduce the kernel thread stack size (THREAD_SIZE).
Stack usage can get pretty deep on some syscalls, and together with IRQs
(hard or soft) we run out of stack.

So, for now, just stick to reducing THREAD_SIZE by moving the IRQs off
this stack. If we later find that hardirqs + softirqs can't fit on the
same _IRQ_ stack, we could either increase it or allocate a separate
stack for softirqs. These are static anyway, allocated during boot. But
I wouldn't get distracted by separate hard and soft IRQ stacks for now;
I doubt we would see any issues (when a softirq runs, the IRQ stack is
pretty much empty, apart from the pt_regs).
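
For illustration, the static per-CPU allocation could look roughly like
the sketch below (IRQ_STACK_SIZE and irq_stack are placeholder names,
not necessarily what the patch ends up using):

#include <linux/percpu.h>

/* Sketch only: one statically reserved IRQ stack per CPU, available
 * from boot, so no runtime allocation is needed. */
#define IRQ_STACK_SIZE		16384

static DEFINE_PER_CPU(unsigned long [IRQ_STACK_SIZE / sizeof(unsigned long)],
		      irq_stack) __aligned(16);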

> (If I'm not missing something) It is not possible to get a big win
> with implementing do_softirq_own_stack() since hardirqs are handled using the task
> stack. This prevents the kernel stack size from being decreased.

What I meant is that hard and soft IRQs both run on the IRQ stack (not
the thread stack). But instead of incrementing a counter every time you
take a hard IRQ, just increment it in do_softirq_own_stack(), with a
simple read+check in elX_irq. The "own_stack" name is not the most
appropriate because we still use the same IRQ stack, but I'm not really
bothered about this.
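
Roughly something like the sketch below (illustrative only, not the
actual patch; irq_stack_in_use is a made-up per-CPU variable, and the
corresponding read+check in the elX_irq entry path is not shown):

#include <linux/interrupt.h>
#include <linux/percpu.h>

/* Sketch: per-CPU flag read by the IRQ entry code to decide whether it
 * may switch to the IRQ stack. */
DEFINE_PER_CPU(unsigned int, irq_stack_in_use);

void do_softirq_own_stack(void)
{
	/* Softirqs run with interrupts re-enabled, so mark the IRQ stack
	 * busy; a nested hard IRQ then stays on the current stack. */
	this_cpu_inc(irq_stack_in_use);
	__do_softirq();
	this_cpu_dec(irq_stack_in_use);
}

This keeps the write out of the hot IRQ entry path; elX_irq only needs
the read.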

> However, it would be meaningful to separate the hard IRQ stack and the soft IRQ one
> as the next step.

Only if we see the IRQ stack overflowing; otherwise I don't think it's
worth the effort.

-- 
Catalin
