Message-ID: <20151019161816.GE11226@e104818-lin.cambridge.arm.com>
Date:	Mon, 19 Oct 2015 17:18:17 +0100
From:	Catalin Marinas <catalin.marinas@....com>
To:	Jungseok Lee <jungseoklee85@...il.com>
Cc:	mark.rutland@....com, barami97@...il.com, will.deacon@....com,
	linux-kernel@...r.kernel.org, takahiro.akashi@...aro.org,
	James Morse <james.morse@....com>,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v4 2/2] arm64: Expand the stack trace feature to support
 IRQ stack

On Sat, Oct 17, 2015 at 10:38:16PM +0900, Jungseok Lee wrote:
> On Oct 17, 2015, at 1:06 AM, Catalin Marinas wrote:
> > BTW, a static allocation (DEFINE_PER_CPU for the whole irq stack) would
> > save us from another stack address reading on the IRQ entry path. I'm
> > not sure exactly where the 16K image increase comes from but at least it
> > doesn't grow with NR_CPUS, so we can probably live with this.
> 
> I've tried the approach, a static allocation using DEFINE_PER_CPU, but
> it does not work with the top-bit comparison method (for the IRQ
> re-entrance check). The top-bit idea is based on the assumption that
> the IRQ stack is aligned to THREAD_SIZE, but tpidr_el1 is only
> PAGE_SIZE-aligned. This leads to IRQ re-entrance detection failure on
> a 4KB page system.
> 
> IMHO, it is hard to avoid the 16KB size increase for 64KB page
> support. Secondary cores can rely on slab.h, but the boot core
> cannot, so the IRQ stack for at least the boot CPU should be
> allocated statically.

Ah, I forgot about the alignment check. The problem we have with your v5
patch is that kmalloc() doesn't guarantee THREAD_SIZE alignment either
(see commit 2a0b5c0d1929, "arm64: Align less than PAGE_SIZE pgds
naturally", where we had to fix the same problem for pgd_alloc).
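
For reference, the top-bit check in question only works if the IRQ
stack base is THREAD_SIZE-aligned; roughly this (a sketch only, not
the patch code; irq_stack_ptr is a placeholder per-cpu variable):

	/* Sketch: correct only if the IRQ stack base is THREAD_SIZE-aligned
	 * and the stack is THREAD_SIZE big. irq_stack_ptr is hypothetical. */
	static inline bool on_irq_stack(unsigned long sp, int cpu)
	{
		unsigned long base = (unsigned long)per_cpu(irq_stack_ptr, cpu);

		/* sp lies in the stack iff its top bits match the base */
		return (sp & ~(THREAD_SIZE - 1)) == base;
	}

Since tpidr_el1 is only PAGE_SIZE-aligned, a DEFINE_PER_CPU stack
cannot be relied on to satisfy that alignment with 4K pages, which is
the failure you saw.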

I'm leaning more and more towards the x86 approach as I mentioned in the
two messages below:

http://article.gmane.org/gmane.linux.kernel/2041877
http://article.gmane.org/gmane.linux.kernel/2043002

With a per-cpu stack you can avoid another pointer read, replacing it
with a single re-entrance check. Note, however, that the update only
happens during do_softirq_own_stack() and *not* for every IRQ taken.
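
Concretely, something along these lines (only a sketch of the x86
pattern; call_on_irq_stack and the flag name are illustrative, not
real code from those threads):

	/* Sketch: the flag is only touched when switching stacks in
	 * do_softirq_own_stack(), not on every IRQ taken. */
	DEFINE_PER_CPU(unsigned long, irq_stack_in_use);

	void do_softirq_own_stack(void)
	{
		/* single re-entrance check instead of an sp top-bits test */
		if (this_cpu_read(irq_stack_in_use)) {
			__do_softirq();		/* already on the IRQ stack */
			return;
		}

		this_cpu_write(irq_stack_in_use, 1);
		call_on_irq_stack(__do_softirq);  /* hypothetical stack-switch helper */
		this_cpu_write(irq_stack_in_use, 0);
	}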

-- 
Catalin