Message-ID: <1260781359.4165.42.camel@twins>
Date:	Mon, 14 Dec 2009 10:02:39 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	David Miller <davem@...emloft.net>
Cc:	mingo@...e.hu, tglx@...utronix.de, linux-kernel@...r.kernel.org
Subject: Re: cpu_clock() in NMIs

On Sun, 2009-12-13 at 18:25 -0800, David Miller wrote:
> The background is that I was trying to resolve a sparc64 perf
> issue when I discovered this problem.
> 
> On sparc64 I implement pseudo-NMIs by simply running the kernel
> at IRQ level 14 when local_irq_disable() is called; this allows
> performance counter events to still come in at IRQ level 15.
> 
> This doesn't work if any code in an NMI handler does local_irq_save()
> or local_irq_disable(), since the "disable" will kick us back to cpu
> IRQ level 14, thus letting NMIs back in, and we recurse.
> 
> The only path that does that in the perf event IRQ handling code
> is the code supporting frequency-based events.  It uses cpu_clock().
> 
> cpu_clock() simply invokes sched_clock() with IRQs disabled.
> 
> And that's a fundamental bug all on its own, particularly for the
> HAVE_UNSTABLE_SCHED_CLOCK case.  NMIs can thus get into the
> sched_clock() code and interrupt its IRQ-disabled sections.
> 
> Furthermore, for the not-HAVE_UNSTABLE_SCHED_CLOCK case, the IRQ
> disabling done by cpu_clock() is just pure overhead and completely
> unnecessary.
> 
> So the core problem is that sched_clock() is not NMI safe, but we
> are invoking it from NMI contexts in the perf events code (via
> cpu_clock()).
> 
> A less important issue is the overhead of IRQ disabling when it isn't
> necessary in cpu_clock().  Maybe something simple like the patch below
> to handle that.

I'm not sure; traditionally sched_clock() was always called with IRQs
disabled, and on e.g. x86 that is needed because we read the TSC and
then scale that value depending on CPUfreq state, which can be changed
by a CPUfreq interrupt. Allowing NMIs in between isn't really a
problem; allowing IRQs in is.
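
A simplified sketch of the race this guards against on x86; the real
code is the per-cpu cyc2ns machinery in arch/x86/kernel/tsc.c, and the
names below are illustrative, not the actual implementation:

#define CYC2NS_SHIFT	10		/* illustrative scale shift */

static unsigned long long cyc2ns_scale;	/* rewritten on cpufreq transitions */

unsigned long long sched_clock_sketch(void)
{
	unsigned long long cyc = read_tsc();	/* hypothetical TSC read helper */

	/*
	 * If an IRQ gets in here and the cpufreq notifier rewrites
	 * cyc2ns_scale, we scale an old TSC value with a new factor and
	 * return garbage.  An NMI never touches the factor, so NMIs in
	 * between are harmless -- IRQs are the problem.
	 */
	return (cyc * cyc2ns_scale) >> CYC2NS_SHIFT;
}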

Now, the SPARC implementation might be fine without IRQs disabled, but
we should at least look at all the other arches before we do what you
propose below, as it removes the IRQ disable from the call sites,
whereas previously they always had it.
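
To make the sparc64 side concrete, a rough sketch of the pseudo-NMI
scheme David describes above (PIL_NORMAL_MAX/PIL_NMI follow the
convention of arch/sparc/include/asm/pil.h; write_pil() is a
hypothetical helper, not the real arch code):

#define PIL_NORMAL_MAX	14	/* local_irq_disable() masks up to here */
#define PIL_NMI		15	/* perf counter interrupts arrive here  */

static inline void pseudo_nmi_irq_disable_sketch(void)
{
	/*
	 * "Disabling" IRQs only raises %pil to PIL_NORMAL_MAX, which
	 * still leaves PIL_NMI open.  Doing this from inside the
	 * level-15 handler (e.g. via cpu_clock() -> local_irq_save())
	 * drops %pil from 15 back to 14, so a fresh level-15 interrupt
	 * can arrive and the handler recurses.
	 */
	write_pil(PIL_NORMAL_MAX);
}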


