Date:	Wed, 01 Oct 2008 15:34:10 +0100
From:	David Howells <dhowells@...hat.com>
To:	Nicolas Pitre <nico@....org>
Cc:	dhowells@...hat.com, Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	torvalds@...l.org, akpm@...ux-foundation.org,
	linux-am33-list@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/

Nicolas Pitre <nico@....org> wrote:

> Disabling preemption is unneeded.

I think you may be wrong on that.  MEI came up with the following point:

	I think either disabling preemption or disabling interrupts is really
	necessary for the cnt32_to_63 macro, because there seems to be an
	assumption that the series of code steps (1)-(4) must be executed
	within half a period of the 32-bit counter.

	-------------------------------------------------
	#define cnt32_to_63(cnt_lo) \
	({ \
	       static volatile u32 __m_cnt_hi = 0; \
	       cnt32_to_63_t __x; \
	(1)    __x.hi = __m_cnt_hi; \
	(2)    __x.lo = (cnt_lo); \
	(3)    if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
	(4)            __m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
	       __x.val; \
	})
	-------------------------------------------------

	If a task is preempted while executing this series of code and is
	scheduled again only after half a period of the 32-bit counter has
	passed, the task may corrupt __m_cnt_hi.
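
For reference, the extension logic in the quoted macro can be exercised standalone in userspace (a minimal sketch, not the kernel macro itself; the names here are illustrative):

```c
/* Userspace sketch of the cnt32_to_63 wrap-extension logic quoted
 * above.  Steps (1)-(4) mirror the macro: a wrap is detected when
 * bit 31 of the low word disagrees with bit 31 of the stored high
 * word, and the high word is then advanced. */
#include <stdint.h>

static uint32_t m_cnt_hi;	/* stands in for __m_cnt_hi */

static uint64_t cnt32_to_63_sketch(uint32_t cnt_lo)
{
	uint32_t hi = m_cnt_hi;				/* (1) */
	uint32_t lo = cnt_lo;				/* (2) */
	if ((int32_t)(hi ^ lo) < 0)			/* (3) */
		m_cnt_hi = hi = (hi ^ 0x80000000u) + (hi >> 31); /* (4) */
	/* bit 63 carries the half-period flag; callers mask it off */
	return ((uint64_t)hi << 32 | lo) & 0x7fffffffffffffffULL;
}
```

Provided it is called at least once per half period of the counter, this yields a monotonic 63-bit value across 32-bit wraps.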

Their suggested remedy is:

	So I think it's better to disable interrupts around cnt32_to_63 and to
	ensure that the series of code steps is executed within a short period.

I think this is excessive...  If we're sat there with interrupts disabled for
more than a half period (65s) then we've got other troubles.  I think
disabling preemption for the duration ought to be enough.  What do you think?
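
The interleaving MEI describes can be reproduced deterministically in a userspace simulation (a single-threaded stand-in for the preemption; a sketch with illustrative names, not kernel code):

```c
/* Simulation of the race: task A performs steps (1)-(2) of the quoted
 * macro, is "preempted" for more than half a counter period while
 * other callers keep the high word up to date, then resumes at steps
 * (3)-(4) with its stale snapshot and steps the extended clock
 * backwards. */
#include <stdint.h>

static uint32_t m_cnt_hi;	/* stands in for __m_cnt_hi */

static uint64_t extend(uint32_t lo)	/* the quoted logic, uninterrupted */
{
	uint32_t hi = m_cnt_hi;
	if ((int32_t)(hi ^ lo) < 0)
		m_cnt_hi = hi = (hi ^ 0x80000000u) + (hi >> 31);
	return ((uint64_t)hi << 32 | lo) & 0x7fffffffffffffffULL;
}

/* Task A resuming after preemption: finishes steps (3)-(4) with the
 * hi/lo values it sampled before being scheduled out. */
static uint64_t resume_stale(uint32_t stale_hi, uint32_t stale_lo)
{
	uint32_t hi = stale_hi;
	if ((int32_t)(hi ^ stale_lo) < 0)
		m_cnt_hi = hi = (hi ^ 0x80000000u) + (hi >> 31);
	return ((uint64_t)hi << 32 | stale_lo) & 0x7fffffffffffffffULL;
}
```

The stale write both corrupts the shared high word and makes the resumed task return a value a full period in the past; keeping preemption disabled across the whole sequence, as in the patch below, closes that window.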

Now, I'm happy to put these in sched_clock() rather than cnt32_to_63() for my
purposes (see attached patch).

David
---
MN10300: Prevent cnt32_to_63() from being preempted in sched_clock()

From: David Howells <dhowells@...hat.com>

Prevent cnt32_to_63() from being preempted in sched_clock() because it may
read its internal counter, get preempted, get delayed for more than the half
period of the 'TSC' and then write the internal counter, thus corrupting it.

Whilst some callers of sched_clock() have interrupts disabled or hold
spinlocks, not all do, and so preemption must be disabled here.

Note that sched_clock() is called from lockdep, but that shouldn't be a problem
because although preempt_disable() calls into lockdep, lockdep has a recursion
counter to deal with this.

Signed-off-by: David Howells <dhowells@...hat.com>
---

 arch/mn10300/kernel/time.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)


diff --git a/arch/mn10300/kernel/time.c b/arch/mn10300/kernel/time.c
index e460658..38f88bb 100644
--- a/arch/mn10300/kernel/time.c
+++ b/arch/mn10300/kernel/time.c
@@ -55,6 +55,9 @@ unsigned long long sched_clock(void)
 	unsigned long tsc, tmp;
 	unsigned product[3]; /* 96-bit intermediate value */
 
+	/* cnt32_to_63() is not safe with preemption */
+	preempt_disable();
+
 	/* read the TSC value
 	 */
 	tsc = 0 - get_cycles(); /* get_cycles() counts down */
@@ -65,6 +68,8 @@ unsigned long long sched_clock(void)
 	 */
 	tsc64.ll = cnt32_to_63(tsc) & 0x7fffffffffffffffULL;
 
+	preempt_enable();
+
 	/* scale the 64-bit TSC value to a nanosecond value via a 96-bit
 	 * intermediate
 	 */
--
