Date:	Tue, 30 Sep 2014 13:56:37 +0200
From:	Arnd Bergmann <arnd@...db.de>
To:	Rik van Riel <riel@...hat.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	umgwanakikbuti@...il.com, fweisbec@...il.com,
	akpm@...ux-foundation.org, srao@...hat.com, lwoodman@...hat.com,
	atheurer@...hat.com, oleg@...hat.com,
	Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org
Subject: [PATCH] sched, time: cmpxchg does not work on 64-bit variables

A recent change that updates the stime/utime members of task_struct
using an atomic cmpxchg broke 32-bit configurations with
CONFIG_VIRT_CPU_ACCOUNTING_GEN set: that option stores cputime in
64-bit nanoseconds, which the 32-bit cmpxchg() implementations cannot
handle, leading to a link-time error:

kernel/built-in.o: In function `cputime_adjust':
:(.text+0x25234): undefined reference to `__bad_cmpxchg'
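
For background (not part of the patch): with CONFIG_VIRT_CPU_ACCOUNTING_GEN,
cputime_t is a 64-bit nanosecond counter even on 32-bit machines, while the
ARM cmpxchg() only dispatches on 1-, 2- and 4-byte operands and routes any
other size to the deliberately undefined __bad_cmpxchg symbol, so the bug
only shows up when vmlinux is linked. Roughly, heavily simplified from the
mainline headers of this era:

	/* include/asm-generic/cputime_nsecs.h, VIRT_CPU_ACCOUNTING_GEN */
	typedef u64 cputime_t;		/* 8 bytes, also on 32-bit */

	/* arch/arm/include/asm/cmpxchg.h, simplified */
	static inline unsigned long __cmpxchg(volatile void *ptr,
			unsigned long old, unsigned long new, int size)
	{
		switch (size) {
		case 1:		/* ldrexb/strexb loop */
		case 2:		/* ldrexh/strexh loop */
		case 4:		/* ldrex/strex loop */
			break;
		default:
			/* a u64 operand ends up here; __bad_cmpxchg() is
			 * never defined, so the reference survives into
			 * the object file and fails at link time */
			__bad_cmpxchg(ptr, size);
		}
		return old;
	}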

This reverts the change that caused the problem. I suspect the real fix
is to conditionally use cmpxchg64() instead, but I have not checked
whether that works on all architectures; an untested sketch follows
after the patch notes below. Note that going back to a plain max()
reintroduces the race between concurrent cputime_adjust() callers that
the cmpxchg was meant to close.

Signed-off-by: Arnd Bergmann <arnd@...db.de>
Fixes: eb1b4af0a64a ("sched, time: Atomically increment stime & utime")
---
found in ARM randconfig builds on linux-next
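
Untested sketch of the cmpxchg64() variant mentioned above, for anyone
who wants to pursue that route instead; it would have to be made
conditional on the size of cputime_t (cmpxchg64() is wrong for the
4-byte jiffies-based cputime_t), and I have not checked which
architectures provide cmpxchg64() in this context:

	/* untested sketch, not part of this patch */
	while (stime > (rtime = ACCESS_ONCE(prev->stime)))
		cmpxchg64(&prev->stime, rtime, stime);
	while (utime > (rtime = ACCESS_ONCE(prev->utime)))
		cmpxchg64(&prev->utime, rtime, utime);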

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 64492dff8a81..e99e7e54131c 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -603,12 +603,9 @@ static void cputime_adjust(struct task_cputime *curr,
 	 * If the tick based count grows faster than the scheduler one,
 	 * the result of the scaling may go backward.
 	 * Let's enforce monotonicity.
-	 * Atomic exchange protects against concurrent cputime_adjust().
 	 */
-	while (stime > (rtime = ACCESS_ONCE(prev->stime)))
-		cmpxchg(&prev->stime, rtime, stime);
-	while (utime > (rtime = ACCESS_ONCE(prev->utime)))
-		cmpxchg(&prev->utime, rtime, utime);
+	prev->stime = max(prev->stime, stime);
+	prev->utime = max(prev->utime, utime);
 
 out:
 	*ut = prev->utime;

