Date:	Sat, 16 Aug 2014 16:55:15 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	riel@...hat.com
Cc:	linux-kernel@...r.kernel.org, peterz@...radead.org,
	umgwanakikbuti@...il.com, fweisbec@...il.com,
	akpm@...ux-foundation.org, srao@...hat.com, lwoodman@...hat.com,
	atheurer@...hat.com
Subject: Re: [PATCH 3/3] sched,time: atomically increment stime & utime

On 08/15, riel@...hat.com wrote:
>
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -605,9 +605,12 @@ static void cputime_adjust(struct task_cputime *curr,
>  	 * If the tick based count grows faster than the scheduler one,
>  	 * the result of the scaling may go backward.
>  	 * Let's enforce monotonicity.
> +	 * Atomic exchange protects against concurrent cputime_adjust.
>  	 */
> -	prev->stime = max(prev->stime, stime);
> -	prev->utime = max(prev->utime, utime);
> +	while (stime > (rtime = ACCESS_ONCE(prev->stime)))
> +		cmpxchg(&prev->stime, rtime, stime);
> +	while (utime > (rtime = ACCESS_ONCE(prev->utime)))
> +		cmpxchg(&prev->utime, rtime, utime);
>
>  out:
>  	*ut = prev->utime;

I am still not sure about this change. At least I think it needs some
discussion.

Let me repeat, afaics this can lead to inconsistent results. Just
suppose that the caller of thread_group_cputime_adjusted() is preempted
for a long time between thread_group_cputime() and cputime_adjust(),
and the numbers in signal->prev_cputime grow significantly before this
task resumes. If cputime_adjust() sees both prev->stime and prev->utime
updated, everything is fine. But we can race with cputime_adjust() on
another CPU and observe, say, the change in ->stime while missing the
change in ->utime.

IOW. To simplify, suppose that thread_group_cputime(T) fills task_cputime
with zeros. Then the caller X is preempted.

Another task does thread_group_cputime(T) and this time task_cputime is
{ .utime = A_LOT_U, .stime = A_LOT_S }. This task calls cputime_adjust()
and sets prev->stime = A_LOT_S, but its update of prev->utime is not
yet visible to X.

X resumes and calls cputime_adjust(); its all-zero sample loses both
comparisons, so it returns { .utime = 0, .stime = A_LOT_S }, a pair
that never existed in any single sample.
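
To make the interleaving concrete, here is a standalone userspace
sketch, using C11 atomics in place of the kernel's ACCESS_ONCE/cmpxchg.
prev_stime/prev_utime and the constant A_LOT_S are made up for
illustration, and the other task's half-finished update is serialized
by hand rather than raced for real:

	/*
	 * Userspace sketch (C11, not kernel code) of the interleaving
	 * above; prev_stime/prev_utime stand in for the fields of
	 * signal->prev_cputime.
	 */
	#include <stdatomic.h>
	#include <stdio.h>

	static _Atomic unsigned long prev_stime, prev_utime;

	/* The proposed update: one independent cmpxchg loop per field. */
	static void adjust(unsigned long stime, unsigned long utime)
	{
		unsigned long old;

		old = atomic_load(&prev_stime);
		while (stime > old &&
		       !atomic_compare_exchange_weak(&prev_stime, &old, stime))
			;
		old = atomic_load(&prev_utime);
		while (utime > old &&
		       !atomic_compare_exchange_weak(&prev_utime, &old, utime))
			;
	}

	int main(void)
	{
		const unsigned long A_LOT_S = 1000;
		unsigned long old = atomic_load(&prev_stime);

		/* The other task has stored ->stime but not yet ->utime: */
		while (A_LOT_S > old &&
		       !atomic_compare_exchange_weak(&prev_stime, &old, A_LOT_S))
			;

		/* X resumes with its all-zero sample; neither loop fires. */
		adjust(0, 0);

		printf("ut=%lu st=%lu\n",
		       atomic_load(&prev_utime), atomic_load(&prev_stime));
		return 0;
	}

This prints "ut=0 st=1000": each field is monotonic on its own, but the
pair comes from two different samples.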

If you think that we do not care, I probably won't argue. But at least
this should be documented/discussed. And if we can tolerate this, then
we can probably remove the scale_stime recalculation entirely and change
cputime_adjust() to just do

	static void cputime_adjust(struct task_cputime *curr,
				   struct cputime *prev,
				   cputime_t *ut, cputime_t *st)
	{
		cputime_t rtime, stime, utime;

		/* Use the tick-based counts as-is, no scale_stime recalculation. */
		stime = curr->stime;
		utime = curr->utime;

		/*
		 * Let's enforce monotonicity.
		 * Atomic exchange protects against concurrent cputime_adjust().
		 */
		while (stime > (rtime = ACCESS_ONCE(prev->stime)))
			cmpxchg(&prev->stime, rtime, stime);
		while (utime > (rtime = ACCESS_ONCE(prev->utime)))
			cmpxchg(&prev->utime, rtime, utime);

		*ut = prev->utime;
		*st = prev->stime;
	}

Oleg.

