Message-ID: <20161026105407.GE3102@twins.programming.kicks-ass.net>
Date: Wed, 26 Oct 2016 12:54:07 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
dietmar.eggemann@....com, yuyang.du@...el.com,
Morten.Rasmussen@....com, linaro-kernel@...ts.linaro.org,
pjt@...gle.com, bsegall@...gle.com, kernellwp@...il.com
Subject: Re: [PATCH 4/6 v5] sched: propagate load during synchronous
attach/detach
On Mon, Oct 17, 2016 at 11:14:11AM +0200, Vincent Guittot wrote:
> /*
> + * Signed add and clamp on underflow.
> + *
> + * Explicitly do a load-store to ensure the intermediate value never hits
> + * memory. This allows lockless observations without ever seeing the negative
> + * values.
> + */
> +#define add_positive(_ptr, _val) do { \
> + typeof(_ptr) ptr = (_ptr); \
> + typeof(_val) res, val = (_val); \
> + typeof(*ptr) var = READ_ONCE(*ptr); \
> + res = var + val; \
> + if (res < 0) \
> + res = 0; \
I think this is broken, and inconsistent with sub_positive().

The thing is, util_avg, on which you use this, is an unsigned type.
Checking for unsigned underflow can be done by comparing against either
one of the terms.
> + WRITE_ONCE(*ptr, res); \
> +} while (0)
> + add_positive(&cfs_rq->avg.util_avg, delta);