Message-ID: <20180306190241.GH25201@hirez.programming.kicks-ass.net>
Date: Tue, 6 Mar 2018 20:02:41 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Patrick Bellasi <patrick.bellasi@....com>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Paul Turner <pjt@...gle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Todd Kjos <tkjos@...roid.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>
Subject: Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT
On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> +struct util_est {
> +	unsigned int			enqueued;
> +	unsigned int			ewma;
> +#define UTIL_EST_WEIGHT_SHIFT		2
> +};
>
> +	ue = READ_ONCE(p->se.avg.util_est);
>
> +	WRITE_ONCE(p->se.avg.util_est, ue);
That is actually quite dodgy... it relies on the fact that we have the
8-byte case in __write_once_size() and __read_once_size()
unconditionally. It then further relies on the compiler doing the right
thing on 32-bit platforms, i.e. generating two 32-bit loads/stores.
The advantage is of course that it will use single u64 loads/stores
where available.