Message-ID: <CAKfTPtCrPfpREESbH+D7h71yupchcm-TQ8nx=TADuzVysqNfSA@mail.gmail.com>
Date: Tue, 29 May 2018 17:02:29 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Quentin Perret <quentin.perret@....com>
Cc: Patrick Bellasi <patrick.bellasi@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
viresh kumar <viresh.kumar@...aro.org>,
Valentin Schneider <valentin.schneider@....com>
Subject: Re: [PATCH v5 01/10] sched/pelt: Move pelt related code in a dedicated file

Hi Quentin,
On 29 May 2018 at 16:55, Quentin Perret <quentin.perret@....com> wrote:
>
> On Friday 25 May 2018 at 19:04:55 (+0100), Patrick Bellasi wrote:
> > On 25-May 15:26, Quentin Perret wrote:
> > > And also, I understand these functions are large, but if we _really_
> > > want to inline them even though they're big, why not putting them in
> > > sched-pelt.h ?
> >
> > Had the same thought at first... but then I recalled that header is
> > generated from a script. Thus, eventually, it would have to be a
> > different header.
>
> Ah, good point. This patch already introduces a pelt.h so I guess that
> could work as well.
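
For reference, moving a helper into pelt.h as static inline would look
roughly like the sketch below. The helper name and body are hypothetical
and only illustrate the idea; the real decay path uses the
runnable_avg_yN_inv[] table generated into sched-pelt.h rather than a
loop.

	/* pelt.h (hypothetical sketch): keep a hot helper inline for fair.c */
	static inline u64 decay_load_sketch(u64 val, u64 n)
	{
		/*
		 * Approximate val * y^n, with y^32 = 0.5 (y ~= 0.97857).
		 * 1002/1024 ~= 0.9785 is close enough for illustration.
		 */
		for (; n; n--)
			val = (val * 1002) >> 10;
		return val;
	}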
>
> >
> > > We probably wouldn't accept that for everything, but
> > > those PELT functions are used all over the place, including latency
> > > sensitive code paths (e.g. task wake-up).
> >
> > We should measure the overheads, if any, and check what (a modern)
> > compiler actually does with these functions. Maybe some hackbench
> > runs could help on the first point.
>
> FWIW, I ran a few hackbench tests today on my Intel box:
> - Intel i7-6700 (4 cores / 8 threads) @ 3.40GHz
> - Base kernel: today's tip/sched/core "2539fc82aa9b sched/fair: Update
> util_est before updating schedutil"
> - Compiler: GCC 7.3.0
Which cpufreq governor are you using?
>
> The tables below summarize the results for:
> perf stat --repeat 10 perf bench sched messaging --pipe --thread -l 50000 --group G
>
> Without patch:
> +---+-------+--------------+---------+
> | G | Tasks | Duration (s) | Stddev  |
> +---+-------+--------------+---------+
> | 1 |    40 |        3.906 | +-0.84% |
> | 2 |    80 |        8.569 | +-0.77% |
> | 4 |   160 |       16.384 | +-0.46% |
> | 8 |   320 |       33.686 | +-0.42% |
> +---+-------+--------------+---------+
>
> With patch:
Just to make sure: you mean only this patch, not the whole patchset?
> +---+-------+----------------+---------+
> | G | Tasks | Duration (s)   | Stddev  |
> +---+-------+----------------+---------+
> | 1 |    40 |  3.953 (+1.2%) | +-1.43% |
> | 2 |    80 |  8.646 (+0.9%) | +-0.32% |
> | 4 |   160 | 16.390 (+0.0%) | +-0.38% |
> | 8 |   320 | 33.992 (+0.9%) | +-0.27% |
> +---+-------+----------------+---------+
>
> So there is (maybe) a little something on my box, but the deltas are
> of the same order as the run-to-run stddev, so nothing really
> significant IMHO ... :)
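
As a quick sanity check, the relative deltas quoted in the second table
can be recomputed from the raw durations with a small standalone C
snippet (user-space, not kernel code):

	#include <stdio.h>

	int main(void)
	{
		/* mean durations (s) from the two tables, G = 1, 2, 4, 8 */
		const double base[]    = {  3.906,  8.569, 16.384, 33.686 };
		const double patched[] = {  3.953,  8.646, 16.390, 33.992 };

		for (int i = 0; i < 4; i++)
			printf("G=%d: %+.1f%%\n", 1 << i,
			       100.0 * (patched[i] - base[i]) / base[i]);

		return 0;
	}

This prints +1.2%, +0.9%, +0.0% and +0.9%, matching the table.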
>
> Thanks,
> Quentin