Message-ID: <CAKfTPtC6arfWP==0LbtsfK9BE3xVoXd5CZsMHw6760o3q8MKfA@mail.gmail.com>
Date: Tue, 9 Jun 2020 17:29:35 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Qais Yousef <qais.yousef@....com>
Cc: Mel Gorman <mgorman@...e.de>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Randy Dunlap <rdunlap@...radead.org>,
Jonathan Corbet <corbet@....net>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
Quentin Perret <qperret@...gle.com>,
Valentin Schneider <valentin.schneider@....com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
linux-doc@...r.kernel.org,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-fs <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/uclamp: Add a new sysctl to control RT default
boost value
Hi Qais,
Sorry for the late reply.
On Fri, 5 Jun 2020 at 12:45, Qais Yousef <qais.yousef@....com> wrote:
>
> On 06/04/20 14:14, Vincent Guittot wrote:
> > I have tried your patch and I don't see any difference compared to
> > previous tests. Let me give you more details of my setup:
> > I create 3 levels of cgroups and usually run the tests in the 4 levels
> > (which includes root). The results above are for the root level.
> >
> > But I see a difference at other levels:
> >
> >                              root            level 1         level 2         level 3
> >
> > /w patch uclamp disable      50097           46615           43806           41078
> > tip uclamp enable            48706(-2.78%)   45583(-2.21%)   42851(-2.18%)   40313(-1.86%)
> > /w patch uclamp enable       48882(-2.43%)   45774(-1.80%)   43108(-1.59%)   40667(-1.00%)
> >
> > Whereas tip with uclamp stays around 2% behind tip without uclamp, the
> > overhead of uclamp with your patch tends to decrease as the number of
> > cgroup levels increases.
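> >
> > For context, a minimal sketch of what the "3 levels of cgroups" setup
> > could look like. This is not the actual test script: it assumes cgroup
> > v2 mounted at /sys/fs/cgroup, uses hypothetical names l1/l2/l3, and
> > omits enabling the cpu controller via cgroup.subtree_control.
> >
> > /* Create three nested cgroup levels and attach the calling task to the
> >  * deepest one, so a benchmark exec'd afterwards runs at "level 3". */
> > #include <errno.h>
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <unistd.h>
> > #include <sys/stat.h>
> > #include <sys/types.h>
> >
> > static void make_level(const char *path)
> > {
> > 	/* EEXIST is fine if the hierarchy already exists from a previous run. */
> > 	if (mkdir(path, 0755) && errno != EEXIST) {
> > 		perror(path);
> > 		exit(1);
> > 	}
> > }
> >
> > int main(void)
> > {
> > 	char buf[32];
> > 	FILE *f;
> >
> > 	make_level("/sys/fs/cgroup/l1");
> > 	make_level("/sys/fs/cgroup/l1/l2");
> > 	make_level("/sys/fs/cgroup/l1/l2/l3");
> >
> > 	/* Attach ourselves; children (the benchmark) inherit the cgroup. */
> > 	f = fopen("/sys/fs/cgroup/l1/l2/l3/cgroup.procs", "w");
> > 	if (!f) {
> > 		perror("cgroup.procs");
> > 		return 1;
> > 	}
> > 	snprintf(buf, sizeof(buf), "%d\n", getpid());
> > 	fputs(buf, f);
> > 	fclose(f);
> >
> > 	return 0;
> > }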
>
> Thanks for the extra info. Let me try this.
>
> If you can run perf and verify that you see activate/deactivate_task showing up
> as overhead I'd appreciate it. Just to confirm that indeed what we're seeing
> here are symptoms of the same problem Mel is seeing.
I see a call to activate_task() for each wakeup of the sched-pipe thread.
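
For reference, that wakeup pattern follows from the ping-pong nature of
the benchmark. Below is a rough, self-contained approximation of what
perf bench sched pipe exercises (my own sketch, not the perf source):
each read() blocks until the peer writes, so every iteration goes through
a sleep/wakeup, i.e. one deactivate_task()/activate_task() pair per task
per round trip, which is why uclamp cost in that path shows up directly.

/* Two processes ping-ponging one byte over a pair of pipes. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define LOOPS 1000000

int main(void)
{
	int ab[2], ba[2];	/* parent->child and child->parent pipes */
	char c = 'x';
	pid_t pid;
	int i;

	if (pipe(ab) || pipe(ba)) {
		perror("pipe");
		return 1;
	}

	pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	}

	if (pid == 0) {			/* child: echo every byte back */
		for (i = 0; i < LOOPS; i++) {
			if (read(ab[0], &c, 1) != 1)
				break;
			if (write(ba[1], &c, 1) != 1)
				break;
		}
		_exit(0);
	}

	for (i = 0; i < LOOPS; i++) {	/* parent: drive the ping-pong */
		if (write(ab[1], &c, 1) != 1)
			break;
		if (read(ba[0], &c, 1) != 1)
			break;
	}
	wait(NULL);
	return 0;
}

Built with gcc and run inside the deepest cgroup, profiling it with
perf record -g should show the same activate_task()/deactivate_task()
hot spots discussed above.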
>
> > Besides this, it's also interesting to notice the ~6% performance
> > impact between each level for the same image.
>
> Interesting indeed.
>
> Thanks
>
> --
> Qais Yousef