Message-ID: <20200820223742.GA120898@google.com>
Date: Thu, 20 Aug 2020 18:37:42 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: "Li, Aubrey" <aubrey.li@...ux.intel.com>
Cc: viremana@...ux.microsoft.com,
Nishanth Aravamudan <naravamudan@...italocean.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Peter Zijlstra <peterz@...radead.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Ingo Molnar <mingo@...nel.org>,
Thomas Glexiner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
Subhra Mazumdar <subhra.mazumdar@...cle.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Kees Cook <keescook@...omium.org>,
Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
Aaron Lu <aaron.lwe@...il.com>,
Aubrey Li <aubrey.intel@...il.com>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...hsingularity.net>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Vineeth Pillai <vineethrp@...il.com>,
Chen Yu <yu.c.chen@...el.com>,
Christian Brauner <christian.brauner@...ntu.com>,
"Ning, Hongyu" <hongyu.ning@...ux.intel.com>,
benbjiang(蒋彪) <benbjiang@...cent.com>
Subject: Re: [RFC PATCH 00/16] Core scheduling v6
On Thu, Aug 13, 2020 at 12:28:17PM +0800, Li, Aubrey wrote:
> On 2020/8/13 7:08, Joel Fernandes wrote:
> > On Wed, Aug 12, 2020 at 10:01:24AM +0800, Li, Aubrey wrote:
> >> Hi Joel,
> >>
> >> On 2020/8/10 0:44, Joel Fernandes wrote:
> >>> Hi Aubrey,
> >>>
> >>> Apologies for replying late as I was still looking into the details.
> >>>
> >>> On Wed, Aug 05, 2020 at 11:57:20AM +0800, Li, Aubrey wrote:
> >>> [...]
> >>>> +/*
> >>>> + * Core scheduling policy:
> >>>> + * - CORE_SCHED_DISABLED: core scheduling is disabled.
> >>>> + * - CORE_COOKIE_MATCH: tasks with same cookie can run
> >>>> + * on the same core concurrently.
> >>>> + * - CORE_COOKIE_TRUST: trusted task can run with kernel
> >>>> + * thread on the same core concurrently.
> >>>> + * - CORE_COOKIE_LONELY: tasks with cookie can run only
> >>>> + * with idle thread on the same core.
> >>>> + */
> >>>> +enum coresched_policy {
> >>>> + CORE_SCHED_DISABLED,
> >>>> + CORE_SCHED_COOKIE_MATCH,
> >>>> + CORE_SCHED_COOKIE_TRUST,
> >>>> + CORE_SCHED_COOKIE_LONELY,
> >>>> +};
> >>>>
> >>>> We can set the uperf cgroup's policy to CORE_COOKIE_TRUST and fix this kind
> >>>> of performance regression. Not sure if this sounds attractive?
> >>>
> >>> Instead of this, I think it can be something simpler:
> >>>
> >>> 1. Consider all cookie-0 tasks as trusted. (Even right now, if you apply the
> >>> core-scheduling patchset, such tasks will share a core and sniff on each
> >>> other. So let us not pretend that such tasks are not trusted.)
> >>>
> >>> 2. All kernel threads and the idle task would have cookie 0 (so that will cover
> >>> the ksoftirqd case reported in your original issue).
> >>>
> >>> 3. Add a config option (CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED). Enable it
> >>> by default. Setting this option would tag all tasks that are forked from a
> >>> cookie-0 task with their own cookie. Later on, such tasks can be added to
> >>> a group. This covers PeterZ's ask about having 'default untrusted'.
> >>> (Users like ChromeOS that don't want userspace system processes to be
> >>> tagged can disable this option so such tasks will be cookie-0.)
> >>>
> >>> 4. Allow prctl/cgroup interfaces to create groups of tasks and override the
> >>> above behaviors.
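
To make #3 above concrete, here is a rough sketch of the fork-path tagging I
have in mind (hypothetical, not actual patch code: the helper name and the
core_cookie assignment are just for illustration):

#ifdef CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED
/* Hypothetical sketch of #3: called for a newly forked task 'p'. */
static void sched_core_default_untrusted_fork(struct task_struct *p)
{
	/* Kernel threads stay at cookie 0, i.e. trusted (see #2). */
	if (p->flags & PF_KTHREAD)
		return;

	/* Children of already-tagged tasks keep their parent's cookie. */
	if (p->core_cookie)
		return;

	/*
	 * A child of a cookie-0 task gets a unique cookie of its own, so
	 * userspace is "untrusted by default" until it is explicitly
	 * placed into a group via the prctl/cgroup interfaces (see #4).
	 */
	p->core_cookie = (unsigned long)p;
}
#endif
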
> >>
> >> How does uperf in a cgroup work with ksoftirqd? Are you suggesting I set uperf's
> >> cookie to be cookie-0 via prctl?
> >
> > Yes, but let me try to understand better. There are 2 problems here I think:
> >
> > 1. ksoftirqd getting idled when HT is turned on, because uperf is sharing a
> > core with it: This should not be any worse than SMT OFF, because SMT OFF
> > would also reduce ksoftirqd's CPU time, just as core sched is doing. Sure,
> > core-scheduling adds some overhead with IPIs, but such a huge drop in perf is
> > strange. Peter, any thoughts on that?
> >
> > 2. Interface: To solve the performance problem, you are saying you want uperf
> > to share a core with ksoftirqd so that it is not forced into idle. Why not
> > just keep uperf out of the cgroup?
>
> I guess this is unacceptable for those who run their apps in containers and VMs.
I think we can forget about #2; that's just a workaround. #1 is probably
what we should look into for your problem. I was talking to Vineeth earlier;
is it possible that the fairness issues that Aaron and Peter are looking into
are causing the performance problem here?
For example, if ksoftirqd being higher prio makes the vruntime delta between
the 2 CFS tasks sharing a core quite high, that pushes the core-wide
min_vruntime up as well. Then when uperf gets enqueued, it will get starved by
ksoftirqd and will not be able to run till ksoftirqd's vruntime catches up.
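
To make that a bit more concrete, here is the kind of comparison I am
picturing (illustrative sketch only, made-up helper name, not the code from
the series):

/*
 * Illustrative sketch only (made-up helper, not the code from the
 * series): core scheduling has to compare tasks across sibling
 * runqueues, so compare vruntimes relative to a core-wide
 * min_vruntime instead of a per-cpu one.
 */
static inline bool sketch_core_prio_less(u64 vruntime_a, u64 vruntime_b,
					 u64 core_min_vruntime)
{
	s64 delta_a = (s64)(vruntime_a - core_min_vruntime);
	s64 delta_b = (s64)(vruntime_b - core_min_vruntime);

	/* Higher relative vruntime == lower priority to run next. */
	return delta_a > delta_b;
}

If ksoftirqd skews that core-wide baseline, a freshly enqueued uperf task
could keep losing a comparison like this and stay force-idled, which would
look a lot like the throughput collapse you are seeing.
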
Other than that, the only remaining suspect (AFAIK) is that the IPI/scheduler
overhead is giving uperf worse performance than SMT-off, and we ought to reduce
that overhead somehow. Does a kernel perf profile show you any smoking guns?
thanks,
- Joel
>
> Thanks,
> -Aubrey
>
> > Then it will have cookie 0 and be able to
> > share a core with kernel threads. About the user-user isolation that you need:
> > if you tag any "untrusted" threads by adding them to a cgroup, then they will
> > automatically be isolated from uperf while allowing uperf to share CPU with
> > kernel threads.
> >
> > Please let me know your thoughts and thanks,
> >
> > - Joel
> >
> >>
> >> Thanks,
> >> -Aubrey
> >>>
> >>> 5. Document everything clearly so the semantics are clear both to the
> >>> developers of core scheduling and to system administrators.
> >>>
> >>> Note that, with the concept of "system trusted cookie", we can also do
> >>> optimizations like:
> >>> 1. Disable STIBP when switching into trusted tasks.
> >>> 2. Disable L1D flushing / verw stuff for L1TF/MDS issues, when switching into
> >>> trusted tasks.
> >>>
> >>> At least #1 seems to be getting in the way of enabling HT on ChromeOS right
> >>> now, and one other engineer has already requested that I do something like #2.
> >>>
> >>> Once we get full syscall isolation working, threads belonging to a process
> >>> can also share a core, so those can just share a core with the task-group
> >>> leader.
> >>>
> >>>>> Is the uperf throughput worse with SMT+core-scheduling versus no-SMT ?
> >>>>
> >>>> This is a good question. From the data we measured with uperf,
> >>>> SMT+core-scheduling is 28.2% worse than no-SMT. :(
> >>>
> >>> This is worrying for sure. :-( We ought to debug/profile it more to see what
> >>> is causing the overhead. Vineeth and I added it as a topic for LPC as well.
> >>>
> >>> Any other thoughts from others on this?
> >>>
> >>> thanks,
> >>>
> >>> - Joel
> >>>
> >>>
> >>>>> thanks,
> >>>>>
> >>>>> - Joel
> >>>>> PS: I am planning to write a patch behind a CONFIG option that tags
> >>>>> all processes (default untrusted) so everything gets a cookie, which
> >>>>> some folks said is what they wanted (a whitelist instead of a
> >>>>> blacklist).
> >>>>>
> >>>>
> >>
>