Date: Fri, 14 Aug 2020 00:26:02 +0000
From: benbjiang(蒋彪) <benbjiang@...cent.com>
To: "Li, Aubrey" <aubrey.li@...ux.intel.com>
CC: Joel Fernandes <joel@...lfernandes.org>,
	"viremana@...ux.microsoft.com" <viremana@...ux.microsoft.com>,
	Nishanth Aravamudan <naravamudan@...italocean.com>,
	Julien Desfossez <jdesfossez@...italocean.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paul Turner <pjt@...gle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Subhra Mazumdar <subhra.mazumdar@...cle.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Kees Cook <keescook@...omium.org>,
	Greg Kerr <kerrnel@...gle.com>,
	Phil Auld <pauld@...hat.com>,
	Aaron Lu <aaron.lwe@...il.com>,
	Aubrey Li <aubrey.intel@...il.com>,
	Valentin Schneider <valentin.schneider@....com>,
	Mel Gorman <mgorman@...hsingularity.net>,
	Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Vineeth Pillai <vineethrp@...il.com>,
	Chen Yu <yu.c.chen@...el.com>,
	Christian Brauner <christian.brauner@...ntu.com>,
	"Ning, Hongyu" <hongyu.ning@...ux.intel.com>
Subject: Re: [RFC PATCH 00/16] Core scheduling v6 (Internet mail)

> On Aug 13, 2020, at 12:28 PM, Li, Aubrey <aubrey.li@...ux.intel.com> wrote:
>
> On 2020/8/13 7:08, Joel Fernandes wrote:
>> On Wed, Aug 12, 2020 at 10:01:24AM +0800, Li, Aubrey wrote:
>>> Hi Joel,
>>>
>>> On 2020/8/10 0:44, Joel Fernandes wrote:
>>>> Hi Aubrey,
>>>>
>>>> Apologies for replying late as I was still looking into the details.
>>>>
>>>> On Wed, Aug 05, 2020 at 11:57:20AM +0800, Li, Aubrey wrote:
>>>> [...]
>>>>> +/*
>>>>> + * Core scheduling policy:
>>>>> + * - CORE_SCHED_DISABLED: core scheduling is disabled.
>>>>> + * - CORE_COOKIE_MATCH: tasks with the same cookie can run
>>>>> + *   on the same core concurrently.
>>>>> + * - CORE_COOKIE_TRUST: a trusted task can run with a kernel
>>>>> + *   thread on the same core concurrently.
>>>>> + * - CORE_COOKIE_LONELY: tasks with a cookie can run only
>>>>> + *   with the idle thread on the same core.
>>>>> + */
>>>>> +enum coresched_policy {
>>>>> +	CORE_SCHED_DISABLED,
>>>>> +	CORE_SCHED_COOKIE_MATCH,
>>>>> +	CORE_SCHED_COOKIE_TRUST,
>>>>> +	CORE_SCHED_COOKIE_LONELY,
>>>>> +};
>>>>>
>>>>> We can set the uperf cgroup's policy to CORE_COOKIE_TRUST and fix this kind
>>>>> of performance regression. Not sure if this sounds attractive?
>>>>
>>>> Instead of this, I think it can be something simpler IMHO:
>>>>
>>>> 1. Consider all cookie-0 tasks as trusted. (Even right now, if you apply the
>>>>    core-scheduling patchset, such tasks will share a core and can sniff on
>>>>    each other. So let us not pretend that such tasks are not trusted.)
>>>>
>>>> 2. All kernel threads and the idle task would have cookie 0 (so that will
>>>>    cover the ksoftirqd case reported in your original issue).
>>>>
>>>> 3. Add a config option (CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED), enabled
>>>>    by default. Setting this option would tag all tasks that are forked from
>>>>    a cookie-0 task with their own cookie; a sketch of this fork-time tagging
>>>>    follows after this list. Later on, such tasks can be added to a group.
>>>>    This covers PeterZ's ask about having 'default untrusted'. (Users like
>>>>    ChromeOS that don't want userspace system processes to be tagged can
>>>>    disable this option so such tasks will be cookie-0.)
>>>>
>>>> 4. Allow prctl/cgroup interfaces to create groups of tasks and override the
>>>>    above behaviors.
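[To make point 3 above concrete, here is a minimal sketch of what fork-time
tagging could look like, assuming the patchset's per-task core_cookie field;
the hook name sched_core_tag_fork() and the choice of cookie value are
illustrative assumptions, not code from the posted series:]

	#ifdef CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED
	/* Hypothetical fork-path hook: children of trusted (cookie-0) user
	 * tasks get their own unique cookie, so userspace is untrusted by
	 * default and must be grouped explicitly to share a core. */
	static void sched_core_tag_fork(struct task_struct *p)
	{
		/* Kernel threads keep cookie 0 and stay trusted (point 2). */
		if (p->flags & PF_KTHREAD)
			return;

		/* The task pointer serves as a unique, task-private cookie;
		 * the prctl/cgroup interfaces of point 4 can regroup tasks
		 * later by overwriting it. */
		if (p->core_cookie == 0)
			p->core_cookie = (unsigned long)p;
	}
	#endif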
>>> How does uperf in a cgroup work with ksoftirqd? Are you suggesting I set
>>> uperf's cookie to be cookie-0 via prctl?
>>
>> Yes, but let me try to understand better. There are 2 problems here I think:
>>
>> 1. ksoftirqd getting idled when HT is turned on, because uperf is sharing a
>>    core with it: This should not be any worse than SMT OFF, because even SMT
>>    OFF would also reduce ksoftirqd's CPU time, just as core sched is doing.
>>    Sure, core-scheduling adds some overhead with IPIs, but such a huge drop
>>    in perf is strange. Peter, any thoughts on that?
>>
>> 2. Interface: To solve the performance problem, you are saying you want uperf
>>    to share a core with ksoftirqd so that it is not forced into idle. Why not
>>    just keep uperf out of the cgroup?
>
> I guess this is unacceptable for those who run their apps in containers and VMs.

IMHO, just as Joel proposed:
1. Consider all cookie-0 tasks as trusted.
2. All kernel threads and the idle task would have cookie 0.
In that way, all tasks with cookies (including uperf in a cgroup) could run
concurrently with kernel threads. That could be a good solution for the issue. :)
If CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED is enabled, maybe we should set
ksoftirqd's cookie to cookie-0 to solve the issue.

Thx.
Regards,
Jiang

>
> Thanks,
> -Aubrey
>
>> Then it will have cookie 0 and be able to
>> share a core with kernel threads. As for the user-user isolation that you
>> need: if you tag any "untrusted" threads by adding them to a cgroup, then
>> they will automatically be isolated from uperf while uperf is still allowed
>> to share a CPU with kernel threads.
>>
>> Please let me know your thoughts, and thanks,
>>
>> - Joel
>>
>>>
>>> Thanks,
>>> -Aubrey
>>>>
>>>> 5. Document everything clearly so the semantics are clear both to the
>>>>    developers of core scheduling and to system administrators.
>>>>
>>>> Note that, with the concept of a "system trusted cookie", we can also do
>>>> optimizations like:
>>>> 1. Disable STIBP when switching into trusted tasks.
>>>> 2. Disable the L1D flushing / VERW stuff for the L1TF/MDS issues, when
>>>>    switching into trusted tasks.
>>>>
>>>> At least #1 seems to be biting enabling HT on ChromeOS right now, and one
>>>> other engineer requested I do something like #2 already.
>>>>
>>>> Once we get full syscall isolation working, threads belonging to a process
>>>> can also share a core, so those can just share a core with the task-group
>>>> leader.
>>>>
>>>>>> Is the uperf throughput worse with SMT+core-scheduling versus no-SMT?
>>>>>
>>>>> This is a good question; from the data we measured by uperf,
>>>>> SMT+core-scheduling is 28.2% worse than no-SMT. :(
>>>>
>>>> This is worrying for sure. :-( We ought to debug/profile it more to see
>>>> what is causing the overhead. Me/Vineeth added it as a topic for LPC as
>>>> well.
>>>>
>>>> Any other thoughts from others on this?
>>>>
>>>> thanks,
>>>>
>>>> - Joel
>>>>
>>>>>> thanks,
>>>>>>
>>>>>> - Joel
>>>>>> PS: I am planning to write a patch behind a CONFIG option that tags
>>>>>> all processes (default untrusted) so everything gets a cookie, which
>>>>>> is what some folks said they wanted (a whitelist instead of a
>>>>>> blacklist).
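[As a standalone illustration of the trust rule this thread converges on
(exact cookie match, plus the proposed "cookie 0 is trusted" exception behind
CORE_COOKIE_TRUST), here is a small userspace model; it sketches the proposed
policy only and is not the posted kernel code:]

	#include <stdbool.h>
	#include <stdio.h>

	/* Two tasks may run on sibling hyperthreads only if their cookies
	 * match, or if one side carries the trusted cookie 0 (kernel
	 * threads, idle) under the CORE_COOKIE_TRUST proposal above. */
	static bool may_share_core(unsigned long a, unsigned long b)
	{
		if (a == b)
			return true;    /* CORE_COOKIE_MATCH */
		if (a == 0 || b == 0)
			return true;    /* trusted side, e.g. ksoftirqd */
		return false;
	}

	int main(void)
	{
		/* tagged uperf (cookie 42) may run with ksoftirqd (cookie 0) */
		printf("uperf + ksoftirqd: %d\n", may_share_core(42, 0));
		/* two differently tagged groups stay isolated from each other */
		printf("groupA + groupB:   %d\n", may_share_core(42, 43));
		return 0;
	}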