Date:   Sun, 9 Aug 2020 12:44:08 -0400
From:   Joel Fernandes <>
To:     "Li, Aubrey" <>
Cc:     Nishanth Aravamudan <>,
        Julien Desfossez <>,
        Peter Zijlstra <>,
        Tim Chen <>,
        Ingo Molnar <>,
        Thomas Gleixner <>,
        Paul Turner <>,
        Linus Torvalds <>,
        LKML <>,
        Subhra Mazumdar <>,
        Frederic Weisbecker <>,
        Kees Cook <>,
        Greg Kerr <>, Phil Auld <>,
        Aaron Lu <>,
        Aubrey Li <>,
        Valentin Schneider <>,
        Mel Gorman <>,
        Pawan Gupta <>,
        Paolo Bonzini <>,
        Vineeth Pillai <>,
        Chen Yu <>,
        Christian Brauner <>,
        "Ning, Hongyu" <>,
        benbjiang(蒋彪) <>
Subject: Re: [RFC PATCH 00/16] Core scheduling v6

Hi Aubrey,

Apologies for replying late as I was still looking into the details.

On Wed, Aug 05, 2020 at 11:57:20AM +0800, Li, Aubrey wrote:
> +/*
> + * Core scheduling policy:
> + * - CORE_SCHED_DISABLED: core scheduling is disabled.
> + * - CORE_COOKIE_MATCH: tasks with same cookie can run
> + *                      on the same core concurrently.
> + * - CORE_COOKIE_TRUST: trusted task can run with kernel
> + *                      thread on the same core concurrently.
> + * - CORE_COOKIE_LONELY: tasks with cookie can run only
> + *                       with idle thread on the same core.
> + */
> +enum coresched_policy {
> +	CORE_SCHED_DISABLED,
> +	CORE_COOKIE_MATCH,
> +	CORE_COOKIE_TRUST,
> +	CORE_COOKIE_LONELY,
> +};
> We can set the policy of the uperf cgroup to CORE_COOKIE_TRUST and fix this
> kind of performance regression. Not sure if this sounds attractive?

Instead of this, I think it can be something simpler IMHO:

1. Consider all cookie-0 tasks as trusted. (Even right now, if you apply the
   core-scheduling patchset, such tasks will share a core and sniff on each
   other. So let us not pretend that such tasks are not trusted.)

2. All kernel threads and the idle task would have cookie 0 (so that will
   cover the ksoftirqd case reported in your original issue).

3. Add a CONFIG option for "default untrusted" so that users who want it can
   enable it. Setting this option would tag all tasks that are forked from a
   cookie-0 task with their own cookie. Later on, such tasks can be added to
   a group. (This covers PeterZ's ask about having 'default untrusted'.)
   (Users like ChromeOS that don't want userspace system processes to be
   tagged can disable this option so such tasks will be cookie-0.)

4. Allow prctl/cgroup interfaces to create groups of tasks and override the
   above behaviors.

5. Document everything clearly so the semantics are clear both to the
   developers of core scheduling and to system administrators.

Note that, with the concept of "system trusted cookie", we can also do
optimizations like:
1. Disable STIBP when switching into trusted tasks.
2. Disable L1D flushing / verw stuff for L1TF/MDS issues, when switching into
   trusted tasks.

At least #1 seems to be blocking HT enablement on ChromeOS right now, and one
other engineer has already requested I do something like #2.

Once we get full syscall isolation working, threads belonging to a process
can also share a core, so those can just share a core with their task group.

> > Is the uperf throughput worse with SMT+core-scheduling versus no-SMT ?
> This is a good question, from the data we measured by uperf,
> SMT+core-scheduling is 28.2% worse than no-SMT, :(

This is worrying for sure. :-(. We ought to debug/profile it more to see what
is causing the overhead. Vineeth and I have added it as a topic for LPC as well.

Any other thoughts from others on this?


 - Joel

> > thanks,
> > 
> >  - Joel
> > PS: I am planning to write a patch behind a CONFIG option that tags
> > all processes (default untrusted) so everything gets a cookie which
> > some folks said was how they wanted (have a whitelist instead of
> > blacklist).
> > 
