Date:   Mon, 3 Aug 2020 12:53:24 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     "Li, Aubrey" <aubrey.li@...ux.intel.com>
Cc:     viremana@...ux.microsoft.com,
        Nishanth Aravamudan <naravamudan@...italocean.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Paul Turner <pjt@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Subhra Mazumdar <subhra.mazumdar@...cle.com>,
        Frederic Weisbecker <fweisbec@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
        Aaron Lu <aaron.lwe@...il.com>,
        Aubrey Li <aubrey.intel@...il.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Joel Fernandes <joelaf@...gle.com>,
        Vineeth Pillai <vineethrp@...il.com>,
        Chen Yu <yu.c.chen@...el.com>,
        Christian Brauner <christian.brauner@...ntu.com>,
        "Ning, Hongyu" <hongyu.ning@...ux.intel.com>,
        benbjiang(蒋彪) <benbjiang@...cent.com>
Subject: Re: [RFC PATCH 00/16] Core scheduling v6

Hi Aubrey,

On Mon, Aug 3, 2020 at 4:23 AM Li, Aubrey <aubrey.li@...ux.intel.com> wrote:
>
> On 2020/7/1 5:32, Vineeth Remanan Pillai wrote:
> > Sixth iteration of the Core-Scheduling feature.
> >
> > Core scheduling is a feature that allows only trusted tasks to run
> > concurrently on CPUs sharing compute resources (e.g. hyperthreads on
> > a core). The goal is to mitigate core-level side-channel attacks
> > without disabling SMT (which has a significant performance impact in
> > some situations). Core scheduling (as of v6) mitigates
> > user-space-to-user-space attacks, and user-to-kernel attacks when one
> > of the siblings enters the kernel via an interrupt. It is still
> > possible for a task to attack a sibling thread that enters the kernel
> > via a syscall.
> >
> > By default, the feature doesn't change any of the current scheduler
> > behavior. The user decides which tasks can run simultaneously on the
> > same core (for now by having them in the same tagged cgroup). When a
> > tag is enabled in a cgroup and a task from that cgroup is running on a
> > hardware thread, the scheduler ensures that only idle or trusted tasks
> > run on the other sibling(s). Besides security, this feature can also
> > benefit RT and performance-sensitive applications where we want to
> > control dynamically how tasks make use of SMT.
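> >
> > As a rough sketch of that tagging interface (the "cpu.tag" file name
> > and cgroup path below are assumptions based on the cgroup interface
> > of this series, and the helper is illustrative, not code from the
> > patches):
> >
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <unistd.h>
> >
> > static void write_str(const char *path, const char *val)
> > {
> >         FILE *f = fopen(path, "w");
> >
> >         if (!f || fputs(val, f) == EOF) {
> >                 perror(path);
> >                 exit(1);
> >         }
> >         fclose(f);
> > }
> >
> > int main(void)
> > {
> >         char pid[16];
> >
> >         /* Enable the core-scheduling tag on a pre-created cgroup. */
> >         write_str("/sys/fs/cgroup/cpu/trusted/cpu.tag", "1");
> >
> >         /* Move this process into the tagged group; afterwards only
> >          * idle or same-cookie tasks run on the sibling(s). */
> >         snprintf(pid, sizeof(pid), "%d", (int)getpid());
> >         write_str("/sys/fs/cgroup/cpu/trusted/tasks", pid);
> >         return 0;
> > }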
> >
> > This iteration is mostly a cleanup of v5, except for one major new
> > feature: pausing the sibling when a CPU enters the kernel via
> > NMI/IRQ/softirq. It also introduces documentation and includes minor
> > crash fixes.
> >
> > One major cleanup was removing the hotplug support and related code.
> > The hotplug-related crashes were not documented, and the fixes piled
> > up over time, leading to complex code. We were not able to reproduce
> > the crashes in the limited testing we did, but if they are
> > reproducible, we don't want to hide them: we should document them and
> > design better fixes if needed.
> >
> > In terms of performance, the results in this release are similar to
> > v5. On an x86 system with N hardware threads:
> > - if only N/2 hardware threads are busy, the performance is similar
> >   between baseline, corescheduling and nosmt
> > - if N hardware threads are busy with N different corescheduling
> >   groups, the impact of corescheduling is similar to nosmt
> > - if N hardware threads are busy and multiple active threads share the
> >   same corescheduling cookie, they gain a performance improvement over
> >   nosmt.
> >   The specific performance impact depends on the workload, but for a
> >   really busy 12-vCPU database VM (1 coresched tag) running on a
> >   36-hardware-thread NUMA node with 96 mostly idle neighbor VMs (each
> >   in its own coresched tag), performance drops by 54% with
> >   corescheduling and by 90% with nosmt.
> >
>
> We found that uperf (in a cgroup) throughput drops by ~50% with
> corescheduling.
>
> The problem is that uperf triggers a lot of softirqs and offloads
> softirq service to the *ksoftirqd* thread.
>
> - By default, the ksoftirqd thread can run with uperf on the same
>   core; we saw 100% CPU utilization.
> - With coresched enabled, ksoftirqd's core cookie differs from
>   uperf's, so they can't run concurrently on the same core; we saw
>   ~15% forced idle (see the sketch below).
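>
> A minimal model of why that forces idle (illustrative C, not the
> kernel code; v6 requires an exact cookie match between siblings):
>
> #include <stdbool.h>
>
> struct task {
>         unsigned long core_cookie;      /* 0 == untagged */
> };
>
> static bool cookies_match(const struct task *a, const struct task *b)
> {
>         return a->core_cookie == b->core_cookie;
> }
>
> /* uperf in a tagged cgroup has a nonzero cookie while ksoftirqd has
>  * cookie 0, so cookies_match() fails: one sibling runs uperf and the
>  * other is forced idle, matching the ~15% forced idle we observed. */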
>
> I guess this kind of performance drop can be replicated by other
> workloads with similarly heavy softirq activity.
>
> Currently the core scheduler picks cookie-matched tasks for all SMT
> siblings. Does it make sense to add a policy that allows
> cookie-compatible tasks to run together? For example, if a task is
> trusted (set by the admin), it could run alongside kernel threads. The
> difference from having corescheduling disabled is that we would still
> have user-to-user isolation.
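>
> One possible shape for such a policy (purely illustrative; the
> "trusted" flag and helper below are made up, not part of v6):
>
> #include <stdbool.h>
>
> struct task {
>         unsigned long core_cookie;      /* 0 == untagged */
>         bool trusted;                   /* hypothetical admin-set flag */
>         bool is_kthread;
> };
>
> static bool cookies_compatible(const struct task *a, const struct task *b)
> {
>         if (a->core_cookie == b->core_cookie)
>                 return true;    /* today's rule: exact cookie match */
>
>         /* Proposed relaxation: an admin-trusted user task may share a
>          * core with a kernel thread (e.g. ksoftirqd), while
>          * user-to-user isolation across different cookies is kept. */
>         return (a->trusted && b->is_kthread) ||
>                (b->trusted && a->is_kthread);
> }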

In ChromeOS we are considering all cookie-0 tasks as trusted.
Basically, if you don't trust a task, that is when you assign it a
tag. We do this for the sandboxed processes.
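
Expressed as a (purely illustrative) predicate, that trust model is
just:

#include <stdbool.h>

struct task {
        unsigned long core_cookie;
};

/* Cookie 0 (no tag) means trusted; sandboxed processes get a nonzero
 * cookie and are isolated from everything outside their own tag. */
static bool is_trusted(const struct task *t)
{
        return t->core_cookie == 0;
}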

Is the uperf throughput worse with SMT+core-scheduling than with no-SMT?

thanks,

 - Joel
PS: I am planning to write a patch, behind a CONFIG option, that tags
all processes (default untrusted) so everything gets a cookie, which
some folks said is what they wanted (a whitelist instead of a
blacklist).
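
A rough sketch of what that patch could do (the CONFIG name, hook, and
struct below are hypothetical, not an existing interface):

struct task {
        unsigned long core_cookie;
};

#ifdef CONFIG_SCHED_CORE_TAG_ALL        /* hypothetical option */
/* Default-untrusted: give every new task its own unique cookie at
 * fork, so only tasks the admin later assigns a shared cookie may
 * run together on a core (whitelist rather than blacklist). */
static void tag_on_fork(struct task *child)
{
        child->core_cookie = (unsigned long)child;
}
#endif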
