Date:   Sat, 22 Aug 2020 16:22:27 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     LKML <linux-kernel@...r.kernel.org>
Cc:     Aaron Lu <aaron.lwe@...il.com>,
        Aubrey Li <aubrey.li@...ux.intel.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Kees Cook <keescook@...omium.org>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Paul Turner <pjt@...gle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Tim Chen <tim.c.chen@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Vineeth Pillai <viremana@...ux.microsoft.com>,
        "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
        Frederic Weisbecker <fweisbec@...il.com>,
        Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Chen Yu <yu.c.chen@...el.com>,
        Christian Brauner <christian.brauner@...ntu.com>
Subject: Re: [PATCH RFC 00/12] Core-sched v6+: kernel protection and hotplug fixes

On Fri, Aug 14, 2020 at 11:19 PM Joel Fernandes (Google)
<joel@...lfernandes.org> wrote:
>
> Hello!
>
> This series is a continuation of the main core-sched v6 series [1] and adds
> support for syscall and IRQ isolation from usermode processes and guests. It
> is key to safely entering kernel mode in an HT while the other HT is in use
> by a user or guest. The series also fixes CPU hotplug issues arising because
> cpu_smt_mask changes while the next task is being picked. These hotplug
> fixes are also needed for kernel protection to work correctly.
>
> The series is based on Thomas's x86/entry tree.
>
> [1]  https://lwn.net/Articles/824918/

Hello,
Just wanted to mention that we are talking about this series during
the refereed talk on Monday at 16:00 UTC:
https://linuxplumbersconf.org/event/7/contributions/648/

The slides are here with some nice pictures showing the kernel protection stuff:
https://docs.google.com/presentation/d/1VzeQo3AyGTN35DJ3LKoPWBfiZHZJiF8q0NrX9eVYG70/edit?usp=sharing

And Julien has some promising data to share, which he just collected
with this series (he will add it to the slides).

Looking forward to seeing you there and to your participation in these
topics, both during the refereed talk and the scheduler MC. Thanks!

 - Joel


>
> Background:
>
> Core-scheduling prevents hyperthreads in usermode from attacking each
> other, but it does not do anything about one of the hyperthreads
> entering the kernel for any reason. This leaves the door open for MDS
> and L1TF attacks with concurrent execution sequences between
> hyperthreads.
>
> This series adds support for protecting all syscall and IRQ kernel-mode
> entries by cleverly tracking when any sibling in a core enters the kernel
> and when all the siblings have exited it. IPIs are sent to force siblings
> into the kernel.
>
> Care is taken to avoid waiting in IRQ-disabled sections as Thomas suggested
> thus avoiding stop_machine deadlocks. Every attempt is made to avoid
> unnecessary IPIs.
>
> Performance tests:
> sysbench is used to test the performance of the patch series. We used an
> 8-CPU/4-core VM and ran 2 sysbench tests in parallel. Each sysbench test
> runs 4 tasks:
> sysbench --test=cpu --cpu-max-prime=100000 --num-threads=4 run
>
> We compared the performance results for the combinations below.
> The metric is 'events per second':
>
> 1. Coresched disabled
>     sysbench-1/sysbench-2 => 175.7/175.6
>
> 2. Coresched enabled, both sysbench tagged
>     sysbench-1/sysbench-2 => 168.8/165.6
>
> 3. Coresched enabled, sysbench-1 tagged and sysbench-2 untagged
>     sysbench-1/sysbench-2 => 96.4/176.9
>
> 4. smt off
>     sysbench-1/sysbench-2 => 97.9/98.8
>
> When both sysbench instances are tagged, there is a perf drop of ~4%. In
> the tagged/untagged case, the tagged one suffers because it always gets
> stalled when the sibling enters the kernel. But this is no worse than smt off.
>
> Also, a modified rcutorture was used to heavily stress the kernel and make
> sure there are no crashes or instability.
>
> Joel Fernandes (Google) (5):
> irq_work: Add support to detect if work is pending
> entry/idle: Add a common function for activities during idle entry/exit
> arch/x86: Add a new TIF flag for untrusted tasks
> kernel/entry: Add support for core-wide protection of kernel-mode
> entry/idle: Enter and exit kernel protection during idle entry and exit
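[Not part of the original mail: a hedged userspace sketch of the irq_work "is work pending" idea from the first patch above. irq_work state lives in an atomic flags word with a PENDING bit set between queueing and execution; the exact flag and helper names here are illustrative, not necessarily the patch's interface.]

```c
#include <stdatomic.h>

#define IRQ_WORK_PENDING	(1u << 0)

struct irq_work {
	atomic_uint flags;
};

/* Returns nonzero while the work is queued but has not yet run. */
static int irq_work_is_pending(struct irq_work *work)
{
	return atomic_load(&work->flags) & IRQ_WORK_PENDING;
}

/* Queue side: claim the work; only the first claimant may queue it,
 * so double-queueing is naturally avoided. */
static int irq_work_claim(struct irq_work *work)
{
	return !(atomic_fetch_or(&work->flags, IRQ_WORK_PENDING) &
		 IRQ_WORK_PENDING);
}

/* Run side: clear PENDING before invoking the callback, so the work
 * can be requeued from the callback itself. */
static void irq_work_run_one(struct irq_work *work)
{
	atomic_fetch_and(&work->flags, ~IRQ_WORK_PENDING);
	/* in the kernel, work->func(work) would run here */
}
```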
>
> Vineeth Pillai (7):
> entry/kvm: Protect the kernel when entering from guest
> bitops: Introduce find_next_or_bit
> cpumask: Introduce a new iterator for_each_cpu_wrap_or
> sched/coresched: Use for_each_cpu(_wrap)_or for pick_next_task
> sched/coresched: Make core_pick_seq per run-queue
> sched/coresched: Check for dynamic changes in smt_mask
> sched/coresched: rq->core should be set only if not previously set
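[Not part of the original mail: a hedged sketch of find_next_or_bit's semantics from the patches above: the first set bit at or after 'offset' in the logical OR of two bitmaps, returning 'size' when none exists (mirroring the find_next_bit convention). Simplified here to bitmaps that fit in one unsigned long; the real helper walks arbitrary-length bitmaps, which lets pick_next_task iterate the union of two CPU masks without building a temporary mask.]

```c
static unsigned long find_next_or_bit_word(unsigned long addr1,
					   unsigned long addr2,
					   unsigned long size,
					   unsigned long offset)
{
	unsigned long word;

	if (offset >= size)
		return size;

	/* Union of the two bitmaps, masked to bits >= offset ... */
	word = (addr1 | addr2) & (~0UL << offset);
	/* ... and to bits < size. */
	if (size < 8 * sizeof(unsigned long))
		word &= (1UL << size) - 1;

	/* Lowest remaining set bit, or 'size' if none. */
	return word ? (unsigned long)__builtin_ctzl(word) : size;
}
```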
>
> arch/x86/include/asm/thread_info.h |   2 +
> arch/x86/kvm/x86.c                 |   3 +
> include/asm-generic/bitops/find.h  |  16 ++
> include/linux/cpumask.h            |  42 +++++
> include/linux/entry-common.h       |  22 +++
> include/linux/entry-kvm.h          |  12 ++
> include/linux/irq_work.h           |   1 +
> include/linux/sched.h              |  12 ++
> kernel/entry/common.c              |  88 +++++----
> kernel/entry/kvm.c                 |  12 ++
> kernel/irq_work.c                  |  11 ++
> kernel/sched/core.c                | 281 ++++++++++++++++++++++++++---
> kernel/sched/idle.c                |  17 +-
> kernel/sched/sched.h               |  11 +-
> lib/cpumask.c                      |  53 ++++++
> lib/find_bit.c                     |  56 ++++--
> 16 files changed, 564 insertions(+), 75 deletions(-)
>
> --
> 2.28.0.220.ged08abb693-goog
>
