Date:   Tue, 17 Mar 2020 21:03:07 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     Tim Chen <tim.c.chen@...ux.intel.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Vineeth Remanan Pillai <vpillai@...italocean.com>,
        Aubrey Li <aubrey.intel@...il.com>,
        Nishanth Aravamudan <naravamudan@...italocean.com>,
        Ingo Molnar <mingo@...nel.org>, Paul Turner <pjt@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        Dario Faggioli <dfaggioli@...e.com>,
        Frédéric Weisbecker <fweisbec@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
        Aaron Lu <aaron.lwe@...il.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        "Luck, Tony" <tony.luck@...el.com>
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4

Hi Thomas,

Thanks for the detailed email. I am on the same page with all your points. I
had a question about one of the points below (which I also agree with), just
to confirm:

On Tue, Mar 17, 2020 at 10:17:47PM +0100, Thomas Gleixner wrote:
[..] 
> >> 4. HT1 is idle, and HT2 is running a victim process. Now HT1 starts running
> >>    hostile code on guest or host. HT2 is being forced idle. However, there is
> >>    an overlap between HT1 starting to execute hostile code and HT2's victim
> >>    process getting scheduled out.
> >>    Speaking to Vineeth, we discussed an idea to monitor the core_sched_seq
> >>    counter of the sibling being idled to detect that it is now idle.
> >>    However we discussed today that looking at this data, it is not really an
> >>    issue since it is such a small window.
> 
> If the victim HT is kicked out of execution with an IPI then the overlap
> depends on the contexts:
> 
>         HT1 (attack)		HT2 (victim)
> 
>  A      idle -> user space      user space -> idle
> 
>  B      idle -> user space      guest -> idle
> 
>  C      idle -> guest           user space -> idle
> 
>  D      idle -> guest           guest -> idle
> 
> The IPI from HT1 brings HT2 immediately into the kernel when HT2 is in
> host user mode or brings it immediately into VMEXIT when HT2 is in guest
> mode.
> 
> #A On return from handling the IPI HT2 immediately reschedules to idle.
>    To have an overlap the return to user space on HT1 must be faster.
> 
> #B Coming back from VMEXIT into schedule/idle might take slightly longer
>    than #A.
> 
> #C Similar to #A, but reentering guest mode in HT1 after sending the IPI
>    will probably take longer.
> 
> #D Similar to #C if you make the assumption that VMEXIT on HT2 and
>    rescheduling into idle is not significantly slower than reaching
>    VMENTER after sending the IPI.
> 
> In all cases the data exposed by a potential overlap shouldn't be that
> interesting (e.g. scheduler state), but that obviously depends on what
> the attacker is looking for.

About the "shouldn't be that interesting" part: you are saying the overlap
should not be that interesting because the act of one sibling IPI'ing the
other forces the sibling HT immediately into kernel mode, right?

Thanks, your email really helped!!!

 - Joel

