Message-ID: <CAG48ez0aTJMREb1JL6SaT5rNLs6oUZVevj+BGrUzVEprZNFKOw@mail.gmail.com>
Date: Fri, 12 Oct 2018 15:25:46 +0200
From: Jann Horn <jannh@...gle.com>
To: sneves@....uc.pt
Cc: Andy Lutomirski <luto@...nel.org>, kristen@...ux.intel.com,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86: entry: flush the cache if syscall error
On Fri, Oct 12, 2018 at 11:41 AM Samuel Neves <sneves@....uc.pt> wrote:
>
> On Thu, Oct 11, 2018 at 8:25 PM Andy Lutomirski <luto@...nel.org> wrote:
> > What exactly is this trying to protect against? And how many cycles
> > should we expect L1D_FLUSH to take?
>
> As far as I could measure, I got 1660 cycles per wrmsr to MSR 0x10b
> (IA32_FLUSH_CMD) with value 0x1 (the L1D_FLUSH bit) on a Skylake
> chip, and 1220 cycles on a Skylake-SP.
Is that with the L1D mostly empty, mostly full of clean lines, or
full of dirty lines that need to be written back?
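
For reference, a minimal sketch of how such a timing might be taken
from a kernel module, assuming a CPU that enumerates L1D_FLUSH support
(CPUID.(EAX=7,ECX=0):EDX[28]); the module scaffolding and the exact
timing discipline here are illustrative assumptions, not necessarily
the setup Samuel used:

/*
 * Hypothetical sketch: time one write of the L1D_FLUSH bit to
 * IA32_FLUSH_CMD (MSR 0x10b). Illustrative only; a serious
 * measurement would repeat this and control the L1D state
 * (empty vs. clean vs. dirty lines) beforehand.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/irqflags.h>
#include <asm/msr.h>

static int __init l1d_flush_timing_init(void)
{
	unsigned long flags;
	u64 start, end;

	local_irq_save(flags);		/* keep interrupts out of the window */
	start = rdtsc_ordered();	/* ordered TSC read before the flush */
	wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
	end = rdtsc_ordered();		/* ordered TSC read after the flush */
	local_irq_restore(flags);

	pr_info("L1D flush took ~%llu TSC cycles\n", end - start);
	return 0;
}

static void __exit l1d_flush_timing_exit(void)
{
}

module_init(l1d_flush_timing_init);
module_exit(l1d_flush_timing_exit);
MODULE_LICENSE("GPL");

To distinguish the cases asked about above, one could prime the L1D
before the wrmsr, e.g. by reading a 32 KiB buffer (clean lines) or
writing to it (dirty lines that must be written back first).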