Message-ID: <CAPcyv4iuRkQWsWa-YfTMDJUTUr1QouEsS6zD_LAjcpbLGXCPEQ@mail.gmail.com>
Date: Fri, 21 Sep 2018 17:06:24 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: "Elliott, Robert (Persistent Memory)" <elliott@....com>
Cc: Kees Cook <keescook@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <MHocko@...e.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Toshi Kani <toshi.kani@....com>
Subject: Re: [PATCH 0/3] mm: Randomize free memory
On Fri, Sep 21, 2018 at 4:51 PM Elliott, Robert (Persistent Memory)
<elliott@....com> wrote:
>
>
> > -----Original Message-----
> > From: linux-kernel-owner@...r.kernel.org <linux-kernel-
> > owner@...r.kernel.org> On Behalf Of Kees Cook
> > Sent: Friday, September 21, 2018 2:13 PM
> > Subject: Re: [PATCH 0/3] mm: Randomize free memory
> ...
> > I'd be curious to hear more about the mentioned cache performance
> > improvements. I love it when a security feature actually _improves_
> > performance. :)
>
> It's been a problem in the HPC space:
> http://www.nersc.gov/research-and-development/knl-cache-mode-performance-coe/
>
> A kernel module called zonesort is available to try to help:
> https://software.intel.com/en-us/articles/xeon-phi-software
>
> and this abandoned patch series proposed that for the kernel:
> https://lkml.org/lkml/2017/8/23/195
>
> Dan's patch series doesn't attempt to ensure that buffers won't conflict,
> but it does reduce the chance that they will. This will make performance
> more consistent, albeit slower than "optimal" (which is near impossible
> to attain in a general-purpose kernel). That's better than forcing
> users to deploy remedies like:
> "To eliminate this gradual degradation, we have added a Stream
> measurement to the Node Health Check that follows each job;
> nodes are rebooted whenever their measured memory bandwidth
> falls below 300 GB/s."
Robert, thanks for that! Yes, instead of run-to-run variations
alternating between almost-never-conflict and nearly-always-conflict,
we'll get a random / average distribution of cache conflicts.
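
For anyone who wants to see the effect in isolation, here is a minimal
user-space sketch (not code from the series) of a direct-mapped
memory-side cache like KNL MCDRAM in cache mode. The 16 MiB cache size,
page-granularity sets, half-cache-sized buffers, and the helper names
are all made-up assumptions for illustration; the point is only that an
in-order free list makes two buffers collide on either none or all of
their pages depending on where they happen to land, while a shuffled
free list settles on a middling, repeatable conflict count:

/*
 * Toy user-space model (not the actual mm patches) of why free-page
 * order matters for a direct-mapped memory-side cache.  The cache
 * geometry and buffer sizes below are illustrative assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SHIFT	12
#define CACHE_SIZE	(16UL << 20)			/* pretend 16 MiB cache */
#define NR_SETS		(CACHE_SIZE >> PAGE_SHIFT)	/* one page-sized line per set */
#define BUF_PAGES	(NR_SETS / 2)			/* each buffer = half the cache */
#define POOL_PAGES	(4 * NR_SETS)			/* free-page pool */

/* Direct-mapped: a physical page frame always lands in pfn % NR_SETS. */
static unsigned long cache_set(unsigned long pfn)
{
	return pfn % NR_SETS;
}

/* How many of B's pages land in a set already occupied by one of A's? */
static unsigned long count_conflicts(const unsigned long *a, const unsigned long *b)
{
	static unsigned char used[NR_SETS];
	unsigned long i, conflicts = 0;

	memset(used, 0, sizeof(used));
	for (i = 0; i < BUF_PAGES; i++)
		used[cache_set(a[i])] = 1;
	for (i = 0; i < BUF_PAGES; i++)
		conflicts += used[cache_set(b[i])];
	return conflicts;
}

int main(void)
{
	unsigned long *pfns = malloc(POOL_PAGES * sizeof(*pfns));
	unsigned long i;

	if (!pfns)
		return 1;
	for (i = 0; i < POOL_PAGES; i++)
		pfns[i] = i;

	/* In-order free list, buffers carved out back to back: no conflicts. */
	printf("sequential, adjacent buffers:   %lu/%lu pages conflict\n",
	       count_conflicts(&pfns[0], &pfns[BUF_PAGES]), BUF_PAGES);

	/* Same in-order list, buffers one cache-size apart: total conflict. */
	printf("sequential, cache-stride apart: %lu/%lu pages conflict\n",
	       count_conflicts(&pfns[0], &pfns[NR_SETS]), BUF_PAGES);

	/* Fisher-Yates shuffle of the "free list" before handing out pages. */
	srand(1);
	for (i = POOL_PAGES - 1; i > 0; i--) {
		unsigned long j = rand() % (i + 1);
		unsigned long tmp = pfns[i];

		pfns[i] = pfns[j];
		pfns[j] = tmp;
	}

	/* Any two buffers now conflict on a middling, stable fraction. */
	printf("shuffled free pages:            %lu/%lu pages conflict\n",
	       count_conflicts(&pfns[0], &pfns[BUF_PAGES]), BUF_PAGES);

	free(pfns);
	return 0;
}

Built with a plain cc invocation, the first case reports 0 conflicting
pages, the second reports all 2048, and the shuffled case lands
somewhere in between on every run, which is the "random / average
distribution" above.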