Message-ID: <ad8dcf32-ecc3-a39d-9c6f-78c6bfbbb566@intel.com>
Date:   Fri, 25 Aug 2017 06:10:49 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Vlastimil Babka <vbabka@...e.cz>,
        Ɓukasz Daniluk <lukasz.daniluk@...el.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc:     lukasz.anaczkowski@...el.com
Subject: Re: [RESEND PATCH 0/3] mm: Add cache coloring mechanism

On 08/25/2017 02:04 AM, Vlastimil Babka wrote:
> On 08/24/2017 06:08 PM, Dave Hansen wrote:
>> On 08/24/2017 05:47 AM, Vlastimil Babka wrote:
>>> So the obvious question: what about THPs? Their size should be enough to
>>> contain all the colors with current caches, no? Even on KNL I didn't
>>> find more than "32x 1MB 16-way L2 caches". And that's on top of the
>>> improved TLB performance, which you would want for such workloads anyway.
>> The cache in this case is "MCDRAM", which is 16GB in size.  It can be
>> used either as normal RAM or as a cache.  This patch set deals with the
>> case where MCDRAM is in its cache mode.
> Hm, 16GB direct-mapped means 8k colors for 2MB THPs. Is that really
> practical? Wouldn't such a workload use 1GB hugetlbfs pages? That still
> leaves 16 colors to manage, but it could be done purely in userspace,
> since the pages should not move in physical memory and userspace can
> control where each phase is mapped in the virtual layout.
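
For reference, the color counts above are just cache size divided by
page size for a direct-mapped cache; a minimal standalone sketch of the
arithmetic (not from the patch set):

#include <stdio.h>

int main(void)
{
	unsigned long long cache = 16ULL << 30;	/* 16GB MCDRAM as cache */
	unsigned long long thp   = 2ULL << 20;	/* 2MB THP */
	unsigned long long huge  = 1ULL << 30;	/* 1GB hugetlbfs page */

	/* Direct-mapped (1-way): number of colors = cache_size / page_size. */
	printf("2MB THP colors: %llu\n", cache / thp);   /* 8192 */
	printf("1GB page colors: %llu\n", cache / huge); /* 16 */
	return 0;
}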

There are lots of options for applications that are written with
specific knowledge of MCDRAM.  The easiest option from the kernel's
perspective is to just turn the caching mode off and treat MCDRAM as
normal RAM (it shows up in a separate NUMA node in that case).
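
A minimal sketch of that flat-mode route using libnuma, assuming MCDRAM
shows up as node 1 (the actual node number varies by system); this is
something an MCDRAM-aware application can already do today:

#include <numa.h>	/* link with -lnuma */
#include <stdio.h>

int main(void)
{
	void *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	/* Allocate 1GB backed by the (assumed) MCDRAM node. */
	buf = numa_alloc_onnode(1ULL << 30, 1);
	if (!buf) {
		perror("numa_alloc_onnode");
		return 1;
	}

	/* ... place the hot working set here ... */

	numa_free(buf, 1ULL << 30);
	return 0;
}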

But one of the reasons for the cache mode in the first place was to
support applications that don't have specific knowledge of MCDRAM, or
even old binaries that were compiled long ago.

In other words, I don't think this is something we can easily punt to
userspace.
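
To make the userspace option Vlastimil describes concrete: with 1GB
hugetlbfs pages that never migrate, a process could read each page's
physical address from /proc/self/pagemap and derive its color itself.
A rough sketch under those assumptions (reading the PFN from pagemap
needs privilege on recent kernels; error handling trimmed):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << 26)	/* 30 == log2(1GB), MAP_HUGE_SHIFT == 26 */
#endif

#define CACHE_SIZE	(16ULL << 30)	/* 16GB MCDRAM in cache mode */
#define HPAGE_SIZE	(1ULL << 30)	/* 1GB hugetlbfs page */
#define NR_COLORS	(CACHE_SIZE / HPAGE_SIZE)	/* 16 */

static int page_color(void *addr)
{
	uint64_t ent;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	/* One 64-bit entry per 4kB page; bits 0-54 hold the PFN. */
	pread(fd, &ent, sizeof(ent), ((uintptr_t)addr / 4096) * 8);
	close(fd);
	if (!(ent & (1ULL << 63)))	/* bit 63: page present */
		return -1;
	return (ent & ((1ULL << 55) - 1)) * 4096 / HPAGE_SIZE % NR_COLORS;
}

int main(void)
{
	void *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		       -1, 0);

	if (p == MAP_FAILED)
		return 1;
	*(volatile char *)p = 0;	/* fault the page in */
	printf("page color: %d\n", page_color(p));
	return 0;
}

Even then, knowing each page's color is only half the job; the
application still has to partition its data across the 16 colors
itself.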
