Message-ID: <CABCjUKDa+AQLrXf1h2QPqDqVePQoL_mJo4uUiOZss2vmeGoN5g@mail.gmail.com>
Date:   Fri, 18 Oct 2019 17:11:38 +0900
From:   Suleiman Souhlal <suleiman@...gle.com>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Dave Hansen <dave.hansen@...ux.intel.com>,
        Linux Kernel <linux-kernel@...r.kernel.org>,
        linux-mm@...ck.org, dan.j.williams@...el.com,
        Shakeel Butt <shakeelb@...gle.com>,
        Jonathan Adams <jwadams@...gle.com>
Subject: Re: [PATCH 0/4] [RFC] Migrate Pages in lieu of discard

On Fri, Oct 18, 2019 at 1:32 AM Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 10/17/19 9:01 AM, Suleiman Souhlal wrote:
> > One problem that came up is that if you get into direct reclaim,
> > because persistent memory can have pretty low write throughput, you
> > can end up stalling users for a pretty long time while migrating
> > pages.
>
> Basically, you're saying that memory load spikes turn into latency spikes?

Yes, exactly.

> FWIW, we have been benchmarking this sucker with benchmarks that claim
> to care about latency.  In general, compared to DRAM, we do see worse
> latency, but nothing catastrophic yet.  I'd be interested if you have
> any workloads that act as reasonable proxies for your latency requirements.

Sorry, I don't know of any specific workloads I can share. :-(
Maybe Jonathan or Shakeel have something more.

I realize this isn't very useful without specific examples, but even
disregarding persistent memory, we've had latency issues with direct
reclaim when using zswap. It's been enough of a problem that we're
experimenting with not doing zswap compression in direct reclaim
(while still doing it proactively).
The low write throughput of persistent memory would make this worse.
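
For illustration, the experiment boils down to something like the
sketch below. This is not our actual patch; zswap_skip_direct_reclaim()
is a made-up helper, and the real decision point would be zswap's
store path:

#include <linux/swap.h>		/* current_is_kswapd() */

/*
 * Illustrative sketch only (not our actual change): refuse zswap
 * compression when the store is driven by direct reclaim, so the
 * allocating task doesn't pay the compression latency itself.  An
 * error return from zswap's store hook makes the page fall through
 * to the backing swap device instead.
 */
static bool zswap_skip_direct_reclaim(void)
{
	/*
	 * kswapd runs with PF_KSWAPD set; any other task that reaches
	 * reclaim here is doing direct reclaim on its own allocation
	 * path, which is exactly the case we want to keep cheap.
	 */
	return !current_is_kswapd();
}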

I think the case where we're most likely to run into this is when the
machine is close to an OOM situation and we end up thrashing rather
than OOM killing.

Somewhat related, I noticed that this patch series ratelimits
migrations from persistent memory to DRAM, but it might also make
sense to ratelimit migrations from DRAM to persistent memory. If all
the write bandwidth is taken by migrations, there might not be any
more available for applications accessing pages in persistent memory,
resulting in higher latency.
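
For illustration, the kernel's existing ratelimit helpers would be
enough to express this. The budget and the helper name below are
invented, and where exactly the check belongs in the demotion path
is an open question:

#include <linux/ratelimit.h>

/*
 * Hypothetical sketch: cap how many pages per second reclaim may
 * demote from DRAM to persistent memory, so migrations can't eat
 * all of the write bandwidth.  The budget (1024 pages/sec) and the
 * helper name are made up for illustration.
 */
static DEFINE_RATELIMIT_STATE(demote_rs, HZ, 1024);

static bool demotion_allowed(void)
{
	/*
	 * __ratelimit() returns true while we're under the burst
	 * budget for the current interval; a caller that gets false
	 * would discard/swap the page as before instead of migrating.
	 */
	return __ratelimit(&demote_rs);
}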


Another issue we ran into, which I think might also apply to this
patch series, is that because kernel memory can't be allocated on
persistent memory, it's possible for all of DRAM to be filled by user
memory, and for kernel allocations to fail even though there is still
a lot of free persistent memory. This is easy to trigger: just start
an application that is bigger than DRAM.
To mitigate that, we introduced a new watermark for DRAM zones above
which user memory can't be allocated, to leave some space for kernel
allocations.
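
Roughly, the check looks like the following sketch (not our exact
code; WMARK_USER and dram_zone() are invented names):

#include <linux/gfp.h>
#include <linux/mmzone.h>

/*
 * Sketch of the mitigation described above, with invented names:
 * movable (user) allocations must leave some DRAM headroom so that
 * kernel allocations, which can't fall back to persistent memory,
 * still succeed.  WMARK_USER would be a new per-zone watermark and
 * dram_zone() a way to tell DRAM zones from persistent-memory ones.
 */
static bool user_alloc_allowed(struct zone *zone, gfp_t gfp_mask)
{
	if (!(gfp_mask & __GFP_MOVABLE))	/* kernel allocation */
		return true;
	if (!dram_zone(zone))			/* no reserve on PMEM */
		return true;
	/* User allocations stop once free DRAM hits the watermark. */
	return zone_page_state(zone, NR_FREE_PAGES) >
	       zone->_watermark[WMARK_USER];
}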

-- Suleiman
