Message-ID: <CAHbLzkq6cvS4L4DYnr+oyggfXzZTKegfpdNUi_XHA+-67HZYNA@mail.gmail.com>
Date:   Thu, 17 Oct 2019 10:20:30 -0700
From:   Yang Shi <shy828301@...il.com>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Shakeel Butt <shakeelb@...gle.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>,
        Dan Williams <dan.j.williams@...el.com>,
        Jonathan Adams <jwadams@...gle.com>,
        "Chen, Tim C" <tim.c.chen@...el.com>
Subject: Re: [PATCH 0/4] [RFC] Migrate Pages in lieu of discard

On Thu, Oct 17, 2019 at 7:26 AM Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 10/16/19 8:45 PM, Shakeel Butt wrote:
> > On Wed, Oct 16, 2019 at 3:49 PM Dave Hansen <dave.hansen@...ux.intel.com> wrote:
> >> This set implements a solution to these problems.  At the end of the
> >> reclaim process in shrink_page_list() just before the last page
> >> refcount is dropped, the page is migrated to persistent memory instead
> >> of being dropped.
> > The memory cgroup part of the story is missing here. Since PMEM is
> > treated as slow DRAM, shouldn't its usage be accounted to the
> > corresponding memcg's memory/memsw counters and the migration should
> > not happen for memcg limit reclaim? Otherwise some jobs can hog the
> > whole PMEM.
>
> My expectation (and I haven't confirmed this) is that any memory use
> is accounted to the owning cgroup, whether it is DRAM or PMEM.  memcg
> limit reclaim and global reclaim both end up doing migrations and
> neither should have a net effect on the counters.

Yes, your expectation is correct. As long as PMEM is a NUMA node, it
is treated as regular memory by memcg. But I don't think memcg limit
reclaim should do migration, since limit reclaim is meant to reduce
memory usage, and migration doesn't reduce usage; it just moves memory
from one node to another.

In my implementation, I just skip migration for memcg limit reclaim;
please see: https://lore.kernel.org/linux-mm/1560468577-101178-7-git-send-email-yang.shi@linux.alibaba.com/

>
> There is certainly a problem here because DRAM is a more valuable
> resource than PMEM, and memcg accounts for them as if they were equally
> valuable.  I really want to see memcg account for this cost discrepancy
> at some point, but I'm not quite sure what form it would take.  Any
> feedback from you heavy memcg users out there would be much appreciated.

We did have some demand to control the ratio between DRAM and PMEM, as
I mentioned at LSF/MM. Mel Gorman suggested making memcg account for
DRAM and PMEM separately, or something similar.

>
> > Also what happens when PMEM is full? Can the memory migrated to PMEM
> > be reclaimed (or discarded)?
>
> Yep.  The "migration path" can be as long as you want, but once the data
> hits a "terminal node" it will stop getting migrated and normal discard
> at the end of reclaim happens.

I recall I had a hallway conversation with Keith about this at
LSF/MM. We all agreed there should not be a cycle. But, IMHO, I don't
think exporting the migration path to userspace (or letting the user
define the migration path) and having multiple migration stops are
good ideas in general.

>
