Message-ID: <CAHbLzkrmxyzH4R7a9sJQavrUyKCEiNYeA543+sdJLsgRPrwBwQ@mail.gmail.com>
Date:   Mon, 28 Nov 2022 14:24:03 -0800
From:   Yang Shi <shy828301@...il.com>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Mina Almasry <almasrymina@...gle.com>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        Yosry Ahmed <yosryahmed@...gle.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>, weixugc@...gle.com,
        shakeelb@...gle.com, gthelen@...gle.com, fvdl@...gle.com,
        Michal Hocko <mhocko@...nel.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Muchun Song <songmuchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [RFC PATCH V1] mm: Disable demotion from proactive reclaim

On Wed, Nov 23, 2022 at 9:52 PM Huang, Ying <ying.huang@...el.com> wrote:
>
> Hi, Johannes,
>
> Johannes Weiner <hannes@...xchg.org> writes:
> [...]
> >
> > The fallback to reclaim actually strikes me as wrong.
> >
> > Think of reclaim as 'demoting' the pages to the storage tier. If we
> > have a RAM -> CXL -> storage hierarchy, we should demote from RAM to
> > CXL and from CXL to storage. If we reclaim a page from RAM, it means
> > we 'demote' it directly from RAM to storage, bypassing potentially a
> > huge amount of pages colder than it in CXL. That doesn't seem right.
> >
> > If demotion fails, IMO it shouldn't satisfy the reclaim request by
> > breaking the layering. Rather it should deflect that pressure to the
> > lower layers to make room. This makes sure we maintain an aging
> > pipeline that honors the memory tier hierarchy.
>
> Yes.  I think that we should avoid falling back to reclaim as much as
> possible too.  Now, when we allocate memory for demotion
> (alloc_demote_page()), __GFP_KSWAPD_RECLAIM is used.  So we will trigger
> kswapd reclaim on the lower tier node to free some memory and avoid
> falling back to reclaim on the current (higher tier) node.  This may not
> be good enough; for example, the following patch from Hasan may help by
> waking up kswapd earlier.

For the ideal case, I do agree with Johannes that we should demote the
pages tier by tier rather than reclaiming them from the higher tiers.
But I also agree with your premature OOM concern.
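
For reference, the demotion allocation today behaves roughly like the
sketch below (a paraphrase of the migration_target_control setup in
mm/vmscan.c, not an exact copy; target_nid stands for the next node in
the demotion path): direct reclaim is masked off and only
__GFP_KSWAPD_RECLAIM (carried by GFP_NOWAIT) is kept, so a short target
node just gets its kswapd woken while the allocation itself fails fast,
after which we currently fall back to reclaiming on the higher tier.

struct migration_target_control mtc = {
	/*
	 * No direct reclaim on the target node; GFP_NOWAIT still
	 * includes __GFP_KSWAPD_RECLAIM, so kswapd is woken and the
	 * allocation otherwise fails quickly and quietly.
	 */
	.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
		    __GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT,
	.nid = target_nid,	/* next node in the demotion path */
};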

>
> https://lore.kernel.org/linux-mm/b45b9bf7cd3e21bca61d82dcd1eb692cd32c122c.1637778851.git.hasanalmaruf@fb.com/
>
> Do you know what the next step plan is for this patch?
>
> Should we do even more?

In my initial implementation I added simple throttle logic for the case
where demotion is not going to succeed because the demotion target does
not have enough free memory (just checking the watermark) for the
migration to succeed without doing any reclamation. Shall we resurrect
that?
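
Something along these lines is what I have in mind (just a sketch, not
the original patch; the helper name demotion_target_has_room() and the
choice of the high watermark are illustrative):

static bool demotion_target_has_room(int nid, unsigned int order)
{
	pg_data_t *pgdat = NODE_DATA(nid);
	int z;

	/* Walk the target node's zones from highest to lowest. */
	for (z = MAX_NR_ZONES - 1; z >= 0; z--) {
		struct zone *zone = pgdat->node_zones + z;

		if (!populated_zone(zone))
			continue;

		/*
		 * Treat the target as usable only if it is above the
		 * high watermark, i.e. the migration allocation can
		 * succeed without forcing reclaim on the lower tier.
		 */
		if (zone_watermark_ok(zone, order,
				      high_wmark_pages(zone),
				      ZONE_MOVABLE, 0))
			return true;
	}

	return false;
}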

Waking kswapd sooner is fine with me, but it may not be enough; for
example, kswapd may not keep up, so premature OOM may still happen on
the higher tiers, or reclaim may still happen there. I think throttling
the reclaimer/demoter until kswapd makes progress could avoid both. And
since the lower tiers typically have much more memory than the higher
tiers, the throttle should kick in very rarely IMHO.
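
Concretely, something like the below (again just a sketch;
demotion_target_has_room() is the hypothetical helper above, while
wakeup_kswapd(), reclaim_throttle() and VMSCAN_THROTTLE_NOPROGRESS are
the existing mechanisms, and the ZONE_NORMAL choice is only
illustrative):

/* In the demotion path, before falling back to reclaim: */
if (!demotion_target_has_room(target_nid, 0)) {
	pg_data_t *pgdat = NODE_DATA(target_nid);

	/* Ask the target node's kswapd to make room... */
	wakeup_kswapd(pgdat->node_zones + ZONE_NORMAL, GFP_KERNEL, 0,
		      ZONE_MOVABLE);

	/*
	 * ...and throttle the demoter until kswapd makes progress,
	 * instead of reclaiming from (or OOMing) the higher tier.
	 */
	reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS);
}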

>
> From another point of view, I still think that we can use falling back
> to reclaim as the last resort to avoid OOM in some special situations,
> for example, when most pages in the lowest tier node are mlock()ed or
> too hot
> to be reclaimed.
>
> > So I'm hesitant to design cgroup controls around the current behavior.
> >
>
> Best Regards,
> Huang, Ying
>
