Date:   Thu, 01 Dec 2022 09:51:08 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Yang Shi <shy828301@...il.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Mina Almasry <almasrymina@...gle.com>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        Yosry Ahmed <yosryahmed@...gle.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>, weixugc@...gle.com,
        shakeelb@...gle.com, gthelen@...gle.com, fvdl@...gle.com,
        Michal Hocko <mhocko@...nel.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Muchun Song <songmuchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [RFC PATCH V1] mm: Disable demotion from proactive reclaim

Yang Shi <shy828301@...il.com> writes:

> On Tue, Nov 29, 2022 at 9:33 PM Huang, Ying <ying.huang@...el.com> wrote:
>>
>> Yang Shi <shy828301@...il.com> writes:
>>
>> > On Mon, Nov 28, 2022 at 4:54 PM Huang, Ying <ying.huang@...el.com> wrote:
>> >>
>> >> Yang Shi <shy828301@...il.com> writes:
>> >>
>> >> > On Wed, Nov 23, 2022 at 9:52 PM Huang, Ying <ying.huang@...el.com> wrote:
>> >> >>
>> >> >> Hi, Johannes,
>> >> >>
>> >> >> Johannes Weiner <hannes@...xchg.org> writes:
>> >> >> [...]
>> >> >> >
>> >> >> > The fallback to reclaim actually strikes me as wrong.
>> >> >> >
>> >> >> > Think of reclaim as 'demoting' the pages to the storage tier. If we
>> >> >> > have a RAM -> CXL -> storage hierarchy, we should demote from RAM to
>> >> >> > CXL and from CXL to storage. If we reclaim a page from RAM, it
>> >> >> > means we 'demote' it directly from RAM to storage, potentially
>> >> >> > bypassing a huge number of pages in CXL that are colder than it.
>> >> >> > That doesn't seem right.
>> >> >> >
>> >> >> > If demotion fails, IMO it shouldn't satisfy the reclaim request by
>> >> >> > breaking the layering. Rather it should deflect that pressure to the
>> >> >> > lower layers to make room. This makes sure we maintain an aging
>> >> >> > pipeline that honors the memory tier hierarchy.
>> >> >>
>> >> >> Yes.  I think that we should avoid falling back to reclaim as much as
>> >> >> possible too.  Now, when we allocate memory for demotion
>> >> >> (alloc_demote_page()), __GFP_KSWAPD_RECLAIM is used.  So, we trigger
>> >> >> kswapd reclaim on the lower tier node to free some memory and avoid
>> >> >> falling back to reclaim on the current (higher tier) node.  This may
>> >> >> not be good enough; for example, the following patch from Hasan may
>> >> >> help by waking up kswapd earlier.
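
(For reference, the allocation for demotion is set up in
demote_folio_list() in mm/vmscan.c roughly as below; this is an
abbreviated sketch of the code around v6.1, not an exact quote.)

static unsigned int demote_folio_list(struct list_head *demote_folios,
				      struct pglist_data *pgdat)
{
	int target_nid = next_demotion_node(pgdat->node_id);
	nodemask_t allowed_mask;

	struct migration_target_control mtc = {
		/*
		 * Allocate from the target node, or fail quickly and
		 * quietly.  __GFP_KSWAPD_RECLAIM wakes kswapd on the
		 * target node rather than reclaiming synchronously.
		 */
		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
			__GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT |
			__GFP_KSWAPD_RECLAIM,
		.nid = target_nid,
		.nodemask = &allowed_mask
	};

	/* ... then migrate_pages(demote_folios, ..., &mtc, ...) ... */
}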
>> >> >
>> >> > In the ideal case, I do agree with Johannes that we should demote
>> >> > pages tier by tier rather than reclaiming them from the higher tiers.
>> >> > But I also agree with your premature OOM concern.
>> >> >
>> >> >>
>> >> >> https://lore.kernel.org/linux-mm/b45b9bf7cd3e21bca61d82dcd1eb692cd32c122c.1637778851.git.hasanalmaruf@fb.com/
>> >> >>
>> >> >> Do you know what is the next step plan for this patch?
>> >> >>
>> >> >> Should we do even more?
>> >> >
>> >> > In my initial implementation I added simple throttle logic for when
>> >> > demotion is not going to succeed because the demotion target does not
>> >> > have enough free memory (just checking the watermark) to let migration
>> >> > succeed without doing any reclamation. Shall we resurrect that?
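
Such a watermark check could look roughly like the hypothetical helper
below (the name is made up for illustration; zone_watermark_ok() and
high_wmark_pages() are the existing primitives):

/* Hypothetical: would migration into this node need reclaim first? */
static bool demotion_target_contended(int target_nid)
{
	pg_data_t *pgdat = NODE_DATA(target_nid);
	int i;

	for (i = 0; i < MAX_NR_ZONES; i++) {
		struct zone *zone = &pgdat->node_zones[i];

		if (!populated_zone(zone))
			continue;
		/* A zone above its high watermark can absorb demotions. */
		if (zone_watermark_ok(zone, 0, high_wmark_pages(zone),
				      MAX_NR_ZONES - 1, 0))
			return false;
	}
	return true;
}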
>> >>
>> >> Can you share the link to your throttle patch?  Or paste it here?
>> >
>> > I just found this on the mailing list.
>> > https://lore.kernel.org/linux-mm/1560468577-101178-8-git-send-email-yang.shi@linux.alibaba.com/
>>
>> Per my understanding, this patch will avoid demoting if there's no free
>> space on the demotion target?  If so, I think that we should trigger
>> kswapd reclaiming on the demotion target before that.  And we can simply
>> avoid falling back to reclaim first, then avoid scanning as a further
>> improvement, as in your patch above.
>
> Yes, it should. The rough idea looks like:
>
> if (the demote target is contended)
>     wake up kswapd
>     reclaim_throttle(VMSCAN_THROTTLE_DEMOTION)
>     retry demotion
>
> kswapd is responsible for clearing the contention flag.
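
As a concrete sketch of that flow (PGDAT_DEMOTION_CONTENDED and
VMSCAN_THROTTLE_DEMOTION do not exist yet and are assumed here;
reclaim_throttle() and wakeup_kswapd() are existing functions):

static void throttle_demotion(int target_nid)
{
	pg_data_t *target = NODE_DATA(target_nid);

	/* Mark the target contended and kick its kswapd to make room. */
	if (!test_and_set_bit(PGDAT_DEMOTION_CONTENDED, &target->flags))
		wakeup_kswapd(&target->node_zones[ZONE_NORMAL],
			      GFP_KERNEL, 0, ZONE_NORMAL);

	/*
	 * Sleep until kswapd clears the flag (or a timeout expires);
	 * the caller then retries demotion instead of reclaiming on
	 * the fast node.
	 */
	reclaim_throttle(target, VMSCAN_THROTTLE_DEMOTION);
}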

We may do this, at least for demotion in kswapd.  But I think that this
could be a second-step optimization after we make the correct choice
between demotion and reclaim.  What if the pages in the demotion target
are too hot to be reclaimed first?  Should we then reclaim on the fast
memory node to avoid OOM?

Best Regards,
Huang, Ying

>>
>> > But it didn't have the throttling logic; I may not have submitted that
>> > version to the mailing list since we decided to drop this and merge
>> > mine and Dave's.
>> >
>> > Anyway, it is not hard to add the throttling logic; we already have a
>> > few throttling cases in vmscan, for example, "mm/vmscan: throttle
>> > reclaim until some writeback completes if congested".
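
For reference, the existing throttle reasons live in
include/linux/mmzone.h; a VMSCAN_THROTTLE_DEMOTION would be a new entry
alongside these (as of roughly v6.1):

enum vmscan_throttle_state {
	VMSCAN_THROTTLE_WRITEBACK,	/* from the commit cited above */
	VMSCAN_THROTTLE_ISOLATED,
	VMSCAN_THROTTLE_NOPROGRESS,
	VMSCAN_THROTTLE_CONGESTED,
	NR_VMSCAN_THROTTLE,
};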
>> >>
>> >> > Waking kswapd sooner is fine with me, but it may not be enough; for
>> >> > example, kswapd may not keep up, so premature OOM may happen on the
>> >> > higher tiers or reclaim may still happen. I think throttling the
>> >> > reclaimer/demoter until kswapd makes progress could avoid both. And
>> >> > since lower tier memory is typically much larger than the higher
>> >> > tiers, the throttle should happen very rarely IMHO.
>> >> >
>> >> >>
>> >> >> From another point of view, I still think that we can use falling back
>> >> >> to reclaim as the last resort to avoid OOM in some special situations,
>> >> >> for example, when most pages in the lowest tier node are mlock()ed or
>> >> >> too hot to be reclaimed.
>> >> >>
>> >> >> > So I'm hesitant to design cgroup controls around the current behavior.
>> >>
>> >> Best Regards,
>> >> Huang, Ying
