Message-ID: <20201023004739.GH300658@carbon.dhcp.thefacebook.com>
Date: Thu, 22 Oct 2020 17:47:39 -0700
From: Roman Gushchin <guro@...com>
To: Zi Yan <ziy@...dia.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Mike Kravetz <mike.kravetz@...cle.com>,
<saberlily.xia@...ilicon.com>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <kernel-team@...com>
Subject: Re: [PATCH v1 0/2] mm: cma: introduce a non-blocking version of
cma_release()

On Thu, Oct 22, 2020 at 07:42:45PM -0400, Zi Yan wrote:
> On 22 Oct 2020, at 18:53, Roman Gushchin wrote:
>
> > This small patchset introduces a non-blocking version of cma_release()
> > and simplifies the code in hugetlbfs, where previously we had to
> > temporarily drop hugetlb_lock around the cma_release() call.
> >
> > It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
> > THP under memory pressure requires a cma_release() call. If it's
>
> Thanks for the patch. But during a 1GB THP split, we only clear
> the bitmaps without releasing the pages. Also, in cma_release_nowait(),
> the first page in the allocated CMA region is reused to store
> struct cma_clear_bitmap_work, but the same method cannot be used
> during the THP split, since the first page is still in use. We might
> need to allocate new memory for struct cma_clear_bitmap_work,
> which might not succeed under memory pressure. Any suggestion
> on where to store struct cma_clear_bitmap_work when I only want to
> clear the bitmap without releasing the pages?

It means we can't use cma_release() there either, because it frees the
individual pages as well. We need a way to clear the cma bitmap without
touching the pages at all. Can you handle an error there?

If so, we can introduce something like int cma_schedule_bitmap_clearance(),
which would allocate a work structure itself and return -ENOMEM in the
unlikely case that the allocation fails.
Will it work for you?

Thanks!