Message-ID: <7649db4a-fd21-46c4-efbc-a98893b72104@redhat.com>
Date: Tue, 5 Oct 2021 10:48:09 +0200
From: David Hildenbrand <david@...hat.com>
To: Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Michal Hocko <mhocko@...e.com>, Oscar Salvador <osalvador@...e.de>,
Zi Yan <ziy@...dia.com>,
Muchun Song <songmuchun@...edance.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
David Rientjes <rientjes@...gle.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages
are in CMA

On 01.10.21 19:52, Mike Kravetz wrote:
> Add new interface cma_pages_valid() which indicates if the specified
> pages are part of a CMA region. This interface will be used in a
> subsequent patch by hugetlb code.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> include/linux/cma.h | 1 +
> mm/cma.c | 21 +++++++++++++++++----
> 2 files changed, 18 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 53fd8c3cdbd0..bd801023504b 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -46,6 +46,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> struct cma **res_cma);
> extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
> bool no_warn);
> +extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
> extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
>
> extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
> diff --git a/mm/cma.c b/mm/cma.c
> index 995e15480937..960994b88c7f 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -524,6 +524,22 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> return page;
> }
>
> +bool cma_pages_valid(struct cma *cma, const struct page *pages,
> + unsigned long count)
> +{
> + unsigned long pfn;
> +
> + if (!cma || !pages)
> + return false;
> +
> + pfn = page_to_pfn(pages);
> +
> + if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> + return false;
> +
> + return true;
> +}
> +
> /**
> * cma_release() - release allocated pages
> * @cma: Contiguous memory region for which the allocation is performed.
> @@ -539,16 +555,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
> {
> unsigned long pfn;
>
> - if (!cma || !pages)
> + if (!cma_pages_valid(cma, pages, count))
> return false;
>
> pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>
> pfn = page_to_pfn(pages);
>
> - if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> - return false;
> -
> VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
>
> free_contig_range(pfn, count);
>
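For reference, the intended consumer would presumably look something
like this (hypothetical sketch; the helper name and the use of the
per-node hugetlb_cma[] array from mm/hugetlb.c are my guesses, not
taken from this series):

	/*
	 * Hypothetical caller sketch: decide early whether a gigantic
	 * page was allocated from CMA, without attempting to release
	 * it yet. cma_pages_valid() only checks the pfn range, so it
	 * is safe to call purely as a predicate.
	 */
	static bool hugetlb_cma_page(struct page *page, unsigned int order)
	{
		return cma_pages_valid(hugetlb_cma[page_to_nid(page)],
				       page, 1 << order);
	}
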
Agreed that we might now want to perform the pr_debug() earlier, or
add another pr_warn() before returning false.
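
Something along these lines (untested sketch, only to illustrate the
placement):

	bool cma_pages_valid(struct cma *cma, const struct page *pages,
			     unsigned long count)
	{
		unsigned long pfn;

		if (!cma || !pages)
			return false;

		pfn = page_to_pfn(pages);

		if (pfn < cma->base_pfn ||
		    pfn >= cma->base_pfn + cma->count) {
			/* Keep the debug hint for out-of-range pages. */
			pr_debug("%s(page %p, count %lu)\n",
				 __func__, (void *)pages, count);
			return false;
		}

		return true;
	}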
Acked-by: David Hildenbrand <david@...hat.com>
--
Thanks,
David / dhildenb