Message-ID: <ad521f4f-47aa-4728-916f-3704bf01f770@redhat.com>
Date: Tue, 26 Aug 2025 13:04:33 +0200
From: David Hildenbrand <david@...hat.com>
To: Alexandru Elisei <alexandru.elisei@....com>
Cc: linux-kernel@...r.kernel.org, Alexander Potapenko <glider@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Brendan Jackman <jackmanb@...gle.com>, Christoph Lameter <cl@...two.org>,
Dennis Zhou <dennis@...nel.org>, Dmitry Vyukov <dvyukov@...gle.com>,
dri-devel@...ts.freedesktop.org, intel-gfx@...ts.freedesktop.org,
iommu@...ts.linux.dev, io-uring@...r.kernel.org,
Jason Gunthorpe <jgg@...dia.com>, Jens Axboe <axboe@...nel.dk>,
Johannes Weiner <hannes@...xchg.org>, John Hubbard <jhubbard@...dia.com>,
kasan-dev@...glegroups.com, kvm@...r.kernel.org,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>, linux-arm-kernel@...s.com,
linux-arm-kernel@...ts.infradead.org, linux-crypto@...r.kernel.org,
linux-ide@...r.kernel.org, linux-kselftest@...r.kernel.org,
linux-mips@...r.kernel.org, linux-mmc@...r.kernel.org, linux-mm@...ck.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-scsi@...r.kernel.org, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Marco Elver <elver@...gle.com>, Marek Szyprowski <m.szyprowski@...sung.com>,
Michal Hocko <mhocko@...e.com>, Mike Rapoport <rppt@...nel.org>,
Muchun Song <muchun.song@...ux.dev>, netdev@...r.kernel.org,
Oscar Salvador <osalvador@...e.de>, Peter Xu <peterx@...hat.com>,
Robin Murphy <robin.murphy@....com>, Suren Baghdasaryan <surenb@...gle.com>,
Tejun Heo <tj@...nel.org>, virtualization@...ts.linux.dev,
Vlastimil Babka <vbabka@...e.cz>, wireguard@...ts.zx2c4.com, x86@...nel.org,
Zi Yan <ziy@...dia.com>
Subject: Re: [PATCH RFC 21/35] mm/cma: refuse handing out non-contiguous page
ranges
>>
>> pr_debug("%s(): memory range at pfn 0x%lx %p is busy, retrying\n",
>> - __func__, pfn, pfn_to_page(pfn));
>> + __func__, pfn, page);
>>
>> trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
>
> Nitpick: I think you already have the page here.
Indeed, forgot to clean that up as well.
>
>> count, align);
>> - /* try again with a bit different memory target */
>> - start = bitmap_no + mask + 1;
>> }
>> out:
>> - *pagep = page;
>> + if (!ret)
>> + *pagep = page;
>> return ret;
>> }
>>
>> @@ -882,7 +892,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
>> */
>> if (page) {
>> for (i = 0; i < count; i++)
>> - page_kasan_tag_reset(nth_page(page, i));
>> + page_kasan_tag_reset(page + i);
>
> Had a look at it, not very familiar with CMA, but the changes look equivalent to
> what was before. Not sure that's worth a Reviewed-by tag, but here it is in case
> you want to add it:
>
> Reviewed-by: Alexandru Elisei <alexandru.elisei@....com>
Thanks!
>
> Just so I can better understand the problem being fixed, I guess you can have
> two consecutive pfns with non-consecutive associated struct page if you have two
> adjacent memory sections spanning the same physical memory region, is that
> correct?
Exactly. Essentially on SPARSEMEM without SPARSEMEM_VMEMMAP it is not
guaranteed that
pfn_to_page(pfn + 1) == pfn_to_page(pfn) + 1
when we cross memory section boundaries.
The equality can hold for early boot memory, if we happened to allocate
consecutive areas from memblock when allocating the memmap (struct
pages) for each memory section, but it's not guaranteed.
So we rule out that case.
--
Cheers
David / dhildenb