Message-ID: <aGzXH8Rqk8K-oVip@pc636>
Date: Tue, 8 Jul 2025 10:30:23 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Baoquan He <bhe@...hat.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@...il.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>
Subject: Re: [RFC 4/7] mm/kasan, mm/vmalloc: Respect GFP flags in
kasan_populate_vmalloc()

On Tue, Jul 08, 2025 at 09:15:19AM +0800, Baoquan He wrote:
> On 07/07/25 at 09:47am, Baoquan He wrote:
> > On 07/04/25 at 05:25pm, Uladzislau Rezki (Sony) wrote:
> > ......snip.......
> > > diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> > > index d2c70cd2afb1..5edfc1f6b53e 100644
> > > --- a/mm/kasan/shadow.c
> > > +++ b/mm/kasan/shadow.c
> > > @@ -335,13 +335,13 @@ static void ___free_pages_bulk(struct page **pages, int nr_pages)
> > > }
> > > }
> > >
> > > -static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
> > > +static int ___alloc_pages_bulk(struct page **pages, int nr_pages, gfp_t gfp_mask)
> > > {
> > > unsigned long nr_populated, nr_total = nr_pages;
> > > struct page **page_array = pages;
> > >
> > > while (nr_pages) {
> > > - nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, pages);
> > > + nr_populated = alloc_pages_bulk(gfp_mask, nr_pages, pages);
> > > if (!nr_populated) {
> > > ___free_pages_bulk(page_array, nr_total - nr_pages);
> > > return -ENOMEM;
> > > @@ -353,25 +353,33 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
> > > return 0;
> > > }
> > >
> > > -static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
> > > +static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_t gfp_mask)
> > > {
> > > unsigned long nr_pages, nr_total = PFN_UP(end - start);
> > > + bool noblock = !gfpflags_allow_blocking(gfp_mask);
> > > struct vmalloc_populate_data data;
> > > + unsigned int flags;
> > > int ret = 0;
> > >
> > > - data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO);
> > > + data.pages = (struct page **)__get_free_page(gfp_mask | __GFP_ZERO);
> > > if (!data.pages)
> > > return -ENOMEM;
> > >
> > > while (nr_total) {
> > > nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0]));
> > > - ret = ___alloc_pages_bulk(data.pages, nr_pages);
> > > + ret = ___alloc_pages_bulk(data.pages, nr_pages, gfp_mask);
> > > if (ret)
> > > break;
> > >
> > > data.start = start;
> > > + if (noblock)
> > > + flags = memalloc_noreclaim_save();
> > > +
> > > ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
> > > kasan_populate_vmalloc_pte, &data);
> >
> > This series is a great enhancement, thanks.
> >
> > When checking the code, it seems apply_to_page_range() can lead to page
> > table allocation, which uses GFP_PGTABLE_KERNEL. Not sure if we need to
> > handle this as well.
>
> I am a fool, I didn't see the obvious added scope between
> memalloc_noreclaim_save/restore(). Please ignore this noise.
>
No worries :)
--
Uladzislau Rezki