Message-ID: <q52ddqgi42mgknla4y6i5l65nj57qck6vuuruwcm6lpez7bxmp@3luv4iwjppa6>
Date: Sun, 4 May 2025 15:14:38 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Vitaly Wool <vitaly.wool@...sulko.se>,
Igor Belousov <igor.b@...dev.am>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, Nhat Pham <nphamcs@...il.com>,
Shakeel Butt <shakeel.butt@...ux.dev>, Johannes Weiner <hannes@...xchg.org>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, Minchan Kim <minchan@...nel.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH] mm/zblock: use vmalloc for page allocations

On (25/05/04 14:02), Sergey Senozhatsky wrote:
> On (25/05/03 20:46), Vitaly Wool wrote:
> > > Right, and it looks like this:
> > >
> > > [ 762.499278] bug_handler+0x0/0xa8
> > > [ 762.499433] die_kernel_fault+0x1c4/0x36c
> > > [ 762.499616] fault_from_pkey+0x0/0x98
> > > [ 762.499784] do_translation_fault+0x3c/0x94
> > > [ 762.499969] do_mem_abort+0x44/0x94
> > > [ 762.500140] el1_abort+0x40/0x64
> > > [ 762.500306] el1h_64_sync_handler+0xa4/0x120
> > > [ 762.500502] el1h_64_sync+0x6c/0x70
> > > [ 762.500718] __pi_memcpy_generic+0x1e4/0x22c (P)
> > > [ 762.500931] zs_zpool_obj_write+0x10/0x1c
> > > [ 762.501117] zpool_obj_write+0x18/0x24
> > > [ 762.501305] zswap_store+0x490/0x7c4
> > > [ 762.501474] swap_writepage+0x260/0x448
> > > [ 762.501654] pageout+0x148/0x340
> > > [ 762.501816] shrink_folio_list+0xa7c/0xf34
> > > [ 762.502008] shrink_lruvec+0x6fc/0xbd0
> > > [ 762.502189] shrink_node+0x52c/0x960
> > > [ 762.502359] balance_pgdat+0x344/0x738
> > > [ 762.502537] kswapd+0x210/0x37c
> > > [ 762.502691] kthread+0x12c/0x204
> > > [ 762.502920] ret_from_fork+0x10/0x20
> >
> > In fact we don’t know if zsmalloc is actually supposed to work with
> > 16K pages.
>
> Hmm, I think it is supposed to work; I can't think of a reason why it
> shouldn't.

I'm able to repro, I think. Will try to take a look later today/tonight.
Thank you for the report.
// Feel free to send a patch if you have a fix already.