Message-ID: <Z5f6BVfyWb5loBpI@google.com>
Date: Mon, 27 Jan 2025 21:26:29 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 5/6] zsmalloc: introduce handle mapping API
On Mon, Jan 27, 2025 at 04:59:30PM +0900, Sergey Senozhatsky wrote:
> Introduce new API to map/unmap zsmalloc handle/object. The key
> difference is that this API does not impose atomicity restrictions
> on its users, unlike zs_map_object() which returns with page-faults
> and preemption disabled.
I think that's not entirely accurate, see below.
[..]
> @@ -1309,12 +1297,14 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> goto out;
> }
>
> - /* this object spans two pages */
> - zpdescs[0] = zpdesc;
> - zpdescs[1] = get_next_zpdesc(zpdesc);
> - BUG_ON(!zpdescs[1]);
> + ret = area->vm_buf;
> + /* disable page faults to match kmap_local_page() return conditions */
> + pagefault_disable();
Is this accurate/necessary? I am looking at kmap_local_page() and I
don't see it disable page faults. Maybe that's a remnant from the old
code using kmap_atomic()?
> + if (mm != ZS_MM_WO) {
> + /* this object spans two pages */
> + zs_obj_copyin(area->vm_buf, zpdesc, off, class->size);
> + }
>
> - ret = __zs_map_object(area, zpdescs, off, class->size);
> out:
> if (likely(!ZsHugePage(zspage)))
> ret += ZS_HANDLE_SIZE;