Message-ID: <aWcZbJ-YPUQA0CJB@casper.infradead.org>
Date: Wed, 14 Jan 2026 04:19:56 +0000
From: Matthew Wilcox <willy@...radead.org>
To: wang.yaxin@....com.cn
Cc: akpm@...ux-foundation.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, david@...nel.org, vbabka@...e.cz,
jannh@...gle.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
xu.xin16@....com.cn, yang.yang29@....com.cn, fan.yu9@....com.cn,
he.peilin@....com.cn, tu.qiang35@....com.cn, qiu.yutan@....com.cn,
jiang.kun2@....com.cn, lu.zhongjun@....com.cn
Subject: Re: [PATCH linux-next] mm/madvise: prefer VMA lock for MADV_REMOVE

On Wed, Jan 14, 2026 at 11:24:17AM +0800, wang.yaxin@....com.cn wrote:
> From: Jiang Kun <jiang.kun2@....com.cn>
>
> MADV_REMOVE currently runs under the process-wide mmap_read_lock() and
> temporarily drops and reacquires it around filesystem hole-punching.
> For single-VMA, local-mm, non-UFFD-armed ranges we can safely operate
> under the finer-grained per-VMA read lock to reduce contention and lock
> hold time, while preserving semantics.

Oh, and do you have any performance measurements? You're introducing
complexity, so it'd be good to quantify the performance we're getting
in return. A real workload would be best, but even an artificial
benchmark would be better than nothing.