Message-Id: <20220215141236.de1a3eca3a8a52d04507c50f@linux-foundation.org>
Date: Tue, 15 Feb 2022 14:12:36 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: cgel.zte@...il.com
Cc: hughd@...gle.com, mike.kravetz@...cle.com, kirill@...temov.name,
songliubraving@...com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, yang.yang29@....com.cn,
wang.yong12@....com.cn, Zeal Robot <zealci@....com.cn>
Subject: Re: [PATCH linux-next] Fix shmem huge page failed to set
F_SEAL_WRITE attribute problem

On Tue, 15 Feb 2022 07:37:43 +0000 cgel.zte@...il.com wrote:
> From: wangyong <wang.yong12@....com.cn>
>
> After enabling transparent hugepage support for tmpfs with the
> following command:
> echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
> the docker program fails with EBUSY when it adds F_SEAL_WRITE with the
> following call:
> fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1.
>
> It is found that in the memfd_wait_for_pins function, the huge page's
> page_count is 512 and its page_mapcount is 0, so the check
> page_count(page) - page_mapcount(page) != 1
> evaluates true and the page is treated as pinned.
> But the page is not busy at this time; therefore, the page order of the
> huge page should be taken into account in the calculation.
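
[For reference, a minimal sketch of the kind of adjusted check the report
describes, under the assumption that a shmem huge page carries one page
cache reference per subpage, so the "idle" reference count is
1 << compound_order(page) rather than 1. The helper names
expected_page_refs() and page_is_pinned() are illustrative only, not the
submitted patch.]

#include <linux/mm.h>

/*
 * Illustrative only -- not the submitted patch.  For a compound
 * (huge) page the page cache holds one reference per subpage, so
 * an unpinned, unmapped huge page shows 1 << compound_order(page)
 * references (512 for a PMD-sized page), not 1.
 */
static int expected_page_refs(struct page *page)
{
	return 1 << compound_order(page);
}

static bool page_is_pinned(struct page *page)
{
	/* References beyond the page cache and mappings => pinned. */
	return page_count(page) - page_mapcount(page) !=
	       expected_page_refs(page);
}

With such a check, the unmapped huge page from the report (page_count 512,
page_mapcount 0) would no longer be misreported as pinned.
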
What are the real-world runtime effects of this?

Do we think that this fix (or one similar to it) should be backported
into -stable kernels?

If "yes" then Mike's 5d752600a8c373 ("mm: restructure memfd code") will
get in the way because it moved lots of code around.

But then, that's four years old and perhaps that's far enough back in
time.