Message-ID: <A447D4AD-9513-4128-945C-8161426D4C4A@redhat.com>
Date: Tue, 13 May 2025 08:20:54 -0400
From: Benjamin Coddington <bcodding@...hat.com>
To: Li Lingfeng <lilingfeng3@...wei.com>
Cc: trondmy@...nel.org, anna@...nel.org, jlayton@...nel.org,
linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org,
yukuai1@...weicloud.com, houtao1@...wei.com, yi.zhang@...wei.com,
yangerkun@...wei.com, lilingfeng@...weicloud.com
Subject: Re: [PATCH v2] nfs: handle failure of nfs_get_lock_context in unlock
path
On 13 May 2025, at 3:42, Li Lingfeng wrote:
> When memory is insufficient, the allocation of an nfs_lock_context in
> nfs_get_lock_context() fails and ERR_PTR(-ENOMEM) is returned. If we
> mistakenly treat the resulting nfs4_unlockdata (whose l_ctx member now
> holds that error pointer) as valid and proceed to rpc_run_task(), the
> dereference of l_ctx triggers a NULL pointer dereference in
> nfs4_locku_prepare(). For example:
>
> BUG: kernel NULL pointer dereference, address: 000000000000000c
> PGD 0 P4D 0
> Oops: Oops: 0000 [#1] SMP PTI
> CPU: 15 UID: 0 PID: 12 Comm: kworker/u64:0 Not tainted 6.15.0-rc2-dirty #60
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-2.fc40
> Workqueue: rpciod rpc_async_schedule
> RIP: 0010:nfs4_locku_prepare+0x35/0xc2
> Code: 89 f2 48 89 fd 48 c7 c7 68 69 ef b5 53 48 8b 8e 90 00 00 00 48 89 f3
> RSP: 0018:ffffbbafc006bdb8 EFLAGS: 00010246
> RAX: 000000000000004b RBX: ffff9b964fc1fa00 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: fffffffffffffff4 RDI: ffff9ba53fddbf40
> RBP: ffff9ba539934000 R08: 0000000000000000 R09: ffffbbafc006bc38
> R10: ffffffffb6b689c8 R11: 0000000000000003 R12: ffff9ba539934030
> R13: 0000000000000001 R14: 0000000004248060 R15: ffffffffb56d1c30
> FS: 0000000000000000(0000) GS:ffff9ba5881f0000(0000) knlGS:00000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 000000000000000c CR3: 000000093f244000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> __rpc_execute+0xbc/0x480
> rpc_async_schedule+0x2f/0x40
> process_one_work+0x232/0x5d0
> worker_thread+0x1da/0x3d0
> ? __pfx_worker_thread+0x10/0x10
> kthread+0x10d/0x240
> ? __pfx_kthread+0x10/0x10
> ret_from_fork+0x34/0x50
> ? __pfx_kthread+0x10/0x10
> ret_from_fork_asm+0x1a/0x30
> </TASK>
> Modules linked in:
> CR2: 000000000000000c
> ---[ end trace 0000000000000000 ]---
>
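As a side note on the oops above: RSI holds fffffffffffffff4, which is
ERR_PTR(-ENOMEM), so l_ctx is not literally NULL and a plain NULL check would
not catch it; presumably the member access through the error pointer just
wraps around to a tiny address (CR2 is 000000000000000c), which the oops then
reports as a NULL pointer dereference. Below is a small user-space
illustration of that, with simplified copies of the ERR_PTR()/IS_ERR()
helpers and a made-up struct layout (the 0x18 offset is an assumption for the
demo, not the real struct nfs_lock_context):

  #include <errno.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Simplified stand-ins for the helpers in include/linux/err.h. */
  #define MAX_ERRNO       4095
  #define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

  static inline void *ERR_PTR(long error) { return (void *)error; }
  static inline int IS_ERR(const void *ptr) { return IS_ERR_VALUE((unsigned long)ptr); }

  /* Hypothetical layout, only to show the offset arithmetic. */
  struct fake_lock_context {
          char pad[0x18];
          unsigned long open_context;     /* member at offset 0x18 */
  };

  int main(void)
  {
          struct fake_lock_context *l_ctx = ERR_PTR(-ENOMEM);
          uintptr_t member = (uintptr_t)l_ctx +
                             offsetof(struct fake_lock_context, open_context);

          printf("l_ctx         = %p\n", (void *)l_ctx);  /* 0xfffffffffffffff4 */
          printf("l_ctx == NULL = %d\n", l_ctx == NULL);  /* 0: NULL check misses it */
          printf("IS_ERR(l_ctx) = %d\n", IS_ERR(l_ctx));  /* 1: must use IS_ERR() */
          printf("member addr   = %p\n", (void *)member); /* 0xc after wraparound */
          return 0;
  }
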
> Free the allocated nfs4_unlockdata and return NULL when
> nfs_get_lock_context() fails, so that the caller bails out before
> rpc_run_task() and the NULL pointer dereference is avoided.
>
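For anyone reading along: returning NULL here is enough because the caller
(nfs4_do_unlck() in fs/nfs/nfs4proc.c) already treats a NULL nfs4_unlockdata
as an allocation failure and bails out before the RPC is set up. Roughly, as
a paraphrased sketch rather than a verbatim or compilable excerpt:

  data = nfs4_alloc_unlockdata(fl, ctx, lsp, seqid);
  if (data == NULL) {
          /* existing cleanup elided */
          return -ENOMEM;
  }
  ...
  task = rpc_run_task(&task_setup_data);

So with this change the half-initialized structure is never handed to
rpc_run_task(), and nfs4_locku_prepare() never sees it.
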
> Fixes: f30cb757f680 ("NFS: Always wait for I/O completion before unlock")
> Link: https://lore.kernel.org/all/21817f2c-2971-4568-9ae4-1ccc25f7f1ef@huawei.com/
> Signed-off-by: Li Lingfeng <lilingfeng3@...wei.com>
> ---
> Changes in v2:
> Add a comment explaining that error handling for ctx acquisition failure
> is unnecessary.
>
> fs/nfs/nfs4proc.c | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index 970f28dbf253..e52e2ac1ab39 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -7074,18 +7074,29 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
> struct nfs4_unlockdata *p;
> struct nfs4_state *state = lsp->ls_state;
> struct inode *inode = state->inode;
> + struct nfs_lock_context *l_ctx;
>
> p = kzalloc(sizeof(*p), GFP_KERNEL);
> if (p == NULL)
> return NULL;
> + l_ctx = nfs_get_lock_context(ctx);
> + if (!IS_ERR(l_ctx)) {
> + p->l_ctx = l_ctx;
> + } else {
> + kfree(p);
> + return NULL;
> + }
> p->arg.fh = NFS_FH(inode);
> p->arg.fl = &p->fl;
> p->arg.seqid = seqid;
> p->res.seqid = seqid;
> p->lsp = lsp;
> /* Ensure we don't close file until we're done freeing locks! */
> + /*
> + * Since the caller holds a reference to ctx, the refcount must be non-zero.
> + * Therefore, error handling for failed ctx acquisition is unnecessary here.
> + */
I think the additional comment is unnecessary, but otherwise this looks correct.

Reviewed-by: Benjamin Coddington <bcodding@...hat.com>

Ben