Message-ID: <87h78co4g9.fsf@brahms.olymp>
Date: Sat, 05 Mar 2022 14:32:22 +0000
From: Luís Henriques <lhenriques@...e.de>
To: Xiubo Li <xiubli@...hat.com>
Cc: Jeff Layton <jlayton@...nel.org>,
Ilya Dryomov <idryomov@...il.com>, ceph-devel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] ceph: fix use-after-free in ceph_readdir
Xiubo Li <xiubli@...hat.com> writes:
> On 3/5/22 2:20 AM, Jeff Layton wrote:
>> On Fri, 2022-03-04 at 16:14 +0000, Luís Henriques wrote:
>>> After calling ceph_mdsc_put_request() on dfi->last_readdir, this field
>>> should be set to NULL, otherwise we may end up freeing it twice and get
>>> the following splat:
>>>
>>> refcount_t: underflow; use-after-free.
>>> WARNING: CPU: 0 PID: 229 at lib/refcount.c:28 refcount_warn_saturate+0xa6/0xf0
>>> ...
>>> Call Trace:
>>> <TASK>
>>> ceph_readdir+0xd35/0x1460 [ceph]
>>> ? _raw_spin_unlock+0x12/0x30
>>> ? preempt_count_add+0x73/0xa0
>>> ? _raw_spin_unlock+0x12/0x30
>>> ? __mark_inode_dirty+0x27c/0x3a0
>>> iterate_dir+0x7d/0x190
>>> __x64_sys_getdents64+0x80/0x120
>>> ? compat_fillonedir+0x160/0x160
>>> do_syscall_64+0x43/0x90
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>
>>> Signed-off-by: Luís Henriques <lhenriques@...e.de>
>>> ---
>>> fs/ceph/dir.c | 1 +
>>> 1 file changed, 1 insertion(+)
>>>
>>> diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
>>> index 0bcb677d2199..934402f5e9e6 100644
>>> --- a/fs/ceph/dir.c
>>> +++ b/fs/ceph/dir.c
>>> @@ -555,6 +555,7 @@ static int ceph_readdir(struct file *file, struct dir_context *ctx)
>>> le32_to_cpu(rde->inode.in->mode) >> 12)) {
>>> dout("filldir stopping us...\n");
>>> ceph_mdsc_put_request(dfi->last_readdir);
>>> + dfi->last_readdir = NULL;
>>> err = 0;
>>> goto out;
>>> }
>> I think Xiubo fixed this in the testing branch late yesterday. It should
>> no longer be needed.
>
> Right, and I have sent a new version of my previous patch to remove the
> buggy code.
Ok, cool. This definitely proves that my local branch wasn't updated :-)
(I really need to get rid of this backlog of mails and patches.)
Cheers,
--
Luís