Message-ID: <87frr0svue.fsf@linux.dev>
Date: Mon, 19 Aug 2024 10:30:49 +0100
From: Luis Henriques <luis.henriques@...ux.dev>
To: Xiubo Li <xiubli@...hat.com>
Cc: Ilya Dryomov <idryomov@...il.com>,  Milind Changire
 <mchangir@...hat.com>,  ceph-devel@...r.kernel.org,
  linux-kernel@...r.kernel.org
Subject: Re: [PATCH] ceph: fix memory leak in MDS client cap_auths

On Mon, Aug 19 2024, Xiubo Li wrote:

> Hi Luis,
>
> Good catch!
>
> On 8/14/24 18:17, Luis Henriques (SUSE) wrote:
>> The cap_auths that are allocated during an MDS session opening are never
>> released, causing a memory leak detected by kmemleak.  Fix this by freeing
>> the allocated memory when the MDS client is shut down.
>>
>> Fixes: 1d17de9534cb ("ceph: save cap_auths in MDS client when session is opened")
>> Signed-off-by: Luis Henriques (SUSE) <luis.henriques@...ux.dev>
>> ---
>>   fs/ceph/mds_client.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>> index 276e34ab3e2c..d798d0a5b2b1 100644
>> --- a/fs/ceph/mds_client.c
>> +++ b/fs/ceph/mds_client.c
>> @@ -6015,6 +6015,20 @@ static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
>>   		ceph_mdsmap_destroy(mdsc->mdsmap);
>>   	kfree(mdsc->sessions);
>>   	ceph_caps_finalize(mdsc);
>> +
>> +	if (mdsc->s_cap_auths) {
>> +		int i;
>> +
>> +		mutex_lock(&mdsc->mutex);
>
> BTW, is the lock really needed here?
>
> IMO it should be safe to remove it: once we get here the sb has already
> been killed, and 'mdsc->stopping' guarantees that no other thread will
> access 'mdsc'. Isn't it?

Ah, yes, good point.  Thanks, I'll send v2 shortly.

Cheers,
-- 
Luís


> Otherwise we would need to take the lock from the beginning of this function.
>
> Thanks
>
> - Xiubo
>
>> +		for (i = 0; i < mdsc->s_cap_auths_num; i++) {
>> +			kfree(mdsc->s_cap_auths[i].match.gids);
>> +			kfree(mdsc->s_cap_auths[i].match.path);
>> +			kfree(mdsc->s_cap_auths[i].match.fs_name);
>> +		}
>> +		kfree(mdsc->s_cap_auths);
>> +		mutex_unlock(&mdsc->mutex);
>> +	}
>> +
>>   	ceph_pool_perm_destroy(mdsc);
>>   }
>>   
>
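For reference, a possible shape for the v2 cleanup mentioned above (the same
loop with the mutex dropped, relying on 'mdsc->stopping' as discussed) might
look like the sketch below.  This is only an illustration reconstructed from
the hunk quoted in this thread, not the actual v2 patch:

	/* Sketch: free the cap_auths allocated when the MDS session was
	 * opened.  No mdsc->mutex here: ceph_mdsc_stop() runs after the sb
	 * has been killed, and mdsc->stopping prevents concurrent access
	 * (see the discussion above).
	 */
	if (mdsc->s_cap_auths) {
		int i;

		for (i = 0; i < mdsc->s_cap_auths_num; i++) {
			kfree(mdsc->s_cap_auths[i].match.gids);
			kfree(mdsc->s_cap_auths[i].match.path);
			kfree(mdsc->s_cap_auths[i].match.fs_name);
		}
		kfree(mdsc->s_cap_auths);
	}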

