Message-ID: <8c7d2ef2-08d7-ea50-a82b-9e9800c5f54c@huawei.com>
Date:   Tue, 22 Mar 2022 09:50:35 +0800
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     Michal Hocko <mhocko@...e.com>
CC:     <akpm@...ux-foundation.org>, <kosaki.motohiro@...fujitsu.com>,
        <mgorman@...e.de>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm/mempolicy: fix mpol_new leak in
 shared_policy_replace

On 2022/3/21 20:12, Michal Hocko wrote:
> On Tue 22-03-22 16:34:56, Miaohe Lin wrote:
>> If mpol_new is allocated but not used in the restart loop, mpol_new will
>> be freed via mpol_put before returning to the caller.  But refcnt is not
>> initialized yet, so mpol_put cannot do the right thing and might leak
>> the unused mpol_new.
> 
> I would just add:
> 
> This would happen if mempolicy was updated on the shared shmem file
> while the sp->lock has been dropped during the memory allocation.
> 

Do you mean the below commit log?

"""
If mpol_new is allocated but not used in the restart loop, mpol_new will be
freed via mpol_put before returning to the caller.  But refcnt is not
initialized yet, so mpol_put cannot do the right thing and might leak
the unused mpol_new.  This would happen if mempolicy was updated on the
shared shmem file while the sp->lock has been dropped during the memory
allocation.

This issue can be triggered easily with the code snippet below if there
are many processes doing the following work at the same time:

  shmid = shmget((key_t)5566, 1024 * PAGE_SIZE, 0666|IPC_CREAT);
  shm = shmat(shmid, 0, 0);
  loop many times {
    mbind(shm, 1024 * PAGE_SIZE, MPOL_LOCAL, mask, maxnode, 0);
    mbind(shm + 128 * PAGE_SIZE, 128 * PAGE_SIZE, MPOL_DEFAULT, mask,
          maxnode, 0);
  }
"""

>> This issue can be triggered easily with the code snippet below if there
>> are many processes doing the following work at the same time:
>>
>>   shmid = shmget((key_t)5566, 1024 * PAGE_SIZE, 0666|IPC_CREAT);
>>   shm = shmat(shmid, 0, 0);
>>   loop many times {
>>     mbind(shm, 1024 * PAGE_SIZE, MPOL_LOCAL, mask, maxnode, 0);
>>     mbind(shm + 128 * PAGE_SIZE, 128 * PAGE_SIZE, MPOL_DEFAULT, mask,
>>           maxnode, 0);
>>   }
>>
>> Fixes: 42288fe366c4 ("mm: mempolicy: Convert shared_policy mutex to spinlock")
>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
>> Cc: <stable@...r.kernel.org> # 3.8
> 
> Acked-by: Michal Hocko <mhocko@...e.com>
> 
> Thanks a lot!

Many thanks for the comment and the Acked-by tag! :)

>> ---
>> v1->v2:
>>   Add reproducer snippet and Cc stable.
>>   Thanks to Michal Hocko for the review and comments!
>> ---
>>  mm/mempolicy.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index a2516d31db6c..4cdd425b2752 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -2733,6 +2733,7 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
>>  	mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
>>  	if (!mpol_new)
>>  		goto err_out;
>> +	refcount_set(&mpol_new->refcnt, 1);
>>  	goto restart;
>>  }
>>  
>> -- 
>> 2.23.0
> 
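
For context, the paths involved look roughly like the condensed sketch below
(paraphrased from mm/mempolicy.c and trimmed, so not the exact source). When
the restart loop finds it needs a spare node/policy, it drops sp->lock and
allocates; if the allocation ends up unused, the shared error path hands
mpol_new to mpol_put(), which only frees the policy once the refcount drops
to zero. With refcnt never initialized, that free is unlikely to ever happen,
which is the leak the one-line refcount_set() above closes:

  restart:
  	write_lock(&sp->lock);
  	/* ... walk the shared policy tree; if an old policy spans the whole
  	 * new range and no spare node/policy has been allocated yet ... */
  			goto alloc_new;
  	/* ... */
  	write_unlock(&sp->lock);
  	ret = 0;

  err_out:
  	if (mpol_new)
  		mpol_put(mpol_new);	/* decrements an uninitialized refcnt,
  					 * so the policy is never freed */
  	if (n_new)
  		kmem_cache_free(sn_cache, n_new);
  	return ret;

  alloc_new:
  	write_unlock(&sp->lock);
  	ret = -ENOMEM;
  	n_new = kmem_cache_alloc(sn_cache, GFP_KERNEL);
  	if (!n_new)
  		goto err_out;
  	mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
  	if (!mpol_new)
  		goto err_out;
  	refcount_set(&mpol_new->refcnt, 1);	/* the fix from this patch */
  	goto restart;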
