Message-ID: <f568172f-e672-b6bd-584d-94701f34cbfc@huawei.com>
Date:   Tue, 11 May 2021 09:40:32 +0800
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     Hugh Dickins <hughd@...gle.com>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH] ksm: Revert "use GET_KSM_PAGE_NOLOCK to get ksm page in
 remove_rmap_item_from_tree()"

On 2021/5/11 7:42, Hugh Dickins wrote:
> On Mon, 10 May 2021, Miaohe Lin wrote:
>> On 2021/5/10 13:59, Hugh Dickins wrote:
>>> This reverts commit 3e96b6a2e9ad929a3230a22f4d64a74671a0720b.
>>> General Protection Fault in rmap_walk_ksm() under memory pressure:
>>> remove_rmap_item_from_tree() needs to take page lock, of course.
>>>
>>
>> I'm really sorry about it! And many thanks for this bugfix!
>> It seems rmap_walk_ksm() relies on the page lock to protect against
>> concurrent modifications to that page's node of the stable tree.
>> Could you please add a comment in remove_rmap_item_from_tree() to
>> clarify this, to help avoid similar trouble in the future? Many thanks!
> 
> Sorry, no.  Page lock is held by callers of stable_tree_append() when
> adding an rmap_item to the tree, and held by callers of rmap_walk_ksm()
> (see VM_BUG_ON_PAGE there) when walking the tree: you would surely
> expect some kind of locking when removing an rmap_item from the tree,
> and the appropriate page lock is what GET_KSM_PAGE_LOCK provided.
> 
> I do not want us to go through the kernel source adding a comment
> /* We really mean to take this lock: it protects against concurrency */
> every time we take a lock in the kernel: you should generally assume
> that if a lock is taken, then the writer intended it to be taken.
> 
> There are sure to be some exceptions, where a lock is taken pointlessly:
> but please look deeper before assuming that is the case.
> 

I see. I should have been more careful. Many thanks for your detailed explanation,
and sorry about the trouble!
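
For reference, my understanding of the convention now, as a simplified
sketch (abbreviated, not the exact mm/ksm.c source): both the side that
adds an rmap_item to the stable tree and the side that walks it run under
the page lock, so the remover must take the same lock.

	/* Reader side: rmap_walk_ksm() expects the page to already be locked. */
	void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
	{
		VM_BUG_ON_PAGE(!PageLocked(page), page);
		/* ... walk stable_node->hlist under the page lock ... */
	}

	/* Remover side, as restored by this revert (other branches omitted). */
	static void remove_rmap_item_from_tree(struct rmap_item *rmap_item)
	{
		struct stable_node *stable_node = rmap_item->head;
		struct page *page;

		page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
		if (!page)
			goto out;

		hlist_del(&rmap_item->hlist);	/* rmap_walk_ksm() is excluded */
		unlock_page(page);
		put_page(page);
		/* ... */
	}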

> Hugh
> 
>>
>>> Signed-off-by: Hugh Dickins <hughd@...gle.com>
>>> ---
>>>
>>>  mm/ksm.c |    3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> --- 5.13-rc1/mm/ksm.c	2021-05-09 17:03:44.010422188 -0700
>>> +++ linux/mm/ksm.c	2021-05-09 22:12:39.403008350 -0700
>>> @@ -776,11 +776,12 @@ static void remove_rmap_item_from_tree(s
>>>  		struct page *page;
>>>  
>>>  		stable_node = rmap_item->head;
>>> -		page = get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK);
>>> +		page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
>>>  		if (!page)
>>>  			goto out;
>>>  
>>>  		hlist_del(&rmap_item->hlist);
>>> +		unlock_page(page);
>>>  		put_page(page);
>>>  
>>>  		if (!hlist_empty(&stable_node->hlist))
> .
> 
