Message-ID: <64ec7939-0733-7925-0ec0-d333e62c5f21@suse.cz>
Date:   Thu, 23 Mar 2023 11:11:40 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     David Hildenbrand <david@...hat.com>,
        Yang Shi <shy828301@...il.com>,
        kirill.shutemov@...ux.intel.com, jannh@...gle.com,
        willy@...radead.org, akpm@...ux-foundation.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        stable@...r.kernel.org
Subject: Re: [v4 PATCH] fs/proc: task_mmu.c: don't read mapcount for migration
 entry

On 3/23/23 11:08, David Hildenbrand wrote:
> On 23.03.23 10:52, Vlastimil Babka wrote:
>> On 2/3/22 19:26, Yang Shi wrote:
>>> --- a/fs/proc/task_mmu.c
>>> +++ b/fs/proc/task_mmu.c
>>> @@ -440,7 +440,8 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
>>>   }
>>>   
>>>   static void smaps_account(struct mem_size_stats *mss, struct page *page,
>>> -		bool compound, bool young, bool dirty, bool locked)
>>> +		bool compound, bool young, bool dirty, bool locked,
>>> +		bool migration)
>>>   {
>>>   	int i, nr = compound ? compound_nr(page) : 1;
>>>   	unsigned long size = nr * PAGE_SIZE;
>>> @@ -467,8 +468,15 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>>>   	 * page_count(page) == 1 guarantees the page is mapped exactly once.
>>>   	 * If any subpage of the compound page mapped with PTE it would elevate
>>>   	 * page_count().
>>> +	 *
>>> +	 * The page_mapcount() is called to get a snapshot of the mapcount.
>>> +	 * Without holding the page lock this snapshot can be slightly wrong as
>>> +	 * we cannot always read the mapcount atomically.  It is not safe to
>>> +	 * call page_mapcount() even with PTL held if the page is not mapped,
>>> +	 * especially for migration entries.  Treat regular migration entries
>>> +	 * as mapcount == 1.
>>>   	 */
>>> -	if (page_count(page) == 1) {
>>> +	if ((page_count(page) == 1) || migration) {
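
For context, here is the check after the patch in condensed form (a
sketch simplified from fs/proc/task_mmu.c; the resident/referenced/anon
accounting is omitted), showing where page_mapcount() can still run:

static void smaps_account(struct mem_size_stats *mss, struct page *page,
		bool compound, bool young, bool dirty, bool locked,
		bool migration)
{
	int i, nr = compound ? compound_nr(page) : 1;
	unsigned long size = nr * PAGE_SIZE;

	/*
	 * page_count() == 1 means the page is mapped exactly once, and a
	 * regular migration entry is treated as mapcount == 1, so both
	 * cases charge the full size without calling page_mapcount().
	 */
	if ((page_count(page) == 1) || migration) {
		smaps_page_accumulate(mss, page, size, size << PSS_SHIFT,
				      dirty, locked, true);
		return;
	}

	/* Only reached for genuinely mapped pages, where it is safe: */
	for (i = 0; i < nr; i++, page++) {
		int mapcount = page_mapcount(page);
		unsigned long pss = PAGE_SIZE << PSS_SHIFT;

		if (mapcount >= 2)
			pss /= mapcount;
		smaps_page_accumulate(mss, page, PAGE_SIZE, pss, dirty,
				      locked, mapcount < 2);
	}
}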
>> 
>> Since this is now apparently a CVE-2023-1582 for whatever RHeasons...
>> 
>> I wonder if the patch actually works as intended when
>> (page_count() || migration) is in this particular order and not the other one?
> 
> Only the page_mapcount() call on such a page should be problematic, not 
> the page_count() call. There might be the rare chance of the page 

Oh right, page_mapcount() vs page_count(), I need more coffee.

> getting removed due to memory offlining... but we're still holding the 
> page table lock with the migration entry, so we should be protected 
> against that.
> 
> Regarding the CVE, IIUC the main reason for the CVE should be 
> RHEL-specific -- RHEL behaves differently from other code bases; for 
> other code bases, it's just a way to trigger a BUG_ON as described here.

That's good to know, so at least my bogus mail was useful for that, thanks!
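
To make the short-circuit point concrete for the archive, a toy
userspace demo (the two stub functions are hypothetical stand-ins; only
the shape of the check matches the kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in: page_count() is a plain refcount read, safe
 * even when the PTE is a migration entry. */
static bool page_count_is_one(void)
{
	puts("page_count() evaluated (safe)");
	return false;	/* pretend the page is shared, count > 1 */
}

/* Hypothetical stand-in for the call that must not happen for
 * migration entries. */
static int page_mapcount_stub(void)
{
	puts("page_mapcount() evaluated (would be the bug)");
	return 1;
}

int main(void)
{
	bool migration = true;	/* the PTE is a migration entry */

	/* Same shape as the patched check in smaps_account(). */
	if (page_count_is_one() || migration)
		puts("treated as mapcount == 1; page_mapcount() never ran");
	else
		printf("mapcount snapshot: %d\n", page_mapcount_stub());

	return 0;
}

With migration == true this prints the page_count() line and then
"treated as mapcount == 1": page_mapcount() is never reached, so the
operand order only affects the (harmless) page_count() read.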
