Open Source and information security mailing list archives
Date:   Mon, 4 Sep 2023 21:05:35 -0700
From:   Sidhartha Kumar <sidhartha.kumar@...cle.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        akpm@...ux-foundation.org, songmuchun@...edance.com,
        willy@...radead.org
Subject: Re: [PATCH v6] mm/filemap: remove hugetlb special casing in filemap.c

On 8/21/23 11:33 AM, Mike Kravetz wrote:
> On 08/17/23 11:18, Sidhartha Kumar wrote:
>> Remove special-cased hugetlb handling code within the page cache by
>> changing the granularity of each index to the base page size rather than
>> the huge page size. Add new wrappers for hugetlb code to interact with the
>> page cache which convert to a linear index.
> <snip>
>> @@ -237,7 +234,7 @@ void filemap_free_folio(struct address_space *mapping, struct folio *folio)
>>   	if (free_folio)
>>   		free_folio(folio);
>>   
>> -	if (folio_test_large(folio) && !folio_test_hugetlb(folio))
>> +	if (folio_test_large(folio))
>>   		refs = folio_nr_pages(folio);
>>   	folio_put_refs(folio, refs);
>>   }
>> @@ -858,14 +855,15 @@ noinline int __filemap_add_folio(struct address_space *mapping,
>>   
>>   	if (!huge) {
>>   		int error = mem_cgroup_charge(folio, NULL, gfp);
>> -		VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
>>   		if (error)
>>   			return error;
>>   		charged = true;
>> -		xas_set_order(&xas, index, folio_order(folio));
>> -		nr = folio_nr_pages(folio);
>>   	}
> 
> When a hugetlb page is added to the page cache, the ref count will now
> be increased by folio_nr_pages.  So, the ref count for a 2MB hugetlb page
> on x86 will be increased by 512.
> 
> We will need a corresponding change to migrate_huge_page_move_mapping().
> For migration, the ref count is checked as follows:
> 
> 	xas_lock_irq(&xas);
> 	expected_count = 2 + folio_has_private(src);
Hi Mike,

Thanks for catching this. Changing this line to:
+	expected_count = folio_expected_refs(mapping, src);
seems to fix migration in my testing. My test inserted a sleep() in the
hugepage-mmap.c selftest and then ran the migratepages command against it.
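In context, the change would look like this (a sketch against this series; the surrounding lines are the ones quoted in this thread):

```diff
 	xas_lock_irq(&xas);
-	expected_count = 2 + folio_has_private(src);
+	expected_count = folio_expected_refs(mapping, src);
 	if (!folio_ref_freeze(src, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
```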

With this version of the patch:
migrate_pages(44906, 65, [0x0000000000000001], [0x0000000000000002]) = 75
which means 75 pages failed to migrate. After the change to
folio_expected_refs():
migrate_pages(7344, 65, [0x0000000000000001], [0x0000000000000002]) = 0

Does that change look correct to you?

Thanks,
Sid Kumar


> 	if (!folio_ref_freeze(src, expected_count)) {
> 		xas_unlock_irq(&xas);
> 		return -EAGAIN;
> 	}
> 
> So, this patch will break hugetlb migration of hugetlb pages in the page
> cache.
> 
> Sorry for not noticing this earlier.
