Date:   Mon, 9 Jan 2023 10:26:07 -0600
From:   Sidhartha Kumar <sidhartha.kumar@...cle.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        akpm@...ux-foundation.org, songmuchun@...edance.com,
        willy@...radead.org, tsahu@...ux.ibm.com, jhubbard@...dia.com
Subject: Re: [PATCH mm-unstable 6/8] mm/hugetlb: convert
 alloc_migrate_huge_page to folios

On 1/6/23 6:54 PM, Mike Kravetz wrote:
> On 01/03/23 13:13, Sidhartha Kumar wrote:
>> Change alloc_huge_page_nodemask() to alloc_hugetlb_folio_nodemask() and
>> alloc_migrate_huge_page() to alloc_migrate_hugetlb_folio(). Both functions
>> now return a folio rather than a page.
> 
>>   /* mempolicy aware migration callback */
>> @@ -2357,16 +2357,16 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
>>   {
>>   	struct mempolicy *mpol;
>>   	nodemask_t *nodemask;
>> -	struct page *page;
>> +	struct folio *folio;
>>   	gfp_t gfp_mask;
>>   	int node;
>>   
>>   	gfp_mask = htlb_alloc_mask(h);
>>   	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
>> -	page = alloc_huge_page_nodemask(h, node, nodemask, gfp_mask);
>> +	folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask);
>>   	mpol_cond_put(mpol);
>>   
>> -	return page;
>> +	return &folio->page;
> 
> Is it possible that folio could be NULL here and cause addressing exception?
> 
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 6932b3d5a9dd..fab706b78be1 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1622,6 +1622,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>>   	struct migration_target_control *mtc;
>>   	gfp_t gfp_mask;
>>   	unsigned int order = 0;
>> +	struct folio *hugetlb_folio = NULL;
>>   	struct folio *new_folio = NULL;
>>   	int nid;
>>   	int zidx;
>> @@ -1636,7 +1637,9 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>>   		struct hstate *h = folio_hstate(folio);
>>   
>>   		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
>> -		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
>> +		hugetlb_folio = alloc_hugetlb_folio_nodemask(h, nid,
>> +						mtc->nmask, gfp_mask);
>> +		return &hugetlb_folio->page;
> 
> and, here as well?

Hi Mike,

It is possible for the folio to be NULL, but I believe these instances
would not cause an addressing exception: as described in [1],
&folio->page is safe even when folio is NULL, because the page member
sits at offset 0 within struct folio, so the expression evaluates to
NULL without ever dereferencing the pointer.


[1] https://lore.kernel.org/lkml/Y7h4jsv6jl0XSIsk@casper.infradead.org/T/
