Message-ID: <0a06dc7f-3a49-42ba-8221-0b4a3777ac0b@linux.alibaba.com>
Date: Fri, 23 Feb 2024 10:56:48 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: akpm@...ux-foundation.org, muchun.song@...ux.dev, david@...hat.com,
 linmiaohe@...wei.com, naoya.horiguchi@....com, mhocko@...nel.org,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/3] mm: hugetlb: make the hugetlb migration strategy
 consistent



On 2024/2/23 06:15, Oscar Salvador wrote:
> On Wed, Feb 21, 2024 at 05:27:54PM +0800, Baolin Wang wrote:
>> Based on the analysis of the various scenarios above, determine whether fallback is
>> permitted according to the migration reason in alloc_hugetlb_folio_nodemask().
> 
> Hi Baolin,
> 
> The high-level reasoning makes sense to me; taking a step back and
> thinking through all the cases and possible outcomes was helpful.
> 
> I plan to look closer, but something caught my eye:

Thanks for reviewing.

>>   	}
>>   	spin_unlock_irq(&hugetlb_lock);
>>   
>> +	if (gfp_mask & __GFP_THISNODE)
>> +		goto alloc_new;
>> +
>> +	/*
>> +	 * Note: memory offline, memory failure and the migration syscalls can
>> +	 * break the per-node hugetlb pool. Other cases must not allocate a new
>> +	 * hugetlb page on other nodes.
>> +	 */
>> +	switch (reason) {
>> +	case MR_MEMORY_HOTPLUG:
>> +	case MR_MEMORY_FAILURE:
>> +	case MR_SYSCALL:
>> +	case MR_MEMPOLICY_MBIND:
>> +		allowed_fallback = true;
>> +		break;
>> +	default:
>> +		break;
>> +	}
>> +
>> +	if (!allowed_fallback)
>> +		gfp_mask |= __GFP_THISNODE;
> 
> I think it would be better if instead of fiddling with gfp here,
> have htlb_alloc_mask() have a second argument with the MR_reason,
> do the switch there and enable GFP_THISNODE.
> Then alloc_hugetlb_folio_nodemask() would already get the right mask.
> 
> I think that that might be more clear as it gets encapsulated in the
> function that directly gives us the gfp.
> 
> Does that make sense?
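
If it helps to see the idea concretely, Oscar's suggestion might look roughly like the userspace sketch below: fold the reason-based __GFP_THISNODE decision into the function that computes the gfp mask, so alloc_hugetlb_folio_nodemask() already receives the right mask. All flag values, enum members and the function name here are illustrative, not the kernel's actual definitions.

```c
/* Userspace sketch, not kernel code: flag value and enum members are
 * illustrative stand-ins for the real kernel definitions. */
typedef unsigned int gfp_t;

#define __GFP_THISNODE 0x1u

enum migrate_reason {
	MR_COMPACTION,		/* example of a reason that must not break the pool */
	MR_MEMORY_HOTPLUG,
	MR_MEMORY_FAILURE,
	MR_SYSCALL,
	MR_MEMPOLICY_MBIND,
};

static gfp_t htlb_alloc_mask_sketch(gfp_t gfp_mask, enum migrate_reason reason)
{
	switch (reason) {
	case MR_MEMORY_HOTPLUG:
	case MR_MEMORY_FAILURE:
	case MR_SYSCALL:
	case MR_MEMPOLICY_MBIND:
		/* These callers may legitimately break the per-node
		 * hugetlb pool, so allow falling back to other nodes. */
		break;
	default:
		/* Everyone else must stay on the source node. */
		gfp_mask |= __GFP_THISNODE;
		break;
	}
	return gfp_mask;
}
```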

I previously considered passing the MR_reason argument to 
htlb_modify_alloc_mask(), which is only used for hugetlb migration.
But in alloc_hugetlb_folio_nodemask(), if there are available hugetlb 
pages on other nodes, we should allow migrating to them, since that 
will not break the per-node hugetlb pool.

That's why I only change the gfp_mask when allocating a new hugetlb 
page during migration, which is what can break the pool.

struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int 
preferred_nid,
		nodemask_t *nmask, gfp_t gfp_mask)
{
	spin_lock_irq(&hugetlb_lock);
	if (available_huge_pages(h)) {
		struct folio *folio;

		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
						preferred_nid, nmask);
		if (folio) {
			spin_unlock_irq(&hugetlb_lock);
			return folio;
		}
	}
.....
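
The two phases above can be modeled with a small userspace sketch (toy pool and illustrative names, not the kernel implementation): dequeueing an existing free page may fall back to any node because that never shrinks a node's pool below what it had spare, while allocating a brand-new page must be confined to the preferred node unless the migration reason permits breaking the pool.

```c
#include <stdbool.h>

#define NR_NODES 2

/* Toy model, not kernel code: free_pages[] stands in for each node's
 * hugetlb free list. */
struct toy_pool { int free_pages[NR_NODES]; };

/* Returns the node the page came from, or -1 on failure. */
static int alloc_folio_nodemask_sketch(struct toy_pool *p,
				       int preferred_nid,
				       bool allowed_fallback)
{
	/* Phase 1: like dequeue_hugetlb_folio_nodemask() - reuse a free
	 * page from the preferred node, or fall back to any other node;
	 * this never breaks a per-node pool. */
	for (int i = 0; i < NR_NODES; i++) {
		int nid = (preferred_nid + i) % NR_NODES;
		if (p->free_pages[nid] > 0) {
			p->free_pages[nid]--;
			return nid;
		}
	}
	/* Phase 2: no free page anywhere, so a fresh page would enlarge
	 * some node's pool; fall back to another node only when the
	 * migration reason allows breaking the pool (otherwise the real
	 * code would force __GFP_THISNODE and fail here). */
	return allowed_fallback ? (preferred_nid + 1) % NR_NODES : -1;
}
```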
