Message-ID: <fc27f1a8-6a53-e7a6-ec6c-e0c185912c1f@oracle.com>
Date:   Thu, 23 Sep 2021 15:08:06 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        David Hildenbrand <david@...hat.com>,
        Michal Hocko <mhocko@...e.com>,
        Oscar Salvador <osalvador@...e.de>, Zi Yan <ziy@...dia.com>,
        Muchun Song <songmuchun@...edance.com>,
        Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
        David Rientjes <rientjes@...gle.com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>
Subject: Re: [PATCH v2 1/4] hugetlb: add demote hugetlb page sysfs interfaces

On 9/23/21 2:24 PM, Andrew Morton wrote:
> On Thu, 23 Sep 2021 10:53:44 -0700 Mike Kravetz <mike.kravetz@...cle.com> wrote:
> 
>> Two new sysfs files are added to demote hugetlb pages.  These files are
>> both per-hugetlb page size and per node.  Files are:
>>   demote_size - The size in kB that pages are demoted to. (read-write)
>>   demote - The number of huge pages to demote. (write-only)
>>
>> By default, demote_size is the next smallest huge page size.  Valid huge
>> page sizes less than the current huge page size may be written to this
>> file.  When huge pages are demoted, they are demoted to this size.
>>
>> Writing a value to demote will result in an attempt to demote that
>> number of hugetlb pages to an appropriate number of demote_size pages.
>>
>> NOTE: Demote interfaces are only provided for huge page sizes that have
>> a smaller target demote huge page size.  For example, on x86, 1GB huge
>> pages will have demote interfaces, while 2MB huge pages will not.
>>
>> This patch does not provide full demote functionality.  It only provides
>> the sysfs interfaces.
>>
>> It also provides documentation for the new interfaces.
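
To illustrate usage (paths assume the standard hugepages sysfs layout,
with 1GB huge pages on x86):

	# default demote target for 1GB pages, reported in kB (2MB)
	cat /sys/kernel/mm/hugepages/hugepages-1048576kB/demote_size
	2048
	# demote two free 1GB pages into 2MB pages, any node
	echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/demote
	# or restrict demotion to node 0
	echo 1 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/demote
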
>>
>> ...
>>
>> +static ssize_t demote_store(struct kobject *kobj,
>> +	       struct kobj_attribute *attr, const char *buf, size_t len)
>> +{
>> +	unsigned long nr_demote;
>> +	unsigned long nr_available;
>> +	nodemask_t nodes_allowed, *n_mask;
>> +	struct hstate *h;
>> +	int err;
>> +	int nid;
>> +
>> +	err = kstrtoul(buf, 10, &nr_demote);
>> +	if (err)
>> +		return err;
>> +	h = kobj_to_hstate(kobj, &nid);
>> +
>> +	/* Synchronize with other sysfs operations modifying huge pages */
>> +	mutex_lock(&h->resize_lock);
>> +
>> +	spin_lock_irq(&hugetlb_lock);
>> +	if (nid != NUMA_NO_NODE) {
>> +		nr_available = h->free_huge_pages_node[nid];
>> +		init_nodemask_of_node(&nodes_allowed, nid);
>> +		n_mask = &nodes_allowed;
>> +	} else {
>> +		nr_available = h->free_huge_pages;
>> +		n_mask = &node_states[N_MEMORY];
>> +	}
>> +	nr_available -= h->resv_huge_pages;
>> +	if (nr_available <= 0)
>> +		goto out;
>> +	nr_demote = min(nr_available, nr_demote);
>> +
>> +	while (nr_demote) {
>> +		if (!demote_pool_huge_page(h, n_mask))
>> +			break;
>> +
>> +		/*
>> +		 * We may have dropped the lock in the routines to
>> +		 * demote/free a page.  Recompute nr_demote as counts could
>> +		 * have changed and we want to make sure we do not demote
>> +		 * a reserved huge page.
>> +		 */
> 
> This comment doesn't become true until patch #4, and is a bit confusing
> in patch #1.  Also, saying "the lock" is far less helpful than saying
> "hugetlb_lock"!

Right.  That is the result of slicing and dicing working code to create
individual patches.  Sorry about that; I will correct it.

The comment is also not 100% accurate: demote_pool_huge_page will
always drop hugetlb_lock, except in the quick error case, which is not
really interesting.  This helps answer your next question.
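
To be concrete, the shape of that routine is roughly the following
(an illustrative sketch with approximate helper names, not the
verbatim patch):

	/* sketch: called and returns with hugetlb_lock held */
	static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
	{
		int nr_nodes, node;
		struct page *page;

		lockdep_assert_held(&hugetlb_lock);

		for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
			list_for_each_entry(page, &h->hugepage_freelists[node], lru) {
				if (PageHWPoison(page))
					continue;
				/*
				 * demote_free_huge_page() (sketched below)
				 * drops and reacquires hugetlb_lock.
				 */
				return demote_free_huge_page(h, page);
			}
		}

		/* quick error case: nothing to demote; lock never dropped */
		return 0;
	}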

> 
> 
>> +		nr_demote--;
>> +		if (nid != NUMA_NO_NODE)
>> +			nr_available = h->free_huge_pages_node[nid];
>> +		else
>> +			nr_available = h->free_huge_pages;
>> +		nr_available -= h->resv_huge_pages;
>> +		if (nr_available <= 0)
>> +			nr_demote = 0;
>> +		else
>> +			nr_demote = min(nr_available, nr_demote);
>> +	}
>> +
>> +out:
>> +	spin_unlock_irq(&hugetlb_lock);
> 
> How long can we spend with IRQs disabled here (after patch #4!)?

Not very long.  We drop hugetlb_lock while demoting a page because we
may need to allocate vmemmap pages.  We actually go through quite a
few acquire/drop lock cycles for each demoted page.  Something like:
	dequeue page to be demoted
	drop lock
	potentially allocate vmemmap pages
	for each page of demoted size
		prep page
		acquire lock
		enqueue page to new pool
		drop lock
	reacquire lock
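
In rough C, with illustrative names rather than the exact upstream
helpers, that pattern is:

	/*
	 * Sketch only: assumes hugetlb_lock is held on entry; the
	 * compound-page teardown details are omitted for brevity.
	 */
	static int demote_free_huge_page(struct hstate *h, struct page *page)
	{
		struct hstate *target = size_to_hstate(PAGE_SIZE << h->demote_order);
		int i, nid = page_to_nid(page);

		remove_hugetlb_page(h, page, false);	/* dequeue page to be demoted */
		spin_unlock_irq(&hugetlb_lock);		/* drop lock */

		alloc_huge_page_vmemmap(h, page);	/* may sleep */

		/* for each page of demoted size */
		for (i = 0; i < pages_per_huge_page(h);
				i += pages_per_huge_page(target)) {
			prep_new_huge_page(target, page + i, nid);
			spin_lock_irq(&hugetlb_lock);		/* acquire lock */
			enqueue_huge_page(target, page + i);	/* to demote_size pool */
			spin_unlock_irq(&hugetlb_lock);		/* drop lock */
		}

		spin_lock_irq(&hugetlb_lock);		/* reacquire for caller */
		return 1;				/* one source page demoted */
	}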

This is 'no worse' than the lock cycling that happens with existing pool
adjustment mechanisms such as "echo > nr_hugepages".

The updated comment will point out that there is little need to worry
about lock hold/irq disable time.
-- 
Mike Kravetz

>> +	mutex_unlock(&h->resize_lock);
>> +
>> +	return len;
>> +}
>> +HSTATE_ATTR_WO(demote);
>> +
> 
