Message-ID: <20201221102703.GA15804@linux>
Date: Mon, 21 Dec 2020 11:27:07 +0100
From: Oscar Salvador <osalvador@...e.de>
To: Muchun Song <songmuchun@...edance.com>
Cc: corbet@....net, mike.kravetz@...cle.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
paulmck@...nel.org, mchehab+huawei@...nel.org,
pawan.kumar.gupta@...ux.intel.com, rdunlap@...radead.org,
oneukum@...e.com, anshuman.khandual@....com, jroedel@...e.de,
almasrymina@...gle.com, rientjes@...gle.com, willy@...radead.org,
mhocko@...e.com, song.bao.hua@...ilicon.com, david@...hat.com,
naoya.horiguchi@....com, duanxiongchun@...edance.com,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v10 04/11] mm/hugetlb: Defer freeing of HugeTLB pages
On Thu, Dec 17, 2020 at 08:12:56PM +0800, Muchun Song wrote:
> In the subsequent patch, we will allocate the vmemmap pages when freeing
> HugeTLB pages. But update_and_free_page() is called from a non-task
> context (and holds hugetlb_lock), so we can defer the actual freeing to
> a workqueue to avoid using GFP_ATOMIC to allocate the vmemmap pages.
I think we would benefit from a more complete changelog; at least I had
to stare at the code for a while in order to grasp what we are trying
to do and the reasons behind it.
> +static void __free_hugepage(struct hstate *h, struct page *page);
> +
> +/*
> + * As update_and_free_page() is called from a non-task context (and holds
> + * hugetlb_lock), we can defer the actual freeing to a workqueue to avoid
> + * using GFP_ATOMIC to allocate a lot of vmemmap pages.
The above implies that update_and_free_page() is __always__ called from a
non-task context, but that is not always the case, is it?
> +static void update_hpage_vmemmap_workfn(struct work_struct *work)
> {
> - int i;
> + struct llist_node *node;
> + struct page *page;
>
> + node = llist_del_all(&hpage_update_freelist);
> +
> + while (node) {
> + page = container_of((struct address_space **)node,
> + struct page, mapping);
> + node = node->next;
> + page->mapping = NULL;
> + __free_hugepage(page_hstate(page), page);
> +
> + cond_resched();
> + }
> +}
> +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
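For readers following this in the archive: the producer side of that
llist lives elsewhere in the patch. As I read it, it boils down to
something like the sketch below (surrounding guards elided; the exact
shape is my reconstruction, not a verbatim quote):

	static void update_and_free_page(struct hstate *h, struct page *page)
	{
		...
		/*
		 * Reuse page->mapping as an llist_node: the page is on its
		 * way to being freed, so nothing else inspects ->mapping.
		 * llist_add() returns true only when the list was empty
		 * beforehand, so the work item is scheduled once per batch.
		 */
		if (llist_add((struct llist_node *)&page->mapping,
			      &hpage_update_freelist))
			schedule_work(&hpage_update_work);
	}

That is what the container_of((struct address_space **)node, ...) in the
work function above pairs with.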
I wonder if this should be moved to hugetlb_vmemmap.c
> +/*
> + * This is where the call to allocate vmemmap pages will be inserted.
> + */
I think this should go in the changelog.
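For cross-reference once the follow-up lands: judging by the rest of the
series, that placeholder presumably becomes a call at the top of
__free_hugepage() (quoted below), along the lines of this sketch;
alloc_huge_page_vmemmap() is my assumption for the helper name:

	static void __free_hugepage(struct hstate *h, struct page *page)
	{
		/* Re-populate the vmemmap before the pages go back to buddy. */
		alloc_huge_page_vmemmap(h, page);
		...
	}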
> +static void __free_hugepage(struct hstate *h, struct page *page)
> +{
> + int i;
> +
> for (i = 0; i < pages_per_huge_page(h); i++) {
> page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
> 1 << PG_referenced | 1 << PG_dirty |
> @@ -1313,13 +1377,17 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> set_page_refcounted(page);
> if (hstate_is_gigantic(h)) {
> /*
> - * Temporarily drop the hugetlb_lock, because
> - * we might block in free_gigantic_page().
> + * Temporarily drop the hugetlb_lock only when this type of
> + * HugeTLB page does not support vmemmap optimization (which
> + * context do not hold the hugetlb_lock), because we might
> + * block in free_gigantic_page().
"
/*
* Temporarily drop the hugetlb_lock, because we might block
* in free_gigantic_page(). Only drop it in case the vmemmap
* optimization is disabled, since that context does not hold
* the lock.
*/
" ?
Oscar Salvador
SUSE L3