Message-ID: <20210219111703.GA20286@linux>
Date: Fri, 19 Feb 2021 12:17:11 +0100
From: Oscar Salvador <osalvador@...e.de>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
David Hildenbrand <david@...hat.com>,
Muchun Song <songmuchun@...edance.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: Make alloc_contig_range handle free hugetlb pages
On Fri, Feb 19, 2021 at 11:55:00AM +0100, Michal Hocko wrote:
> It is not the lock that I care about but more about the counters. The
> intention was that there is a single place to handle both enqueueing
> and dequeueing, as not all places require counters to be updated, e.g.
> migration, which just replaces one page with another.
I see.
alloc_fresh_huge_page()->prep_new_huge_page() increments the
h->nr_huge_pages{_node} counters.
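For reference, the only part of prep_new_huge_page() that matters here is
the counter bump it does for the freshly allocated page (paraphrased from
memory, not a verbatim excerpt):

	spin_lock(&hugetlb_lock);
	h->nr_huge_pages++;
	h->nr_huge_pages_node[nid]++;
	spin_unlock(&hugetlb_lock);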
Which means:
> new_page = alloc_fresh_huge_page();
> if (!new_page)
> goto fail;
> spin_lock(hugetlb_lock);
> if (!PageHuge(old_page)) {
> /* freed from under us, nothing to do */
> __update_and_free_page(new_page);
Here we need update_and_free_page(), otherwise we would be leaving behind
the increment done by prep_new_huge_page(), i.e. a stale value in
h->nr_huge_pages{_node} for a page that no longer exists.
> goto unlock;
> }
> list_del(&old_page->lru);
> __update_and_free_page(old_page);
Same here.
> __enqueue_huge_page(new_page);
This is ok since h->free_huge_pages{_node} do not need to be updated: the
old page leaves the free list as the new one is enqueued, so the counts
stay balanced.
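
So, putting both corrections together, what I have in mind is roughly the
sketch below. It is only a sketch: the surrounding function, the fail/unlock
labels and the alloc_fresh_huge_page() arguments are assumed, and
__enqueue_huge_page is the counter-less helper from your snippet, not
something that exists in the tree today.

	new_page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL);
	if (!new_page)
		goto fail;

	spin_lock(&hugetlb_lock);
	if (!PageHuge(old_page)) {
		/*
		 * Freed from under us, nothing to do for old_page, but use
		 * update_and_free_page() so the h->nr_huge_pages{_node}
		 * bump from prep_new_huge_page() is undone for new_page.
		 */
		update_and_free_page(h, new_page);
		goto unlock;
	}

	/* Dissolve the old free page and drop nr_huge_pages{_node} for it. */
	list_del(&old_page->lru);
	update_and_free_page(h, old_page);

	/*
	 * old_page left the free list and new_page replaces it, so
	 * h->free_huge_pages{_node} stay balanced without touching them.
	 */
	__enqueue_huge_page(new_page);
unlock:
	spin_unlock(&hugetlb_lock);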
--
Oscar Salvador
SUSE L3