Message-ID: <04c98faf-32fe-4d39-03c5-f8dabb5677b1@oracle.com>
Date: Thu, 25 Mar 2021 13:33:12 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Roman Gushchin <guro@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Muchun Song <songmuchun@...edance.com>,
David Rientjes <rientjes@...gle.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Peter Zijlstra <peterz@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
HORIGUCHI NAOYA <naoya.horiguchi@....com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Waiman Long <longman@...hat.com>, Peter Xu <peterx@...hat.com>,
Mina Almasry <almasrymina@...gle.com>,
Hillf Danton <hdanton@...a.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 5/8] hugetlb: call update_and_free_page without hugetlb_lock

On 3/25/21 12:39 PM, Michal Hocko wrote:
> On Thu 25-03-21 10:12:05, Mike Kravetz wrote:
>> On 3/25/21 3:55 AM, Michal Hocko wrote:
>>> On Wed 24-03-21 17:28:32, Mike Kravetz wrote:
>>>> With the introduction of remove_hugetlb_page(), there is no need for
>>>> update_and_free_page to hold the hugetlb lock. Change all callers to
>>>> drop the lock before calling.
>>>>
>>>> With additional code modifications, this will allow loops that
>>>> decrease the huge page pool to drop the hugetlb_lock for each page
>>>> freed, reducing long hold times.
>>>>
>>>> The ugly unlock/lock cycle in free_pool_huge_page will be removed in
>>>> a subsequent patch which restructures free_pool_huge_page.
>>>>
>>>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>>>
>>> Acked-by: Michal Hocko <mhocko@...e.com>
>>>
>>> One minor thing below
>>>
>>> [...]
>>>> @@ -2563,22 +2572,37 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
>>>> nodemask_t *nodes_allowed)
>>>> {
>>>> int i;
>>>> + struct list_head page_list;
>>>> + struct page *page, *next;
>>>>
>>>> if (hstate_is_gigantic(h))
>>>> return;
>>>>
>>>> + /*
>>>> + * Collect pages to be freed on a list, and free after dropping lock
>>>> + */
>>>> + INIT_LIST_HEAD(&page_list);
>>>> for_each_node_mask(i, *nodes_allowed) {
>>>> - struct page *page, *next;
>>>> struct list_head *freel = &h->hugepage_freelists[i];
>>>> list_for_each_entry_safe(page, next, freel, lru) {
>>>> if (count >= h->nr_huge_pages)
>>>> - return;
>>>> + goto out;
>>>> if (PageHighMem(page))
>>>> continue;
>>>> remove_hugetlb_page(h, page, false);
>>>> - update_and_free_page(h, page);
>>>> + INIT_LIST_HEAD(&page->lru);
>>>
>>> What is the point of this INIT_LIST_HEAD? The page has been removed
>>> from the list by remove_hugetlb_page() so it can be added to a new
>>> one without any reinitialization.
>>
>> remove_hugetlb_page just does a list_del. list_del will poison the
>> pointers in page->lru. The following list_add will then complain about
>> list corruption.
>
> Are you sure? list_del followed by list_add is a normal API usage
> pattern AFAIK. INIT_LIST_HEAD is only for the initial setup, before
> first use.
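
For illustration, that point can be modeled in plain userspace C. This
is a simplified sketch of the list helpers, not the real <linux/list.h>;
the poison values and helper bodies below are stand-ins:

#include <stdio.h>

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h)
{
        h->next = h->prev = h;          /* first initialization, before first use */
}

static void list_add(struct list_head *new, struct list_head *head)
{
        /*
         * Both pointers of 'new' are overwritten unconditionally, so
         * any poison left by a prior list_del() is never read.
         */
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
}

static void list_del(struct list_head *entry)
{
        entry->prev->next = entry->next;
        entry->next->prev = entry->prev;
        entry->next = LIST_POISON1;     /* poisoned, as the kernel does */
        entry->prev = LIST_POISON2;
}

int main(void)
{
        struct list_head src, dst, node;

        INIT_LIST_HEAD(&src);
        INIT_LIST_HEAD(&dst);
        INIT_LIST_HEAD(&node);          /* needed only this once */

        list_add(&node, &src);
        list_del(&node);                /* node.next/prev now poisoned */
        list_add(&node, &dst);          /* fine: poison simply overwritten */

        printf("moved: %d\n", dst.next == &node && node.prev == &dst);
        return 0;
}

Note also that the kernel's CONFIG_DEBUG_LIST checks on list_add()
validate the linkage of the list being inserted into, not the stale
pointers of the entry being added, so a del/add cycle does not trip them.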

Sorry for the noise. The INIT_LIST_HEAD is indeed unnecessary.
I must have gotten confused while looking at a corrupted-list splat
during earlier code development.
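
With the reinitialization dropped, the loop body in try_to_free_low()
would presumably reduce to the following (a sketch of the fixup, not the
final patch):

			remove_hugetlb_page(h, page, false);
			list_add(&page->lru, &page_list);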
--
Mike Kravetz