Message-ID: <a75ea34a-7512-4169-b987-95f11a7f3dd0@intel.com>
Date: Mon, 18 Mar 2024 18:00:19 +0800
From: "Yin, Fengwei" <fengwei.yin@...el.com>
To: "Huang, Ying" <ying.huang@...el.com>, Ryan Roberts <ryan.roberts@....com>
CC: David Hildenbrand <david@...hat.com>, "linux-mm@...ck.org"
<linux-mm@...ck.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, Gao Xiang <xiang@...nel.org>, Yu Zhao
<yuzhao@...gle.com>, Yang Shi <shy828301@...il.com>, Michal Hocko
<mhocko@...e.com>, Kefeng Wang <wangkefeng.wang@...wei.com>, Barry Song
<21cnbao@...il.com>, Chris Li <chrisl@...nel.org>
Subject: Re: [PATCH v4 5/6] mm: vmscan: Avoid split during shrink_folio_list()

On 3/18/2024 10:16 AM, Huang, Ying wrote:
> Ryan Roberts <ryan.roberts@....com> writes:
>
>> Hi Yin Fengwei,
>>
>> On 15/03/2024 11:12, David Hildenbrand wrote:
>>> On 15.03.24 11:49, Ryan Roberts wrote:
>>>> On 15/03/2024 10:43, David Hildenbrand wrote:
>>>>> On 11.03.24 16:00, Ryan Roberts wrote:
>>>>>> Now that swap supports storing all mTHP sizes, avoid splitting large
>>>>>> folios before swap-out. This benefits performance of the swap-out path
>>>>>> by eliding split_folio_to_list(), which is expensive, and also sets us
>>>>>> up for swapping in large folios in a future series.
>>>>>>
>>>>>> If the folio is partially mapped, we continue to split it since we want
>>>>>> to avoid the extra IO overhead and storage of writing out pages
>>>>>> unnecessarily.
>>>>>>
>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>>>>>> ---
>>>>>> mm/vmscan.c | 9 +++++----
>>>>>> 1 file changed, 5 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>>>> index cf7d4cf47f1a..0ebec99e04c6 100644
>>>>>> --- a/mm/vmscan.c
>>>>>> +++ b/mm/vmscan.c
>>>>>> @@ -1222,11 +1222,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>>>>>  						if (!can_split_folio(folio, NULL))
>>>>>>  							goto activate_locked;
>>>>>>  						/*
>>>>>> -						 * Split folios without a PMD map right
>>>>>> -						 * away. Chances are some or all of the
>>>>>> -						 * tail pages can be freed without IO.
>>>>>> +						 * Split partially mapped folios
>>>>>> +						 * right away. Chances are some or all
>>>>>> +						 * of the tail pages can be freed
>>>>>> +						 * without IO.
>>>>>>  						 */
>>>>>> -						if (!folio_entire_mapcount(folio) &&
>>>>>> +						if (!list_empty(&folio->_deferred_list) &&
>>>>>>  						    split_folio_to_list(folio,
>>>>>>  									folio_list))
>>>>>>  							goto activate_locked;
>>>>>
>>>>> Not sure if we might have to annotate that with data_race().
>>>>
>>>> I asked that exact question to Matthew in another context but didn't get a
>>>> response. There are examples of checking if the deferred list is empty with and
>>>> without data_race() in the code base. But list_empty() is implemented like this:
>>>>
>>>> static inline int list_empty(const struct list_head *head)
>>>> {
>>>> 	return READ_ONCE(head->next) == head;
>>>> }
>>>>
>>>> So I assumed the READ_ONCE() makes everything safe without a lock? Perhaps not
>>>> sufficient for KCSAN?
I don't think READ_ONCE() can replace the lock. It only prevents the
compiler from tearing or refetching that one load; it doesn't serialize
against a concurrent list_add()/list_del() on the deferred list.
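
If the unlocked check is kept, my understanding is the annotation would
simply wrap the read. A minimal sketch (untested, and the vmscan.c
context around it is paraphrased rather than quoted):

	/*
	 * Sketch only: data_race() documents for KCSAN that this
	 * lockless read of the deferred list is intentionally racy.
	 * Writers are still serialized by split_queue_lock, so the
	 * worst case is acting on a momentarily stale answer.
	 */
	if (data_race(!list_empty(&folio->_deferred_list)) &&
	    split_folio_to_list(folio, folio_list))
		goto activate_locked;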
>>>
>>> Yeah, there is only one use of data_race with that list.
>>>
>>> It was added in f3ebdf042df4 ("THP: avoid lock when check whether THP is in
>>> deferred list").
>>>
>>> Looks like that was added right in v1 of that change [1], so my best guess is
>>> that it is not actually required.
>>>
>>> If not required, likely we should just clean up the single user.
>>>
>>> [1]
>>> https://lore.kernel.org/linux-mm/20230417075643.3287513-2-fengwei.yin@intel.com/
>>
>> Do you have any recollection of why you added the data_race() markup?
>
> Per my understanding, it is used to mark that the code intentionally
> accesses folio->_deferred_list without holding the lock, even though
> folio->_deferred_list may be changed in parallel. IIUC, that is what
> data_race() is for. Or is my understanding wrong?
Yes. This is my understanding also.
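
To spell out the difference as I understand it: list_empty() already
uses READ_ONCE() internally, so the load itself cannot be torn; what
data_race() adds is only the "this race is intentional" marking for
KCSAN. Neither one excludes a concurrent update, so the result may be
stale either way. A hypothetical sketch:

	/*
	 * Racy but tolerated: no lock is held, so a parallel
	 * list_add()/list_del() may change the answer right after we
	 * read it, and KCSAN is told not to report that.
	 */
	bool maybe_on_list = data_race(!list_empty(&folio->_deferred_list));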
Regards
Yin, Fengwei
>
> --
> Best Regards,
> Huang, Ying