Message-ID: <438d6f6d-2571-69d9-844e-9af9e6b4f820@intel.com>
Date: Wed, 19 Jul 2023 10:09:53 +0800
From: Yin Fengwei <fengwei.yin@...el.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
CC: Yu Zhao <yuzhao@...gle.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <akpm@...ux-foundation.org>,
<willy@...radead.org>, <david@...hat.com>, <ryan.roberts@....com>,
<shy828301@...il.com>, Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC PATCH v2 3/3] mm: mlock: update mlock_pte_range to handle
large folio
On 7/19/23 10:00, Yosry Ahmed wrote:
> On Tue, Jul 18, 2023 at 6:57 PM Yin Fengwei <fengwei.yin@...el.com> wrote:
>>
>>
>> On 7/19/23 09:52, Yosry Ahmed wrote:
>>> On Tue, Jul 18, 2023 at 6:32 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>>>> On Tue, Jul 18, 2023 at 4:47 PM Yin Fengwei <fengwei.yin@...el.com> wrote:
>>>>>
>>>>>
>>>>> On 7/19/23 06:48, Yosry Ahmed wrote:
>>>>>> On Sun, Jul 16, 2023 at 6:58 PM Yin Fengwei <fengwei.yin@...el.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 7/17/23 08:35, Yu Zhao wrote:
>>>>>>>> On Sun, Jul 16, 2023 at 6:00 PM Yin, Fengwei <fengwei.yin@...el.com> wrote:
>>>>>>>>> On 7/15/2023 2:06 PM, Yu Zhao wrote:
>>>>>>>>>> There is a problem here that I didn't have time to elaborate on: we
>>>>>>>>>> can't mlock() a folio that is within the range but not fully mapped,
>>>>>>>>>> because this folio can be on the deferred split queue. When the split
>>>>>>>>>> happens, those unmapped folios (not mapped by this vma but mapped
>>>>>>>>>> into other vmas) will be stranded on the unevictable lru.
>>>>>>>>> This should be fine unless I missed something. During a large folio split,
>>>>>>>>> unmap_folio() will migrate (anon) / unmap (file) the folio, and the folio
>>>>>>>>> will be munlocked in unmap_folio(). So the head/tail pages will always be
>>>>>>>>> evictable.
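
For context, a much-simplified sketch of unmap_folio() from mm/huge_memory.c
of that era (the _sketch name is illustrative, and the TTU flag set is
abbreviated; the real code also passes TTU_SYNC):

static void unmap_folio_sketch(struct folio *folio)
{
        enum ttu_flags flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;

        /*
         * Anon folios need migration entries to be preserved across the
         * split; file folios can simply be unmapped and faulted back in
         * later. Per the discussion above, the folio is also munlocked
         * on this path, so head/tail pages come out of the split
         * evictable.
         */
        if (folio_test_anon(folio))
                try_to_migrate(folio, flags);
        else
                try_to_unmap(folio, flags | TTU_IGNORE_MLOCK);
}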
>>>>>>>> It's close but not entirely accurate: munlock can fail on isolated folios.
>>>>>>> Yes. munlock just clears the PG_mlocked bit but leaves PG_unevictable set.
>>>>>>>
>>>>>>> Could this also happen with a normal 4K page? I mean, when a user tries to
>>>>>>> munlock a normal 4K page while that page is isolated, does the page end up
>>>>>>> unevictable?
>>>>>> Looks like it is possible. If cpu1 is in __munlock_folio() and
>>>>>> cpu2 is isolating the folio for any purpose:
>>>>>>
>>>>>> cpu1                              cpu2
>>>>>>                                   isolate folio
>>>>>> folio_test_clear_lru() // 0
>>>>>>                                   putback folio // add to unevictable list
>>>>>> folio_test_clear_mlocked()
>>>>> Yes. Yu showed this sequence to me in another email. I thought putback_lru()
>>>>> could correct the non-mlocked but unevictable folio, but it doesn't because
>>>>> of this race.
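
To make the window concrete, here is a much-simplified sketch of the
__munlock_folio() ordering (illustrative only: locking, statistics and the
mlock_count bookkeeping are all elided):

static void __munlock_folio_sketch(struct folio *folio)
{
        /* Step 1: try to isolate; fails if another CPU isolated first. */
        bool isolated = folio_test_clear_lru(folio);

        /*
         * Step 2: PG_mlocked is cleared only after the isolation attempt.
         * If isolation failed, the other CPU's putback can run between
         * the two flag operations, see PG_mlocked still set, and park
         * the folio on the unevictable list.
         */
        folio_test_clear_mlocked(folio);

        if (isolated)
                folio_set_lru(folio);   /* put the folio back */
}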
>>>> (+Hugh Dickins for vis)
>>>>
>>>> Yu, I am not familiar with the split_folio() case, so I am not sure whether
>>>> it is the exact same race I described above.
>>>>
>>>> Can you confirm whether or not doing folio_test_clear_mlocked() before
>>>> folio_test_clear_lru() would fix the race you are referring to? IIUC,
>>>> in this case, we make sure we clear PG_mlocked before we try to
>>>> clear PG_lru. If we fail to clear PG_lru, then someone else has the folio
>>>> isolated after we cleared PG_mlocked, so we can be sure that when they
>>>> put the folio back it will be correctly made evictable.
>>>>
>>>> Is my understanding correct?
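
In code form, the proposed reordering would look roughly like this (a
hypothetical sketch, not a tested patch; as the follow-up below shows, it
turns out not to be sufficient on its own):

static void __munlock_folio_proposed(struct folio *folio)
{
        /* Clear PG_mlocked first ... */
        folio_test_clear_mlocked(folio);

        /*
         * ... then attempt isolation. If this fails, whoever holds the
         * folio isolated is expected to re-check evictability on putback
         * and move the folio off the unevictable list.
         */
        if (folio_test_clear_lru(folio)) {
                /* fix up the LRU placement as needed, then: */
                folio_set_lru(folio);
        }
}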
>>> Hmm, actually this might not be enough. In folio_add_lru() we will
>>> call folio_batch_add_and_move(), which calls lru_add_fn() and *then*
>>> sets PG_lru. Since we check folio_evictable() in lru_add_fn(), the
>>> race can still happen:
>>>
>>>
>>> cpu1                                cpu2
>>>                                     folio_evictable() // false
>>> folio_test_clear_mlocked()
>>> folio_test_clear_lru() // false
>>>                                     folio_set_lru()
>>>
>>> Relying on PG_lru for synchronization might not be enough with the
>>> current code. We might need to revert 2262ace60713 ("mm/munlock:
>>> delete smp_mb() from __pagevec_lru_add_fn()").
>>>
>>> Sorry for going back and forth here, I am thinking out loud.
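
The ordering in question, roughly, as simplified from folio_batch_move_lru()
and lru_add_fn() in mm/swap.c of that era (lruvec locking elided; the _sketch
name is illustrative):

static void folio_batch_move_lru_sketch(struct folio_batch *fbatch,
                                        move_fn_t move_fn)
{
        struct lruvec *lruvec = NULL;   /* relocked per folio in real code */
        int i;

        for (i = 0; i < folio_batch_count(fbatch); i++) {
                struct folio *folio = fbatch->folios[i];

                /* lru_add_fn() checks folio_evictable() and places the
                 * folio on the (un)evictable list ... */
                move_fn(lruvec, folio);
                /* ... and only then is PG_lru set; since commit
                 * 2262ace60713 there is no barrier in between. */
                folio_set_lru(folio);
        }
}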
>>
>> Yes. Currently, the order in lru_add_fn() is not correct.
>>
>> I think we should move folio_test_clear_lru(folio) into the
>> lru-locked range, as the lru lock was expected to be used for
>> synchronization here. Check the comment in lru_add_fn().
>
> Right, I am wondering if it would be better to just revert
> 2262ace60713 and rely on the memory barrier and operations ordering
> instead of the lru lock. The lru lock is heavily contended as-is, so
> avoiding it where possible is preferable, I assume.
My understanding is that setting PG_lru after adding the folio to the lru
list is correct: once folio_set_lru() has run, others can isolate the folio.
But if the folio is not on the lru list yet, what could happen? Isolation
does not require holding the lru lock.
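
For example, folio_isolate_lru() takes the lruvec lock only after it has
already won the PG_lru test-and-clear (simplified sketch; refcount checks
elided):

bool folio_isolate_lru_sketch(struct folio *folio)
{
        bool ret = false;

        /* The flag test-and-clear is the synchronization point;
         * no lru lock is held yet at this point. */
        if (folio_test_clear_lru(folio)) {
                struct lruvec *lruvec;

                folio_get(folio);
                lruvec = folio_lruvec_lock_irq(folio);
                lruvec_del_folio(lruvec, folio);
                unlock_page_lruvec_irq(lruvec);
                ret = true;
        }

        return ret;
}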
>
>>
>>
>> Regards
>>
>> Yin, Fengwei
>>
>>
>>>
>>>> If yes, I can add this fix to my next version of the RFC series to
>>>> rework mlock_count. It would be a lot more complicated with the
>>>> current implementation (as I stated in a previous email).
>>>>
>>>>>>
>>>>>> The page would be stranded on the unevictable list in this case, no?
>>>>>> Maybe we should only try to isolate the page (clear PG_lru) after we
>>>>>> possibly clear PG_mlocked? In this case if we fail to isolate we know
>>>>>> for sure that whoever has the page isolated will observe that
>>>>>> PG_mlocked is clear and correctly make the page evictable.
>>>>>>
>>>>>> This probably would be complicated with the current implementation, as
>>>>>> we first need to decrement mlock_count to determine if we want to
>>>>>> clear PG_mlocked, and to do so we need to isolate the page as
>>>>>> mlock_count overlays page->lru. With the proposal in [1] to rework
>>>>>> mlock_count, it might be much simpler as far as I can tell. I intend
>>>>>> to refresh this proposal soon-ish.
>>>>>>
>>>>>> [1] https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@google.com/
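
(The overlay being referred to, sketched from the struct folio layout in
include/linux/mm_types.h of that era, with unrelated fields omitted:)

struct folio_sketch {
        unsigned long flags;
        union {
                struct list_head lru;   /* valid while on an LRU list */
                struct {
                        void *__filler; /* aligns mlock_count over lru.prev */
                        unsigned int mlock_count; /* valid only while the
                                                     folio is isolated */
                };
        };
        /* ... */
};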
>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Yin, Fengwei
>>>>>>>