Message-ID: <62d740f2-f8df-40df-b624-36e099ec1671@arm.com>
Date: Tue, 5 Mar 2024 07:41:57 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Barry Song <21cnbao@...il.com>
Cc: Matthew Wilcox <willy@...radead.org>, David Hildenbrand
<david@...hat.com>, Andrew Morton <akpm@...ux-foundation.org>,
Huang Ying <ying.huang@...el.com>, Gao Xiang <xiang@...nel.org>,
Yu Zhao <yuzhao@...gle.com>, Yang Shi <shy828301@...il.com>,
Michal Hocko <mhocko@...e.com>, Kefeng Wang <wangkefeng.wang@...wei.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from
swap_cluster_info:flags
On 04/03/2024 05:42, Barry Song wrote:
> On Mon, Mar 4, 2024 at 5:52 PM Barry Song <21cnbao@...il.com> wrote:
>>
>> On Sat, Mar 2, 2024 at 6:08 AM Ryan Roberts <ryan.roberts@....com> wrote:
>>>
>>> On 01/03/2024 16:44, Ryan Roberts wrote:
>>>> On 01/03/2024 16:31, Matthew Wilcox wrote:
>>>>> On Fri, Mar 01, 2024 at 04:27:32PM +0000, Ryan Roberts wrote:
>>>>>> I've implemented the batching as David suggested, and I'm pretty confident it's
>>>>>> correct. The only problem is that during testing I can't provoke the code to
>>>>>> take the path. I've been poring through the code but struggling to figure out
>>>>>> in what situation the swap entry passed to free_swap_and_cache() would still
>>>>>> have a cached folio. Does anyone have any idea?
>>>>>>
>>>>>> This is the original (unbatched) function, after my change, which caused David's
>>>>>> concern that we would end up calling __try_to_reclaim_swap() far too much:
>>>>>>
>>>>>> int free_swap_and_cache(swp_entry_t entry)
>>>>>> {
>>>>>>         struct swap_info_struct *p;
>>>>>>         unsigned char count;
>>>>>>
>>>>>>         if (non_swap_entry(entry))
>>>>>>                 return 1;
>>>>>>
>>>>>>         p = _swap_info_get(entry);
>>>>>>         if (p) {
>>>>>>                 count = __swap_entry_free(p, entry);
>>>>>>                 if (count == SWAP_HAS_CACHE)
>>>>>>                         __try_to_reclaim_swap(p, swp_offset(entry),
>>>>>>                                               TTRS_UNMAPPED | TTRS_FULL);
>>>>>>         }
>>>>>>         return p != NULL;
>>>>>> }
>>>>>>
>>>>>> The trouble is, whenever it's called, count is always 0, so
>>>>>> __try_to_reclaim_swap() never gets called.
>>>>>>
>>>>>> My test case is allocating 1G anon memory, then doing madvise(MADV_PAGEOUT) over
>>>>>> it, then doing either munmap() or madvise(MADV_FREE), both of which cause this
>>>>>> function to be called for every PTE; but count is always 0 after
>>>>>> __swap_entry_free(), so __try_to_reclaim_swap() is never called. I've tried
>>>>>> order-0 as well as PTE- and PMD-mapped 2M THP.
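>>>>>>
>>>>>> For reference, the test is shaped roughly like the below - a minimal
>>>>>> sketch rather than the exact harness I'm running, with error handling
>>>>>> omitted:
>>>>>>
>>>>>> #include <string.h>
>>>>>> #include <sys/mman.h>
>>>>>>
>>>>>> #define SZ (1UL << 30)  /* 1G of anon memory */
>>>>>>
>>>>>> int main(void)
>>>>>> {
>>>>>>         char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
>>>>>>                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>>>>
>>>>>>         memset(p, 1, SZ);               /* populate all the PTEs */
>>>>>>         madvise(p, SZ, MADV_PAGEOUT);   /* push everything out to swap */
>>>>>>         madvise(p, SZ, MADV_FREE);      /* or munmap(p, SZ); either way,
>>>>>>                                            free_swap_and_cache() runs
>>>>>>                                            for every PTE */
>>>>>>         return 0;
>>>>>> }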
>>>>>
>>>>> I think you have to page it back in again, then it will have an entry in
>>>>> the swap cache. Maybe. I know little about anon memory ;-)
>>>>
>>>> Ahh, I was under the impression that the original folio is put into the swap
>>>> cache at swap-out, then (I guess) it's removed once the IO is complete? I'm sure
>>>> I'm miles out... what exactly is the lifecycle of a folio going through swap-out?
>>>>
>>>> I guess I can try forking after swap-out, faulting the memory back in in the
>>>> child and exiting, then doing the munmap in the parent. I guess that could
>>>> force it? Thanks for the tip - I'll have a play.
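>>>>
>>>> In code, I'm imagining a fragment like this at the end of the test
>>>> (after the MADV_PAGEOUT, replacing the final madvise/munmap step;
>>>> hypothetical and untested, and it additionally needs <unistd.h> and
>>>> <sys/wait.h>):
>>>>
>>>>         if (fork() == 0) {
>>>>                 /* Child: fault the range back in. The folios should be
>>>>                  * read into the swap cache and stay there, because the
>>>>                  * parent still references the swap entries. */
>>>>                 for (char *q = p; q < p + SZ; q += 4096)
>>>>                         (void)*(volatile char *)q;
>>>>                 _exit(0);
>>>>         }
>>>>         wait(NULL);
>>>>         munmap(p, SZ);  /* parent: count should now be SWAP_HAS_CACHE */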
>>>
>>> That has sort of solved it; the only problem now is that all the folios in the
>>> swap cache are small (because I don't have Barry's large swap-in series). So
>>> really I need to figure out how to avoid removing the folio from the cache in
>>> the first place...
>>
>> I am quite sure we have a chance to hit a large folio in the swapcache even
>> when using zRAM - a sync swap device - and even during swap-out.
>>
>> I have a test case as below:
>> 1. two threads to run MADV_PAGEOUT
>> 2. two threads to read data being swapped-out
>>
>> In do_swap_page(), from time to time, I can hit a large folio in the swapcache.
>>
>> There is a short time window between add_to_swap() and __remove_mapping()
>> in vmscan during which a large folio is still in the swapcache.
>>
>> So Ryan, I guess you can trigger this by adding one more thread doing
>> MADV_DONTNEED, so that zap_pte_range() gets exercised?
>
> Ryan, I have modified my test case to have 4 threads:
> 1. MADV_PAGEOUT
> 2. MADV_DONTNEED
> 3. write data
> 4. read data
>
> and pushed the code here so that you can get it:
> https://github.com/BarrySong666/swaptest/blob/main/swptest.c
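>
> In rough shape it's something like the below - just a sketch of the
> structure (the buffer size here is arbitrary); the code at the link
> above is the real thing:
>
> #include <pthread.h>
> #include <string.h>
> #include <sys/mman.h>
>
> #define SZ (64UL << 20)
>
> static char *buf;
>
> static void *pageout_fn(void *arg)
> {
>         for (;;)                /* 1. swap the buffer out */
>                 madvise(buf, SZ, MADV_PAGEOUT);
> }
>
> static void *dontneed_fn(void *arg)
> {
>         for (;;)                /* 2. zap PTEs via zap_pte_range() */
>                 madvise(buf, SZ, MADV_DONTNEED);
> }
>
> static void *write_fn(void *arg)
> {
>         for (;;)                /* 3. write data */
>                 memset(buf, 1, SZ);
> }
>
> static void *read_fn(void *arg)
> {
>         for (;;)                /* 4. read data, faulting it back in */
>                 for (unsigned long i = 0; i < SZ; i += 4096)
>                         (void)*(volatile char *)(buf + i);
> }
>
> int main(void)
> {
>         pthread_t t[4];
>
>         buf = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
>                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>         pthread_create(&t[0], NULL, pageout_fn, NULL);
>         pthread_create(&t[1], NULL, dontneed_fn, NULL);
>         pthread_create(&t[2], NULL, write_fn, NULL);
>         pthread_create(&t[3], NULL, read_fn, NULL);
>         pthread_join(t[0], NULL);
>         return 0;
> }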
Thanks for this, Barry!
>
> I can reproduce the issue in zap_pte_range() in just a couple of minutes.
>
>>
>>
>>>
>>>>
>>>>>
>>>>> If that doesn't work, perhaps use tmpfs, and use some memory pressure to
>>>>> force that to swap?
>>>>>
>>>>>> I'm guessing the swapcache was already reclaimed as part of MADV_PAGEOUT? I'm
>>>>>> using a block ram device as my backing store - I think this does synchronous IO,
>>>>>> so perhaps if I had a real block device with async IO I might have more luck?
>>>>>> Just a guess...
>>>>>>
>>>>>> Or perhaps this code path is a corner case? In which case, perhaps it's not worth
>>>>>> adding the batching optimization after all?
>>>>>>
>>>>>> Thanks,
>>>>>> Ryan
>>>>>>
>>>>
>
> Thanks
> Barry