Message-ID: <87sf66f0mf.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 20 Oct 2023 11:12:24 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Gao Xiang <xiang@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Yang Shi <shy828301@...il.com>, Michal Hocko <mhocko@...e.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [PATCH v2 2/2] mm: swap: Swap-out small-sized THP without
splitting

Ryan Roberts <ryan.roberts@....com> writes:
> On 19/10/2023 06:49, Huang, Ying wrote:
>> Ryan Roberts <ryan.roberts@....com> writes:
>>
>>> On 18/10/2023 07:55, Huang, Ying wrote:
>>>> Ryan Roberts <ryan.roberts@....com> writes:
>>>>
>>
>> [snip]
>>
>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>> index a073366a227c..35cbbe6509a9 100644
>>>>> --- a/include/linux/swap.h
>>>>> +++ b/include/linux/swap.h
>>>>> @@ -268,6 +268,12 @@ struct swap_cluster_info {
>>>>> struct percpu_cluster {
>>>>> struct swap_cluster_info index; /* Current cluster index */
>>>>> unsigned int next; /* Likely next allocation offset */
>>>>> + unsigned int large_next[]; /*
>>>>> + * next free offset within current
>>>>> + * allocation cluster for large folios,
>>>>> + * or UINT_MAX if no current cluster.
>>>>> + * Index is (order - 1).
>>>>> + */
>>>>> };
>>>>>
>>>>> struct swap_cluster_list {
>>>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>>>> index b83ad77e04c0..625964e53c22 100644
>>>>> --- a/mm/swapfile.c
>>>>> +++ b/mm/swapfile.c
>>>>> @@ -987,35 +987,70 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>>>>> return n_ret;
>>>>> }
>>>>>
>>>>> -static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
>>>>> +static int swap_alloc_large(struct swap_info_struct *si, swp_entry_t *slot,
>>>>> + unsigned int nr_pages)
>>>>
>>>> This looks hacky. IMO, we should put the allocation logic inside the
>>>> percpu_cluster framework. If the percpu_cluster framework doesn't work
>>>> for you, refactor it first.
>>>
>>> I'm not sure I really understand what you are suggesting - could you elaborate?
>>> What "framework"? I only see a per-cpu data structure and
>>> scan_swap_map_try_ssd_cluster(), which is very much geared towards order-0
>>> allocations.
>>
>> I suggest sharing as much code as possible between order-0 and order >
>> 0 swap entry allocation. I think that we can make
>> scan_swap_map_try_ssd_cluster() work for order > 0 swap entry allocation.
>>
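
To be concrete, the direction I am thinking of looks something like the toy
model below. This is stand-alone userspace C, purely for illustration: the
helper name, MAX_SWAP_ORDER, and the exact field layout are my assumptions,
not actual kernel code.

#include <limits.h>
#include <stdio.h>

#define SWAPFILE_CLUSTER 512
#define MAX_SWAP_ORDER 9	/* assumed upper bound, for illustration */

struct percpu_cluster {
	unsigned int next;			 /* order-0 next free offset */
	unsigned int large_next[MAX_SWAP_ORDER]; /* per-order cursor; index is
						  * (order - 1), UINT_MAX if
						  * no current cluster */
};

/*
 * One path shared by all orders: pick the per-order cursor and hand out
 * 1 << order entries if the current cluster still has room; otherwise
 * tell the caller to take a fresh cluster from the free list.
 */
static int try_alloc_from_cluster(struct percpu_cluster *pc, int order,
				  unsigned int *offset)
{
	unsigned int *next = order ? &pc->large_next[order - 1] : &pc->next;
	unsigned int nr = 1u << order;

	if (*next == UINT_MAX || *next + nr > SWAPFILE_CLUSTER)
		return -1;	/* caller must reserve a new cluster */

	*offset = *next;
	*next += nr;
	return 0;
}

int main(void)
{
	struct percpu_cluster pc = { .next = 0 };
	unsigned int off;
	int i;

	for (i = 0; i < MAX_SWAP_ORDER; i++)
		pc.large_next[i] = UINT_MAX;	/* no current cluster */

	pc.large_next[2 - 1] = 0;	/* pretend an order-2 cluster is reserved */
	if (!try_alloc_from_cluster(&pc, 2, &off))
		printf("order-2 entries at offset %u\n", off);
	return 0;
}
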
>
> [...]
>
>>>>> + /*
>>>>> + * If scan_swap_map_slots() can't find a free cluster, it will
>>>>> + * check si->swap_map directly. To make sure this standby
>>>>> + * cluster isn't taken by scan_swap_map_slots(), mark the swap
>>>>> + * entries bad (occupied). (same approach as discard).
>>>>> + */
>>>>> + memset(si->swap_map + offset + nr_pages, SWAP_MAP_BAD,
>>>>> + SWAPFILE_CLUSTER - nr_pages);
>>>>
>>>> There's an issue with this solution. If the free space of the swap
>>>> device runs low, it's possible that
>>>>
>>>> - some clusters are held in the percpu_cluster of some CPUs, with
>>>>   their remaining swap entries marked as used (SWAP_MAP_BAD)
>>>>
>>>> - there are no free swap entries elsewhere
>>>>
>>>> - nr_swap_pages isn't 0
>>>>
>>>> So, we will still scan the LRU, but swap allocation fails even though
>>>> there is still free swap space.
>
> I'd like to decide how best to solve this problem before figuring out how
> much code I can share between the order-0 and order > 0 allocators.
>
> I see a couple of potential options:
>
> 1) Manipulate nr_swap_pages to account for the entries that are marked
> SWAP_MAP_BAD: when reserving a new per-order/per-cpu cluster, subtract
> SWAPFILE_CLUSTER, then add nr_pages back for each allocation from that
> cluster (sketched after this list).
>
> 2) Don't mark the entries in the reserved cluster as SWAP_MAP_BAD, which
> would allow the scanner to steal (order-0) entries from it. The scanner
> could set a flag in the cluster info to record that it has allocated from
> the cluster, so the next attempt to allocate a high order from it would
> discard it as the cpu's current cluster and take a fresh cluster from the
> free list.
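>
> In case it helps, the accounting I have in mind for option 1 is roughly as
> below (a toy userspace sketch with a plain long; in the kernel,
> nr_swap_pages is an atomic_long_t):
>
> #include <stdio.h>
>
> #define SWAPFILE_CLUSTER 512
>
> static long nr_swap_pages = 4 * SWAPFILE_CLUSTER;
>
> static void reserve_percpu_cluster(void)
> {
> 	/* Hide the whole cluster, including its SWAP_MAP_BAD-marked tail,
> 	 * from the free count as soon as it is reserved. */
> 	nr_swap_pages -= SWAPFILE_CLUSTER;
> }
>
> static void alloc_from_reserved(unsigned int nr_pages)
> {
> 	/* Credit the entries back; the normal allocation accounting will
> 	 * subtract them again, so only the still-reserved remainder of the
> 	 * cluster stays invisible. */
> 	nr_swap_pages += nr_pages;
> }
>
> int main(void)
> {
> 	reserve_percpu_cluster();	/* 2048 -> 1536 */
> 	alloc_from_reserved(4);		/* 1536 -> 1540, before the normal
> 					 * accounting subtracts 4 again */
> 	printf("nr_swap_pages = %ld\n", nr_swap_pages);
> 	return 0;
> }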
>
> While option 2 is a bit more complex, I prefer it as a solution. What do you think?
I think that this is a good choice to start with. We may build more
optimizations on top of it if necessary.
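
For reference, here is how I picture the stealing flag working: a
stand-alone toy model in userspace C, where CLUSTER_FLAG_SCANNED and the
helpers are made-up names for illustration.

#include <stdbool.h>
#include <stdio.h>

#define CLUSTER_FLAG_SCANNED 0x1	/* made-up flag name */

struct toy_cluster {
	unsigned int flags;
};

struct toy_percpu {
	struct toy_cluster *current;	/* per-cpu high-order cluster */
};

/* The order-0 scanner steals an entry: it only needs to mark the cluster. */
static void scanner_steal(struct toy_cluster *c)
{
	c->flags |= CLUSTER_FLAG_SCANNED;
}

/*
 * High-order allocation: if the scanner has allocated from this cluster,
 * discard it as the cpu's current cluster so the caller takes a fresh one
 * from the free list.
 */
static bool high_order_cluster_usable(struct toy_percpu *cpu)
{
	struct toy_cluster *c = cpu->current;

	if (!c || (c->flags & CLUSTER_FLAG_SCANNED)) {
		cpu->current = NULL;	/* refill from the free list */
		return false;
	}
	return true;	/* cluster is still exclusively ours */
}

int main(void)
{
	struct toy_cluster c = { .flags = 0 };
	struct toy_percpu cpu = { .current = &c };

	printf("usable before steal: %d\n", high_order_cluster_usable(&cpu));
	scanner_steal(&c);
	printf("usable after steal:  %d\n", high_order_cluster_usable(&cpu));
	return 0;
}
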
--
Best Regards,
Huang, Ying