Message-ID: <31320BED-0B77-4962-B155-AA09FA3D1E95@nvidia.com>
Date: Mon, 18 Sep 2023 13:17:52 -0400
From: Zi Yan <ziy@...dia.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Ryan Roberts <ryan.roberts@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
"\"Matthew Wilcox (Oracle)\"" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
"\"Yin, Fengwei\"" <fengwei.yin@...el.com>,
Yu Zhao <yuzhao@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Kemeng Shi <shikemeng@...weicloud.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Rohan Puri <rohan.puri15@...il.com>,
Mcgrof Chamberlain <mcgrof@...nel.org>,
Adam Manzanares <a.manzanares@...sung.com>,
John Hubbard <jhubbard@...dia.com>
Subject: Re: [RFC PATCH 4/4] mm/compaction: enable compacting >0 order folios.
On 15 Sep 2023, at 5:41, Baolin Wang wrote:
> On 9/13/2023 12:28 AM, Zi Yan wrote:
>> From: Zi Yan <ziy@...dia.com>
>>
>> Since compaction code can compact >0 order folios, enable it during the
>> process.
>>
>> Signed-off-by: Zi Yan <ziy@...dia.com>
>> ---
>> mm/compaction.c | 25 ++++++++++---------------
>> 1 file changed, 10 insertions(+), 15 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 4300d877b824..f72af74094de 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1087,11 +1087,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>> if (PageCompound(page) && !cc->alloc_contig) {
>> const unsigned int order = compound_order(page);
>> - if (likely(order <= MAX_ORDER)) {
>> - low_pfn += (1UL << order) - 1;
>> - nr_scanned += (1UL << order) - 1;
>> + /*
>> + * Compacting > pageblock_order pages does not improve
>> + * memory fragmentation. Also skip hugetlbfs pages.
>> + */
>> + if (likely(order >= pageblock_order) || PageHuge(page)) {
>
> IMO, if the compound page order is larger than the requested cc->order, we should also fail the isolation, because that also does not improve fragmentation, right?
>
Probably yes. I think the reasoning should be: since compaction is asking for cc->order,
we should not compact folios with orders larger than or equal to that, because
cc->order tells us the maximum free page order is smaller than it; otherwise the
allocation would have happened already. I will add this condition in the next version.
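
Something along these lines, perhaps (untested sketch; I am assuming cc->order > 0
is the right guard so that full compaction with cc->order == -1 keeps the current
behavior):

	if (PageCompound(page) && !cc->alloc_contig) {
		const unsigned int order = compound_order(page);

		/*
		 * Skip compound pages that cannot help the compaction
		 * target: pages at or above pageblock_order, hugetlbfs
		 * pages, and pages whose order is already >= cc->order,
		 * since a free page of cc->order is known not to exist.
		 */
		if (order >= pageblock_order || PageHuge(page) ||
		    (cc->order > 0 && order >= cc->order)) {
			if (order <= MAX_ORDER) {
				low_pfn += (1UL << order) - 1;
				nr_scanned += (1UL << order) - 1;
			}
			goto isolate_fail;
		}
	}
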
>> + if (order <= MAX_ORDER) {
>> + low_pfn += (1UL << order) - 1;
>> + nr_scanned += (1UL << order) - 1;
>> + }
>> + goto isolate_fail;
>> }
>> - goto isolate_fail;
>> }
>> /*
>> @@ -1214,17 +1220,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>> goto isolate_abort;
>> }
>> }
>> -
>> - /*
>> - * folio become large since the non-locked check,
>> - * and it's on LRU.
>> - */
>> - if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
>> - low_pfn += folio_nr_pages(folio) - 1;
>> - nr_scanned += folio_nr_pages(folio) - 1;
>> - folio_set_lru(folio);
>> - goto isolate_fail_put;
>> - }
>
> I do not think you can remove this validation, since the previous validation is lockless. So under the lock, we need to re-check whether the compound page order is larger than pageblock_order or cc->order, and if so, fail the isolation.
This check should go away, but a new order check for large folios should be
added. Will add it. Thanks.
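
Concretely, the locked re-check could look roughly like this (untested sketch;
it would mirror whichever criteria the lockless check above ends up using):

	/*
	 * Re-check under the lock: the folio may have become large
	 * since the lockless check. Skip it if its order no longer
	 * qualifies for migration.
	 */
	if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
		const unsigned int order = folio_order(folio);

		if (order >= pageblock_order ||
		    (cc->order > 0 && order >= cc->order)) {
			low_pfn += folio_nr_pages(folio) - 1;
			nr_scanned += folio_nr_pages(folio) - 1;
			folio_set_lru(folio);
			goto isolate_fail_put;
		}
	}
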
--
Best Regards,
Yan, Zi