Message-ID: <41aa727a-7f34-3363-dc5b-a33c161c8933@suse.cz>
Date: Mon, 11 Sep 2017 08:50:01 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: David Rientjes <rientjes@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [patch 2/2] mm, compaction: persistently skip hugetlbfs pageblocks

On 09/11/2017 03:12 AM, David Rientjes wrote:
> On Wed, 23 Aug 2017, Vlastimil Babka wrote:
>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -217,6 +217,20 @@ static void reset_cached_positions(struct zone *zone)
>>>  				pageblock_start_pfn(zone_end_pfn(zone) - 1);
>>>  }
>>>
>>> +/*
>>> + * Hugetlbfs pages should consistently be skipped until updated by the hugetlb
>>> + * subsystem. It is always pointless to compact pages of pageblock_order and
>>> + * the free scanner can reconsider when no longer huge.
>>> + */
>>> +static bool pageblock_skip_persistent(struct page *page, unsigned int order)
>>> +{
>>> +	if (!PageHuge(page))
>>> +		return false;
>>> +	if (order != pageblock_order)
>>> +		return false;
>>> +	return true;
>>
>> Why just HugeTLBfs? There's also no point in migrating/finding free
>> pages in THPs. Actually, any compound page of pageblock order?
>>
>
> Yes, any page where compound_order(page) == pageblock_order would probably
> benefit from the same treatment. I haven't encountered such an issue,
> however, so I thought it was best to restrict it only to hugetlb: hugetlb
> memory usually sits in the hugetlb free pool and seldom gets freed under
> normal conditions even when unmapped, whereas thp is much more likely to be
> unmapped and split. I wasn't sure that it was worth the pageblock skip.

Well, my thinking is that once we start checking page properties when
resetting the skip bits, we might as well try to get the most out of it,
as there's no additional cost.
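
As a rough illustration of what I mean (just a sketch, not an actual
patch; the final form may well differ), the check could be generalized
to any compound page spanning a whole pageblock:

static bool pageblock_skip_persistent(struct page *page)
{
	/*
	 * A compound page covering a whole pageblock (hugetlbfs or THP)
	 * has no base pages to migrate or to hand to the free scanner,
	 * so the skip bit can stay set until the page is split or freed.
	 */
	if (!PageCompound(page))
		return false;

	page = compound_head(page);

	if (compound_order(page) >= pageblock_order)
		return true;

	return false;
}

The reset path and both scanners could then share this check without
caring whether the page is hugetlbfs or THP.
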
>>> +}
>>> +
>>> /*
>>> * This function is called to clear all cached information on pageblocks that
>>> * should be skipped for page isolation when the migrate and free page scanner
>>> @@ -241,6 +255,8 @@ static void __reset_isolation_suitable(struct zone *zone)
>>>  			continue;
>>>  		if (zone != page_zone(page))
>>>  			continue;
>>> +		if (pageblock_skip_persistent(page, compound_order(page)))
>>> +			continue;
>>
>> I like the idea of how persistency is achieved by rechecking in the reset.
>>
>>>
>>>  		clear_pageblock_skip(page);
>>>  	}
>>> @@ -448,13 +464,15 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>>  		 * and the only danger is skipping too much.
>>>  		 */
>>>  		if (PageCompound(page)) {
>>> -			unsigned int comp_order = compound_order(page);
>>> -
>>> -			if (likely(comp_order < MAX_ORDER)) {
>>> -				blockpfn += (1UL << comp_order) - 1;
>>> -				cursor += (1UL << comp_order) - 1;
>>> +			const unsigned int order = compound_order(page);
>>> +
>>> +			if (pageblock_skip_persistent(page, order)) {
>>> +				set_pageblock_skip(page);
>>> +				blockpfn = end_pfn;
>>> +			} else if (likely(order < MAX_ORDER)) {
>>> +				blockpfn += (1UL << order) - 1;
>>> +				cursor += (1UL << order) - 1;
>>>  			}
>>
>> Is this new code (and below) really necessary? The existing code should
>> already lead to the skip bit being set via update_pageblock_skip()?
>>
>
> I wanted to set the persistent pageblock skip regardless of
> cc->ignore_skip_hint without a local change to update_pageblock_skip().

After the first patch, there is no remaining ignore_skip_hint user for
which overriding the flag for some pageblocks (which this effectively
does) would make enough difference to justify the more complicated code.
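
For reference, the cc->ignore_skip_hint guard we are talking about sits
at the top of update_pageblock_skip(); roughly paraphrased from
mm/compaction.c of this era (the cached scanner position updates are
omitted):

static void update_pageblock_skip(struct compact_control *cc,
			struct page *page, unsigned long nr_isolated,
			bool migrate_scanner)
{
	/* Scanners that ignore the skip hints also do not record them */
	if (cc->ignore_skip_hint)
		return;

	if (!page)
		return;

	/* Only mark the pageblock when nothing could be isolated from it */
	if (nr_isolated)
		return;

	set_pageblock_skip(page);

	/* ... the cached migrate/free scanner positions are updated here ... */
}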