Message-ID: <20230717135017.7ro76lsaninbazvf@techsingularity.net>
Date: Mon, 17 Jul 2023 14:50:17 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Michal Hocko <mhocko@...e.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Arjan Van De Ven <arjan@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning
On Mon, Jul 17, 2023 at 05:16:11PM +0800, Huang, Ying wrote:
> Mel Gorman <mgorman@...hsingularity.net> writes:
>
> > Batch should have a much lower maximum than high because it's a deferred cost
> > that gets assigned to an arbitrary task. The worst case is where a process
> > that is a light user of the allocator incurs the full cost of a refill/drain.
> >
> > Again, intuitively this may be a PID control problem for the "Mix" case
> > to estimate the size of high required to minimise drains/allocs as each
> > drain/alloc is potentially a lock contention. The catchall for corner
> > cases would be to decay high from vmstat context based on pcp->expires. The
> > decay would prevent "high" being pinned at an artificially high value
> > without any zone lock contention for prolonged periods of time and also
> > mitigate the worst case due to state being per-cpu. The downside is that
> > "high" would also oscillate for a continuous steady allocation pattern as
> > the PID control might pick an ideal value suitable for a long period of
> > time with the "decay" disrupting that ideal value.
>
> Maybe we can track the minimal value of pcp->count. If it has been
> small enough recently, we can avoid decaying pcp->high, because the
> pages in the PCP are being used for allocations rather than sitting idle.
Implement it as a separate patch. I suspect this type of heuristic will be
very benchmark-specific and the complexity may not be worth it in the
general case.
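
If someone wants to prototype it, a minimal sketch might look like the
following. To be clear, this is illustration only: the struct, the
count_min field and the hook names are hypothetical and not from any
posted patch.

/*
 * Illustrative only: track the recent minimum of pcp->count so the
 * periodic decay of pcp->high can be skipped while the cached pages
 * are actually being consumed.
 */
struct per_cpu_pages_sketch {
	int count;	/* pages currently on the PCP list */
	int count_min;	/* minimum count seen since the last check */
	int high;	/* auto-tuned high watermark */
	int batch;	/* refill/drain batch size */
};

/* Call after pages are removed from the list on the allocation path. */
static void pcp_note_alloc(struct per_cpu_pages_sketch *pcp)
{
	if (pcp->count < pcp->count_min)
		pcp->count_min = pcp->count;
}

/* Call from the periodic tuning hook on the local CPU. */
static void pcp_maybe_decay_high(struct per_cpu_pages_sketch *pcp)
{
	/*
	 * If the list was never drawn down near batch during the last
	 * window, the cached pages were idle: decay high towards batch
	 * to bound the deferred drain cost. Otherwise leave it alone,
	 * as the pages are serving allocations.
	 */
	if (pcp->count_min > pcp->batch) {
		pcp->high >>= 1;
		if (pcp->high < pcp->batch)
			pcp->high = pcp->batch;
	}

	/* Start a new observation window. */
	pcp->count_min = pcp->count;
}
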
>
> Another question is as follows.
>
> For example, on CPU A, a large number of pages are freed and we
> maximize batch and high, so a large number of pages are put in the PCP.
> Then the possible situations are:
>
> a) a large number of pages are allocated on CPU A after some time
> b) a large number of pages are allocated on another CPU B
>
> For a), we want the pages to be kept in the PCP of CPU A as long as
> possible. For b), we want the pages to be kept in the PCP of CPU A for
> as short a time as possible. I think that we need to balance between
> them. What is a reasonable time to keep pages in the PCP without many
> allocations?
>
This would be a case where you're relying on vmstat to drain the PCP after
a period of time, as it is a corner case. You cannot reasonably detect the
pattern on two separate per-cpu lists without either inspecting remote CPU
state or maintaining global state. Either would incur cache miss penalties
that probably cost more than the heuristic saves.
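
For illustration, the vmstat-driven decay would be along these lines. Again
only a sketch: the struct is hypothetical and the pcp->expires semantics (a
countdown reset on each local allocation) are assumed for the example.

/*
 * Illustrative only: decay pcp->high from the periodic vmstat worker.
 * This runs on the local CPU, so no remote per-cpu state is inspected
 * and no global state is maintained.
 */
struct per_cpu_pages_decay_sketch {
	int high;	/* auto-tuned high watermark */
	int batch;	/* refill/drain batch size */
	int expires;	/* intervals left before high is decayed;
			 * assumed to be reset on each local allocation */
};

static void pcp_vmstat_decay(struct per_cpu_pages_decay_sketch *pcp)
{
	if (pcp->expires > 0 && --pcp->expires == 0) {
		/*
		 * No allocations for a while: halve high so a burst of
		 * frees on this CPU does not pin a large number of
		 * pages here when another CPU ends up allocating them.
		 */
		pcp->high >>= 1;
		if (pcp->high < pcp->batch)
			pcp->high = pcp->batch;
	}
}
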
--
Mel Gorman
SUSE Labs