Message-Id: <1F67895A-C2CA-481D-AB33-58E8201BCE71@gmail.com>
Date: Thu, 1 Apr 2021 12:21:46 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: Mel Gorman <mgorman@...e.de>
Cc: "Huang, Ying" <ying.huang@...el.com>,
Linux-MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Peter Xu <peterx@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Vlastimil Babka <vbabka@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
Will Deacon <will@...nel.org>,
Michel Lespinasse <walken@...gle.com>,
Arjun Roy <arjunroy@...gle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [RFC] NUMA balancing: reduce TLB flush via delaying mapping on
hint page fault
> On Apr 1, 2021, at 1:38 AM, Mel Gorman <mgorman@...e.de> wrote:
>
> On Wed, Mar 31, 2021 at 09:36:04AM -0700, Nadav Amit wrote:
>>
>>
>>> On Mar 31, 2021, at 6:16 AM, Mel Gorman <mgorman@...e.de> wrote:
>>>
>>> On Wed, Mar 31, 2021 at 07:20:09PM +0800, Huang, Ying wrote:
>>>> Mel Gorman <mgorman@...e.de> writes:
>>>>
>>>>> On Mon, Mar 29, 2021 at 02:26:51PM +0800, Huang Ying wrote:
>>>>>> For NUMA balancing, the hint page fault handler migrates the
>>>>>> faulting page to the accessing node if necessary. During the
>>>>>> migration, the TLB will be shot down on all CPUs that the process
>>>>>> has run on recently, because the hint page fault handler makes the
>>>>>> PTE accessible before the migration is attempted. The overhead of
>>>>>> TLB shootdown is high, so it is better avoided if possible. In
>>>>>> fact, it can be avoided by delaying mapping the page in the PTE
>>>>>> until after the migration. That is what this patch does.
>>>>>>
>>>>>
>>>>> Why would the overhead be high? The page was previously
>>>>> inaccessible, so it's only parallel accesses making forward
>>>>> progress that trigger the need for a flush.
>>>>
>>>> Sorry, I don't understand this. Although the page is inaccessible, the
>>>> threads may access other pages, so TLB flushing is still necessary.
>>>>
>>>
>>> You assert that the overhead of TLB shootdown is high, and yes, it can
>>> be very high, but you also said "the benchmark score has no visible
>>> changes", indicating the TLB shootdown cost is not a major problem for
>>> the workload. That does not mean we should ignore it, though.
>>
>> If you are looking for a benchmark that is negatively affected by NUMA
>> balancing, then IIRC Parsec's dedup is such a workload. [1]
>>
>
> A few questions:
>
> Is Parsec impaired due to NUMA balancing in general or due to TLB
> shootdowns specifically?
TLB shootdowns specifically.
>
> Are you using "gcc-pthreads" for parallelisation and the "native" size
> for Parsec?
Native, as it is the biggest workload, so the impact is most apparent
with it. I don't remember playing with the threading-model
parameters.
>
> Is there any specific thread count that matters, either in
> absolute terms or as a percentage of online CPUs?
IIRC, the impact is greatest when the thread count matches the number
of CPUs (or is perhaps slightly lower).
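
For readers following along, here is a minimal pseudocode sketch of the
idea under discussion, assuming a do_numa_page()-style hint-fault
handler. The helper names (make_pte_accessible(), page_misplaced(),
migrate_page()) are hypothetical placeholders, not the real kernel API,
and this is not the actual patch:

/*
 * Sketch only; the helpers below are hypothetical placeholders,
 * not the real kernel API.
 */

/* Before the patch: the PTE is made accessible up front. */
static void numa_hint_fault_before(struct numa_fault *f)
{
	/* Other CPUs may immediately cache this translation ... */
	make_pte_accessible(f);
	if (page_misplaced(f))
		migrate_page(f);	/* ... so migration must flush them */
}

/* With the patch: mapping is delayed until migration is resolved. */
static void numa_hint_fault_after(struct numa_fault *f)
{
	if (page_misplaced(f) && migrate_page(f))
		return;			/* migration installs the new mapping */
	/* No CPU could have cached an accessible entry: no shootdown. */
	make_pte_accessible(f);
}

Because the window in which the PTE is accessible no longer overlaps
the migration, the remote TLB shootdown can be skipped entirely.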