Message-ID: <20200218085721.GC3420@suse.de>
Date:   Tue, 18 Feb 2020 08:57:21 +0000
From:   Mel Gorman <mgorman@...e.de>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Feng Tang <feng.tang@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.com>, Rik van Riel <riel@...hat.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Dan Williams <dan.j.williams@...el.com>
Subject: Re: [RFC -V2 2/8] autonuma, memory tiering: Rate limit NUMA
 migration throughput

On Tue, Feb 18, 2020 at 04:26:28PM +0800, Huang, Ying wrote:
> From: Huang Ying <ying.huang@...el.com>
> 
> In autonuma memory tiering mode, hot PMEM (persistent memory)
> pages can be migrated to DRAM via autonuma.  But this migration
> incurs some overhead too, so it may sometimes hurt workload
> performance.  To avoid disturbing the workload too much, the
> migration throughput should be rate-limited.
> 
> On the other hand, in some situations, for example when some
> workloads exit, many DRAM pages become free, so pages of other
> workloads can be migrated to DRAM.  To respond quickly to such
> changes, it's better to migrate pages faster.
> 
> To address the above 2 requirements, the following rate limit
> algorithm is used:
> 
> - If there is enough free memory in the DRAM node (that is, > high
>   watermark + 2 * rate limit pages), NUMA migration throughput is
>   not rate-limited, so we can respond quickly to workload changes.
> 
> - Otherwise, count the number of pages that autonuma tries to
>   migrate to a DRAM node; if the count exceeds the limit specified
>   by the user, stop NUMA migration until the next second.
> 
> A new sysctl knob, kernel.numa_balancing_rate_limit_mbps, is added
> so users can specify the limit.  If its value is 0, the default
> value (the high watermark) is used.
> 
> TODO: Add ABI document for new sysctl knob.
> 

I very strongly suggest that this only be done as a last resort and with
supporting data as to why it is necessary. NUMA balancing did have rate
limiting at one point and it was removed when balancing was smart enough
to mostly do the right thing without rate limiting. I posted a series
that reconciled NUMA balancing with the CPU load balancer recently which
further reduced spurious and unnecessary migrations. I would not like
to see rate limiting reintroduced unless there is no other way of fixing
saturation of memory bandwidth due to NUMA balancing. Even if it's
needed as a stopgap while the feature is finalised, it should be
introduced late in the series, with an explanation of why it's
temporarily necessary.

-- 
Mel Gorman
SUSE Labs
