Message-ID: <20230712090526.thk2l7sbdcdsllfi@techsingularity.net>
Date:   Wed, 12 Jul 2023 10:05:26 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Arjan Van De Ven <arjan@...ux.intel.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>,
        Johannes Weiner <jweiner@...hat.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Pavel Tatashin <pasha.tatashin@...een.com>,
        Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning

On Tue, Jul 11, 2023 at 01:19:46PM +0200, Michal Hocko wrote:
> On Mon 10-07-23 14:53:25, Huang Ying wrote:
> > To auto-tune PCP high for each CPU automatically, an
> > allocation/freeing depth based PCP high auto-tuning algorithm is
> > implemented in this patch.
> > 
> > The basic idea behind the algorithm is to detect the repetitive
> > allocation and freeing pattern with short enough period (about 1
> > second).  The period needs to be short to respond to allocation and
> > freeing pattern changes quickly and control the memory wasted by
> > unnecessary caching.
> 
> 1s is an eternity from the allocation POV. Is time based sampling
> really a good choice? I would have expected a natural allocation/freeing
> feedback mechanism. I.e. double the batch size when the batch is
> consumed and needs to be refilled, and shrink it under memory
> pressure (GFP_NOWAIT allocation fails) or when the surplus grows too
> high over batch (e.g. twice as much).  Have you considered something as
> simple as that?
> Quite honestly I am not sure a time based approach is a good choice
> because memory consumption tends to be quite bulky (e.g. application
> starts or workload transitions based on requests).
>  
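
If I read that right, the suggestion is roughly something like the
following (untested and purely illustrative, the struct and helper names
are made up and are not the real per_cpu_pages fields or allocator hooks):

/* Hypothetical mirror of per-CPU pagelist state, for illustration only. */
struct pcp_sketch {
	int batch;	/* pages moved per refill/drain */
	int batch_min;	/* lower bound when shrinking */
	int batch_max;	/* upper bound when growing */
};

/* The list was empty and had to be refilled from the buddy allocator:
 * the whole batch was consumed, so grow it within a hard limit. */
static void pcp_refill_event(struct pcp_sketch *pcp)
{
	if (pcp->batch * 2 <= pcp->batch_max)
		pcp->batch *= 2;
}

/* A GFP_NOWAIT allocation failed (memory pressure) or the cached surplus
 * grew past twice the batch: shrink the batch again. */
static void pcp_pressure_event(struct pcp_sketch *pcp)
{
	if (pcp->batch / 2 >= pcp->batch_min)
		pcp->batch /= 2;
}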

I tend to agree. Tuning based on the recent allocation pattern without frees
would make more sense and would also be symmetric with how free_factor works.
I suspect that the time-based approach may be heavily orientated around the
will-it-scale benchmark. While I only glanced at this, a few things jumped out:

1. Time-based heuristics are not ideal. congestion_wait() and
   friends were an obvious case where time-based heuristics fell apart even
   before the event they waited on was removed. For congestion, it happened to
   work for slow storage for a while but that was about it.  For allocation
   stream detection, a time-based heuristic has a similar problem. If a process
   is allocating heavily then fine, but if it allocates in bursts shorter than
   a second that are more than a second apart, the tuning will not adapt. While
   I do not think it is explicitly mentioned anywhere, my understanding was
   that heuristics like this within mm/ should be driven by explicit events as
   much as possible and not by time.

2. If time were to be used, it would be cheaper to have the simplest possible
   state tracking in the fast paths and decay any resizing of the PCP
   within the vmstat updates (reuse pcp->expire except it applies to local
   pcps); a rough sketch is at the end of this mail. Even this is less than
   ideal as the PCP may be too large for short periods of time, but it may
   also act as a backstop for worst-case behaviour.

3. free_factor is an existing mechanism for detecting recent patterns
   and adapting the PCP sizes. The allocation side should be symmetric
   and the events that should drive it are "refills" on the alloc side and
   "drains" on the free side. Initially it might be easier to have a single
   parameter that scales batch and high up to a limit (sketched at the end
   of this mail).

4. The amount of state tracked seems excessive and increases the size of
   the per-cpu structure by more than one cache line. That in itself may not
   be a problem but the state is tracked on every page alloc/free that goes
   through the fast path and it's relatively complex to track.  That is
   a constant penalty in fast paths that may or may not be relevant to the
   workload, and only sustained bursty allocation streams may offset the
   cost.

5. Memory pressure and reclaim activity do not appear to be accounted
   for and it's not clear if pcp->high is bounded or if it's possible for
   a single PCP to hide a large number of pages from other CPUs sharing the
   same node. The max size of the PCP should probably be explicitly clamped,
   as in the second sketch below.
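
As a rough, untested sketch of point 2, with invented field names (only the
shape matters -- cheap fast-path state, with decay driven from the periodic
vmstat-style update rather than from the alloc/free fast paths):

/* Illustrative only; "expire" here mimics how pcp->expire counts down
 * for remote pcps rather than being the real field. */
struct pcp_decay_sketch {
	int high;		/* current, possibly boosted, high watermark */
	int high_default;	/* baseline high */
	int expire;		/* countdown armed whenever high is boosted */
};

/* Called from the periodic (vmstat-like) update, not the fast paths. */
static void pcp_decay_tick(struct pcp_decay_sketch *pcp)
{
	int excess = pcp->high - pcp->high_default;

	if (excess <= 0)
		return;
	if (pcp->expire > 0 && --pcp->expire > 0)
		return;

	/* The boost has expired: halve the excess each tick until gone. */
	pcp->high = pcp->high_default + excess / 2;
}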
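
And for points 3 and 5, something along these lines (again untested, the
names and the ceiling value are arbitrary): refills on the alloc side and
drains on the free side bump a single scale factor that raises batch and
high together, reclaim activity resets it, and high is always clamped.

#define PCP_FACTOR_MAX	6
#define PCP_HIGH_CLAMP	(32 * 1024)	/* example ceiling only, in pages */

struct pcp_scale_sketch {
	int batch_default, high_default;
	int batch, high;
	int factor;		/* single scale parameter, 0..PCP_FACTOR_MAX */
};

static void pcp_scale_apply(struct pcp_scale_sketch *pcp)
{
	pcp->batch = pcp->batch_default << pcp->factor;
	pcp->high = pcp->high_default << pcp->factor;
	if (pcp->high > PCP_HIGH_CLAMP)
		pcp->high = PCP_HIGH_CLAMP;
}

/* A refill (alloc side) or drain (free side) suggests the working set is
 * larger than the current batch/high: scale up, bounded by the factor max. */
static void pcp_scale_event(struct pcp_scale_sketch *pcp)
{
	if (pcp->factor < PCP_FACTOR_MAX)
		pcp->factor++;
	pcp_scale_apply(pcp);
}

/* Memory pressure or reclaim activity on the zone resets the scaling. */
static void pcp_scale_reset(struct pcp_scale_sketch *pcp)
{
	pcp->factor = 0;
	pcp_scale_apply(pcp);
}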

-- 
Mel Gorman
SUSE Labs
