Message-ID: <ZLE0aF/pT9zZeoGt@dhcp22.suse.cz>
Date:   Fri, 14 Jul 2023 13:41:28 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Arjan Van De Ven <arjan@...ux.intel.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>,
        Johannes Weiner <jweiner@...hat.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Pavel Tatashin <pasha.tatashin@...een.com>,
        Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning

On Wed 12-07-23 10:05:26, Mel Gorman wrote:
> On Tue, Jul 11, 2023 at 01:19:46PM +0200, Michal Hocko wrote:
> > On Mon 10-07-23 14:53:25, Huang Ying wrote:
> > > To auto-tune PCP high for each CPU, an allocation/freeing depth
> > > based PCP high auto-tuning algorithm is implemented in this patch.
> > > 
> > > The basic idea behind the algorithm is to detect repetitive
> > > allocation and freeing patterns within a short enough period (about
> > > 1 second).  The period needs to be short to respond to allocation
> > > and freeing pattern changes quickly and to limit the memory wasted
> > > by unnecessary caching.
> > 
> > 1s is an eternity from the allocation POV. Is time based sampling
> > really a good choice? I would have expected a natural allocation/freeing
> > feedback mechanism, i.e. double the batch size when the batch is
> > consumed and needs to be refilled, and shrink it under memory
> > pressure (GFP_NOWAIT allocation fails) or when the surplus grows too
> > high over the batch (e.g. twice as much). Have you considered something
> > as simple as that?
> >
> > Quite honestly, I am not sure a time based approach is a good choice,
> > because memory consumption tends to be quite bulky (e.g. application
> > starts or workload transitions based on requests).
> >  
> 
> I tend to agree. Tuning based on the recent allocation pattern without frees
> would make more sense and also be symmetric with how free_factor works. I
> suspect that the time-based approach may be heavily orientated around the
> will-it-scale benchmark. While I only glanced at this, a few things jumped out
> 
> 1. Time-based heuristics are not ideal. congestion_wait() and
>    friends were an obvious case where time-based heuristics fell apart even
>    before the event they waited on was removed. For congestion, it happened
>    to work for slow storage for a while, but that was about it. For
>    allocation stream detection, it has a similar problem. If a process is
>    allocating heavily, then fine; if it allocates in bursts shorter than a
>    second that are more than one second apart, then it will not adapt. While
>    I do not think it is explicitly mentioned anywhere, my understanding was
>    that heuristics like this within mm/ should be driven by explicit events
>    as much as possible and not by time.

Agreed. I would also like to point out that it is important to identify
which events we should actually care about. Remember that the primary
motivation of the tuning is to reduce the lock contention. That being
said, a streaming or bursty demand for memory is much less of a problem
if it doesn't actually cause said contention, right? So any auto-tuning
should take that into account as well and not inflate the batch in the
absence of contention. That of course means that a solely deallocation
based monitoring is not sufficient on its own.
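
To make that concrete, here is a minimal, purely illustrative sketch of
such an event-driven, contention-gated feedback loop. All names are
hypothetical and this is not the actual mm/page_alloc.c code; in a real
implementation the contention signal could come from something like
spin_is_contended() on the zone lock:

/*
 * Illustrative sketch only: hypothetical fields and helpers.
 * Grow the batch on refill, but only when the zone lock was
 * actually contended; shrink it on memory pressure or when the
 * cached surplus grows past twice the batch.
 */
struct pcp_sketch {
	unsigned int batch;	/* current refill batch size */
	unsigned int count;	/* pages currently cached on the PCP list */
	unsigned int batch_min;	/* lower clamp */
	unsigned int batch_max;	/* upper clamp */
};

/* The PCP list ran dry and we had to take the zone lock to refill. */
static void pcp_refill_event(struct pcp_sketch *pcp, bool lock_contended)
{
	/* Inflating the batch only pays off if it removes real contention. */
	if (lock_contended && pcp->batch < pcp->batch_max)
		pcp->batch *= 2;
}

/* A GFP_NOWAIT allocation failed, i.e. we are under memory pressure. */
static void pcp_pressure_event(struct pcp_sketch *pcp)
{
	if (pcp->batch > pcp->batch_min)
		pcp->batch /= 2;
}

/* A page was freed back to the PCP list. */
static void pcp_free_event(struct pcp_sketch *pcp)
{
	/* Shrink when the surplus grows too high over the batch (2x). */
	if (pcp->count > 2 * pcp->batch && pcp->batch > pcp->batch_min)
		pcp->batch /= 2;
}

The point being that every adjustment is driven by an explicit allocator
event (a refill, a failed GFP_NOWAIT allocation, a free) rather than by
a timer, and the batch only ever grows when the contention we are trying
to remove has actually been observed.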

-- 
Michal Hocko
SUSE Labs
