Message-ID: <Y9r6LtMOPHfxr7UL@google.com>
Date:   Wed, 1 Feb 2023 15:47:58 -0800
From:   Minchan Kim <minchan@...nel.org>
To:     Chris Goldsworthy <quic_cgoldswo@...cinc.com>
Cc:     Roman Gushchin <roman.gushchin@...ux.dev>,
        Sukadev Bhattiprolu <quic_sukadev@...cinc.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Rik van Riel <riel@...riel.com>,
        Roman Gushchin <guro@...com>, Vlastimil Babka <vbabka@...e.cz>,
        Joonsoo Kim <js1304@...il.com>,
        Georgi Djakov <quic_c_gdjako@...cinc.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm,page_alloc,cma: configurable CMA utilization

Hi Chris,

On Tue, Jan 31, 2023 at 08:06:28PM -0800, Chris Goldsworthy wrote:
> On Tue, Jan 31, 2023 at 03:59:36PM -0800, Roman Gushchin wrote:
> > On Tue, Jan 31, 2023 at 12:10:01PM -0800, Sukadev Bhattiprolu wrote:
> > > On Tue, Jan 31, 2023 at 10:10:40AM -0800, Roman Gushchin wrote:
> > > > Hi Sukadev!
> > > > 
> > > > Can you please share a bit more detail about your setup? E.g., what is
> > > > the zone size, the CMA area size, and the value you want to set your sysctl to?
> > > 
> > > Hi Roman,
> > > 
> > > I currently have a device with 8GB of ZONE_NORMAL and 600MB of CMA. We have a
> > > slightly different implementation and use up all of the available CMA region,
> > > i.e. going forward, we intend to set the ratio to 100 or even higher.
> 
> 
> Hi Roman,
> 
> > It means you want allocations to always be served from a CMA region first?
> 
> Exactly.
> 
> > What's the point of it?
> 
> We're operating in a resource-constrained environment, and we want to maximize
> the amount of free memory / headroom for GFP_KERNEL allocations on our SoCs;
> this headroom is especially important for DMA allocations that use an IOMMU.
> We need a large amount of CMA on our SoCs for various reasons (e.g. for devices
> not upstream of an IOMMU), but whilst that CMA memory is not in use, we want to
> route all GFP_MOVABLE allocations to the CMA regions, which frees up memory
> for GFP_KERNEL allocations.

I like this patch for a different reason, but for the specific problem you
mentioned, how about making the reclaimer/compaction aware of it instead:

IOW, when a GFP_KERNEL/DMA allocation happens but there is not enough memory
in those zones, let's migrate movable pages from those zones into the CMA
area/movable zone, provided those have plenty of free memory.
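
Conceptually, something like the below (pseudocode only; the helper names are
made up to illustrate the idea and are not existing kernel interfaces):

	/*
	 * Pseudocode sketch, not real kernel code: on an unmovable
	 * (GFP_KERNEL/DMA) allocation that finds the zone short on free
	 * pages, move movable pages out of that zone into free CMA or
	 * movable-zone memory instead of reclaiming right away.
	 */
	if (is_unmovable_allocation(gfp_mask) &&
	    zone_short_on_free_pages(zone) &&
	    cma_or_movable_mostly_free()) {
		LIST_HEAD(pages);

		/* isolate a batch of movable pages from the depleted zone */
		isolate_movable_pages_from(zone, &pages);
		/* and migrate them into free CMA / movable zone memory */
		migrate_pages_into_cma_or_movable(&pages);
	}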

I guess you considered this, but did you observe any problems with it?

> 
> > The idea behind the current formula is to keep CMA regions free if there is
> > plenty of other free memory, otherwise treat them on par with other memory.
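
For reference, the check in question is (roughly, paraphrasing __rmqueue() in
mm/page_alloc.c; the exact code differs between kernel versions):

	if (IS_ENABLED(CONFIG_CMA)) {
		/*
		 * Balance movable allocations between regular and CMA
		 * areas by allocating from CMA when over half of the
		 * zone's free memory is in the CMA area.
		 */
		if (alloc_flags & ALLOC_CMA &&
		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
			page = __rmqueue_cma_fallback(zone, order);
			if (page)
				return page;
		}
	}

i.e. CMA is only preferred once it holds more than half of the zone's free
pages; a configurable utilization ratio would presumably turn that fixed
threshold into a tunable one.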
> 
> With the current approach, if we have a large amount of movable memory
> allocated that has not yet gone into the CMA regions, and a DMA use case
> starts that causes the above condition to be met, we would head towards OOM
> conditions, whereas with this change we could have delayed that. Note that
> since we're working on Android, there is a daemon built on top of PSI called
> LMKD that starts killing things under memory pressure (before an OOM is
> actually reached) in order to free up memory. This patch should then reduce
> such kills accordingly, for a better user experience by keeping a larger set
> of background apps alive. When a CMA allocation does occur and pages get
> migrated out, there is a similar reduction in headroom (you probably already
> know this, and know of the FB equivalent made by Johannes Weiner).
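
For context: LMKD registers PSI triggers against /proc/pressure/memory and
gets woken up via poll() when a stall threshold is crossed. A minimal sketch
of that pattern (not lmkd's actual code):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* wake up when memory stalls exceed 150ms within any 1s window */
	const char *trig = "some 150000 1000000";
	struct pollfd pfd;
	int fd;

	fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fd < 0 || write(fd, trig, strlen(trig) + 1) < 0)
		return 1;

	pfd.fd = fd;
	pfd.events = POLLPRI;

	while (poll(&pfd, 1, -1) > 0) {
		if (pfd.revents & POLLERR)
			break;			/* trigger got invalidated */
		if (pfd.revents & POLLPRI)
			printf("memory pressure threshold crossed\n");
	}

	close(fd);
	return 0;
}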
> 
> Thanks,
> 
> Chris.
