Message-ID: <20201202164834.GV17338@dhcp22.suse.cz>
Date: Wed, 2 Dec 2020 17:48:34 +0100
From: Michal Hocko <mhocko@...e.com>
To: Minchan Kim <minchan@...nel.org>
Cc: David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>, hyesoo.yu@...sung.com,
willy@...radead.org, iamjoonsoo.kim@....com, vbabka@...e.cz,
surenb@...gle.com, pullip.cho@...sung.com, joaodias@...gle.com,
hridya@...gle.com, sumit.semwal@...aro.org, john.stultz@...aro.org,
Brian.Starkey@....com, linux-media@...r.kernel.org,
devicetree@...r.kernel.org, robh@...nel.org,
christian.koenig@....com, linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH v2 2/4] mm: introduce cma_alloc_bulk API
On Wed 02-12-20 08:15:49, Minchan Kim wrote:
> On Wed, Dec 02, 2020 at 04:49:15PM +0100, Michal Hocko wrote:
[...]
> > Well, what I can see is that this new interface is an antipattern to our
> > allocation routines. We tend to control allocations by gfp mask, yet you
> > are introducing a bool parameter to make something faster... What that
> > really means is rather arbitrary. Would it make more sense to teach
> > cma_alloc resp. alloc_contig_range to recognize GFP_NOWAIT, GFP_NORETRY
> > resp. GFP_RETRY_MAYFAIL instead?
>
> If we use cma_alloc, that interface requires "allocate one big memory
> chunk". IOW, the return value is a single struct page and the page is
> expected to be one big contiguous region, so the range cannot contain
> any holes.
> The idea here, however, is to ask for much smaller chunks rather than
> one big contiguous block, so we can skip pages that happen to be pinned
> (long-term or short-term, whatever) and search for other pages in the
> CMA area to avoid a long stall. Thus, it couldn't work with the existing
> cma_alloc API and a simple gfp_mask.
I really do not see that as something alien to the cma_alloc interface.
All you should care about, really, is what size of object you want and
how hard the system should try to get it. If the problem is with the
internal implementation of CMA, i.e. how it chooses a range and how it
deals with pinned pages, then it should be addressed inside the CMA
allocator. I suspect that you are effectively trying to work around
those problems with a side implementation that has a slightly different
API. Or maybe I still do not follow the actual problem.
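
Just to illustrate the gfp based control I have in mind, here is a rough
sketch (the helper and the retry counts below are made up for
illustration, nothing like this exists today):

#include <linux/gfp.h>

/*
 * Illustrative sketch only: let the caller's gfp mask express how
 * persistent the contiguous allocation should be, rather than adding
 * a bool parameter.  The helper and the retry counts are hypothetical.
 */
static unsigned int cma_migration_retries(gfp_t gfp)
{
	if (gfp & __GFP_NORETRY)
		return 0;	/* fail fast and move on to another pfn range */
	if (gfp & __GFP_RETRY_MAYFAIL)
		return 1;	/* one more attempt, then report failure */
	return 5;	/* default: keep the current persistent behaviour */
}

The caller only states how hard the system should try; how the allocator
reacts to a pinned page in the candidate range remains an internal detail.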
> > I am not deeply familiar with the cma allocator so sorry for a
> > potentially stupid question. Why does a bulk interface perform better
> > than repeated calls to cma_alloc? Is this because a failure would help
> > to move on to the next pfn range while a repeated call would have to
> > deal with the same range?
>
> Yub, true, along with other overheads (e.g., migration retries, waiting
> for writeback, PCP/LRU draining IPIs).
Why can't this be implemented in the cma_alloc layer? I mean, you could
cache the failed cases and optimize the search for a proper pfn range.
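
Purely as a sketch of that idea (the hint structure and the helpers
below are made up, not existing code):

#include <linux/types.h>

/*
 * Hypothetical sketch: remember where the last attempt hit a page that
 * could not be isolated/migrated so that the next cma_alloc() call
 * resumes the search past it instead of rescanning from the start.
 */
struct cma_scan_hint {
	unsigned long next_pfn;	/* first pfn to try on the next call */
};

static unsigned long cma_scan_start(struct cma_scan_hint *hint,
				    unsigned long base_pfn,
				    unsigned long end_pfn)
{
	unsigned long start = hint->next_pfn;

	/* wrap back to the beginning once the hint leaves the area */
	if (start < base_pfn || start >= end_pfn)
		start = base_pfn;
	return start;
}

static void cma_note_failed_pfn(struct cma_scan_hint *hint,
				unsigned long failed_pfn)
{
	/* resume the search just past the pinned/unmovable page */
	hint->next_pfn = failed_pfn + 1;
}

That would give repeated cma_alloc calls most of the benefit of a bulk
interface without adding a new entry point.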
--
Michal Hocko
SUSE Labs