Message-ID: <ZQu1EhQiV8h5Jsa6@bombadil.infradead.org>
Date: Wed, 20 Sep 2023 20:14:26 -0700
From: Luis Chamberlain <mcgrof@...nel.org>
To: John Hubbard <jhubbard@...dia.com>
Cc: Zi Yan <ziy@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Ryan Roberts <ryan.roberts@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
"Yin, Fengwei" <fengwei.yin@...el.com>,
Yu Zhao <yuzhao@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Rohan Puri <rohan.puri15@...il.com>,
Adam Manzanares <a.manzanares@...sung.com>
Subject: Re: [RFC PATCH 0/4] Enable >0 order folio memory compaction
On Wed, Sep 20, 2023 at 07:05:25PM -0700, John Hubbard wrote:
> On 9/20/23 18:16, Luis Chamberlain wrote:
> > On Wed, Sep 20, 2023 at 05:55:51PM -0700, Luis Chamberlain wrote:
> > > Are there other known recipes to help test this stuff?
> >
> > You know, it got me wondering how memory fragmented a system might
> > get just by running fstests, because, well, we already have that
> > automated in kdevops and it also has LBS support for all the
> > different large block sizes on a 4k sector size. So if we just had a
> > way to "measure" or "quantify" memory fragmentation with a score,
> > we could just tally up how we did after 4 hours of testing for each
> > block size with a set amount of memory on the guest / target node /
> > cloud system.
> >
> > Luis
>
> I thought about it, and here is one possible way to quantify
> fragmentation with just a single number. Take this with some
> skepticism because it is a first draft sort of thing:
>
> a) Let BLOCKS be the number of 4KB pages (or more generally, the number
> of smallest-sized objects allowed) in the area.
>
> b) Let FRAGS be the number of free *or* allocated chunks (no need to
> consider the size of each, as that is automatically taken into
> consideration).
>
> Then:
> fragmentation percentage = (FRAGS / BLOCKS) * 100%
>
> This has some nice properties. For one thing, it's easy to calculate.
> For another, it can discern between these cases:
>
> Assume a 12-page area:
>
> Case 1) 6 pages allocated unevenly:
>
> 1 page allocated | 1 page free | 1 page allocated | 5 pages free | 4 pages allocated
>
> fragmentation = (5 FRAGS / 12 BLOCKS) * 100% = 41.7%
>
> Case 2) 6 pages allocated evenly: every other page is allocated:
>
> fragmentation = (12 FRAGS / 12 BLOCKS) * 100% = 100%
Thanks! Will try this!
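
As a quick sanity check on my side, here is a rough sketch of how I read
that calculation (the helper and the run-length description of the area
are just made up for illustration, nothing that exists in kdevops yet):

#!/usr/bin/env python3
# Hypothetical sketch: score an area described as a run-length list of
# ("alloc" | "free", npages) chunks using the FRAGS / BLOCKS idea above.

def fragmentation_pct(chunks):
    # FRAGS  = number of free or allocated chunks
    # BLOCKS = number of smallest-sized objects (4KB pages) in the area
    frags = len(chunks)
    blocks = sum(npages for _state, npages in chunks)
    return 100.0 * frags / blocks

# Case 1: 6 of 12 pages allocated unevenly -> 5 chunks -> ~41.7%
case1 = [("alloc", 1), ("free", 1), ("alloc", 1), ("free", 5), ("alloc", 4)]

# Case 2: every other page allocated -> 12 chunks -> 100%
case2 = [("alloc", 1), ("free", 1)] * 6

print(fragmentation_pct(case1))  # ~41.7
print(fragmentation_pct(case2))  # 100.0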
BTW stress-ng might also be a nice way to do other pathological things here.
Luis