Message-ID: <20240206015543.GH69174@google.com>
Date: Tue, 6 Feb 2024 10:55:43 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Barry Song <21cnbao@...il.com>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>, minchan@...nel.org,
axboe@...nel.dk, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, Barry Song <v-songbaohua@...o.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] zram: easy the allocation of zcomp_strm's buffers with 2
pages
On (24/01/29 10:46), Barry Song wrote:
> On Mon, Jan 15, 2024 at 10:35 AM Sergey Senozhatsky
> <senozhatsky@...omium.org> wrote:
> >
> > On (24/01/06 15:38), Barry Song wrote:
> > > On Sat, Jan 6, 2024 at 9:30 AM Sergey Senozhatsky
> > > <senozhatsky@...omium.org> wrote:
> > > >
> > > > On (24/01/03 13:30), Barry Song wrote:
> > > > > There is no need to keep zcomp_strm's buffers physically contiguous.
> > > > > And, though rarely, an order-1 allocation can fail when the buddy
> > > > > allocator is badly fragmented.
> > > >
[..]
> > Okay, makes sense.
> > Do you see these problems in real life? I don't recall any reports.
>
> I don't have problems with the current zram, which supports only normal pages.
>
> But in our out-of-tree code, we have enhanced zram/zsmalloc to support large
> folio compression/decompression, which makes zram work much better with the
> large anon folios/mTHP work that Ryan Roberts is doing.
>
> I mean, a large folio with, for example, 16 normal pages can be saved as
> one object in zram.
> We have deployed this approach on millions of phones and seen a huge
> improvement in compression ratio and a reduction in CPU consumption. In
> that case we need a larger per-CPU buffer, and we have seen frequent
> allocation failures. That inspired me to send this patch in advance.
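
For reference, a minimal sketch of the kind of change being discussed:
allocating the per-CPU zcomp_strm buffer from vmalloc space instead of as a
physically contiguous order-1 block. The names (zcomp_strm_init(),
zstrm->buffer) follow the drivers/block/zram/zcomp.c layout, but the tfm
setup and the rest of the error handling are omitted, so treat this as an
illustration rather than the actual patch:

    #include <linux/vmalloc.h>

    /*
     * vzalloc() only needs virtually contiguous pages, so it does not
     * depend on a free order-1 block in the buddy allocator and keeps
     * working if the buffer ever grows beyond two pages.
     */
    static int zcomp_strm_init(struct zcomp_strm *zstrm)
    {
            /* previously a physically contiguous 2-page (order-1) allocation */
            zstrm->buffer = vzalloc(2 * PAGE_SIZE);
            if (!zstrm->buffer)
                    return -ENOMEM;
            return 0;
    }
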
May I please ask you to resend this patch with an updated commit message?
E.g. mention CPU offlining/onlining, etc.
[..]
> > > > I also wonder whether Android uses HW compression, in which case we
> > > > may need to have physically contig pages. Not to mention TLB shootdowns
> > > > that virt contig pages add to the picture.
> > >
> > > I don't understand how HW compression and TLB shootdowns are related, as
> > > zram is using a traditional compression API.
> >
> > Oh, those are not related. TLB shootdowns are what would now be added to
> > all compressions/decompressions, so it's a sort of extra cost.
>
> I am sorry, I still don't understand where the TLB shootdowns come from.
> We don't unmap these per-cpu buffers during compression and decompression,
> do we?
> 
> Am I missing something?
No, I guess you are right.
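
For completeness, a rough sketch of why the vmalloc'ed buffer should not add
TLB shootdowns on the compression path: the mapping is torn down only when
the stream itself is freed, e.g. from the CPU-offline hotplug callback, not
on every compress/decompress call. The names below (zcomp_strm_free(),
zcomp_cpu_dead(), comp->stream) mirror the existing zcomp.c helpers, but the
bodies are simplified and only illustrative:

    #include <linux/percpu.h>
    #include <linux/vmalloc.h>

    static void zcomp_strm_free(struct zcomp_strm *zstrm)
    {
            /* the unmap (and TLB flush) happens only here ... */
            vfree(zstrm->buffer);
            zstrm->buffer = NULL;
    }

    int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
    {
            struct zcomp *comp = hlist_entry(node, struct zcomp, node);

            /* ... i.e. on CPU offline, not per compression/decompression */
            zcomp_strm_free(per_cpu_ptr(comp->stream, cpu));
            return 0;
    }
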