Message-ID: <rgbcwa6rcfxpyf75k6voinza7ba2fnsht45kb6ittv4qrbrmyb@i25srryjss3i>
Date: Sat, 15 Nov 2025 11:25:55 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Minchan Kim <minchan@...nel.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>, Yuwen Chen <ywen.chen@...mail.com>,
Richard Chang <richardycc@...gle.com>, Brian Geffon <bgeffon@...gle.com>,
Fengyu Lian <licayy@...look.com>, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-block@...r.kernel.org
Subject: Re: [PATCHv2 1/4] zram: introduce writeback bio batching support
On (25/11/14 11:14), Minchan Kim wrote:
> > > How about moving the structure definitions to the upper part of the C file?
> > > Putting the data types together not only helps readability but also makes
> > > a better diff, so reviewers can see what we changed in this patch.
> >
> > This still needs to be under #ifdef CONFIG_ZRAM_WRITEBACK, so readability
> > is not significantly better. Do you still prefer moving it up?
>
> Let's move them to the top of the ifdef CONFIG_ZRAM_WRITEBACK block, then.
> IOW, above writeback_limit_enable_store.
Done.
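Roughly like this (a minimal sketch; the struct name and fields here are
illustrative, the actual definitions are in the patch):

#ifdef CONFIG_ZRAM_WRITEBACK
/* Illustrative per-request batching state, kept at the top of the block. */
struct zram_wb_request {
	struct list_head entry;		/* link in the submitted batch */
	struct page *page;		/* page being written back */
	unsigned long blk_idx;		/* slot on the backing device */
};

/* writeback_limit_enable_store() and the rest of the writeback code follow. */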
> > > How about 32, since it's a common queue depth for modern storage?
> >
> > So this is tricky. I don't know what number is a good default for
> > everyone, given the variety of devices out there, with different specs and
> > hardware on both ends of the price range. I don't know if 32 is safe
> > wrt performance/throughput (I may be wrong and 32 is safe for
> > everyone). On the other hand, 1 was our baseline for ages, so I
> > wanted to minimize the risks and just keep the baseline behavior.
> >
> > Do you still prefer 32 as default? (here and in the next patch)
>
> Yes, we couldn't get a perfect number everyone would be happy with,
> since we don't know their configurations, but the value is the typical
> queue depth of UFS 3.1 (even that is a little old, since newer UFS has a
> higher queue depth). Another good thing about 32 is that it's aligned with
> SWAP_CLUSTER_MAX, which is the unit of batching in the traditional split
> LRU reclaim.
>
> Assuming we don't encounter any significant regressions, I'd like to
> move forward with a queue depth of 32 so that all users can benefit from
> this speedup.
Done.
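To spell it out (just a sketch, the macro name here is made up for
illustration):

#include <linux/swap.h>			/* SWAP_CLUSTER_MAX == 32 */

/* Illustrative name: default number of in-flight writeback requests. */
#define ZRAM_WB_BATCH_SIZE	32	/* == SWAP_CLUSTER_MAX */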
> > So we do this for post-processing, which allocates a bunch of memory
> > (not only request lists with physical pages, but also candidate slot
> > buckets). The thing is that post-processing can be called under memory
> > pressure, and we don't really want to block and reclaim memory from the
> > path that is called to relieve memory pressure (by doing writeback or
> > recompression).
>
> Sorry, I didn't understand what the post-processing means.
>
> First, the writeback_store path is not a critical path. The typical use
> case is to trigger writeback at system idle time to save zram memory.
>
> Second, if you used the flag to relieve memory pressure, that's not
> the right flag. GFP_NOIO is aimed at preventing deadlocks in IO context,
> but writeback_store is just process context, so there is no reason to use
> GFP_NOIO. (If we really want to relieve memory pressure, we should use
> __GFP_NORETRY with ~__GFP_RECLAIM, but I doubt it.)
Done.
I wouldn't necessarily call it "wrong"; we do re-enter zram:
user-space wb -> zram writeback -> reclaim IO -> zram write page
It's not deadlock-ish, for sure, but it still looked important enough to
me to avoid, so that writeback would be more robust and make faster
forward progress (by actually saving memory) in various situations,
including possible memory pressure. Changed in v3.
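For the record, what I mean is roughly this (a sketch with a made-up
helper name, not the exact v3 code):

#include <linux/gfp.h>
#include <linux/slab.h>

/*
 * Sketch: process context, so no need for GFP_NOIO; __GFP_NORETRY |
 * __GFP_NOWARN keeps the allocation opportunistic, so post-processing
 * doesn't dig into reclaim while it is trying to free memory itself.
 */
static void *zram_pp_alloc(size_t sz)
{
	return kzalloc(sz, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
}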
> > > I didn't think much about whether we really need to be this
> > > accurate. Maybe next time, after coffee.
> >
> > Sorry, not sure I understand this comment.
>
> I meant I didn't take a close look at that part yet. :)
Ah, I see :)