Message-ID: <Yaj0KTp17AaHMQyC@cmpxchg.org>
Date: Thu, 2 Dec 2021 11:28:25 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Zhaoyang Huang <huangzhaoyang@...il.com>
Cc: Nitin Gupta <ngupta@...are.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Jens Axboe <axboe@...nel.dk>, Minchan Kim <minchan@...nel.org>,
Zhaoyang Huang <zhaoyang.huang@...soc.com>,
"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] mm: count zram read/write into PSI_IO_WAIT

On Wed, Dec 01, 2021 at 07:12:30PM +0800, Zhaoyang Huang wrote:
> zram reads and writes are never counted in PSI_IO_WAIT, because zram
> handles the request directly in the current context, without ever
> invoking submit_bio and io_schedule.
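
(For reference, a simplified sketch -- not verbatim kernel code -- of
the path the changelog is contrasting against: a read from a real block
device queues a bio and then sleeps, and it is that sleep, flagged via
current->in_iowait, that PSI accounts as TSK_IOWAIT.)

	/* submit_bio() has already queued the request to the device */
	set_current_state(TASK_UNINTERRUPTIBLE);	/* D-state */
	io_schedule();	/* sets current->in_iowait around the sleep;
			 * PSI counts the stall as TSK_IOWAIT */

zram skips both submit_bio() and the sleep, which is what the changelog
is pointing at.
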
Hm, but you're also not waiting on a real IO device, during which the
CPU could be doing something else; you're waiting for decompression,
which is CPU work. The thread also isn't in D-state during that time.
What scenario would benefit from this accounting? How is IO pressure
from the comp/decomp paths actionable to you?

What about when you use zram with disk writeback enabled and you see a
mix of decompression and actual disk IO? Wouldn't you want to be able
to tell the two apart, to see whether you're short on CPU or short on
IO bandwidth in this setup? Your patch would make that impossible.

This needs a much more comprehensive changelog.

> @@ -1246,7 +1247,9 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
>  				zram_get_element(zram, index),
>  				bio, partial_io);
>  	}
> -
> +#ifdef CONFIG_PSI
> +	psi_task_change(current, 0, TSK_IOWAIT);
> +#endif

Add psi_iostall_enter() / psi_iostall_leave() helpers that encapsulate
the ifdefs.
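
Something along these lines, as a rough sketch (mirroring the
psi_memstall_enter()/psi_memstall_leave() pattern; a real version would
presumably also want the psi_disabled check and the locking those
helpers do):

#ifdef CONFIG_PSI
static inline void psi_iostall_enter(void)
{
	psi_task_change(current, 0, TSK_IOWAIT);
}

static inline void psi_iostall_leave(void)
{
	psi_task_change(current, TSK_IOWAIT, 0);
}
#else
static inline void psi_iostall_enter(void) { }
static inline void psi_iostall_leave(void) { }
#endif

Then the zram read/write paths can call psi_iostall_enter() and
psi_iostall_leave() around the comp/decomp work without any #ifdefs at
the call sites.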