Message-ID: <CACOAw_wjCyTmwusY6S4+NgMuLOZm9fwGfrvCT272GJ01-RP6PQ@mail.gmail.com>
Date: Tue, 14 Jun 2022 09:46:50 -0700
From: Daeho Jeong <daeho43@...il.com>
To: Gao Xiang <hsiangkao@...ux.alibaba.com>
Cc: Eric Biggers <ebiggers@...nel.org>,
Daeho Jeong <daehojeong@...gle.com>,
Nathan Huckleberry <nhuck@...gle.com>, kernel-team@...roid.com,
linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH] f2fs: handle decompress only post processing
in softirq
>
> Some of my own previous thoughts about this strategy:
>
> - If we allocate all memory and map these pages before the I/Os, all
> inflight I/Os will hold such temporary pages the whole time until
> decompression is finished. In contrast, if we allocate or reuse such
> pages just before decompression, it would minimize the memory footprint.
>
> I think it will impact the memory numbers, at least on very low-end
> devices with slow storage. (I've seen that f2fs already has a big
> mempool.)
>
> - Many compression algorithms are not suitable for softirq contexts.
> Also, I vaguely remember that if softirq processing lasts for > 2ms,
> it gets pushed into ksoftirqd instead, so it's actually another
> process context. And it may delay other important interrupt handling.
>
> - Go back to the non-deterministic scheduling of workqueues. My guess
> is that it's just scheduling punishment: decompression consumed a lot
> of CPU earlier, so the worker's priority becomes low, but that is
> just a pure guess. Maybe we need to use an RT scheduling policy instead.
>
> At least WQ_HIGHPRI could be used for dm-verity, but I don't find a
> WQ_HIGHPRI flag in dm-verity.
>
> Thanks,
> Gao Xiang
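On your WQ_HIGHPRI point: for reference, marking a workqueue as
WQ_HIGHPRI would look roughly like the sketch below (untested; the
workqueue name and the init function are illustrative, not actual
f2fs or dm-verity code):

#include <linux/workqueue.h>

static struct workqueue_struct *post_read_wq;

static int __init post_read_wq_init(void)
{
	/*
	 * WQ_HIGHPRI queues work items on a separate worker pool whose
	 * kworkers run at an elevated nice level, so they are less
	 * likely to be starved by normal CPU load; WQ_UNBOUND lets the
	 * scheduler place the workers on any CPU.
	 */
	post_read_wq = alloc_workqueue("post_read",
				       WQ_HIGHPRI | WQ_UNBOUND, 0);
	if (!post_read_wq)
		return -ENOMEM;
	return 0;
}
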
I totally understand what you are worried about. However, in the real
world, the non-determinism from workqueues is harsher than we expected.
As you know, read I/Os are on the critical path in the system most of
the time, and right now the I/O latency variation with workqueues is
too large.
I also think it would be better to have something like RT scheduling
here. We could think about it more.
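
For example, here is just an untested sketch of one option: give a
dedicated decompression kthread an RT policy via sched_set_fifo()
(all names are illustrative, not actual f2fs code):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static int decomp_thread_fn(void *data)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (!kthread_should_stop())
			schedule();	/* wait to be woken with new work */
		__set_current_state(TASK_RUNNING);
		/* dequeue and decompress pending clusters here (omitted) */
	}
	return 0;
}

static struct task_struct *start_decomp_thread(void)
{
	struct task_struct *t;

	t = kthread_create(decomp_thread_fn, NULL, "decomp_rt");
	if (IS_ERR(t))
		return t;
	sched_set_fifo(t);	/* SCHED_FIFO, above normal kworkers */
	wake_up_process(t);
	return t;
}

That way the decompression thread would not be deprioritized for having
burned CPU earlier, though we would need to be careful not to starve
other tasks with an RT policy.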
Thanks,