Message-ID: <89dc2886-daeb-67ff-be6d-4d70343d2d8b@linux.alibaba.com>
Date: Sun, 23 Apr 2023 14:08:49 +0800
From: Gao Xiang <hsiangkao@...ux.alibaba.com>
To: Hillf Danton <hdanton@...a.com>,
Douglas Anderson <dianders@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Yu Zhao <yuzhao@...gle.com>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH v2 1/4] mm/filemap: Add folio_lock_timeout()
On 2023/4/22 13:18, Hillf Danton wrote:
> On 21 Apr 2023 15:12:45 -0700 Douglas Anderson <dianders@...omium.org>
>> Add a variant of folio_lock() that can timeout. This is useful to
>> avoid unbounded waits for the page lock in kcompactd.
>
> Given no mutex_lock_timeout() (perhaps because timeout makes no sense for
> spinlock), I suspect your fix lies in the right layer. If waiting for a
> page under IO causes trouble for you, another simpler option is to make
> IO faster (perhaps all you can do), for instance. If kcompactd is woken
> up by kswapd, waiting for slow IO is the right thing to do.
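(For reference, my rough understanding of how such an interface could be
used on a compaction-like path is sketched below; the folio_lock_timeout()
signature and the negative-error-on-timeout convention are only my guess
from the summary above, not taken from the actual patch.)

/*
 * Sketch only: bounded wait for a folio lock on a compaction-like path.
 * folio_lock_timeout() is assumed to return 0 once the lock is taken and
 * a negative error code (e.g. -ETIMEDOUT) if the timeout expires.
 */
#include <linux/pagemap.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static int lock_folio_bounded(struct folio *folio, bool can_block)
{
	if (folio_trylock(folio))
		return 0;

	if (!can_block)
		return -EAGAIN;

	/* Give up after a bounded wait instead of sleeping indefinitely. */
	return folio_lock_timeout(folio, msecs_to_jiffies(100));
}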
A bit off topic: that is almost exactly our original use scenario for
EROFS [1], although we didn't actually test on Chrome OS. There are four
main points:
1) A 128KiB compression unit is not suitable for memory-constrained
workloads, especially under memory pressure, since it amplifies both I/O
and memory footprint (EROFS was initially optimized for 4KiB pclusters);
2) If you switch to a small compression unit (e.g. 4KiB), some filesystems
become inefficient since their on-disk compressed index isn't designed for
random access (they rely on an extra in-memory cache for that), so on a
cache miss you have to walk the entries one by one to calculate the
physical data offset (see the sketch after this list);
3) Compressed data takes extra memory during I/O (especially on low-end
devices), which makes things even worse; at one point our camera app
workload couldn't be launched properly under heavy memory pressure. To
keep the best user experience we have to keep as many apps active as
possible, so it's hard to just kill apps. Therefore in-place I/O +
decompression is needed, in addition to small compression units, for
overall performance;
4) Considering real-time performance, some compression algorithms are not
really suitable for extreme memory pressure cases;
etc.
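To make point 2) concrete, here is a toy userspace sketch (the layouts and
names are made up purely for illustration; they are not the real EROFS or
any other on-disk format): if the index only records per-cluster compressed
sizes, a cache miss means accumulating sizes entry by entry to recover a
physical offset, while an index that also stores the absolute block address
answers the same lookup in O(1):

/* Toy illustration of the two index layouts (not real on-disk formats). */
#include <stdint.h>
#include <stdio.h>

#define NR_CLUSTERS 8

/*
 * Layout A: only per-cluster compressed sizes are recorded, so a cache
 * miss means walking all preceding entries to get the physical offset.
 */
static const uint32_t csize[NR_CLUSTERS] = {
	3100, 4096, 2048, 3900, 1500, 4096, 2700, 3300
};

static uint64_t offset_by_walk(unsigned int idx)
{
	uint64_t off = 0;

	for (unsigned int i = 0; i < idx; i++)
		off += csize[i];		/* O(n) per lookup */
	return off;
}

/*
 * Layout B: each entry also records its absolute start address, so the
 * physical offset of any cluster can be read out directly.
 */
struct cluster_index {
	uint64_t blkaddr;	/* absolute start of compressed data */
	uint32_t clen;		/* compressed length */
};

static struct cluster_index cidx[NR_CLUSTERS];

static uint64_t offset_by_index(unsigned int idx)
{
	return cidx[idx].blkaddr;		/* O(1) per lookup */
}

int main(void)
{
	uint64_t off = 0;

	/* Build layout B from layout A once, e.g. at mkfs time. */
	for (unsigned int i = 0; i < NR_CLUSTERS; i++) {
		cidx[i].blkaddr = off;
		cidx[i].clen = csize[i];
		off += csize[i];
	}

	printf("cluster 5: walk=%llu, index=%llu\n",
	       (unsigned long long)offset_by_walk(5),
	       (unsigned long long)offset_by_index(5));
	return 0;
}

The point is just the lookup-cost difference; real on-disk formats are of
course more compact than this.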
I could give more details about this at this year's LSF/MM, although it's
not a new topic and I'm not an Android guy anymore.
[1] https://www.usenix.org/conference/atc19/presentation/gao
Thanks,
Gao Xiang