Message-ID: <fdfad5c9-3e5d-efbe-a39e-c26e3fd11975@squashfs.org.uk>
Date: Tue, 10 May 2022 04:41:58 +0100
From: Phillip Lougher <phillip@...ashfs.org.uk>
To: Matthew Wilcox <willy@...radead.org>
Cc: Xiongwei Song <sxwjean@...il.com>,
Zheng Liang <zhengliang6@...wei.com>,
Zhang Yi <yi.zhang@...wei.com>, Hou Tao <houtao1@...wei.com>,
Miao Xie <miaoxie@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Hsin-Yi Wang <hsinyi@...omium.org>,
"Song, Xiongwei" <Xiongwei.Song@...driver.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"squashfs-devel@...ts.sourceforge.net"
<squashfs-devel@...ts.sourceforge.net>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: squashfs performance regression and readahead
On 10/05/2022 04:20, Phillip Lougher wrote:
> On 10/05/2022 03:35, Matthew Wilcox wrote:
>> On Tue, May 10, 2022 at 02:11:41AM +0100, Phillip Lougher wrote:
>>> On 09/05/2022 14:21, Matthew Wilcox wrote:
>>>> On Mon, May 09, 2022 at 08:43:45PM +0800, Xiongwei Song wrote:
>>>>> Hi Hsin-Yi and Matthew,
>>>>>
>>>>> With the patch from the attachment applied on linux 5.10, I ran the
>>>>> command as I mentioned earlier and got the results below:
>>>>> 1:40.65 (1m + 40.65s)
>>>>> 1:10.12
>>>>> 1:11.10
>>>>> 1:11.47
>>>>> 1:11.59
>>>>> 1:11.94
>>>>> 1:11.86
>>>>> 1:12.04
>>>>> 1:12.21
>>>>> 1:12.06
>>>>>
>>>>> The performance has clearly improved, but compared to linux 4.18 it is
>>>>> still not as good.
>>>>>
>>>>> Moreover, I wanted to test on linux 5.18, but I think I should first revert
>>>>> 9eec1d897139 ("squashfs: provide backing_dev_info in order to disable
>>>>> read-ahead"), right? Otherwise, the patch won't work?
>>>>
>>>> I've never seen patch 9eec1d897139 before. If you're going to point
>>>> out bugs in my code, at least have the decency to cc me on it. It
>>>> should never have gone in, and should be reverted so the problem can
>>>> be fixed properly.
>>>
>>> You are not in charge of what patches go into Squashfs; that is my
>>> perogative as maintainer of Squashfs.
>>
>> I think you mean 'prerogative'. And, no, your filesystem is not your
>> little fiefdom, it's part of a collaborative effort.
>>
>
> This isn't a spelling contest, and if that's the best you can do you
> have already failed.
>
> Be careful here also: I have been the maintainer of Squashfs for 20 years,
> and was a kernel maintainer for both Ubuntu and Red Hat for 10 years, so
> I am an experienced member of the community.
>
> Your reply is bordering on offensive and arrogant, especially considering
> it is unwarranted. I did not set out to offend you, and I don't
> appreciate it.
>
> About 8 years ago I decided to refrain from active involvement in the
> kernel community, because I decided the level of discourse was
> disgusting, and I had had enough of it.
>
> I popped up now to defend my approval of the Huawei patch. I am *quite*
> happy not to have any more involvement until necessary.
>
> So having said what I want to say, I will leave it at that. You have
> just proved why I have minimised my involvement.
>
> No doubt you'll throw your toys out of the pram, but I'm no
> longer listening, so don't bother.
>
>
>>> That patch (by Huawei) fixes the performance regression in Squashfs
>>> by disabling readahead, and it is a good workaround until something
>>> better comes along.
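For context, the workaround amounts to giving squashfs its own
backing_dev_info with ra_pages set to zero, so the VFS never issues
readahead against the filesystem. A rough sketch of that approach
(illustrative only, not the literal commit; the helper name
squashfs_disable_readahead_bdi() is made up for this example):

#include <linux/fs.h>
#include <linux/kdev_t.h>
#include <linux/backing-dev.h>

/*
 * Hypothetical helper: give the superblock a private BDI and turn off
 * readahead by setting ra_pages to 0.  Would be called from fill_super().
 */
static int squashfs_disable_readahead_bdi(struct super_block *sb)
{
        int err;

        /* Allocate a per-superblock backing_dev_info named after the device. */
        err = super_setup_bdi_name(sb, "squashfs_%u_%u",
                                   MAJOR(sb->s_dev), MINOR(sb->s_dev));
        if (err)
                return err;

        /* ra_pages == 0 means the VM performs no readahead on this BDI. */
        sb->s_bdi->ra_pages = 0;

        return 0;
}
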
>>
>> You *didn't even report the problem to me*. How can it be fixed if I'm
>> not aware of it?
Despite having been insulted, I have done your homework for you.
This is where the problem was raised last year, and you were emailed
directly:
https://lore.kernel.org/all/CAJMQK-g9G6KQmH-V=BRGX0swZji9Wxe_2c7ht-MMAapdFy2pXw@mail.gmail.com/T/
>>
>
> There was an email discussion last year, which I responded to and got
> ignored. I will dig it out tomorrow, perhaps. But I will probably
> not bother, because life is too short.
>
Afterwards you started a thread on "Readahead for compressed data",
which I responded to.
https://lore.kernel.org/all/YXHK5HrQpJu9oy8w@casper.infradead.org/T/
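For completeness, the longer-term fix discussed in that thread is to give
squashfs a real .readahead address_space operation instead of disabling
readahead outright. The general shape of such a hook (a bare skeleton,
assuming the decompression step is handled elsewhere; this is not
squashfs's actual implementation):

#include <linux/pagemap.h>

/*
 * Skeleton .readahead hook.  readahead_page() hands back each page
 * locked and with a reference held; the filesystem fills it, marks it
 * up to date, then unlocks and releases it.  The decompression of the
 * underlying compressed blocks is deliberately elided here.
 */
static void squashfs_readahead_sketch(struct readahead_control *ractl)
{
        struct page *page;

        while ((page = readahead_page(ractl)) != NULL) {
                /* ... locate and decompress the block(s) backing @page ... */
                SetPageUptodate(page);
                unlock_page(page);
                put_page(page);
        }
}
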
> Cheers
>
> Phillip