Message-ID: <e840d413-c1a7-d047-1a63-468b42571846@linux.alibaba.com>
Date:   Wed, 25 Jan 2023 18:05:30 +0800
From:   Gao Xiang <hsiangkao@...ux.alibaba.com>
To:     Alexander Larsson <alexl@...hat.com>,
        Amir Goldstein <amir73il@...il.com>
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        gscrivan@...hat.com, david@...morbit.com, brauner@...nel.org,
        viro@...iv.linux.org.uk, Vivek Goyal <vgoyal@...hat.com>,
        Miklos Szeredi <miklos@...redi.hu>
Subject: Re: [PATCH v3 0/6] Composefs: an opportunistically sharing verified
 image filesystem



On 2023/1/25 17:37, Alexander Larsson wrote:
> On Tue, 2023-01-24 at 21:06 +0200, Amir Goldstein wrote:
>> On Tue, Jan 24, 2023 at 3:13 PM Alexander Larsson <alexl@...hat.com>

...

>>>
>>> They are all strictly worse than squashfs in the above testing.
>>>
>>
>> It's interesting to know why, and whether an optimized mkfs.erofs or
>> mkfs.ext4 would have made any improvement.
> 
> Even the non-loopback mounted (direct xfs backed) version performed
> worse than the squashfs one. I'm sure an erofs with sparse files would
> do better due to a more compact file, but I don't really see how it
> would perform significantly differently from the squashfs code. Yes,
> squashfs lookup is linear in directory length, while erofs is log(n),
> but the directories are not so huge that this would dominate the
> runtime.
> 
> To get an estimate of this I made a broken version of the erofs image,
> where the metacopy files are actually 0 bytes in size rather than sparse.
> This made the erofs file 18M instead, and gained 10% in the cold cache
> case. This, while good, is nowhere near enough to matter compared to the
> others.
> 
> I don't think the base performance here is really much dependent on the
> backing filesystem. An ls -lR workload is just a measurement of the
> actual (i.e. non-dcache) performance of the filesystem implementation
> of lookup and iterate, and overlayfs just has more work to do here,
> especially in terms of the amount of i/o needed.
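
For reference, that kind of cold-cache "ls -lR"-style measurement looks
roughly like the sketch below (illustrative only; the drop_caches usage and
the target path are assumptions on my side, not taken from your setup):

#define _XOPEN_SOURCE 700
#include <fcntl.h>
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

/* nftw() already lstat()s every entry, which is what "ls -lR" exercises. */
static int visit(const char *path, const struct stat *sb,
		 int typeflag, struct FTW *ftwbuf)
{
	(void)path; (void)sb; (void)typeflag; (void)ftwbuf;
	return 0;
}

/* Needs root; "echo 3 > /proc/sys/vm/drop_caches" by hand works as well. */
static void drop_caches(void)
{
	int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);

	if (fd >= 0) {
		if (write(fd, "3\n", 2) < 0)
			perror("drop_caches");
		close(fd);
	}
}

int main(int argc, char **argv)
{
	const char *root = argc > 1 ? argv[1] : ".";
	struct timespec t0, t1;

	drop_caches();
	clock_gettime(CLOCK_MONOTONIC, &t0);
	nftw(root, visit, 64, FTW_PHYS);	/* lookup + lstat every entry */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("cold walk of %s: %.3f s\n", root,
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}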

I will put together a proper mkfs.erofs version in one or two days since
we're celebrating the Lunar New Year now.

Since you don't have more I/O traces for analysis, I have to make another
wild guess.

Could you help benchmark your v2 as well? I'm not sure whether the same
performance shows up in v2.  The reason I guess this is that it seems you
read all dir inode pages when doing the first lookup, which can benefit
sequential dir access.

I'm not sure whether EROFS could reach a similar number by forcing
readahead on dirs to read all dir data at once as well.
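
What I mean by reading all dir data at once is roughly the sketch below
(illustrative only, not actual EROFS or composefs code; the helper name is
made up): from lookup/readdir, pull every page of the directory inode into
the page cache so that later sequential dir access stays cached:

/* Illustrative sketch only -- not EROFS/composefs code. */
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

/* Best-effort: populate the page cache for a whole directory inode. */
static void demo_prefetch_dir_pages(struct inode *dir)
{
	struct address_space *mapping = dir->i_mapping;
	pgoff_t nr_pages = (i_size_read(dir) + PAGE_SIZE - 1) >> PAGE_SHIFT;
	pgoff_t index;

	for (index = 0; index < nr_pages; index++) {
		struct page *page = read_mapping_page(mapping, index, NULL);

		if (IS_ERR(page))
			break;		/* stop on the first I/O error */
		put_page(page);		/* drop our ref; the page stays cached */
	}
}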

Apart from that I don't see a significant difference; at least personally
I'd like to know where such a huge difference could come from.  I don't
think it is all due to the read-only on-disk format difference.

Thanks,
Gao Xiang
