Message-ID: <a8dc0941-ee2d-6df7-cb32-c6af26bdc54c@linux.ibm.com>
Date:   Mon, 9 Dec 2019 17:39:51 +0100
From:   Zaslonko Mikhail <zaslonko@...ux.ibm.com>
To:     dsterba@...e.cz
Cc:     Josef Bacik <josef@...icpanda.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Chris Mason <clm@...com>, David Sterba <dsterba@...e.com>,
        Richard Purdie <rpurdie@...ys.net>,
        Heiko Carstens <heiko.carstens@...ibm.com>,
        Vasily Gorbik <gor@...ux.ibm.com>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] btrfs: Increase buffer size for zlib functions

Hi David,

On 27.11.2019 13:14, David Sterba wrote:
> On Tue, Nov 26, 2019 at 10:52:49AM -0500, Josef Bacik wrote:
>> On Tue, Nov 26, 2019 at 03:41:30PM +0100, Mikhail Zaslonko wrote:
>>> Due to the small size of zlib buffer (1 page) set in btrfs code, s390
>>> hardware compression is rather limited in terms of performance. Increasing
>>> the buffer size to 4 pages would bring significant benefit for s390
>>> hardware compression (up to 60% better performance compared to the
>>> PAGE_SIZE buffer) and should not bring much overhead in terms of memory
>>> consumption due to order 2 allocations.
>>>
>>> Signed-off-by: Mikhail Zaslonko <zaslonko@...ux.ibm.com>
>>
>> We may have to make these allocations under memory pressure in the IO context,
>> and order-2 allocations here are going to be not awesome.  If you really want it then
>> you need to at least be able to fall back to a single page if you fail to get the
>> allocation.  Thanks,
> 
> The allocation is only for the workspace and it does not happen on the
> IO path for each call. There's the pool and if
> 
> btrfs_get_workspace
>   alloc_workspace
> 
> fails, then there's a fallback path to wait for an existing workspace to
> be freed.
> 
> The order 2 allocation can put more pressure on the allocator though so
> it's possible to have effects in some corner cases, but not in general.
> I don't think the single page fallback code is needed.
> 
> And of course evaluation of the effects of the larger zlib buffer should
> be done, it could improve compression but probably at the cost of cpu
> time. Also decompression of blocks created on new code (4 pages) must
> work on the old code (1 page).
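[Editorial note: the get/fallback flow described above might be sketched roughly as follows. This is a hypothetical illustration, not the actual btrfs code; the names `WorkspacePool`, `alloc`, `get` and `put` are made up, and the kernel's real implementation uses its own locking and wait primitives.]

```python
import collections
import threading

class WorkspacePool:
    """Toy model of a compression-workspace pool: try a fresh allocation,
    and if that fails, fall back to waiting for a freed workspace."""

    def __init__(self, alloc):
        self.alloc = alloc                  # may return None on allocation failure
        self.free = collections.deque()     # workspaces returned by put()
        self.cond = threading.Condition()

    def get(self):
        with self.cond:
            if self.free:                   # reuse an idle workspace first
                return self.free.popleft()
        ws = self.alloc()                   # try a fresh (possibly order-2) allocation
        if ws is not None:
            return ws
        with self.cond:                     # fallback: wait for one to be freed
            while not self.free:
                self.cond.wait()
            return self.free.popleft()

    def put(self, ws):
        with self.cond:
            self.free.append(ws)
            self.cond.notify()
```

The point of the fallback is that a failed order-2 allocation never errors out: the caller simply blocks until another user returns a workspace to the pool.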
Regarding 'improve compression but probably at the cost of cpu time': what would be
the proper way to evaluate this effect?
As for backward compatibility, I do not see any side effects of using larger buffers.
The data in its compressed state might indeed differ, but it will still conform to the
zlib standard and can therefore be decompressed.
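[Editorial note: the compatibility claim can be checked with stock zlib. A stream produced by feeding the compressor in 4-page chunks is an ordinary zlib stream, and a decompressor consuming it in 1-page chunks recovers the data. A minimal sketch using Python's stdlib zlib binding; the 4096-byte PAGE constant is an assumption for illustration, and with software zlib the two compressed streams may happen to be byte-identical, unlike with hardware compression:]

```python
import zlib

PAGE = 4096  # illustrative page size

def compress_chunked(data, chunk):
    """Feed the compressor `chunk` bytes at a time (no intermediate flushes)."""
    c = zlib.compressobj()
    out = b"".join(c.compress(data[i:i + chunk]) for i in range(0, len(data), chunk))
    return out + c.flush()

def decompress_chunked(stream, chunk):
    """Consume the compressed stream `chunk` bytes at a time."""
    d = zlib.decompressobj()
    out = b"".join(d.decompress(stream[i:i + chunk]) for i in range(0, len(stream), chunk))
    return out + d.flush()

data = b"btrfs zlib buffer test " * 20000
one_page = compress_chunked(data, PAGE)       # old code: 1-page buffer
four_page = compress_chunked(data, 4 * PAGE)  # new code: 4-page buffer

# Either stream decompresses with either buffer size -- both are
# valid zlib streams, so old kernels can read new extents.
assert decompress_chunked(one_page, PAGE) == data
assert decompress_chunked(four_page, PAGE) == data
assert decompress_chunked(four_page, 4 * PAGE) == data
```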

BTW, I have sent out v2 of the patch set; I would appreciate it if you could take
a look as well.
Thanks,
Mikhail
