Date:   Mon, 16 Dec 2019 17:31:19 +0100
From:   Zaslonko Mikhail <zaslonko@...ux.ibm.com>
To:     dsterba@...e.cz, Andrew Morton <akpm@...ux-foundation.org>,
        Chris Mason <clm@...com>, Josef Bacik <josef@...icpanda.com>,
        David Sterba <dsterba@...e.com>, linux-btrfs@...r.kernel.org,
        Heiko Carstens <heiko.carstens@...ibm.com>,
        Vasily Gorbik <gor@...ux.ibm.com>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Eduard Shishkin <edward6@...ux.ibm.com>,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     "gerald.schaefer@...ibm.com" <gerald.schaefer@...ibm.com>
Subject: Re: [PATCH v2 6/6] btrfs: Use larger zlib buffer for s390 hardware
 compression

Hi David,

On 13.12.2019 18:35, David Sterba wrote:
> On Fri, Dec 13, 2019 at 05:10:10PM +0100, Zaslonko Mikhail wrote:
>> Hello,
>>
>> Could you please review the patch for btrfs below.
>>
>> Apart from falling back to 1 page, I have set the condition to allocate
>> a 4-page zlib workspace buffer only if the s390 Deflate-Conversion facility
>> is installed and enabled. Thus, it will take effect on the s390 architecture
>> only.
>>
>> Currently in zlib_compress_pages() I always copy input pages to the workspace
>> buffer prior to the zlib_deflate call. Would it make sense to pass the page
>> itself, as before, based on the workspace buf_size (for a 1-page buffer)?
> 
> Doesn't the copy back and forth kill the improvements brought by the
> hw supported decompression?

Well, I'm not sure how to avoid this copy step here. As far as I understand,
the input data in btrfs_compress_pages() does not always represent contiguous
pages, so I copy the input pages into a contiguous buffer prior to the
compression call. But even with this memcpy in place, the hw-supported
compression shows significant improvements.
What I can definitely do is skip the copy if s390 hardware compression support
is not enabled.
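To illustrate the flow being discussed, here is a simplified userspace zlib
sketch (not the btrfs code itself): scattered input pages are gathered into one
contiguous workspace buffer, which is then deflated with Z_FINISH until
Z_STREAM_END. The 4-page workspace size and the page-sized chunks are
assumptions standing in for the s390 case.

#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE  4096UL
#define WS_PAGES   4UL   /* assumed: 4 pages when HW deflate is usable, else 1 */

static int compress_chunks(unsigned char *chunks[], size_t nchunks,
			   unsigned char *out, size_t out_len, size_t *out_used)
{
	size_t ws_len = WS_PAGES * PAGE_SIZE;
	unsigned char *ws = malloc(ws_len);
	z_stream strm;
	size_t copied = 0;
	int ret;

	if (!ws)
		return -1;

	/* Gather: the input pages are not contiguous, so copy them into
	 * one linear workspace buffer before handing it to deflate. */
	for (size_t i = 0; i < nchunks && copied + PAGE_SIZE <= ws_len; i++) {
		memcpy(ws + copied, chunks[i], PAGE_SIZE);
		copied += PAGE_SIZE;
	}

	memset(&strm, 0, sizeof(strm));
	if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK) {
		free(ws);
		return -1;
	}

	strm.next_in = ws;
	strm.avail_in = copied;
	strm.next_out = out;
	strm.avail_out = out_len;

	/* Finish the stream: keep calling deflate(Z_FINISH) until it
	 * returns Z_STREAM_END (or the output buffer runs out). */
	do {
		ret = deflate(&strm, Z_FINISH);
	} while (ret == Z_OK && strm.avail_out > 0);

	*out_used = out_len - strm.avail_out;
	deflateEnd(&strm);
	free(ws);
	return ret == Z_STREAM_END ? 0 : -1;
}

With a 1-page workspace the gather step collapses to a single page, which is
where the copy could be skipped.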

> 
>> As for calling zlib_deflate with the Z_FINISH flush parameter in a loop until
>> Z_STREAM_END is returned, that is in agreement with the zlib manual.
> 
> The concerns are about a zlib stream that takes 4 pages on input while on the
> decompression side only 1 page is available for the output. I.e. as if
> the filesystem was created on s390 with dfltcc and then opened on an x86 host.

I'm not sure I fully understand the concern here. If the concern is backward
compatibility, I do not see any side effects of using larger buffers. The data
in its compressed state might indeed differ, but it will still conform to the
zlib standard and can thus be decompressed. A smaller output buffer would just
take more zlib calls to flush the output.
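To illustrate the decompression side, here is a simplified userspace sketch
with a single-page output buffer; consume_page() is a hypothetical stand-in for
copying the data to its destination. A stream produced with a 4-page input
buffer only makes this loop run more iterations:

#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096UL

/* Hypothetical hand-off standing in for copying into the destination pages. */
static void consume_page(const unsigned char *buf, size_t len)
{
	(void)buf;
	(void)len;
}

static int decompress_stream(unsigned char *in, size_t in_len)
{
	unsigned char page[PAGE_SIZE];
	z_stream strm;
	int ret;

	memset(&strm, 0, sizeof(strm));
	if (inflateInit(&strm) != Z_OK)
		return -1;

	strm.next_in = in;
	strm.avail_in = in_len;

	/* A 1-page output buffer only means more trips around this loop;
	 * the stream itself decompresses the same either way. */
	do {
		strm.next_out = page;
		strm.avail_out = sizeof(page);

		ret = inflate(&strm, Z_NO_FLUSH);
		if (ret != Z_OK && ret != Z_STREAM_END)
			break;

		consume_page(page, sizeof(page) - strm.avail_out);
	} while (ret != Z_STREAM_END);

	inflateEnd(&strm);
	return ret == Z_STREAM_END ? 0 : -1;
}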


> The zlib_deflate(Z_FINISH) happens on the compression side.
> 
