Message-ID: <20191127135436.GR2734@twin.jikos.cz>
Date:   Wed, 27 Nov 2019 14:54:36 +0100
From:   David Sterba <dsterba@...e.cz>
To:     Zaslonko Mikhail <zaslonko@...ux.ibm.com>
Cc:     Josef Bacik <josef@...icpanda.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Chris Mason <clm@...com>, David Sterba <dsterba@...e.com>,
        Richard Purdie <rpurdie@...ys.net>,
        Heiko Carstens <heiko.carstens@...ibm.com>,
        Vasily Gorbik <gor@...ux.ibm.com>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] btrfs: Increase buffer size for zlib functions

On Wed, Nov 27, 2019 at 02:42:20PM +0100, Zaslonko Mikhail wrote:
> Hello,
> 
> On 26.11.2019 16:52, Josef Bacik wrote:
> > On Tue, Nov 26, 2019 at 03:41:30PM +0100, Mikhail Zaslonko wrote:
> >> Due to the small size of the zlib buffer (1 page) set in the btrfs code,
> >> s390 hardware compression is rather limited in terms of performance.
> >> Increasing the buffer size to 4 pages would bring a significant benefit
> >> for s390 hardware compression (up to 60% better performance compared to
> >> the PAGE_SIZE buffer) and, being order-2 allocations, should not add much
> >> overhead in terms of memory consumption.
> >>
> >> Signed-off-by: Mikhail Zaslonko <zaslonko@...ux.ibm.com>
> > 
> > We may have to make these allocations under memory pressure in the IO
> > context, so order-2 allocations here are not going to be awesome.  If you
> > really want this then you need to at least be able to fall back to a
> > single page if you fail to get the larger allocation.  Thanks,
> 
> As far as I understand, GFP_KERNEL allocations would never fail for orders
> <= PAGE_ALLOC_COSTLY_ORDER.

There are no guaranteed no-fail semantics for GFP flags (besides
__GFP_NOFAIL, obviously): GFP_KERNEL can fail, and GFP_NOFS is merely
unlikely to fail for orders below the costly threshold. This depends on the
allocator internals and has never been an API-level guarantee AFAIK. There's
ongoing work to relax the allocator constraints and allow failures in more
cases (like for GFP_NOFS).
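
For illustration, a minimal sketch of the fallback pattern discussed above,
using hypothetical names (ZLIB_BUF_PAGES, struct zlib_buf, zlib_buf_alloc)
rather than the actual btrfs workspace code: try the 4-page buffer
opportunistically and drop back to a single page when the order-2 allocation
fails, keeping in mind that plain GFP_KERNEL can fail as well.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/slab.h>

/* Hypothetical: 4 pages to feed s390 hardware compression efficiently */
#define ZLIB_BUF_PAGES	4

/* Hypothetical container, not the real btrfs workspace struct */
struct zlib_buf {
	void		*data;
	unsigned int	size;
};

static int zlib_buf_alloc(struct zlib_buf *buf)
{
	/*
	 * Opportunistic order-2 attempt: __GFP_NOWARN keeps an expected
	 * failure under memory pressure from spamming the log.
	 */
	buf->size = ZLIB_BUF_PAGES * PAGE_SIZE;
	buf->data = kmalloc(buf->size, GFP_KERNEL | __GFP_NOWARN);
	if (buf->data)
		return 0;

	/*
	 * Fall back to the old single-page buffer instead of failing the
	 * compression path.  Plain GFP_KERNEL can still fail (only
	 * __GFP_NOFAIL is guaranteed), so the NULL check stays.
	 */
	buf->size = PAGE_SIZE;
	buf->data = kmalloc(buf->size, GFP_KERNEL);
	if (!buf->data)
		return -ENOMEM;

	return 0;
}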

> How else can the memory pressure condition be identified
> here?

All data write paths must consider what happens under memory pressure,
because the writeback itself may have been triggered by an allocation that
is trying to free memory by flushing dirty data. So memory pressure is kind
of implied here.
