Date: Tue, 14 Sep 2021 10:13:04 +1000
From: NeilBrown <neilb@...e.de>
To: Andrew Morton <akpm@...ux-foundation.org>,
	Theodore Ts'o <tytso@....edu>,
	Andreas Dilger <adilger.kernel@...ger.ca>,
	"Darrick J. Wong" <djwong@...nel.org>,
	Matthew Wilcox <willy@...radead.org>,
	Mel Gorman <mgorman@...e.com>
Cc: linux-xfs@...r.kernel.org,
	linux-ext4@...r.kernel.org,
	linux-fsdevel@...r.kernel.org,
	linux-nfs@...r.kernel.org,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH 6/6] XFS: remove congestion_wait() loop from xfs_buf_alloc_pages()

The documentation comment in gfp.h discourages indefinite retry loops on
ENOMEM and says of __GFP_NOFAIL that it is definitely preferable to use
the flag rather than open-coding an endless loop around the allocator.

congestion_wait() is indistinguishable from
schedule_timeout_uninterruptible() in practice, and it is not a good way
to wait for memory to become available.

So instead of waiting, allocate a single page using __GFP_NOFAIL, then
loop around and try to get any more pages that might be needed with a
bulk allocation.  This single-page allocation will wait in the most
appropriate way.

Signed-off-by: NeilBrown <neilb@...e.de>
---
 fs/xfs/xfs_buf.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 5fa6cd947dd4..1ae3768f6504 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -372,8 +372,8 @@ xfs_buf_alloc_pages(
 
 	/*
 	 * Bulk filling of pages can take multiple calls. Not filling the entire
-	 * array is not an allocation failure, so don't back off if we get at
-	 * least one extra page.
+	 * array is not an allocation failure, so don't fail or fall back on
+	 * __GFP_NOFAIL if we get at least one extra page.
 	 */
 	for (;;) {
 		long	last = filled;
@@ -394,7 +394,7 @@ xfs_buf_alloc_pages(
 		}
 
 		XFS_STATS_INC(bp->b_mount, xb_page_retries);
-		congestion_wait(BLK_RW_ASYNC, HZ / 50);
+		bp->b_pages[filled++] = alloc_page(gfp_mask | __GFP_NOFAIL);
 	}
 	return 0;
 }
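
[Editor's note: since the diff shows only the changed lines, the sketch
below illustrates how the allocation loop in xfs_buf_alloc_pages() reads
with this change applied.  It is a simplified, hypothetical rendering,
not the actual fs/xfs/xfs_buf.c code: statistics, the XBF_READ_AHEAD
early exit and error handling are elided, and the helper name
fill_pages_sketch is made up for illustration.]

/*
 * Illustrative sketch only (see note above): bulk-allocate the remaining
 * pages; if no progress at all is made, block in the allocator for a
 * single page with __GFP_NOFAIL instead of calling congestion_wait(),
 * then loop back and retry the bulk allocation.
 */
static int fill_pages_sketch(struct xfs_buf *bp, gfp_t gfp_mask)
{
	long	filled = 0;

	for (;;) {
		long	last = filled;

		/* Try to fill the rest of the page array in one bulk call. */
		filled = alloc_pages_bulk_array(gfp_mask, bp->b_page_count,
						bp->b_pages);
		if (filled == bp->b_page_count)
			break;

		/* Some progress was made: just retry the bulk allocation. */
		if (filled != last)
			continue;

		/* No progress: wait for memory by allocating a single page. */
		bp->b_pages[filled++] = alloc_page(gfp_mask | __GFP_NOFAIL);
	}
	return 0;
}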