Message-ID: <20220524012806.GY1098723@dread.disaster.area>
Date: Tue, 24 May 2022 11:28:06 +1000
From: Dave Chinner <david@...morbit.com>
To: Jackie Liu <liu.yun@...ux.dev>
Cc: liuzhengyuan <liuzhengyuan@...inos.cn>,
胡海 <huhai@...inos.cn>, zhangshida@...inos.cn,
darrick.wong@...cle.com, linux-xfs@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [BUG report] security_inode_alloc return -ENOMEM let xfs shutdown
On Tue, May 24, 2022 at 08:52:30AM +0800, Jackie Liu wrote:
> On 2022/5/24 7:20 AM, Dave Chinner wrote:
> > On Mon, May 23, 2022 at 04:51:50PM +0800, Jackie Liu wrote:
> > Yup, that's a shutdown with a dirty transaction because memory
> > allocation failed in the middle of a transaction. XFS can not
> > tolerate memory allocation failure within the scope of a dirty
> > transaction and, in practice, this almost never happens. Indeed,
> > I've never seen this allocation from security_inode_alloc():
> >
> > int lsm_inode_alloc(struct inode *inode)
> > {
> > 	if (!lsm_inode_cache) {
> > 		inode->i_security = NULL;
> > 		return 0;
> > 	}
> >
> > 	inode->i_security = kmem_cache_zalloc(lsm_inode_cache, GFP_NOFS);
> > 	if (inode->i_security == NULL)
> > 		return -ENOMEM;
> > 	return 0;
> > }
> >
> > fail in all my OOM testing. Hence, to me, this is a theoretical
> > failure as I've never, ever seen this allocation fail in production
> > or test systems, even when driving them hard into OOM with excessive
> > inode allocation and triggering the OOM killer repeatedly until the
> > system kills init....
> >
> > Hence I don't think there's anything we need to change here right
> > now. If users start hitting this, then we're going to have to add new
> > memalloc_nofail_save/restore() functionality to XFS transaction
> > contexts. But until then, I don't think we need to worry about
> > syzkaller intentionally hitting this shutdown.
>
> Thanks Dave.
>
> In actual testing, x86 and arm64 devices hit this error much more
> easily when FAILSLAB is turned on. After some internal discussion, we
> would like to try a patch along these lines. Anyway, thank you for
> your reply.
What kernel is the patch against? It doesn't match a current TOT
kernel...
>
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index ceee27b70384..360304409c0c 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -435,6 +435,7 @@ xfs_iget_cache_hit(
> wake_up_bit(&ip->i_flags, __XFS_INEW_BIT);
> ASSERT(ip->i_flags & XFS_IRECLAIMABLE);
> trace_xfs_iget_reclaim_fail(ip);
> + error = -EAGAIN;
> goto out_error;
> }
Ok, I can see what you are suggesting here - it might work if we get
it right. :)
We don't actually want (or need) an unconditional retry. This will
turn persistent memory allocation failure into a CPU burning
livelock rather than -ENOMEM being returned. It might work for a
one-off memory failure, but it's not viable for the long-term failures
that tend to happen when the system goes deep into OOM territory.
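For reference, the proposed -EAGAIN feeds back into the existing retry
loop at the tail of xfs_iget(). Quoting roughly from a recent tree (so
the details may not match the tree your patch is against):

out_error_or_again:
	if (!(flags & XFS_IGET_INCORE) && error == -EAGAIN) {
		delay(1);
		goto again;
	}
	xfs_perag_put(pag);
	return error;

There's no bound on that loop, so a persistent allocation failure just
keeps bouncing around it instead of ever returning -ENOMEM.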
It also ignores the fact that we can return ENOMEM without
consequences from this path if we are not in a transaction - any
pathwalk lookup can have ENOMEM safely returned to it, and that will
propagate the error to userspace. Same with bulkstat lookups, etc.
So we still want them to fail with ENOMEM, not retry indefinitely.
Likely what we want to do is add conditions to the xfs_iget() lookup
tail to detect ENOMEM when tp != NULL. In that case, we can then run
memalloc_retry_wait(GFP_NOFS) before retrying the lookup. That's in
line with what we do in other places that cannot tolerate allocation
failure (e.g. kmem_alloc(), xfs_buf_alloc_pages()) so it may make
sense to do the same thing here....
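Something like this completely untested sketch, assuming we keep the
existing out_error_or_again: structure (placement and the exact
condition are illustrative only):

out_error_or_again:
	if (!(flags & XFS_IGET_INCORE)) {
		if (error == -EAGAIN) {
			delay(1);
			goto again;
		}
		/*
		 * Transactional lookups cannot tolerate allocation
		 * failure, so wait for memory to become available and
		 * retry the lookup. Lookups without a transaction
		 * still fail with -ENOMEM as they do today.
		 */
		if (error == -ENOMEM && tp) {
			memalloc_retry_wait(GFP_NOFS);
			goto again;
		}
	}
	xfs_perag_put(pag);
	return error;

That keeps -ENOMEM propagation for pathwalk/bulkstat style lookups
while only transactional callers block and retry.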
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com