Message-ID: <X/dS37kyW+jf4gg/@kroah.com>
Date:   Thu, 7 Jan 2021 19:28:47 +0100
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     Wen Yang <wenyang@...ux.alibaba.com>,
        Christian Brauner <christian@...uner.io>
Cc:     Sasha Levin <sashal@...nel.org>,
        Xunlei Pang <xlpang@...ux.alibaba.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 4.9 00/10] fix a race in release_task when flushing
 the dentry

On Fri, Jan 08, 2021 at 12:21:38AM +0800, Wen Yang wrote:
> 
> 
> On 2021/1/7 8:17 PM, Greg Kroah-Hartman wrote:
> > On Thu, Jan 07, 2021 at 03:52:12PM +0800, Wen Yang wrote:
> > > Dentries such as /proc/<pid>/ns/ have the DCACHE_OP_DELETE flag, so they
> > > should be deleted when the process exits.
> > > 
> > > Suppose the following race appears:
> > > 
> > > release_task                 dput
> > > -> proc_flush_task
> > >                               -> dentry->d_op->d_delete(dentry)
> > > -> __exit_signal
> > >                               -> dentry->d_lockref.count--  and return.
> > > 
> > > In proc_flush_task(), if another process is still using this dentry, it will
> > > not be deleted. At the same time, in dput(), d_op->d_delete() can be executed
> > > before __exit_signal() (the pid has not been unhashed yet), so d_delete()
> > > returns false and this dentry still cannot be deleted.
> > > 
> > > This dentry will stay cached forever (although its count is 0 and the
> > > DCACHE_OP_DELETE flag is set), its parent dentry will also stay cached, and
> > > these dentries can only be deleted when drop_caches is triggered manually.
> > > 
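A minimal user-space sketch of the interleaving above, using made-up flags
rather than the real dentry/dcache machinery, shows why neither path ends up
dropping the entry:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the dentry: a refcount plus "dead" and "cached" flags. */
struct obj {
	int  refcount;	/* cf. dentry->d_lockref.count          */
	bool dead;	/* set once __exit_signal() has run     */
	bool cached;	/* still held in the cache (the dcache) */
};

int main(void)
{
	struct obj o = { .refcount = 1, .dead = false, .cached = true };

	/* CPU A: release_task() -> proc_flush_task():
	 * another process still holds a reference, so the flush is skipped. */
	if (o.refcount == 0)
		o.cached = false;

	/* CPU B: dput() -> d_op->d_delete():
	 * the pid still looks alive (dead not set yet), so keep the entry. */
	bool drop = o.dead;

	/* CPU A: __exit_signal() finally marks the pid dead -- too late,
	 * the delete decision was already made above. */
	o.dead = true;

	/* CPU B: d_lockref.count-- and return. */
	o.refcount--;
	if (drop)
		o.cached = false;

	/* Prints refcount=0 dead=1 cached=1: an unreferenced, deletable
	 * entry that nothing will ever flush until drop_caches. */
	printf("refcount=%d dead=%d cached=%d\n", o.refcount, o.dead, o.cached);
	return 0;
}
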
> > > This will result in wasted memory. What is more troublesome is that these
> > > dentries hold a reference to the pid; according to commit f333c700c610
> > > ("pidns: Add a limit on the number of pid namespaces"), if the pid cannot be
> > > released, it may become impossible to create a new pid_ns.
> > > 
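A rough way to see that limit in practice (a sketch, not from this thread; the
sysctl /proc/sys/user/max_pid_namespaces comes from the ucount work in
f333c700c610, and the exact errno may vary): once leaked pids pin the
per-user-namespace count at the limit, creating another pid namespace fails.

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Needs CAP_SYS_ADMIN. On an affected machine this is expected to keep
 * failing (typically with ENOSPC) even though no live tasks remain in
 * the old namespaces, because the cached dentries still pin their pids. */
int main(void)
{
	if (unshare(CLONE_NEWPID) == -1)
		fprintf(stderr, "unshare(CLONE_NEWPID): %s\n", strerror(errno));
	else
		puts("new pid namespace created");
	return 0;
}
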
> > > This issue was introduced by 60347f6716aa ("pid namespaces: prepare
> > > proc_flush_task() to flush entries from multiple proc trees"), exposed by
> > > f333c700c610 ("pidns: Add a limit on the number of pid namespaces"), and then
> > > fixed by 7bc3e6e55acf ("proc: Use a list of inodes to flush from proc").
> > 
> > Why are you only submitting a series for 4.9 and 4.19? What about 4.14?
> > We can't have users move to a newer kernel and then experience old bugs,
> > right?
> > 
> Okay, the corresponding patches for 4.14 will be ready later.

Note for some reason you didn't cc: the stable list for these patches :(

> > But the larger question is why are you backporting a whole new feature
> > here?  Why is CLONE_PIDFD needed?  That feels really wrong...
> > 
> 
> The reason for backporting CLONE_PIDFD is that 7bc3e6e55acf ("proc: Use a
> list of inodes to flush from proc") relies on wait_pidfd.lock. There are
> indeed many associated modifications here. We are still testing it; please
> review the code carefully.
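
A self-contained analogy of that coupling (a user-space sketch with invented
names, not the kernel or the backport code): the list of inodes to flush hangs
off the pid and is protected by the lock embedded in the pidfd wait queue, so
the flush rework cannot be carried without also carrying the wait-queue
plumbing it locks against.

#include <pthread.h>
#include <stdio.h>

/* Analogy for wait_pidfd: a wait queue whose embedded lock gets reused. */
struct waitq {
	pthread_mutex_t lock;	/* cf. wait_pidfd.lock */
};

/* Analogy for struct pid after the upstream fix: the list of proc inodes
 * to flush is guarded by the wait queue's own lock, so the wait queue has
 * to exist before the list can. */
struct pid_like {
	struct waitq wait_pidfd;
	const char  *inodes[4];	/* stand-in for the hlist of proc inodes */
	int          n_inodes;
};

/* cf. the flush path in release_task(): walk the list under that lock. */
static void flush_inodes(struct pid_like *p)
{
	pthread_mutex_lock(&p->wait_pidfd.lock);
	for (int i = 0; i < p->n_inodes; i++)
		printf("invalidate %s\n", p->inodes[i]);
	p->n_inodes = 0;
	pthread_mutex_unlock(&p->wait_pidfd.lock);
}

int main(void)
{
	struct pid_like p = {
		.wait_pidfd = { .lock = PTHREAD_MUTEX_INITIALIZER },
		.inodes     = { "/proc/42/ns/pid", "/proc/42/status" },
		.n_inodes   = 2,
	};
	flush_inodes(&p);
	return 0;
}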

Is the only "issue" here wasted memory?  Will it eventually be freed
anyway even if you do not echo to the proc file to flush caches?

You mention the inability to create a new pid for a specific namespace;
is that really a problem?  Shouldn't the code handle such issues
normally?  What breaks without these changes?

I think at this point it might just be time for you to move to a newer
kernel release, as adding a whole new userspace feature for this feels
really, really odd.

What is preventing you from doing that today?  What ties you to older
kernels and keeps you from moving forward?

thanks,

greg k-h
