Date:   Thu, 30 Mar 2017 09:32:20 +0200
From:   Nicolai Stange <nicstange@...il.com>
To:     Johannes Berg <johannes@...solutions.net>
Cc:     Nicolai Stange <nicstange@...il.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        "Paul E.McKenney" <paulmck@...ux.vnet.ibm.com>,
        gregkh <gregkh@...uxfoundation.org>
Subject: Re: deadlock in synchronize_srcu() in debugfs?

Hi Johannes,

On Mon, Mar 27 2017, Johannes Berg wrote:

>> > Before I go hunting - has anyone seen a deadlock in
>> > synchronize_srcu() in debugfs_remove() before?
>> 
>> Not yet. How reproducible is this?
>
> So ... this turned out to be a livelock of sorts.
>
> We have a debugfs file (not upstream (yet?), it seems) that basically
> blocks reading data.
>
> At the point of system hanging, there was a process reading from that
> file, with no data being generated.
>
> A second process was trying to remove a completely unrelated debugfs
> file (*), with the RTNL held.

I wonder if holding the RTNL during the debugfs file removal is really
needed. I'll try to have a look in the next couple of days.

>
> A third and many other processes were waiting to acquire the RTNL.
>
> Obviously, in light of things like nfp_net_debugfs_tx_q_read(),
> wil_write_file_reset(), lowpan_short_addr_get() and quite a few more,
> nobody in the whole system can now remove debugfs files while holding
> the RTNL. Not sure how many people that affects, but it's IMHO a pretty
> major new restriction, and one that isn't even flagged at all.

To be honest, I didn't have this scenario, i.e. removing a debugfs file
under a lock, in mind when writing this removal protection series.

Thank you very much for your debugging work and for pointing me to this
sort of problem!

Summarizing, the problem is the call to the indefinitely blocking
synchronize_srcu() while a lock is held? I'll see whether I can ask
lockdep whether any lock is held and emit a warning in that case.


> Similarly, nobody should be blocking in debugfs files, like we did in
> ours, but also smsdvb_stats_read(), crtc_crc_open() look like they
> could block for quite a while.

Blocking in the debugfs files' fops should be fine by itself; that's
why SRCU is used for the removal protection.
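
For reference, the pattern is roughly the following (a simplified
sketch, not the actual fs/debugfs/file.c code; proxy_read() and
real_fops_read() are made-up names):

#include <linux/fs.h>
#include <linux/srcu.h>

DEFINE_SRCU(debugfs_srcu);	/* one srcu_struct for all of debugfs */

/* stands in for the file's real .read() implementation */
static ssize_t real_fops_read(struct file *filp, char __user *buf,
			      size_t count, loff_t *ppos);

static ssize_t proxy_read(struct file *filp, char __user *buf,
			  size_t count, loff_t *ppos)
{
	int idx;
	ssize_t ret;

	/* The file's fops run in an SRCU read-side critical section... */
	idx = srcu_read_lock(&debugfs_srcu);
	ret = real_fops_read(filp, buf, count, ppos);	/* may block */
	srcu_read_unlock(&debugfs_srcu, idx);

	return ret;
}

void remove_file(struct dentry *dentry)
{
	/*
	 * ... and removal waits for all of them to finish. An
	 * indefinitely blocking reader therefore stalls every
	 * removal in the system, related or not.
	 */
	synchronize_srcu(&debugfs_srcu);
}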


> Again, there's no warning here that blocking in debugfs files can now
> indefinitely defer completely unrelated debugfs_remove() calls in the
> entire system.

Yes, there's only one global srcu_struct for debugfs. So far this
hasn't been a problem, and if I understand things correctly, it's also
not the problem at hand? If it really becomes an issue, we can very
well introduce per-directory srcu_structs as you suggested.
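
(Roughly, and with made-up names: each debugfs directory would carry
its own srcu_struct,

	struct debugfs_dir_info {
		struct srcu_struct removal_srcu;	/* per directory */
	};

readers would enter the removal_srcu of the directory their file lives
in, and debugfs_remove() would then only synchronize_srcu() against
readers in the affected subtree.)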


> Overall, while I can solve this problem for our driver, possibly by
> making the debugfs file return some dummy data periodically if no real
> data exists, which may not easily be possible for all such files, I'm
> not convinced that all of this really is the right thing to actually
> impose.

No, I agree: imposing dummy data reads certainly isn't.

Let me have a look at
- whether holding the RTNL lock while removing the debugfs files is
  actually needed and
- whether there is an easy way to spot similar scenarios and emit
  a warning for them.

If this doesn't solve the problem, I'll have to think of a different way
to fix this...

> (*) before removing it we'd obviously wake up and thereby more or
> less terminate the readers first

With the current implementation, I can't see an easy way to identify
the tasks blocking on a particular debugfs file. But maybe this is
resolvable and would be the way to go here...


Thanks,

Nicolai
