Message-ID: <20080622205244.261a787d@tleilax.poochiereds.net>
Date:	Sun, 22 Jun 2008 20:52:44 -0400
From:	Jeff Layton <jlayton@...hat.com>
To:	Neil Brown <neilb@...e.de>
Cc:	"J. Bruce Fields" <bfields@...ldses.org>,
	linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH - take 2] knfsd: nfsd: Handle ERESTARTSYS from syscalls.

On Mon, 23 Jun 2008 10:20:59 +1000
Neil Brown <neilb@...e.de> wrote:

> On Thursday June 19, jlayton@...hat.com wrote:
> > > 
> > > I wonder why we have all this mucking about with signal masks anyway.
> > > Anyone have any ideas about what it actually achieves?
> > > 
> > 
> > HCH asked me the same question when I did the conversion to kthreads.
> > My interpretation (based on guesswork here) was that we wanted to
> > distinguish between SIGKILL and other allowed signals. A SIGKILL is
> > allowed to interrupt the underlying I/O, but other signals should not.
> 
> Yes, that seems to be the intent of the code, but does it really make
> any sense?
>
> Traditionally, filesystem IO is immune to signals so any signal
> arriving while nfsd is doing filesystem IO is irrelevant.
> I guess network IO can be interrupted, and nfsd is expected to write
> to the network during this time when all-but-SIGKILL is ignored.  So
> maybe the intent was to allow e.g. SIGINT to shut down cleanly, and
> SIGKILL to kill more promptly even if the network was sluggish...
> 
> However I don't think we ever blocked on network write.  And currently
> we try very hard to ensure that there is enough memory to hold a reply
> before we start on a request, so it should definitely never block.
> 
> So until OCFS2 came along, the distinction seems to have been
> meaningless, which is why I questioned it.
> 
> Even with OCFS2 I'm not sure it makes much sense.
> 
> Any time that we shut down NFSD, we really want it to shut down
> promptly.  If SIGINT isn't certain to achieve that, we should just be
> sending SIGKILL.
> Conversely, if it does make sense to keep the two signals, we should
> be educating init.d scripts to make use of them. e.g.
> 
>   killall -2 nfsd
>   sleep 0.1
>   if [ `cat /proc/fs/nfsd/threads` -gt 0 ]
>   then sleep 5
>        killall -9 nfsd
>   fi
> 
> or something more clever.
> 
> I think my leaning is just to remove the distinction.  About the worst
> that can happen if the filesystem decides to respond to the signal is
> that you get a short write or a short read, both of which should be
> correctly handled by the client.
> 

Good points all around. If the sigmask juggling makes no sense, then
I'm OK with removing it. I'd prefer that we simplify the code rather
than trying to get clever with init scripts anyway...
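
For anyone following along, the juggling in question is roughly this
pattern in nfsd() (a simplified sketch from memory, not the exact
fs/nfsd/nfssvc.c code):

#define SHUTDOWN_SIGS   (sigmask(SIGKILL) | sigmask(SIGHUP) | \
                         sigmask(SIGINT)  | sigmask(SIGQUIT))
#define ALLOWED_SIGS    (sigmask(SIGKILL))

static void
nfsd(struct svc_rqst *rqstp)
{
        sigset_t shutdown_mask, allowed_mask;
        int err;

        /* shutdown_mask blocks everything except the shutdown signals;
         * allowed_mask blocks everything except SIGKILL */
        siginitsetinv(&shutdown_mask, SHUTDOWN_SIGS);
        siginitsetinv(&allowed_mask, ALLOWED_SIGS);

        for (;;) {
                /* while waiting for work, any shutdown signal gets through */
                sigprocmask(SIG_SETMASK, &shutdown_mask, NULL);

                while ((err = svc_recv(rqstp, 60*60*HZ)) == -EAGAIN)
                        ;
                if (err < 0)
                        break;

                /* while handling the request, only SIGKILL is allowed
                 * to interrupt the underlying I/O */
                sigprocmask(SIG_SETMASK, &allowed_mask, NULL);
                svc_process(rqstp);
        }
}

i.e. SIGHUP/SIGINT/SIGQUIT only get through while we're idle in
svc_recv(), while SIGKILL is also allowed to interrupt whatever I/O
happens under svc_process().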

The fact that SuSE (and possibly others) have used SIGKILL to take down
nfsd probably means that removing the distinction is a non-issue (modulo
the OCFS2 issue your patch fixes).

> 
> This brings up an interesting issue for your kthread conversion.
> The new code uses kthread_stop instead of sending a signal.  That
> means it will block until the threads have completed a request and
> exited.
> If some thread is in a filesystem that is waiting on network IO and
> expects to get a signal if it needs to stop in a hurry, then the
> shutdown could block for a long time.  That might not be what we
> want.
> 
> Particularly as, if we are blocked in kthread_stop, then no-one else can
> do a kthread_stop....
> 
> I think there are issues here that don't yet have obvious answers.
> Other opinions most welcome.
> 

The latest patch doesn't use kthread_stop with nfsd. I figured it was
best to avoid having multiple takedown methods, and since we're using
signals here anyway, I just kept signaling as the only way to take down
nfsd. Plus, as you mention, all kthread_stop calls are serialized, which
could mean that the nfsd threads would take a long time to come down on
a busy box with a lot of them.
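
To illustrate the serialization: kthread_stop() currently funnels every
caller through one global lock and a shared completion, along these
lines (simplified sketch, not the exact kernel/kthread.c):

#include <linux/kthread.h>
#include <linux/mutex.h>
#include <linux/completion.h>
#include <linux/sched.h>

/* only one stop can be in flight at a time, so a kthread that is
 * stuck in its main loop stalls every other caller of kthread_stop() */
static DEFINE_MUTEX(kthread_stop_lock);
static struct kthread_stop_info {
        struct task_struct *k;
        int err;
        struct completion done;
} kthread_stop_info;

int kthread_stop(struct task_struct *k)
{
        int ret;

        mutex_lock(&kthread_stop_lock);

        /* keep the task around until it signals the completion */
        get_task_struct(k);

        init_completion(&kthread_stop_info.done);
        smp_wmb();

        /* this is what makes kthread_should_stop() return true
         * in the target thread */
        kthread_stop_info.k = k;
        wake_up_process(k);
        put_task_struct(k);

        /* blocks until the target exits; nobody else can even start
         * a stop until we drop the mutex below */
        wait_for_completion(&kthread_stop_info.done);
        kthread_stop_info.k = NULL;
        ret = kthread_stop_info.err;

        mutex_unlock(&kthread_stop_lock);
        return ret;
}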

As a side note, I'm not terribly thrilled with how kthread_stop is
implemented. I worry that a stuck kthread would block unrelated
kthreads from coming down. I did a patch six months or so ago to make
it possible for kthread_stop calls to be done in parallel. The downside
was that it added a new field to the task_struct (which is already too
bloated, IMO). It might be worth resurrecting that at some point
(possibly making the new field a union with another field that's unused
in kthreads), or considering a different approach to parallelizing
kthread_stop.
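
For the record, the rough shape of that idea was something like the
following. This is a hypothetical sketch, not the actual patch; the
kthread_stop_done/kthread_stop_err fields and the
set_kthread_should_stop() helper are made-up names for illustration:

#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/sched.h>

/* Assumes task_struct grows two fields used only by kthreads:
 *
 *        struct completion *kthread_stop_done;
 *        int kthread_stop_err;
 *
 * (ideally unioned with something unused in kthreads so we don't
 * bloat task_struct any further) */

int kthread_stop(struct task_struct *k)
{
        DECLARE_COMPLETION_ONSTACK(done);
        int ret;

        get_task_struct(k);

        /* publish the per-task completion before flagging the stop */
        k->kthread_stop_done = &done;
        smp_wmb();
        set_kthread_should_stop(k);     /* hypothetical per-task stop flag */
        wake_up_process(k);

        /* waits only for *this* kthread; other kthread_stop() callers
         * proceed independently instead of queueing on a global mutex */
        wait_for_completion(&done);
        ret = k->kthread_stop_err;
        put_task_struct(k);

        return ret;
}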

-- 
Jeff Layton <jlayton@...hat.com>