Date:	Mon, 3 Dec 2007 11:27:15 +0100
From:	Andi Kleen <andi@...stfloor.org>
To:	Radoslaw Szkodzinski <lkml@...ralstorm.puszkin.org>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [feature] automatically detect hung TASK_UNINTERRUPTIBLE tasks

> Kernel waiting 2 minutes on TASK_UNINTERRUPTIBLE is certainly broken.

What should it do when the NFS server stops answering, or when the
network to a SAN RAID array located a few hundred km away develops
a hiccup? Or when the SCSI driver decides to do lengthy error
recovery? You could argue that error recovery is broken if it takes
longer than two minutes, but in practice these things are hard to test
and to fix.

> Yes, that's exactly why the patch is needed - to find the bugs and fix

The way to find those would be source auditing, not breaking
perfectly fine error-handling paths. Especially since this at least
traditionally hasn't been considered a bug, but a fundamental design
parameter of network/block/etc. file systems.

> CIFS and similar have to be fixed - it tends to lock the app
> using it, in unkillable state.

Actually, that's not true. You can umount -f and then kill the process,
at least for NFS and CIFS. I'm not sure the same holds for the other
network file systems, though.
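For the record, the escape hatch described above looks roughly like this (mount point and PID are hypothetical, and behavior varies by filesystem and kernel version):

```
# NFS server backing /mnt/nfs has gone away; pid 12345 is stuck in D state
umount -f /mnt/nfs    # force unmount; outstanding I/O is failed with errors
kill -9 12345         # the task, no longer blocked in the fs, can now exit
```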

You could in theory do TASK_KILLABLE for all block devices too (not 
a bad thing; I would love to have it). 

But it would be equivalent in effort (it has to patch all the same places
with similar code) to Suparna's big old fs AIO retry
patchkit, which never went forward because everyone was too worried
about excessive code impact. Maybe that has changed, maybe not ... 

And even then you would need to check that all possible error-handling
paths (scsi_error and the low-level drivers at least)
finish in less than two minutes.
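The kind of per-call-site conversion being discussed would, in the style of the TASK_KILLABLE patches circulating at the time, look something like this (driver names and helpers are hypothetical; only the wait_event/wait_event_killable pattern is real):

```c
static int my_dev_wait_for_io(struct my_dev *dev)
{
	/* Before: sleeps in D state, immune even to SIGKILL */
	/* wait_event(dev->wq, io_done(dev)); */

	/* After: still ignores ordinary signals, but a fatal signal
	 * (SIGKILL) wakes the task so it can exit */
	if (wait_event_killable(dev->wq, io_done(dev)))
		return -ERESTARTSYS;	/* killed; unwind the request */
	return 0;
}
```

Every caller up the stack must then handle the new error return, which is exactly the "patch all the same places" cost described above.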

> > > wild guesses. Only one way to get the real false positive percentage.
> > 
> > Yes let's break things first instead of looking at the implications closely.
> 
> Throwing _rare_ stack traces is not breakage. 120s task_uninterruptible

Sorry, that's totally bogus. Throwing a stack trace is the kernel
equivalent of sending an S.O.S.: it worries the user significantly,
taxes reporting resources, etc. In the interest of saving
everybody trouble, the kernel should only do that when it is really
sure something is truly broken.

> in the usual case (no errors) is already broken - there are no sane
> loads that can invoke that IMO.

You are wrong on that.

-Andi

