Message-ID: <Pine.LNX.4.64.0612101705180.12500@woody.osdl.org>
Date:	Sun, 10 Dec 2006 17:17:34 -0800 (PST)
From:	Linus Torvalds <torvalds@...l.org>
To:	Chris Wedgwood <cw@...f.org>
cc:	Daniel Drake <dsd@...too.org>, Adrian Bunk <bunk@...sta.de>,
	Sergio Monteiro Basto <sergio@...giomb.no-ip.org>,
	Daniel Ritz <daniel.ritz@....ch>,
	Jean Delvare <khali@...ux-fr.org>,
	Bjorn Helgaas <bjorn.helgaas@...com>,
	Brice Goglin <brice@...i.com>,
	"John W. Linville" <linville@...driver.com>,
	Bauke Jan Douma <bjdouma@...all.nl>,
	Tomasz Koprowski <tomek@...rowski.org>, gregkh@...e.de,
	linux-kernel@...r.kernel.org, linux-pci@...ey.karlin.mff.cuni.cz
Subject: Re: RFC: PCI quirks update for 2.6.16



On Sun, 10 Dec 2006, Chris Wedgwood wrote:
> 
> Well, it's not clear to me that reverting to a quirk that pokes *all*
> VIA pci devices on all machines is safe; it's not even clear that it
> was a good idea to merge this.

I'm just saying that the stable tree should never merge anything that can 
possibly cause a regression. 

> Well, I think the current 2.6.16.x release series is already broken on
> some other subset of hardware.

That's not the point. If it was broken on some subset of hardware, then
as long as that's not a REGRESSION from 2.6.16, it's better than
_changing_ the breakage. And no, it doesn't really matter how many
machines are affected (ie it's not better to have a "smaller" set of
cases that break, unless it's a _strict_ subset).
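
Just to make the "strict subset" condition concrete, here's a throwaway
sketch (Python, with made-up machine names rather than anything from an
actual report):

    # Hypothetical example: which machines each kernel breaks on.
    broken_before = {"boxA", "boxB", "boxC"}   # broken in plain 2.6.16
    broken_after  = {"boxB", "boxD"}           # broken with the new quirk

    # A "smaller" set of breakage is only acceptable if it is a strict
    # subset of the old one, i.e. nothing that used to work now breaks.
    regressions = broken_after - broken_before
    print(regressions)                   # {'boxD'}: a regression
    print(broken_after < broken_before)  # False: not a strict subset

Two machines broken instead of three, and it's still worse, because
boxD used to work.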

The reason? It's better to be _dependable_ than to work on a maximum 
number of machines. This is why _regressions_ are always much worse than 
old bugs. It's much better to have "it didn't work before, and it still 
doesn't work" than to have "it used to work, but now it broke".

Because people for whom something used to work should always be able to 
update to a new kernel without having to constantly worry.

So for the _stable_ series, if you don't understand the problem 100%, and 
you don't know that something really fixes something and never causes 
regressions, such a patch simply SHOULD NOT be applied. It's that easy.

(And the argument that it "fixes more than it breaks" is total garbage,
for several reasons:

 - you don't actually know that. You may have a lot of reports about 
   breakage that you think will be fixed (so you _think_ it fixes a lot), 
   but by definition you won't have any clue AT ALL about how much it will 
   break, since nobody will have tested it. People whose machines weren't 
   broken before generally won't even bother to upgrade, so you'll find 
   out only much later.

 - machines that didn't use to work well before are much less important 
   than machines that worked fine. People don't _expect_ them to work, and 
   people don't have a history of them working. So if you fix ten machines 
   that didn't work before, but you break one that _did_ work before, 
   that's _still_ not actually a good deal, because angst-wise you actually 
   lost on it; the rough sketch below tries to make that trade-off 
   concrete.)
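
A back-of-the-envelope version of that trade-off, with weights that are
completely made up just to show the asymmetry (nothing measured here):

    # Made-up costs: a regression hurts far more than an old bug that
    # simply stays unfixed, because it breaks an expectation.
    COST_OLD_BUG    = 1    # "never worked, still doesn't"
    COST_REGRESSION = 20   # "used to work, now it doesn't"

    fixed_old_bugs  = 10
    new_regressions = 1

    delta = new_regressions * COST_REGRESSION - fixed_old_bugs * COST_OLD_BUG
    print(delta)  # 10 > 0: a net loss under these assumed weights

With any weighting that takes the angst factor seriously, "fixes ten,
breaks one" can still come out behind.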

So please revert anything that is even slightly open to debate in the 
stable series. The whole point of the stable series is to be _stable_, and 
regressions are bad.

			Linus
