Message-Id: <200706120016.58317.ak@suse.de>
Date: Tue, 12 Jun 2007 00:16:58 +0200
From: Andi Kleen <ak@...e.de>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>,
Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, gregkh@...e.de, muli@...ibm.com,
asit.k.mallick@...el.com, suresh.b.siddha@...el.com,
arjan@...ux.intel.com, ashok.raj@...el.com, shaohua.li@...el.com,
davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
> If the only option is to panic then something's busted. If it's network IO
> then there should be a way of dropping the frame. If it's disk IO then we
> should report the failure and cause an IO error.
A block IO error is basically catastrophic for the system too. There isn't
really a concept of a "temporary IO error that will resolve itself" in Unix.
There are still lots of pci_map_single() users that unfortunately don't check
the return value. That is mostly in old drivers; it is generally caught in
reviews now. But even then there is no guarantee that these rarely used,
likely untested error handling paths actually work.
The alternative is writing out random junk, which is somewhat risky.
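
For illustration, a minimal sketch of what such a check looks like with
today's generic DMA API (the modern equivalent of pci_map_single()); the
function name and parameters are made up for the example:

#include <linux/dma-mapping.h>

/*
 * Illustrative only: map one buffer for DMA and propagate failure to
 * the caller instead of handing the device a bogus address.  The
 * function name and parameters are hypothetical.
 */
static int example_map_buffer(struct device *dev, void *buf, size_t len,
			      dma_addr_t *handle)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr)) {
		/* IOMMU space exhausted: drop the frame / fail the I/O */
		return -ENOMEM;
	}

	*handle = addr;
	return 0;
}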
Over time we fixed at least all the pci_map_sg() callers to do the checks
correctly. When I originally wrote the IOMMU code this wasn't the case, and
it destroyed several file systems on test systems due to IOMMU leaks in
drivers (writing junk over the superblock when the IOMMU is full doesn't
make mount happy after the next reboot).
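
The scatterlist path is similar: dma_map_sg() (the successor of
pci_map_sg()) returns 0 when the mapping fails, so a driver can back out
cleanly instead of touching the hardware. A rough sketch, with made-up
names:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Illustrative only: dma_map_sg() returns the number of DMA segments
 * actually created, or 0 on failure.  Names are hypothetical.
 */
static int example_map_sg(struct device *dev, struct scatterlist *sgl,
			  int nents, enum dma_data_direction dir)
{
	int mapped = dma_map_sg(dev, sgl, nents, dir);

	if (!mapped) {
		/* Nothing was mapped: fail the request cleanly rather
		 * than let the device scribble over stale addresses. */
		return -ENOMEM;
	}

	/*
	 * Program the hardware with "mapped" entries; dma_unmap_sg()
	 * later still takes the original "nents".
	 */
	return mapped;
}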
Because of these experiences I'm more inclined towards the panic option,
although x86-64 defaults to not panicking these days.
It would be much better if sleeping were allowed, but that is hard.
-Andi