Message-ID: <063D6719AE5E284EB5DD2968C1650D6DD0090E41@AcuExch.aculab.com>
Date: Wed, 11 Oct 2017 09:23:03 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Robin Murphy' <robin.murphy@....com>,
"mathias.nyman@...el.com" <mathias.nyman@...el.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
CC: "linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"angelsl@...4.sg" <angelsl@...4.sg>
Subject: RE: [PATCH] xhci: Cope with VIA VL805 readahead
From: Robin Murphy
> Sent: 10 October 2017 19:09
>
> The VIA VL805 host controller is well-known for causing problems on
> systems with IOMMUs enabled, ranging from triggering endless streams of
> fault messages to locking itself up completely. It appears that the root
> of the problem might be an over-aggressive prefetching of TRBs, wherein
> consuming commands near the end of a queue segment causes it to read off
> the end of the segment, even across a page boundary. This blows up when
> DMA mapping ops are backed by an IOMMU, since there is no guarantee that
> addresses outside the allocated segment are accessible at all.
>
> Some trial-and-error investigation reveals that we can avoid such
> cross-page reads by not using the last few TRBs in a segment; to that
> end, factor out the implicit index of the end-of-segment link TRB, and
> implement a quirk to move it slightly further forward when necessary.

Does this fix all of your problems?
Or is there a second issue when the IOMMU is disabled?
...
> +unsigned int xhci_segment_link_idx(struct xhci_hcd *xhci)
> +{
> +	if (xhci->quirks & XHCI_READAHEAD_QUIRK)
> +		return TRBS_PER_SEGMENT - 4;
> +
> +	return TRBS_PER_SEGMENT - 1;
> +}

There is no point calculating this every time it is needed.
Save the value in the xhci structure.
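Something like this (untested sketch; the field name link_trb_idx is
illustrative, not taken from the patch):

	/* In struct xhci_hcd: cache the per-segment link TRB index */
	unsigned int	link_trb_idx;

	/* Once at init time, after the quirks have been determined */
	xhci->link_trb_idx = (xhci->quirks & XHCI_READAHEAD_QUIRK) ?
			TRBS_PER_SEGMENT - 4 : TRBS_PER_SEGMENT - 1;

Then the hot paths just read xhci->link_trb_idx instead of making a
function call each time.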
I wonder whether it is actually worth just setting TRBS_PER_SEGMENT
to 252 while still allocating a full page (256 TRBs), i.e. doing it
unconditionally for all devices.
I suspect the performance drop would be immeasurably small.
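Roughly (sketch only, assuming TRB_SEGMENT_SIZE is decoupled from
TRBS_PER_SEGMENT rather than derived from it as it is today):

	#define TRBS_PER_SEGMENT	252	/* last 4 slots never used */
	/* Keep allocating (and DMA-mapping) a full page per segment so
	 * any controller readahead beyond the link TRB still lands
	 * inside the mapping. */
	#define TRB_SEGMENT_SIZE	(256 * 16)

The four unused TRBs per segment would be the only cost.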
David