Date:   Thu, 12 Oct 2017 11:25:41 +0300
From:   Mathias Nyman <mathias.nyman@...ux.intel.com>
To:     David Laight <David.Laight@...LAB.COM>,
        Robin Murphy <robin.murphy@....com>,
        "mathias.nyman@...el.com" <mathias.nyman@...el.com>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
Cc:     "linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "angelsl@...4.sg" <angelsl@...4.sg>
Subject: Re: [PATCH] xhci: Cope with VIA VL805 readahead

On 11.10.2017 18:46, David Laight wrote:
> From: Mathias Nyman
>> Sent: 11 October 2017 15:41
> ..
>> If possible I'd like to try and find some other solution before chopping the Segment
>> size to smaller than 256.
>> I think that your first proposal of adding the guard page to the segment pool could be an option.
>
> Would be a waste of a page - which could be used for a lot of extra TRB.

Yes, for VIA VL805 it would.
Reducing the number of TRBs per segment isn't ruled out; I'm just trying
to see if we can find a different solution first.

> IIRC the rings used to have 16 TRB - that caused some serious problems,
> but I can't quite remember all of them.

It used to be 64 TRBs. The issues were:
- the event ring filled up before we could handle the events.
- dynamic ring expansion was needed more frequently.

> I don't remember anything that made 256 'good' - except that it was a 4k page.

With 256 TRBs per segment:
- the event ring doesn't fill up.
- a segment uses the whole page.
- link TRBs are less frequent, so there are fewer 64k alignment issues with possible bounce buffers.
- debugging is easier: I can see directly from the offset how close to the end of the ring we are,
  just like when looking at the dmesg of this case.

There are also future plans to index TRBs; converting from a DMA address to an
index is really easy if the number of TRBs is a power of two.
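For illustration, the power-of-two conversion could look like the sketch below. This is a hypothetical example, not actual xHCI driver code; trb_index() is an invented helper, and the 16-byte TRB size and 256 TRBs per segment just mirror the layout discussed above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: with a power-of-two number of TRBs per segment,
 * the index of a TRB is the DMA offset from the segment base, divided
 * by the TRB size, masked to the segment size.  No division by a
 * non-constant and no search through the ring is needed. */
#define TRB_SIZE          16u   /* xHCI TRBs are 16 bytes */
#define TRBS_PER_SEGMENT  256u  /* must be a power of two for the mask */

static unsigned int trb_index(uint64_t seg_dma, uint64_t trb_dma)
{
	return (unsigned int)((trb_dma - seg_dma) / TRB_SIZE) &
	       (TRBS_PER_SEGMENT - 1);
}
```

The same offset arithmetic is what makes the debugging point above work: the low bits of the DMA address directly tell you how far into the segment a TRB sits.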

>
> Did you add something to copy badly fragmented buffers to get around the
> problem with misaligned LINKs.  ISTR my original fix didn't work for requests
> for very long disk transfers.

Yes, xhci_align_td() cuts the data length of the TRB before a link TRB so that
it ends at a packet boundary. If there is no packet boundary, we use a bounce buffer.
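The trimming idea can be sketched as follows. This is a simplified illustration of the principle only; trim_to_packet_boundary() is a made-up helper, and the real xhci_align_td() also handles the bounce-buffer copy and TRB bookkeeping.

```c
#include <assert.h>

/* Simplified sketch: shorten the data mapped before a link TRB so it
 * ends on a max-packet boundary.  A result of 0 means no packet
 * boundary fits, which is the case where the real driver falls back
 * to a bounce buffer instead. */
static unsigned int trim_to_packet_boundary(unsigned int len,
					    unsigned int max_packet)
{
	return len - (len % max_packet);
}
```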

-Mathias
