Date:	Mon, 20 Jul 2015 20:07:33 +0200
From:	Sven Brauch <mail@...nbrauch.de>
To:	Johan Hovold <johan@...nel.org>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
	Oliver Neukum <oliver@...kum.org>,
	Peter Hurley <peter@...leysoftware.com>,
	Toby Gray <toby.gray@...lvnc.com>, linux-usb@...r.kernel.org,
	linux-serial@...r.kernel.org
Subject: Re: [PATCH] Fix data loss in cdc-acm

On 20/07/15 19:25, Johan Hovold wrote:
> What kernel version are you using?
I'm using linux 4.1.2.

> The idea of adding another layer of buffering in the cdc-acm driver has
> been suggested in the past but was rejected (or at least questioned).
> See for example this thread:
> 
> 	https://lkml.kernel.org/r/20110608164626.22bc893c@lxorguk.ukuu.org.uk
Yes, that is indeed pretty much the same problem and the same solution.
Answering the questions brought up in that thread:

> a) Why is your setup filling 64K in the time it takes the throttle
> response to occur
As far as I understand, the throttle happens only when there's less than
128 bytes of free space in the tty buffer. Data can already be lost
before the tty even decides it should start throttling, simply because
the throttle threshold is smaller than the amount of data potentially in
each urb. Also (excuse my cluelessness) it seems that exactly when the
throttling happens depends on some scheduling "jitter" as well.
Additionally, the response of the cdc_acm driver to a throttle request
is not very prompt; it might have a queue of up to 16kB (16 urbs) pending.

> b) Do we care (is the right thing to do to lose bits anyway at
> that point)
This I cannot answer; I don't know enough about the architecture or the
standards. I can only say that in my case there is a lot of loss; this
is not an issue which shows up after hours under heavy load, it happens
reproducibly after just a few seconds.

> The tty buffers are quite large these days, but could possibly be bumped
> further if needed to give the ldisc some more time to throttle the
> device at very high speeds.
I do not like this solution. It would again be based on luck, and you
would still be unable to rely on the delivery guarantee made by the USB
stack (at least when using bulk transfers).
My suggestion instead stops the host system from accepting any more data
from the device when its buffers are full, forcing the device to wait
before sending out more data (which many kinds of devices might very
well be able to do).

Also note that this patch does not introduce an extra layer of
buffering. The buffers are already there; this change just alters the
process which decides when to submit the buffers to the tty, and when to
free them for more input data from the device.

Sven



