Message-Id: <200908281507.11143.oliver@neukum.org>
Date: Fri, 28 Aug 2009 15:07:10 +0200
From: Oliver Neukum <oliver@...kum.org>
To: David VomLehn <dvomlehn@...co.com>
Cc: linux-usb@...r.kernel.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, greg@...ah.com
Subject: Re: [PATCH 1/1, v2] usb-use-kfifo-to-buffer-usb-generic-serial-writes.patch
On Thursday, 27 August 2009 19:22:18, David VomLehn wrote:
> On Thu, Aug 27, 2009 at 10:14:53AM +0200, Oliver Neukum wrote:
> > > + /* send the data out the bulk port */
> > > + result = usb_submit_urb(port->write_urb, GFP_ATOMIC);
> > > + if (result) {
> > > + dev_err(&port->dev,
> > > + "%s - failed submitting write urb, error %d\n",
> > > + __func__, result);
> > > + /* don't have to grab the lock here, as we will
> > > + retry if != 0 */
> > > + port->write_urb_busy = 0;
> > > + status = result;
> >
> > This looks deficient. If the first part of a transmission fails,
> > the fifo's remaining content should be discarded and if possible
> > an error returned to user space.
>
> I thought about that, and perhaps I don't know enough about USB
> failure modes, but it's not really clear to me that the FIFO's contents
> should be tossed. The pro argument is that losing more data may make it
> clearer that an error occurred, but the con is that this may be a transient
> error and why should we discard perfectly good data? I'm definitely open to
> discussion on this.
You may be writing a command. If you clip out a sequence in the middle you
might be sending an altered command without telling user space.
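The discard-on-error policy being argued for here can be modeled in plain userspace C. This is a sketch of the semantics only, not driver code; all names (`port_model`, `submit`, `start_write`) are illustrative stand-ins for the patch's actual identifiers:

```c
#include <errno.h>
#include <stddef.h>

/* Userspace model: on a failed URB submission, discard the fifo's
 * remaining content and latch the error so user space sees it. */
struct port_model {
	char fifo[64];
	size_t fifo_len;	/* bytes still buffered */
	int status;		/* first error seen, reported on next write */
};

/* stand-in for usb_submit_urb(); 'fail' forces an error */
static int submit(int fail)
{
	return fail ? -EPERM : 0;
}

static void start_write(struct port_model *p, int fail)
{
	int result = submit(fail);
	if (result) {
		/* drop the rest so a clipped command sequence is not
		 * silently resumed mid-stream */
		p->fifo_len = 0;
		p->status = result;	/* surface the error to user space */
	}
}
```

On success the buffered data is left alone; on failure nothing after the break point is ever sent, so user space cannot receive an altered command without being told.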
> > [..]
> >
> > > @@ -487,8 +515,8 @@ void usb_serial_generic_write_bulk_callback(struct urb *urb)
> > >  		port->urbs_in_flight = 0;
> > > spin_unlock_irqrestore(&port->lock, flags);
> > > } else {
> > > - /* Handle the case for single urb mode */
> > > port->write_urb_busy = 0;
> > > + usb_serial_generic_write_start(port, 0);
> >
> > This is a problem. This may fail due to a system suspend.
> > In that case you cannot depend on the next write restarting
> > IO. You need to restart IO in resume()
>
> It's not so clear that this is a problem. Serial output is not idempotent
> the way disk output is; you trade the risk of dropped data for the risk of
> duplicated data. I have a belief, open to challenge, that users would
> regard duplicated data as more confusing than dropped data.
How can this duplicate data? If you know that you need to call
usb_serial_generic_write_start, you also know that no further
transfer will occur. If you don't test and restart on resume, you
get this scenario:
user space writes to device
user space waits for answer
kernel transfers the first buffer
system suspension - the second buffer cannot be transferred,
as usb_serial_generic_write_start gets -EPERM
system resumption - the second buffer is never transmitted,
user space times out and reports an error
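The scenario above can be modeled the same way. In this userspace sketch the names are illustrative; `suspended` stands in for whatever state makes usb_submit_urb() return -EPERM, and `resume_dev` plays the role of the driver's resume() callback:

```c
#include <errno.h>
#include <stddef.h>

/* Userspace model of the suspend/resume scenario. */
struct dev_model {
	int suspended;
	size_t pending;	/* bytes buffered but not yet submitted */
	size_t sent;
};

static int start_tx(struct dev_model *d)
{
	if (d->suspended)
		return -EPERM;	/* submission fails during suspension */
	d->sent += d->pending;
	d->pending = 0;
	return 0;
}

/* Without this restart, the second buffer would sit in the fifo
 * forever and user space would time out waiting for an answer. */
static void resume_dev(struct dev_model *d)
{
	d->suspended = 0;
	if (d->pending)
		start_tx(d);	/* restart IO in resume() */
}
```

The point is that only resume() knows when submission becomes possible again; waiting for the next write from user space is not enough, because user space may be blocked waiting for the reply to data that is still stuck in the fifo.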
> > [..]
> >
> > > @@ -96,6 +98,8 @@ struct usb_serial_port {
> > > unsigned char *bulk_out_buffer;
> > > int bulk_out_size;
> > > struct urb *write_urb;
> > > + struct kfifo *write_fifo;
> > > + spinlock_t write_fifo_lock;
> >
> > Do you really need a separate lock?
>
> No. I could, theoretically, grab the BKL, but why hold up anything you
> don't have to? If someone wants to make an argument based on cacheline
> thrashing, they could certainly do so, but the data rates being used here
> are relatively low.
Is there any lock the usb serial subsystem provides you could use?
Regards
Oliver
--