Message-ID: <87ft6gbx3k.fsf@nanos.tec.linutronix.de>
Date: Wed, 14 Oct 2020 18:17:51 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Alan Stern <stern@...land.harvard.edu>
Cc: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Johan Hovold <johan@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-usb@...r.kernel.org,
Thomas Winischhofer <thomas@...ischhofer.net>,
"Ahmed S. Darwish" <a.darwish@...utronix.de>,
Mathias Nyman <mathias.nyman@...el.com>,
Valentina Manea <valentina.manea.m@...il.com>,
Shuah Khan <shuah@...nel.org>, linux-omap@...r.kernel.org,
Kukjin Kim <kgene@...nel.org>,
Krzysztof Kozlowski <krzk@...nel.org>,
linux-arm-kernel@...ts.infradead.org,
linux-samsung-soc@...r.kernel.org, Felipe Balbi <balbi@...nel.org>,
Duncan Sands <duncan.sands@...e.fr>
Subject: Re: [patch 03/12] USB: serial: keyspan_pda: Consolidate room query
On Wed, Oct 14 2020 at 12:14, Alan Stern wrote:
> On Wed, Oct 14, 2020 at 04:52:18PM +0200, Thomas Gleixner wrote:
>> From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
>>
>> Having two copies of the same code doesn't make the code more readable and
>> allocating a buffer of 1 byte for a synchronous operation is a pointless
>> exercise.
>
> Not so. In fact, it is required, because a portion of a structure
> cannot be mapped for DMA unless it is aligned at a cache line boundary.
>
>> Add a byte buffer to struct keyspan_pda_private which can be used
>> instead. The buffer is only used in open() and tty->write().
>
> This won't work.
Ok.
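
For reference, a rough, untested sketch of the DMA-safe pattern Alan is
describing: the transfer buffer is kmalloc()'ed on its own (kmalloc()
returns DMA-safe, suitably aligned memory) instead of pointing
usb_control_msg() at a byte embedded in the driver's private struct,
which may share a cache line with unrelated fields. The helper name
below is made up for illustration, not the actual keyspan_pda code:

static int keyspan_pda_get_write_room(struct usb_serial *serial, u8 *room)
{
	u8 *buf;
	int res;

	/* Separate allocation so the DMA mapping never touches other data. */
	buf = kmalloc(1, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	res = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
			      6, /* write_room */
			      USB_TYPE_VENDOR | USB_RECIP_INTERFACE | USB_DIR_IN,
			      0, /* value */
			      0, /* index */
			      buf, 1, 2000);
	if (res < 0)
		goto out;
	if (res != 1) {
		res = -EIO;
		goto out;
	}

	*room = *buf;
	res = 0;
out:
	kfree(buf);
	return res;
}
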
>> +	res = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
>> +			      6, /* write_room */
>> +			      USB_TYPE_VENDOR | USB_RECIP_INTERFACE | USB_DIR_IN,
>> +			      0, /* value */
>> +			      0, /* index */
>> +			      &priv->query_buf,
>> +			      1,
>> +			      2000);
>
> Instead, consider using the new usb_control_msg_recv() API. But it
> might be better to allocate the buffer once and for all.
Let me have a look.
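
Presumably something along these lines (untested sketch, helper name
invented for illustration): usb_control_msg_recv() allocates its own
DMA-safe bounce buffer internally and copies the result back, so a plain
stack variable can be handed in, and it returns 0 only when all requested
bytes were actually received.

static int keyspan_pda_get_write_room(struct usb_serial *serial, u8 *room)
{
	u8 count;
	int res;

	res = usb_control_msg_recv(serial->dev, 0,
				   6, /* write_room */
				   USB_TYPE_VENDOR | USB_RECIP_INTERFACE | USB_DIR_IN,
				   0, /* value */
				   0, /* index */
				   &count, sizeof(count),
				   2000, GFP_KERNEL);
	if (res)
		return res;

	*room = count;
	return 0;
}

That would also get rid of the short-read handling in the callers.
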
Thanks,
tglx