Message-Id: <200802221837.02012.david-b@pacbell.net>
Date: Fri, 22 Feb 2008 18:37:01 -0800
From: David Brownell <david-b@...bell.net>
To: Ned Forrester <nforrester@...i.edu>
Cc: spi-devel-general@...ts.sourceforge.net, marc.pignat@...s.ch,
linux-kernel@...r.kernel.org, anemo@....ocn.ne.jp
Subject: Re: [spi-devel-general] [PATCH] atmel_spi: support zero length transfer
On Friday 22 February 2008, Ned Forrester wrote:
>
> So, what I think you said is that it would be better for pxa2xx_spi to
> silently ignore a zero-length message, passing it back with the rest of
> the message when all is complete, than to reject the message.
Yes. Remember that the reason to _use_ such a message is to get the
inline delay ... and that if you've got code to handle that case
(transfer.len == 0 && transfer.delay_usecs != 0) it's almost trivial
to support the degenerate "delay_usecs == 0" case too.
> I see no
> reason why that could not be done, though it may be tricky to set other
> things like SSP modes and chip select and *not* start the DMA. It would
> have to be tested, so I'm not sure when I could try that.
Normally I'd expect it's little more than a "goto", but the pxa2xx_spi
structure is relatively complex.
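For concreteness, the zero-length case might be skipped over with
something like this in the message pump (a sketch only -- the label
and variable names are made up, not the actual pxa2xx_spi code):

        /* before setting up DMA for the next transfer */
        if (transfer->len == 0) {
                if (transfer->delay_usecs)
                        udelay(transfer->delay_usecs);
                /* honor cs_change as usual, but never start DMA */
                goto next_transfer;     /* hypothetical label */
        }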
> >> I agree with Marc: any such delay will be undefined, in the general
> >> case. It might work for a specific driver implementation.
> >
> > Is that what Marc said? I couldn't tell. In any case, I disagree;
> > the semantics of that delay are clearly defined.
>
> Maybe I am missing something. Aren't we talking about a transfer in a
> message, with or without other transfers, whose only unique
> characteristic is that its length is zero?
There were two cases ... delay_usecs being zero, or not. In either
case, the semantics are the same: after the transfer (len == 0),
delay that many microseconds (zero or more).
> Or are you and Atsushi talking about using spi_transfer.delay_usecs
> *with* a zero length transfer to effectively put a delay between the
> assertion of CS and the start of the first clock? If so, then I guess I
> missed the original point. Sorry.
As I noted, there are two cases. The reason to use a zero length
transfer is to get a delay ... in his case, specifically before
the first clock, but in general *anywhere* in the transfer. But
the degenerate case is "delay for zero microseconds".
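For illustration, a protocol driver might set that up like so, using
only the standard <linux/spi/spi.h> calls (the helper name and the
50 usec figure are mine; just a sketch):

#include <linux/spi/spi.h>

/* delay between chipselect going active and the first clock */
static int cmd_with_setup_delay(struct spi_device *spi,
                const void *cmd, size_t cmd_len)
{
        struct spi_transfer t[2] = { };
        struct spi_message m;

        spi_message_init(&m);

        /* len == 0:  shift no bits, just delay after CS asserts */
        t[0].delay_usecs = 50;
        spi_message_add_tail(&t[0], &m);

        /* then the real transfer */
        t[1].tx_buf = cmd;
        t[1].len = cmd_len;
        spi_message_add_tail(&t[1], &m);

        return spi_sync(spi, &m);
}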
> --
>
> By the way, reading spi.h again, it looks like spi_transfer.delay_usecs
> is supposed to be implemented between the last bit movement of a
> transfer and any change of CS at the end of a transfer. Is that right?
Yes.
> I think that pxa2xx_spi is dropping CS, if requested, immediately at
> the end of transfer, and then putting spi_transfer.delay_usecs between
> that transfer and the next.
That's a bug then; it will matter with some drivers.
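The ordering spi.h asks for would be more like this (sketch; both
helpers here are hypothetical):

        /* the delay comes BEFORE any chipselect change */
        wait_until_shift_done(drv_data);        /* hypothetical */
        if (t->delay_usecs)
                udelay(t->delay_usecs);
        if (t->cs_change)
                drop_chipselect(drv_data);      /* hypothetical */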
> >> If it were necessary to scan a
> >> whole message for zero-length transfers and refuse to queue an offending
> >> message, then that adds burden to all messages.
> >
> > Sanity checking messages as they're submitted is easy; and it's
> > also the natural point for setting up DMA mappings (and making
> > those cachelines available for better use).
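Roughly like this, say -- a sketch only (the function is invented, and
a real driver would also check dma_mapping_error() and unmap when the
message completes):

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/spi/spi.h>

static int example_transfer(struct spi_device *spi, struct spi_message *m)
{
        struct device *dev = spi->master->dev.parent;
        struct spi_transfer *t;

        list_for_each_entry(t, &m->transfers, transfer_list) {
                /* zero length is legal:  it just means "delay here" */
                if (!t->len)
                        continue;
                if (!t->tx_buf && !t->rx_buf)
                        return -EINVAL;         /* reject at submit time */
                if (m->is_dma_mapped)
                        continue;
                if (t->tx_buf)
                        t->tx_dma = dma_map_single(dev, (void *) t->tx_buf,
                                        t->len, DMA_TO_DEVICE);
                if (t->rx_buf)
                        t->rx_dma = dma_map_single(dev, t->rx_buf,
                                        t->len, DMA_FROM_DEVICE);
        }
        /* ... then queue the message for the driver's pump ... */
        return 0;
}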
>
> Hmmm.... Obviously I have much to learn about modern computers. It had
> not occurred to me, even after reading "Linux Device Drivers", Corbet,
> et al., that by DMA mapping, one frees the cache for other uses. It
> makes sense.
That's certainly what happens on systems where the buffer is from
"normal" memory and the cache is DMA-incoherent ... like most ARMs,
and probably the majority of non-x86 hardware. Think of that as
being a level or two deeper than what LDD covers.
> In my application I pass many large buffers to the master
> driver, and I map them in the protocol driver. I did that without good
> reason, but now I see it was the proper choice.
>
> Unfortunately, pxa2xx_spi does any DMA mapping not done by the protocol
> driver in pump_transfers, as each transfer is submitted to the hardware.
That's not wrong ... but it's sub-optimal: those cache lines could
have been doing better things all that time, and cleaning them out
in the middle of a transfer will slow it down by some amount.
> There is currently no message checking done in transfer(); the
> only error return from it is when the master driver is shut down (queue
> stopped). The current scheme is to return the message with non-zero
> spi_message.status if it has failed *after* execution has been
> attempted.
Again ... not wrong, but sub-optimal: when it happens (which won't be
common, fortunately!) the SPI slave will be left in a rude state. And
the protocol driver may not know how to recover.
> I don't look forward to making major changes, if we really
> want all DMA mapping and error checking to be in transfer(), though I
> see no reason why it could not be done.
There's no rush on cleanups like that. They'd be reasonable tasks for
someone with some time to spare, who wanted to get their feet wet in
some driver updates and who could set up some kind of test rig. (Like
one with a programmable slave that could be made to do all sorts of stuff
that might otherwise be a bit uncommon. A microcontroller slave would be
easy ... programming a CPLD or FPGA would be much trickier!)
- Dave