Message-ID: <1316003005.4369.10.camel@lws-weitzel>
Date: Wed, 14 Sep 2011 14:23:25 +0200
From: Jan Weitzel <J.Weitzel@...tec.de>
To: Andrew Morton <akpm@...gle.com>
Cc: Evgeniy Polyakov <zbr@...emap.net>, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC] w1: Disable irqs in critical section
Am Dienstag, den 13.09.2011, 15:41 -0700 schrieb Andrew Morton:
> On Thu, 08 Sep 2011 07:46:21 +0200
> Jan Weitzel <J.Weitzel@...tec.de> wrote:
>
> > Am Mittwoch, den 07.09.2011, 21:50 +0400 schrieb Evgeniy Polyakov:
> > > On Wed, Sep 07, 2011 at 10:48:32AM +0200, Jan Weitzel (j.weitzel@...tec.de) wrote:
> > > > Interrupting w1_delay in w1_read_bit results in missing the low level
> > > > on the w1 line and receiving "1" instead of "0".
> > > > Adding local_irq_save / local_irq_restore around the critical section
> > >
> > > This means that CPU will be essentially stuck for 15 useconds for every
> > > bit transferred, doesn't really look like a good idea.
> > >
> > > Are you absolutely sure that missing bit is because of timings and not
> > > some other bug?
> > >
> >
> > I trigger a GPIO line after the sample point. In case of a wrong bit
> > the sample is taken after the "0 gap". The cycle time (sample to
> > sample) is increased from about 80µs to 95µs.
> > I did the measurement with the w1-gpio driver on an OMAP4 board.
> >
>
> I'm not clear on how w1 actually works. Is it a bit-banging
> protocol in which the timing is provided by the host CPU? If so then
Correct.
> yes, we should carefully disable interrupts in places where an
> interrupt would disrupt critical timing. (But what to do about NMIs
> and SMIs?)
Fortunately, on our embedded platform we have no SMIs.
> Disabling interrupts for 15us is pretty obnoxious, but there's a 55us
> delay there with interrupts enabled, so the overall effect shouldn't be
> too bad.
>
> Finally, can we fine-tune the interrupt-disabled section a bit? For
> example, can the local_irq_disable() be moved to after the
> write_bit(..., 0)?
>
The falling edge from 1 to 0 starts the timing for the slave. After the
6µs the slave should drive the line low itself (in case the information
bit is "0"), so there is no additional edge on the line. If an interrupt
makes the low pulse from the master longer, the slave doesn't see it.
The master needs to generate the 1 -> 0 transition and sample after 15µs.
I tested it anyhow. The behaviour is slightly better than without
disabling interrupts; only disabling them for the full 15µs fixes the
problem.
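For reference, a minimal sketch of the proposed change, modeled on
w1_read_bit() in drivers/w1/w1_io.c (the delay values follow the
standard 1-Wire read slot; exact numbers and struct members may differ
by kernel version):

	static u8 w1_read_bit(struct w1_master *dev)
	{
		int result;
		unsigned long flags;

		/* The edge-to-sample window is timing critical: an
		 * interrupt between the falling edge and the sample
		 * point can push the sample past the slave's "0 gap",
		 * so a "0" is read back as "1".
		 */
		local_irq_save(flags);
		dev->bus_master->write_bit(dev->bus_master->data, 0);
		w1_delay(6);	/* master holds the line low for 6us */
		dev->bus_master->write_bit(dev->bus_master->data, 1);
		w1_delay(9);	/* sample ~15us after the falling edge */

		result = dev->bus_master->read_bit(dev->bus_master->data);
		local_irq_restore(flags);

		w1_delay(55);	/* rest of the ~70us slot, irqs enabled */

		return result & 0x1;
	}

Interrupts stay disabled only for the 15µs from the falling edge to the
sample point; the trailing 55µs delay runs with interrupts enabled.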
Jan