Message-ID: <alpine.LFD.2.02.1204191437560.2542@ionos>
Date: Thu, 19 Apr 2012 14:50:52 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: John Stultz <john.stultz@...aro.org>
cc: Prarit Bhargava <prarit@...hat.com>, linux-kernel@...r.kernel.org,
Salman Qazi <sqazi@...gle.com>, stable@...nel.org
Subject: Re: [PATCH] clocksource, prevent overflow in clocksource_cyc2ns
On Wed, 18 Apr 2012, John Stultz wrote:
> On 04/18/2012 04:59 PM, Prarit Bhargava wrote:
> >
> > Hey John,
> >
> > Thanks for continuing to work on this. Coincidentally that exact
> > patch was my first attempt at resolving the problem as well. The
> > problem is that even after touching the clocksource watchdog and
> > restoring irqs the printk buffer can take a LONG time to flush --
> > and that still will cause an overflow comparison. So fixing it
> > with just a touch_clocksource_watchdog() isn't the right thing to
> > do IMO. Maybe a combination of the printk() patch you suggested
> > earlier and the touch_clocksource_watchdog() is the right way to
> > go but I'll leave that up to tglx and yourself to decide on a
> > correct fix.
> :( That's a bummer. Something similar may be useful on the printk side as
> well.
No. The show_state() part prints into the buffer, but it's not
guaranteed that the buffer is flushed right away. It can just as well
be flushed later, in a different context. And of course the flush
code runs with interrupts disabled, and dumping a gazillion lines
over serial will cause the same hiccup. Just planting random
touch_watchdog() calls into the code is not the right approach,
really.
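
[Editor's note: a rough userspace sketch of that split. The helper
names are made up and the real printk machinery is far more involved;
the point is only that writers fill the buffer cheaply, while whoever
ends up flushing pays for the whole backlog with interrupts off.]

    #include <stdio.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Hypothetical stand-ins for the kernel irq primitives. */
    static void fake_irq_disable(void) { /* interrupts off from here */ }
    static void fake_irq_enable(void)  { /* ...until here            */ }

    static char   log_buf[1 << 16];
    static size_t head, tail;

    /* Writer side: show_state() and friends only fill the buffer. */
    static void sketch_printk(const char *s)
    {
            while (*s && head - tail < sizeof(log_buf))
                    log_buf[head++ % sizeof(log_buf)] = *s++;
    }

    /*
     * Flusher side: may run much later, in a completely different
     * context, and drains the whole backlog with interrupts off.
     */
    static void sketch_console_flush(void)
    {
            fake_irq_disable();
            while (tail != head) {
                    putchar(log_buf[tail++ % sizeof(log_buf)]);
                    usleep(1000);   /* ~9600 baud: about 1 ms/char */
            }
            fake_irq_enable();
    }

    int main(void)
    {
            sketch_printk("a gazillion lines of task state...\n");
            sketch_console_flush(); /* stall is here, not at printk */
            return 0;
    }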
We should think about why we have interrupts disabled for so long.
Is that really, really necessary?
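
[Editor's note: for context, the overflow this thread is about comes
from the 64-bit multiply in clocksource_cyc2ns(). A minimal userspace
sketch; the mult/shift values are illustrative assumptions for a
1 GHz counter, not taken from the patch or real hardware.]

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Userspace copy of the multiply in clocksource_cyc2ns():
     *     return ((u64) cycles * mult) >> shift;
     * With shift = 24 and a 1 GHz counter, mult = 2^24 (1 ns/cycle).
     */
    static int64_t cyc2ns(uint64_t cycles, uint32_t mult, uint32_t shift)
    {
            return ((uint64_t)cycles * mult) >> shift;
    }

    int main(void)
    {
            uint32_t shift = 24;
            uint32_t mult  = 1u << 24;      /* 1 ns per cycle at 1 GHz */

            uint64_t one_sec = 1000000000ull; /* sane watchdog interval */
            uint64_t stalled = 1ull << 41;    /* ~37 minutes of cycles  */

            printf("1s delta   -> %lld ns\n",
                   (long long)cyc2ns(one_sec, mult, shift));
            printf("long stall -> %lld ns (product wrapped 64 bits)\n",
                   (long long)cyc2ns(stalled, mult, shift));
            return 0;
    }

[With shift = 24 the product runs out of 64 bits once the cycle delta
exceeds 2^40, i.e. roughly eighteen minutes of stall at 1 GHz.]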
Thanks,
tglx