Message-ID: <4F900DD6.50105@redhat.com>
Date: Thu, 19 Apr 2012 09:06:30 -0400
From: Prarit Bhargava <prarit@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: John Stultz <john.stultz@...aro.org>, linux-kernel@...r.kernel.org,
Salman Qazi <sqazi@...gle.com>, stable@...nel.org
Subject: Re: [PATCH] clocksource, prevent overflow in clocksource_cyc2ns
On 04/19/2012 08:52 AM, Thomas Gleixner wrote:
> On Thu, 19 Apr 2012, Thomas Gleixner wrote:
>
>> On Wed, 18 Apr 2012, John Stultz wrote:
>>> On 04/18/2012 04:59 PM, Prarit Bhargava wrote:
>>>>
>> No. The show_state() part prints into the buffer. But it's not
>> guaranteed that the buffer is flushed right away. It could be flushed
>> later as well in a different context. And of course the flush code
>> runs with interrupts disabled, and dumping out a gazillion lines
>> over serial will cause the same hiccup. Just planting random
>> touch_watchdog() calls into the code is not the right approach,
>> really.
>>
>> We should think about the reasons why we have interrupts disabled for
>> so long. Is that really, really necessary?
In the case of the sysrq-t, I would argue that it is. The whole point behind
the sysrq-t is that we're capturing the *current* state of the system. Having
that output affected by interrupts seems like a bad idea.
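
(For concreteness, here is a rough userspace sketch of the overflow the subject
line refers to. The mult/shift values and the size of the delta are made up, and
the helper only mirrors the general shape of clocksource_cyc2ns(); it is an
illustration, not kernel code.)

	/*
	 * Sketch only: shows how a very large cycle delta can overflow the
	 * 64-bit multiply in a cyc2ns-style conversion.  mult/shift and the
	 * delta below are hypothetical values, not taken from real hardware.
	 */
	#include <stdint.h>
	#include <stdio.h>

	static int64_t cyc2ns(uint64_t cycles, uint32_t mult, uint32_t shift)
	{
		/* Same shape as clocksource_cyc2ns(): (cycles * mult) >> shift */
		return ((uint64_t)cycles * mult) >> shift;
	}

	int main(void)
	{
		uint32_t mult  = 1 << 22;	/* hypothetical ~1 GHz clock */
		uint32_t shift = 22;

		/* A sane delta: 1 ms worth of cycles converts as expected. */
		printf("1ms delta  -> %lld ns\n",
		       (long long)cyc2ns(1000000, mult, shift));

		/*
		 * A huge delta, e.g. many minutes of cycles piled up while
		 * interrupts were off: cycles * mult wraps past 2^64, so the
		 * result is garbage instead of a large number of nanoseconds.
		 */
		uint64_t huge = UINT64_MAX / mult + 1;
		printf("huge delta -> %lld ns\n",
		       (long long)cyc2ns(huge, mult, shift));

		return 0;
	}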
>
> I'm not against making the clocksource code more robust, but I don't
> want to add crap there just to cope with complete madness elsewhere.
Maybe I came across the wrong way, but I completely agree with that sentiment. Like
you, I'm looking for a correct fix rather than a quick fix.
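
(As a sketch of the kind of hardening that could be applied, not the posted
patch: oversized deltas could be converted in chunks that are individually
guaranteed not to overflow. Each chunk's shift truncates a sub-nanosecond
remainder, so this trades a tiny amount of precision for overflow safety; the
function name below is hypothetical.)

	#include <stdint.h>

	/*
	 * Hypothetical sketch, not the posted patch: convert a cycle delta to
	 * nanoseconds without letting cycles * mult overflow 64 bits, by
	 * handling the delta in chunks that provably fit.
	 */
	static int64_t cyc2ns_safe(uint64_t cycles, uint32_t mult, uint32_t shift)
	{
		/* Largest cycle count whose product with mult still fits in 64 bits */
		uint64_t max_cycles = UINT64_MAX / mult;
		int64_t ns = 0;

		while (cycles > max_cycles) {
			/* Each pass truncates less than 1 ns of fixed-point remainder */
			ns += (int64_t)((max_cycles * mult) >> shift);
			cycles -= max_cycles;
		}
		return ns + (int64_t)((cycles * mult) >> shift);
	}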
Sorry that I haven't provided any debug info, but I'm still in the data-gathering
stage atm. It was John's ping that prompted me to "brain dump" the current info I
had.
P.