Message-ID: <CAGDaqBROm=nWEU2QP7TTeeYoiBj5MEet5X5tzfYXJa7i4EAVpw@mail.gmail.com>
Date: Wed, 6 Jul 2011 10:18:37 -0700
From: Sergiu Iordache <sergiu@...omium.org>
To: Marco Stornelli <marco.stornelli@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Ahmed S. Darwish" <darwish.07@...il.com>,
Artem Bityutskiy <Artem.Bityutskiy@...ia.com>,
Kyungmin Park <kyungmin.park@...sung.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] char drivers: ramoops debugfs entry
On Sat, Jul 2, 2011 at 4:59 AM, Marco Stornelli
<marco.stornelli@...il.com> wrote:
> On 02/07/2011 11:01, Sergiu Iordache wrote:
>>
>> On Sat, Jul 2, 2011 at 1:01 AM, Marco Stornelli
>> <marco.stornelli@...il.com> wrote:
>
>>>
>>> It was easy because the record size had a fixed length (4096), so
>>> maybe at this point the record size information is sufficient. I see
>>> a little problem, however. I think we could use the debugfs interface
>>> to dump the log in an easy way, but we should be able to dump it even
>>> with /dev/mem. Especially on embedded systems, debugfs may not be
>>> mounted or not be available at all. So maybe, as you said below, with
>>> these new patches we need a memset over the whole memory area when
>>> the first dump is taken. However, the original idea was to store even
>>> old dumps. In addition, there is the problem of catching an oops
>>> after a start-up that "cleans" the area before we read it. At that
>>> point we would lose the previous dumps. To solve this we could use a
>>> "reset" parameter, but I think all of this is a little overkill.
>>> Maybe we can only bump up the record size if needed. What do you
>>> think?
>>
>> The problem with a fixed record size of 4K is that it is not very
>> flexible, as some setups may need more dump data (and 4K is not that
>> much). Setting the record size via a module parameter or platform
>> data doesn't seem like a huge problem to me: even if you are not
>> using debugfs, you should be able to export the record size somehow
>> (since you were the one who set it through the parameter in the first
>> place) and get the dumps from /dev/mem.
>
> The point here is not how to set the record size, but what it means
> to have a variable record size compared with the current situation.
> However, if we know that there are situations where 4k is not
> sufficient, OK, we can modify it.
Well, right now I don't see any major disadvantages in making it
configurable, as long as the present behavior is preserved (if
record_size is not set, default to 4K). Are there any other things to
consider?
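
Something along these lines, for instance (just a sketch; the name and
default are my assumption, chosen to keep today's fixed 4K behavior):

	#include <linux/module.h>

	/* Hypothetical parameter; defaults to the current fixed 4K. */
	static unsigned long record_size = 4096;
	module_param(record_size, ulong, 0400);
	MODULE_PARM_DESC(record_size,
		"size in bytes of each dump record (default 4096)");

The driver would then carve the reserved region into record_size slots
instead of hardcoding 4096, and platform data could override the value
the same way.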
>> I've thought more about this problem today and I have come up with
>> the following alternative solution: have a debugfs entry which
>> returns one record-size chunk at a time, starting with the first
>> entry and then checking each entry for the header (and maybe for the
>> presence of the timestamp, to be sure). It will then return each
>> entry that is valid, skipping over the invalid ones, and it will
>> return an empty result when it reaches the end of the memory zone.
>> There could also be an entry to reset to the first record so you can
>> start over. This way you wouldn't lose old entries and you would
>> still get an easy-to-parse result.
>>
>
> It seems like a good strategy to me.
I'll implement this and come back with a patch.
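
Roughly what I have in mind (an untested sketch; the context fields
and the "====" header check are assumptions modeled on the current
driver):

	#include <linux/ctype.h>
	#include <linux/debugfs.h>
	#include <linux/fs.h>
	#include <linux/kernel.h>
	#include <linux/string.h>
	#include <linux/uaccess.h>

	#define RAMOOPS_KERNMSG_HDR "===="	/* marker before each dump */

	/* Hypothetical context, mirroring what the driver tracks today. */
	struct ramoops_context {
		void *virt_addr;		/* mapped persistent area */
		unsigned long record_size;
		int max_count;			/* number of record slots */
		int read_idx;			/* cursor for the reader */
	};

	/*
	 * Each read() returns the next valid record, skipping slots whose
	 * header check fails, and returns 0 (empty result) once the end
	 * of the memory zone is reached.
	 */
	static ssize_t ramoops_dump_read(struct file *file, char __user *buf,
					 size_t count, loff_t *ppos)
	{
		struct ramoops_context *cxt = file->private_data;
		size_t hdr_len = strlen(RAMOOPS_KERNMSG_HDR);

		while (cxt->read_idx < cxt->max_count) {
			char *rec = (char *)cxt->virt_addr +
				    cxt->read_idx * cxt->record_size;

			cxt->read_idx++;
			/* Check the header and the timestamp's first digit. */
			if (memcmp(rec, RAMOOPS_KERNMSG_HDR, hdr_len) ||
			    !isdigit(rec[hdr_len]))
				continue;

			count = min_t(size_t, count, cxt->record_size);
			if (copy_to_user(buf, rec, count))
				return -EFAULT;
			return count;
		}
		return 0;
	}

	/* Writing anything to the "reset" entry rewinds the cursor. */
	static ssize_t ramoops_reset_write(struct file *file,
					   const char __user *buf,
					   size_t count, loff_t *ppos)
	{
		struct ramoops_context *cxt = file->private_data;

		cxt->read_idx = 0;
		return count;
	}

Both entries would be wired up with debugfs_create_file(), passing the
context as the data pointer.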
Sergiu