Message-ID: <4E0F0835.4090104@gmail.com>
Date:	Sat, 02 Jul 2011 13:59:49 +0200
From:	Marco Stornelli <marco.stornelli@...il.com>
To:	Sergiu Iordache <sergiu@...omium.org>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	"Ahmed S. Darwish" <darwish.07@...il.com>,
	Artem Bityutskiy <Artem.Bityutskiy@...ia.com>,
	Kyungmin Park <kyungmin.park@...sung.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] char drivers: ramoops debugfs entry

On 02/07/2011 11:01, Sergiu Iordache wrote:
> On Sat, Jul 2, 2011 at 1:01 AM, Marco Stornelli
> <marco.stornelli@...il.com>  wrote:
>>
>> It was easy because the record size had a fixed length (4096 bytes), so
>> at this point the record size information alone may be sufficient. I see
>> a small problem, however. I think we could use the debugfs interface to
>> dump the log in an easy way, but we should also be able to dump it via
>> /dev/mem: especially on embedded systems, debugfs may not be mounted or
>> may not be available at all. So maybe, as you said below, with these new
>> patches we need a memset over the whole memory area when the first dump
>> is taken. However, the original idea was to store old dumps as well. In
>> addition, there is the problem of catching an oops after a start-up that
>> "cleans" the area before we read it; at that point we would lose the
>> previous dumps. To solve this we could use a "reset" parameter, but I
>> think all of this is a bit of overkill. Maybe we can just bump up the
>> record size if needed. What do you think?
> The problem with a fixed record size of 4K is that it is not very
> flexible, as some setups may need more dump data (and 4K doesn't hold
> that much). Setting the record size via a module parameter or platform
> data doesn't seem like a huge problem to me: even if you are not using
> debugfs, you should be able to export the record size somehow (since
> you were the one who set it through the parameter in the first place)
> and get the dumps from /dev/mem.

The point here is not how to set the record size, but what it means to
have a variable record size compared with the current situation.
However, if we know there are situations where 4K is not sufficient,
then we can certainly modify it.
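
For what it's worth, a minimal sketch of how such a parameter could
look (illustrative only; the name and default are assumptions, not the
actual patch):

	#include <linux/module.h>
	#include <linux/moduleparam.h>

	/*
	 * Sketch: expose the record size as a read-only module
	 * parameter, so user space can find it under
	 * /sys/module/ramoops/parameters/record_size and use it to
	 * slice the reserved area when dumping via /dev/mem.
	 */
	static ulong record_size = 4096;
	module_param(record_size, ulong, 0444);
	MODULE_PARM_DESC(record_size, "size in bytes of each dump record");

That would keep the /dev/mem path usable even where debugfs is not
mounted, since the size is discoverable through sysfs.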

>
> I've thought more about this problem today and I have come up with the
> following alternative solution: have a debugfs entry which returns a
> record-size chunk at a time, starting with the first entry and checking
> each entry for the header (and maybe for the presence of the timestamp,
> to be sure). It would then return each valid entry, skipping over the
> invalid ones, and return an empty result when it reaches the end of the
> memory zone. It could also have an entry to reset to the first record
> so you can start over. This way you wouldn't lose old entries and you
> would still get a pretty easy-to-parse result.
>

It seems like a good strategy to me.
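
For concreteness, the read side of such an entry might look roughly
like this (a sketch with assumed names and fields, not the actual
implementation):

	#include <linux/debugfs.h>
	#include <linux/fs.h>
	#include <linux/kernel.h>
	#include <linux/string.h>
	#include <linux/uaccess.h>

	#define RAMOOPS_HDR "===="	/* assumed record marker */

	struct ramoops_context {	/* illustrative subset of the context */
		void *virt_addr;	/* mapped persistent memory area */
		size_t mem_size;	/* total size of the area */
		size_t record_size;	/* size of one dump record */
		size_t read_off;	/* iterator position for this entry */
	};

	static int ramoops_dump_open(struct inode *inode, struct file *file)
	{
		file->private_data = inode->i_private;
		return 0;
	}

	/*
	 * Return one valid record per read(); skip slots without the
	 * header and return 0 (empty result) at the end of the zone.
	 */
	static ssize_t ramoops_dump_read(struct file *file, char __user *buf,
					 size_t count, loff_t *ppos)
	{
		struct ramoops_context *cxt = file->private_data;

		while (cxt->read_off < cxt->mem_size) {
			char *rec = (char *)cxt->virt_addr + cxt->read_off;
			size_t len = min_t(size_t, count, cxt->record_size);

			cxt->read_off += cxt->record_size;
			if (memcmp(rec, RAMOOPS_HDR, strlen(RAMOOPS_HDR)))
				continue;	/* invalid/empty slot, skip */
			if (copy_to_user(buf, rec, len))
				return -EFAULT;
			return len;
		}
		return 0;
	}

	/* Writing anything rewinds the iterator to the first record. */
	static ssize_t ramoops_dump_write(struct file *file,
					  const char __user *buf,
					  size_t count, loff_t *ppos)
	{
		struct ramoops_context *cxt = file->private_data;

		cxt->read_off = 0;
		return count;
	}

	static const struct file_operations ramoops_dump_fops = {
		.open  = ramoops_dump_open,
		.read  = ramoops_dump_read,
		.write = ramoops_dump_write,
	};

	/* e.g.: debugfs_create_file("ramoops", 0600, NULL, cxt,
	 *                           &ramoops_dump_fops); */

Each read() would then hand back one valid record, and writing anything
to the entry would rewind the iterator, so a reader can start over
without losing old dumps.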

Marco
