Message-ID: <516E30DC.5070601@linux.vnet.ibm.com>
Date: Wed, 17 Apr 2013 10:49:24 +0530
From: Aruna Balakrishnaiah <aruna@...ux.vnet.ibm.com>
To: Michael Ellerman <michael@...erman.id.au>
CC: linuxppc-dev@...abs.org, paulus@...ba.org,
linux-kernel@...r.kernel.org, benh@...nel.crashing.org,
jkenisto@...ux.vnet.ibm.com, mahesh@...ux.vnet.ibm.com,
anton@...ba.org
Subject: Re: [PATCH 4/8] Read/Write oops nvram partition via pstore
On Tuesday 16 April 2013 11:50 AM, Aruna Balakrishnaiah wrote:
>
> Currently with this patchset, pstore is not supporting compression of
> oops-messages
> since it involves some changes in the pstore framework.
>
> big_oops_buf will hold the large part of oops data which will be compressed
> and put
> to oops_buf.
>
> big_oops_buf: (1.45 of oops_partition_size)
Sorry, correction: big_oops_buf is 2.22 times oops_data_sz,
where oops_data_sz is oops_partition_size - sizeof(oops_log_info),
and oops_log_info is the oops header.
> _________________________
> | header | oops-text |
> |_________|_____________|
>
> <header> is added by the pstore.
>
> So in case compression fails:
>
> we would need to log the header + last few bytes of big_oops_buf to oops_buf.
> oops_buf: (this is of oops_partition_size)
>
We would need to log the header plus the last oops_data_sz bytes of big_oops_buf
to oops_buf, so that we keep the header while throwing away only the older data
that immediately follows it.
> we need last few bytes of big_oops_buf as we need to log the recent messages of
> printk buffer. For which we need to know the header size and it involves some
> changes in the pstore framework.
>
Just communicating the header size from pstore would do the job for us.
> I have the compression patches ready, will be posting it soon as a separate set.
>
>> cheers
>>
>