Message-ID: <5201D777.8060303@linux.vnet.ibm.com>
Date: Wed, 07 Aug 2013 10:43:27 +0530
From: Aruna Balakrishnaiah <aruna@...ux.vnet.ibm.com>
To: Tony Luck <tony.luck@...il.com>
CC: "linuxppc-dev@...abs.org" <linuxppc-dev@...abs.org>,
"paulus@...ba.org" <paulus@...ba.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"benh@...nel.crashing.org" <benh@...nel.crashing.org>,
"keescook@...omium.org" <keescook@...omium.org>
Subject: Re: [PATCH 00/11] Add compression support to pstore
Hi Tony,
On Wednesday 07 August 2013 08:55 AM, Tony Luck wrote:
> On Tue, Aug 6, 2013 at 6:58 PM, Aruna Balakrishnaiah
> <aruna@...ux.vnet.ibm.com> wrote:
>> The patch looks right. I will clean it up. Does the issue still persist
>> after this?
> Things seem to be working - but testing has hardly been extensive (just
> a couple of forced panics).
>
> I do have one other question. In this code:
>
>>> if (compressed && (type == PSTORE_TYPE_DMESG)) {
>>> big_buf_sz = (psinfo->bufsize * 100) / 45;
> Where does the magic multiply by 1.45 come from? Is that always enough
> for the decompression of "dmesg" type data to succeed?
I covered this in the cover letter of the series; reposting the relevant part here:
Writing to persistent store
----------------------------
Compression reduces the size of an oops/panic report to at most 45% of its
original size (based on experiments Jim Keniston did while adding compression
support to nvram). Hence a buffer of (100/45, approx. 2.22) * <registered_buffer>
bytes is allocated, as sketched below.
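For illustration, a minimal sketch of the write-side sizing (variable names
here are illustrative, not necessarily those in the series):

	/* Uncompressed oops text can be up to 100/45 (~2.22) times the
	 * registered buffer, since compression brings it down to <= 45%. */
	big_buf_sz = (psinfo->bufsize * 100) / 45;
	big_buf = kmalloc(big_buf_sz, GFP_KERNEL);
	if (!big_buf)
		return;		/* fall back to writing uncompressed */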
The compression parameters were selected based on some experiments:
compression_level = 6, window_bits = 12, memory_level = 4, which achieved
significant compression, down to about 12% of the uncompressed buffer size,
for buffers tried up to 36k. A sketch of the corresponding zlib setup follows.
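With the kernel's zlib, those parameters would be passed to deflateInit2
along these lines (just a sketch; function name and error handling are
illustrative):

	#include <linux/slab.h>
	#include <linux/zlib.h>

	#define COMPR_LEVEL	6
	#define WINDOW_BITS	12
	#define MEM_LEVEL	4

	static struct z_stream_s stream;

	static int pstore_deflate_init(void)
	{
		/* Workspace size depends on window_bits and memory_level. */
		stream.workspace = kmalloc(zlib_deflate_workspacesize(WINDOW_BITS,
							MEM_LEVEL), GFP_KERNEL);
		if (!stream.workspace)
			return -ENOMEM;

		if (zlib_deflateInit2(&stream, COMPR_LEVEL, Z_DEFLATED,
				      WINDOW_BITS, MEM_LEVEL,
				      Z_DEFAULT_STRATEGY) != Z_OK)
			return -EINVAL;
		return 0;
	}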
Data is compressed from the bigger buffer into the registered buffer, which
is then handed to the backends. Pstore indicates this with a 'compressed'
flag that is passed to the backends. Using this flag, backends add a flag in
their own header to record whether the data is compressed when writing to the
persistent store (see the hypothetical sketch below).
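As a purely hypothetical illustration of the backend side (each backend
defines its own header layout; the struct and names below are made up):

	#include <linux/types.h>

	/* Hypothetical record header carrying the compression flag. */
	struct backend_hdr {
		__u16 version;
		__u16 flags;		/* bit 0: payload is compressed */
	};
	#define BACKEND_FLAG_COMPRESSED	0x0001

	static void backend_fill_hdr(struct backend_hdr *hdr, bool compressed)
	{
		hdr->version = 1;
		hdr->flags = compressed ? BACKEND_FLAG_COMPRESSED : 0;
	}

On the read path the backend checks the same bit and hands 'compressed' back
to pstore, which then knows whether to inflate the record.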
The significant compression I mentioned was for data with many repeated
occurrences in the text. When I tried plain text, I saw compression to around
45% of the original size with the parameters I used.
If the record size were fixed across all backends, it would be easy to come
up with a predefined set of compression parameters, as well as buffer sizes
for compressed/decompressed data, based on experiments. On Power, the maximum
record size is currently 4k, so compression support there was provided with
the (100/45) multiplier, assuming a maximum record size of 4k; the arithmetic
is shown below.
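For concreteness, with the 4k maximum record size on Power, the buffer for
decompressed data works out to:

	big_buf_sz = (4096 * 100) / 45 = 9102 bytes   (~2.22 * record size)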
How is it with ERST and efivars?
- Aruna
> -Tony
>