Message-ID: <A5ED84D3BB3A384992CBB9C77DEDA4D40FB2813D@USINDEM103.corp.hds.com>
Date: Thu, 19 Jul 2012 23:08:22 +0000
From: Seiji Aguchi <seiji.aguchi@....com>
To: "Luck, Tony" <tony.luck@...el.com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mikew@...gle.com" <mikew@...gle.com>,
"dzickus@...hat.com" <dzickus@...hat.com>,
"Matthew Garrett (mjg@...hat.com)" <mjg@...hat.com>
CC: "dle-develop@...ts.sourceforge.net"
<dle-develop@...ts.sourceforge.net>,
Satoru Moriya <satoru.moriya@....com>
Subject: RE: [RFC][PATCH v2 2/3] Hold multiple logs
>
> I think that 3 or 4 logs should be plenty to cover almost all situations.
> E.g. with 3 logs you could capture 2 OOPS (and perhaps miss some other
> OOPS) and then get the final panic that kills the system. Messier crashes
> are of course possible ... but that would give lots of clues on where the
> problems lie.
>
Thank you for letting me know your idea.
Let me explain my opinion.
If you are concerned about the multiple-oops case, I think a user app which copies logs from /dev/pstore to /var/log should be developed (a rough sketch is below).
Once it is developed, we don't need to care about the multiple-oops case, and the appropriate number is two.
- In the case where the system is workable after an oops:
  The user app will erase the entry in NVRAM,
  and we can get the message via /var/log.
- In the case where the system hangs up or panics due to the oops:
  The oops is the critical message, and we don't need to care about subsequent events.
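
To make the idea concrete, here is a minimal sketch of such a user app. It assumes pstore is mounted at /dev/pstore and that /var/log/pstore already exists; those paths are my assumptions for illustration, not anything in the patch set. Unlinking a file in the pstore filesystem is what asks the backend (efi_pstore in this case) to erase the corresponding entry from NVRAM.

/*
 * Sketch only: copy each pstore entry to /var/log/pstore, then
 * unlink it so the backend erases the NVRAM slot.  Assumes pstore
 * is mounted at /dev/pstore and /var/log/pstore already exists.
 */
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>
#include <limits.h>

static int copy_file(const char *src, const char *dst)
{
	FILE *in, *out;
	char buf[4096];
	size_t n;

	in = fopen(src, "r");
	if (!in)
		return -1;
	out = fopen(dst, "w");
	if (!out) {
		fclose(in);
		return -1;
	}
	while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
		fwrite(buf, 1, n, out);
	fclose(out);
	fclose(in);
	return 0;
}

int main(void)
{
	DIR *dir;
	struct dirent *de;
	char src[PATH_MAX], dst[PATH_MAX];

	dir = opendir("/dev/pstore");
	if (!dir) {
		perror("/dev/pstore");
		return 1;
	}
	while ((de = readdir(dir)) != NULL) {
		if (de->d_name[0] == '.')
			continue;
		snprintf(src, sizeof(src), "/dev/pstore/%s", de->d_name);
		snprintf(dst, sizeof(dst), "/var/log/pstore/%s", de->d_name);
		if (copy_file(src, dst) == 0)
			unlink(src);	/* triggers the backend's erase */
	}
	closedir(dir);
	return 0;
}

Such an app could be run from an init script after boot, so that any oops records saved before a crash end up in /var/log and the NVRAM slots are freed for the next event.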
What do you think?
> If you don't know what is the appropriate number ... then how will users
> decide? We should really give them some guidance ... especially if there
> are odd problems[1] if they pick a number that is too big.
You are right.
There is no such user app right now, so I was stuck...
But I understand I shouldn't have introduced efi_pstore_log_num.
Seiji