Message-ID: <20150415110905.GC4804@codeblueprint.co.uk>
Date: Wed, 15 Apr 2015 12:09:05 +0100
From: Matt Fleming <matt@...eblueprint.co.uk>
To: Borislav Petkov <bp@...en8.de>
Cc: Andy Lutomirski <luto@...capital.net>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Kweh, Hock Leong" <hock.leong.kweh@...el.com>,
Ming Lei <ming.lei@...onical.com>,
Ong Boon Leong <boon.leong.ong@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
"linux-efi@...r.kernel.org" <linux-efi@...r.kernel.org>,
Sam Protsenko <semen.protsenko@...aro.org>,
Peter Jones <pjones@...hat.com>,
Roy Franz <roy.franz@...aro.org>
Subject: Re: [PATCH v4 1/2] firmware_loader: introduce new API -
request_firmware_direct_full_path()
On Wed, 15 Apr, at 12:18:05PM, Borislav Petkov wrote:
> On Wed, Apr 15, 2015 at 11:14:55AM +0100, Matt Fleming wrote:
> > Well, I haven't come across a scenario where you need a brand new
> > interface for getting it *out* of the kernel again.
>
> Well, how are we going to read crash data on next boot then? EFI var or
> what? Are we going to have a generic interface like
>
> /sys/.../capsule/...
>
> or how are we imagining this to look like?
You can read it out in userspace using the existing pstorefs code. The
last thing we need to do is introduce more userspace APIs ;-)
It's possible (and desirable) to keep the input interface separate from
the output interface.
I've written patches in the past where the EFI kernel subsystem
discovers capsules with a specific GUID reserved for crash data, and
then hands that data to the pstore subsystem. Things are then exposed
via pstorefs. The capsule code would just be another backend to pstore,
similar to how the EFI variable code works today.
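The backend wiring described above would look roughly like the following kernel-side sketch. This is illustrative only, not taken from the patches mentioned: the pstore callback signatures are simplified (the real struct pstore_info read/write prototypes take more arguments), and the backend name and the capsule_crash_data_present() helper are hypothetical placeholders.

```c
/* Illustrative sketch: a capsule-backed pstore backend, analogous
 * to the existing EFI variable backend. Callback signatures are
 * deliberately simplified.
 */
static struct pstore_info capsule_pstore = {
	.owner	= THIS_MODULE,
	.name	= "efi-capsule",		/* placeholder name */
	.open	= capsule_pstore_open,
	.close	= capsule_pstore_close,
	.read	= capsule_pstore_read,		/* walk capsules carrying
						 * the crash-data GUID */
	.write	= capsule_pstore_write,
};

static int __init capsule_pstore_init(void)
{
	/* Register only if firmware handed us capsules tagged with
	 * the (hypothetical) crash-data GUID at boot. */
	if (!capsule_crash_data_present())
		return -ENODEV;

	return pstore_register(&capsule_pstore);
}
```

Once registered, the records surface in pstorefs like any other backend's, which is exactly why no new userspace API is needed.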
I am in no way advocating for yet another crash API.
--
Matt Fleming, Intel Open Source Technology Center