Message-ID: <CAGRrVHzd86TYe8sE0S88HObDOXKNAfsX9T=GCo0gkTr2n1wDQw@mail.gmail.com>
Date: Thu, 18 Oct 2018 14:58:25 -0600
From: Ross Zwisler <zwisler@...gle.com>
To: keescook@...omium.org
Cc: dan.j.williams@...el.com, linux-kernel@...r.kernel.org,
anton@...msg.org, ccross@...roid.com, tony.luck@...el.com,
joel@...lfernandes.org
Subject: Re: [PATCH] pstore/ram: Clarify resource reservation labels
On Thu, Oct 18, 2018 at 2:31 PM Kees Cook <keescook@...omium.org> wrote:
>
> On Thu, Oct 18, 2018 at 8:33 AM, Dan Williams <dan.j.williams@...el.com> wrote:
> > [ add Ross ]
>
> Hi Ross! :)
>
> > On Thu, Oct 18, 2018 at 12:15 AM Kees Cook <keescook@...omium.org> wrote:
> >> As for nvdimm specifically, yes, I'd love to get pstore hooked up
> >> correctly to nvdimm. How do the namespaces work? Right now pstore
> >> depends on one of platform driver data, device tree specification, or
> >> manual module parameters.
> >
> > From the userspace side we have the ndctl utility to wrap
> > personalities on top of namespaces. So for example, I envision we
> > would be able to do:
> >
> > ndctl create-namespace --mode=pstore --size=128M
> >
> > ...and create a small namespace that will register with the pstore sub-system.
> >
> > On the kernel side this would involve registering a 'pstore_dev' child
> > / seed device under each region device. The 'seed-device' sysfs scheme
> > is described in our documentation [1]. The short summary is that
> > ndctl finds a seed device, assigns a namespace to it, and then binds
> > that device to a driver, which causes the kernel to initialize it.
> >
> > [1]: https://www.kernel.org/doc/Documentation/nvdimm/nvdimm.txt
>
> Interesting!
>
> Really, this would be a way to configure "ramoops" (the persistent RAM
> backend to pstore), rather than pstore itself (pstore is just the
> framework). From reading the ndctl man page it sounds like there isn't
> a way to store configuration information beyond just size?
Ramoops needs a start (mem_address), a size (mem_size), and a mapping
type (mem_type), right? I think we get the first two for free from the
placement and size of the namespace, so really we'd just be looking for
a way to switch between cacheable and noncached mappings?
> ramoops will auto-configure itself and fill available space using its
> default parameters, but it might be nice to have a way to store that
> somewhere (traditionally it's part of device tree or platform data).
> ramoops could grow a "header", but normally the regions are very small
> so I've avoided that.
Several of the other modes (BTT and DAX) have space for additional
metadata in their namespaces. If we just need a single bit, though,
maybe we can grab that out of the "flags" field of the namespace
label.
See http://pmem.io/documents/NVDIMM_Namespace_Spec.pdf, section 2.2.3.
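
On the consuming side something this small would do, if a bit can be
reserved (the flag name and helper below are made up, just to show the
idea):

  /* hypothetical flag bit, picked arbitrarily for illustration */
  #define NSLABEL_FLAG_UNCACHED   (1 << 3)

  static int ramoops_mem_type_from_label(u32 label_flags)
  {
          /* ramoops mem_type: 0 = write-combined, 1 = uncached */
          return (label_flags & NSLABEL_FLAG_UNCACHED) ? 1 : 0;
  }
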
Dan, is this workable or is there a better option? Is it a useful
feature to have other types of namespaces be able to control their
caching attributes in this way?
> I'm not sure I understand the right way to glue ramoops_probe() to the
> "seed-device" stuff. (It needs to be probed VERY early to catch early
> crashes -- ramoops uses postcore_initcall() normally.)
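
For context, the early-init side currently looks roughly like this
(simplified from fs/pstore/ram.c); anything discovered later through
the seed-device flow would only start capturing crashes once libnvdimm
has enumerated the namespace:

  static int __init ramoops_init(void)
  {
          /* build a platform device from module parameters, if given */
          ramoops_register_dummy();
          return platform_driver_register(&ramoops_driver);
  }
  postcore_initcall(ramoops_init);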