Message-ID: <CAPcyv4gO2zN-ixVE1eLpe4RT1Vkkyp_C+LbGnNVT+gnJG=h3KQ@mail.gmail.com>
Date: Thu, 18 Oct 2018 08:33:04 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Kees Cook <keescook@...omium.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Anton Vorontsov <anton@...msg.org>,
Colin Cross <ccross@...roid.com>,
"Luck, Tony" <tony.luck@...el.com>, joel@...lfernandes.org,
zwisler@...gle.com
Subject: Re: [PATCH] pstore/ram: Clarify resource reservation labels
[ add Ross ]
On Thu, Oct 18, 2018 at 12:15 AM Kees Cook <keescook@...omium.org> wrote:
>
> On Wed, Oct 17, 2018 at 5:49 PM, Dan Williams <dan.j.williams@...el.com> wrote:
> > On Wed, Oct 17, 2018 at 5:29 PM Kees Cook <keescook@...omium.org> wrote:
> >>
> >> When ramoops reserved a memory region in the kernel, it had an unhelpful
> >> label of "persistent_memory". When reading /proc/iomem, the label was
> >> repeated many times, did not hint that ramoops in particular was the
> >> owner, and didn't clarify what each region was used for:
> >>
> >> 400000000-407ffffff : Persistent Memory (legacy)
> >> 400000000-400000fff : persistent_memory
> >> 400001000-400001fff : persistent_memory
> >> ...
> >> 4000ff000-4000fffff : persistent_memory
> >>
> >> Instead, this adds meaningful labels for how the various regions are
> >> being used:
> >>
> >> 400000000-407ffffff : Persistent Memory (legacy)
> >> 400000000-400000fff : ramoops:dump(0/252)
> >> 400001000-400001fff : ramoops:dump(1/252)
> >> ...
> >> 4000fc000-4000fcfff : ramoops:dump(252/252)
> >> 4000fd000-4000fdfff : ramoops:console
> >> 4000fe000-4000fe3ff : ramoops:ftrace(0/3)
> >> 4000fe400-4000fe7ff : ramoops:ftrace(1/3)
> >> 4000fe800-4000febff : ramoops:ftrace(2/3)
> >> 4000fec00-4000fefff : ramoops:ftrace(3/3)
> >> 4000ff000-4000fffff : ramoops:pmsg
> >
> > Hopefully ramoops is doing request_region() before trying to do
> > anything with its ranges, because it's going to collide with the pmem
> > driver doing a request_region(). If we want to have pstore use pmem as
> > a backing store, that calls for a new drivers/nvdimm/ namespace
> > personality driver that turns around and registers a persistent memory
> > range with pstore rather than the pmem block-device driver.
>
> Yup: it's using request_mem_region() (that's where the labels above
> are assigned).
>
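As a rough illustration (a sketch only, not the literal fs/pstore/ram.c
code) of how those labels and the collision check fit together: the
label string handed to request_mem_region() is what appears in
/proc/iomem, and a NULL return means another driver, such as pmem,
already owns the range.

/*
 * Illustrative sketch, not the actual fs/pstore/ram.c implementation:
 * build a label like "ramoops:dump(0/252)" and attach it to a range
 * via request_mem_region().
 */
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static int example_claim_range(phys_addr_t start, size_t size,
			       const char *what, int idx, int cnt)
{
	char *label;

	/* e.g. "ramoops:dump(0/252)" */
	label = kasprintf(GFP_KERNEL, "ramoops:%s(%d/%d)", what, idx, cnt);
	if (!label)
		return -ENOMEM;

	/* The label becomes this range's line in /proc/iomem. */
	if (!request_mem_region(start, size, label)) {
		/* Someone else (e.g. the pmem driver) already claimed it. */
		kfree(label);
		return -EBUSY;
	}

	/* The resource keeps a pointer to 'label', so don't free it here. */
	return 0;
}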
> As for nvdimm specifically, yes, I'd love to get pstore hooked up
> correctly to nvdimm. How do the namespaces work? Right now pstore
> depends on one of: platform driver data, a device tree specification,
> or manual module parameters.
From the userspace side we have the ndctl utility to wrap
personalities on top of namespaces. So for example, I envision we
would be able to do:
ndctl create-namespace --mode=pstore --size=128M
...and create a small namespace that will register with the pstore sub-system.
On the kernel side this would involve registering a 'pstore_dev' child
/ seed device under each region device. The 'seed-device' sysfs scheme
is described in our documentation [1]. The short summary is that ndctl
finds a seed device and assigns a namespace to it; binding that device
to a driver then causes it to be initialized by the kernel.
[1]: https://www.kernel.org/doc/Documentation/nvdimm/nvdimm.txt
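To sketch the kernel side of that flow (hypothetical: the
nd_pstore_attach helper below is an assumption for illustration, while
the "ramoops" platform device carrying ramoops_platform_data is
ramoops' existing registration interface), such a personality driver
could hand its namespace range to pstore roughly like this:

/*
 * Hypothetical sketch: "nd_pstore" and this helper are illustrative
 * assumptions; registering a "ramoops" platform device with
 * ramoops_platform_data is the existing way to hand a memory range to
 * pstore/ramoops.
 */
#include <linux/platform_device.h>
#include <linux/pstore_ram.h>
#include <linux/sizes.h>

static struct platform_device *nd_pstore_attach(phys_addr_t base,
						unsigned long size)
{
	struct ramoops_platform_data pdata = {
		.mem_address	= base,		/* namespace start */
		.mem_size	= size,		/* namespace length */
		.record_size	= SZ_4K,	/* per-dump record slot */
		.console_size	= SZ_4K,
		.pmsg_size	= SZ_4K,
	};

	/*
	 * ramoops binds to a platform device named "ramoops"; pdata is
	 * copied by the platform core, so a stack copy is fine here.
	 */
	return platform_device_register_data(NULL, "ramoops", -1,
					     &pdata, sizeof(pdata));
}

One caveat with this shortcut: ramoops currently supports only a single
backend instance, so a real nvdimm personality driver would have to
account for that (or register with pstore directly).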