Message-ID: <CAPcyv4jYiEregTvHV83ogc19vWSd56T-OgwePxKD9n4ym-gr+A@mail.gmail.com>
Date: Thu, 18 Oct 2018 15:33:18 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Kees Cook <keescook@...omium.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Anton Vorontsov <anton@...msg.org>,
Colin Cross <ccross@...roid.com>,
"Luck, Tony" <tony.luck@...el.com>, joel@...lfernandes.org,
zwisler@...gle.com
Subject: Re: [PATCH] pstore/ram: Clarify resource reservation labels
On Thu, Oct 18, 2018 at 3:26 PM Kees Cook <keescook@...omium.org> wrote:
>
> On Thu, Oct 18, 2018 at 3:23 PM, Dan Williams <dan.j.williams@...el.com> wrote:
> > On Thu, Oct 18, 2018 at 3:19 PM Kees Cook <keescook@...omium.org> wrote:
> >>
> >> On Thu, Oct 18, 2018 at 2:35 PM, Dan Williams <dan.j.williams@...el.com> wrote:
> >> > On Thu, Oct 18, 2018 at 1:31 PM Kees Cook <keescook@...omium.org> wrote:
> > [..]
> >> > I cringe at users picking addresses because someone is going to enable
> >> > ramoops on top of their persistent memory namespace and wonder why
> >> > their filesystem got clobbered. Should attempts to specify an explicit
> >> > ramoops range that intersects EfiPersistentMemory fail by default? The
> >> > memmap=nn!ss parameter has burned us many times with users picking the
> >> > wrong address, so I'd be inclined to hide this ramoops sharp edge from
> >> > them.
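
Concretely, such a default-off guard could lean on region_intersects(),
which already tracks IORES_DESC_PERSISTENT_MEMORY ranges. A rough sketch
only; the helper name is made up and nothing like this exists in ramoops
today:

    #include <linux/ioport.h>	/* region_intersects() */

    /* Refuse a user-supplied range that lands on persistent memory */
    static int ramoops_check_range(phys_addr_t start, size_t size)
    {
            if (region_intersects(start, size, IORESOURCE_MEM,
                            IORES_DESC_PERSISTENT_MEMORY) != REGION_DISJOINT)
                    return -EBUSY;
            return 0;
    }
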
> >>
> >> Yeah, this is what I'm trying to solve. I'd like ramoops to find the
> >> address itself, but it has to do it really early, so if I can't have
> >> nvdimm handle it directly, will having regions already allocated with
> >> request_mem_region() "get along" with the rest of nvdimm?
> >
> > If a filesystem existed on the namespace before the user specified
> > the ramoops command line, then ramoops will clobber the filesystem and
> > the user will only find out when mount later fails. All the kernel
> > will say is:
> >
> > dev_warn(dev, "could not reserve region %pR\n", res);
> >
> > ...from the pmem driver, and then the only way to figure out who the
> > conflict is with is to look at /proc/iomem, but the damage is already
> > likely done by that point.
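
For reference, the reservation path in drivers/nvdimm/pmem.c is roughly:

    if (!devm_request_mem_region(dev, res->start, resource_size(res),
                            dev_name(&ndns->dev))) {
            dev_warn(dev, "could not reserve region %pR\n", res);
            return -EBUSY;
    }

...so a prior request_mem_region() from ramoops only surfaces as an -EBUSY
at pmem probe time, by which point ramoops has likely already written to
the range.
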
>
> Yeah, bleh. Okay, well, let's just skip this for now, since ramoops
> doesn't do _anything_ with pmem now. No need to go crazy right from
> the start. Instead, let's make it work "normally", and if someone
> needs it for very early boot, they can manually enter the mem_address.
>
> How should I attach a ramoops_probe() call to pmem?

To me this looks like it would be an nvdimm glue driver whose entire
job is to attach to the namespace, fill out some
ramoops_platform_data, and then register a "ramoops" platform_device
for the ramoops driver to find.
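Roughly, and just as a sketch (the function name and sizes here are made
up, and the namespace claiming / lifetime details are elided):

    #include <linux/err.h>
    #include <linux/platform_device.h>
    #include <linux/pstore_ram.h>
    #include <linux/sizes.h>

    /* Publish a "ramoops" platform device covering part of a namespace */
    static int ramoops_glue_register(phys_addr_t start, unsigned long size)
    {
            struct ramoops_platform_data pdata = {
                    .mem_address  = start,
                    .mem_size     = size,
                    .record_size  = SZ_64K,   /* illustrative sizes */
                    .console_size = SZ_256K,
                    .pmsg_size    = SZ_64K,
                    .dump_oops    = 1,
            };
            struct platform_device *pdev;

            pdev = platform_device_register_data(NULL /* parent */,
                            "ramoops", -1, &pdata, sizeof(pdata));
            return PTR_ERR_OR_ZERO(pdev);
    }

The ramoops platform driver matches on the "ramoops" name and pulls the
range out of the platform data, so nothing new would be needed on the
ramoops side.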