Date:   Thu, 18 Oct 2018 15:58:15 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Dan Williams <dan.j.williams@...el.com>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Anton Vorontsov <anton@...msg.org>,
        Colin Cross <ccross@...roid.com>,
        "Luck, Tony" <tony.luck@...el.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Ross Zwisler <zwisler@...gle.com>
Subject: Re: [PATCH] pstore/ram: Clarify resource reservation labels

On Thu, Oct 18, 2018 at 3:33 PM, Dan Williams <dan.j.williams@...el.com> wrote:
> On Thu, Oct 18, 2018 at 3:26 PM Kees Cook <keescook@...omium.org> wrote:
>>
>> On Thu, Oct 18, 2018 at 3:23 PM, Dan Williams <dan.j.williams@...el.com> wrote:
>> > On Thu, Oct 18, 2018 at 3:19 PM Kees Cook <keescook@...omium.org> wrote:
>> >>
>> >> On Thu, Oct 18, 2018 at 2:35 PM, Dan Williams <dan.j.williams@...el.com> wrote:
>> >> > On Thu, Oct 18, 2018 at 1:31 PM Kees Cook <keescook@...omium.org> wrote:
>> > [..]
>> >> > I cringe at users picking addresses because someone is going to enable
>> >> > ramoops on top of their persistent memory namespace and wonder why
>> >> > their filesystem got clobbered. Should attempts to specify an explicit
>> >> > ramoops range that intersects EfiPersistentMemory fail by default? The
>> >> > memmap=nn!ss parameter has burned us many times with users picking the
>> >> > wrong address, so I'd be inclined to hide this ramoops sharp edge from
>> >> > them.
>> >>
>> >> Yeah, this is what I'm trying to solve. I'd like ramoops to find the
>> >> address itself, but it has to do it really early, so if I can't have
>> >> nvdimm handle it directly, will having regions already allocated with
>> >> request_mem_region() "get along" with the rest of nvdimm?
>> >
>> > If a filesystem existed on the namespace before the user specified
>> > the ramoops command line, then ramoops will clobber the filesystem,
>> > and the user will only find out when a later mount fails. All the
>> > kernel will say is:
>> >
>> >     dev_warn(dev, "could not reserve region %pR\n", res);
>> >
>> > ...from the pmem driver, and then the only way to figure out who the
>> > conflict is with is to look at /proc/iomem, but the damage is likely
>> > already done by that point.
>>
>> Yeah, bleh. Okay, well, let's just skip this for now, since ramoops
>> doesn't do _anything_ with pmem now. No need to go crazy right from
>> the start. Instead, let's make it work "normally", and if someone
>> needs it for very early boot, they can manually enter the mem_address.
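
(For reference, the manual very-early-boot setup is roughly the recipe
from Documentation/admin-guide/ramoops.rst; the address below is
illustrative only:

    mem=128M ramoops.mem_address=0x8000000 ramoops.mem_size=0x100000 ramoops.ecc=1

i.e. hold memory above 128 MB back from the allocator and point ramoops
at it via module parameters.)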
>>
>> How should I attach a ramoops_probe() call to pmem?
>
> To me this looks like it would be an nvdimm glue driver whose entire
> job is to attach to the namespace, fill out some
> ramoops_platform_data, and then register a "ramoops" platform_device
> for the ramoops driver to find.
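
(A minimal sketch of such a glue module, for illustration only: the
names are hypothetical, it assumes the ramoops_platform_data layout
from <linux/pstore_ram.h> and platform_device_register_data(), and it
elides the nvdimm namespace attach that would supply the real
address/size:

    #include <linux/err.h>
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/pstore_ram.h>
    #include <linux/sizes.h>

    static struct platform_device *pmem_ramoops_pdev;

    static int __init pmem_ramoops_init(void)
    {
            /*
             * Placeholder values: a real glue driver would attach to an
             * nvdimm namespace and fill mem_address/mem_size from that
             * namespace's resource rather than hardcoding anything.
             */
            struct ramoops_platform_data pdata = {
                    .mem_address  = 0x0,     /* from the namespace */
                    .mem_size     = SZ_1M,   /* from the namespace */
                    .record_size  = SZ_64K,
                    .console_size = SZ_64K,
                    .dump_oops    = 1,
            };

            /* "ramoops" matches the platform_driver in fs/pstore/ram.c */
            pmem_ramoops_pdev = platform_device_register_data(NULL,
                            "ramoops", -1, &pdata, sizeof(pdata));
            return PTR_ERR_OR_ZERO(pmem_ramoops_pdev);
    }
    module_init(pmem_ramoops_init);

    static void __exit pmem_ramoops_exit(void)
    {
            platform_device_unregister(pmem_ramoops_pdev);
    }
    module_exit(pmem_ramoops_exit);

    MODULE_LICENSE("GPL");

This mirrors what drivers/platform/chrome/chromeos_pstore.c already
does for a firmware-reserved range, with the address discovery moved to
the nvdimm side.)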

That sounds right, yes. I'm happy to help review/test/etc.

-Kees

-- 
Kees Cook
Pixel Security
