Message-ID: <CAGXu5jJ9Eq1vJEV4wooSpA3y6m5zCOZvq2yp+Q51LCnJ_1M=9g@mail.gmail.com>
Date:   Thu, 18 Oct 2018 14:20:04 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Ross Zwisler <zwisler@...gle.com>
Cc:     Dan Williams <dan.j.williams@...el.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Anton Vorontsov <anton@...msg.org>,
        Colin Cross <ccross@...roid.com>,
        Tony Luck <tony.luck@...el.com>,
        Joel Fernandes <joel@...lfernandes.org>
Subject: Re: [PATCH] pstore/ram: Clarify resource reservation labels

On Thu, Oct 18, 2018 at 1:58 PM, Ross Zwisler <zwisler@...gle.com> wrote:
> On Thu, Oct 18, 2018 at 2:31 PM Kees Cook <keescook@...omium.org> wrote:
>>
>> On Thu, Oct 18, 2018 at 8:33 AM, Dan Williams <dan.j.williams@...el.com> wrote:
>> > [ add Ross ]
>>
>> Hi Ross! :)
>>
>> > On Thu, Oct 18, 2018 at 12:15 AM Kees Cook <keescook@...omium.org> wrote:
>> >> As for nvdimm specifically, yes, I'd love to get pstore hooked up
>> >> correctly to nvdimm. How do the namespaces work? Right now pstore
>> >> depends on one of platform driver data, a device tree specification, or
>> >> manual module parameters.
>> >
>> > From the userspace side we have the ndctl utility to wrap
>> > personalities on top of namespaces. So for example, I envision we
>> > would be able to do:
>> >
>> >     ndctl create-namespace --mode=pstore --size=128M
>> >
>> > ...and create a small namespace that will register with the pstore sub-system.
>> >
>> > On the kernel side this would involve registering a 'pstore_dev' child
>> > / seed device under each region device. The 'seed-device' sysfs scheme
>> > is described in our documentation [1]. The short summary is that
>> > ndctl finds a seed device, assigns a namespace to it, and then binds
>> > that device to a driver, which causes it to be initialized by the
>> > kernel.
>> >
>> > [1]: https://www.kernel.org/doc/Documentation/nvdimm/nvdimm.txt
>>
>> Interesting!
>>
>> Really, this would be a way to configure "ramoops" (the persistent RAM
>> backend to pstore), rather than pstore itself (pstore is just the
>> framework). From reading the ndctl man page it sounds like there isn't
>> a way to store configuration information beyond just size?
>
> Ramoops needs a start (mem_address), a size (mem_size), and a mapping
> type (mem_type), right?  I think we get the first two for free from
> the namespace's placement and size, so really we'd just be looking
> for a way to switch between cacheable and noncached memory?

Start and size would be a good starting point, yes. That would let
automatic layout happen, which could be improved in the future.
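
For reference, ramoops spells all of this out when configured via
platform data; a minimal sketch of that route (the struct is from
include/linux/pstore_ram.h, the address and sizes are made-up example
values):

        static struct ramoops_platform_data ramoops_data = {
                .mem_address    = 0x8000000,    /* example value */
                .mem_size       = SZ_1M,        /* example value */
                .mem_type       = 0,            /* 0 selects ioremap_wc() */
                .record_size    = SZ_64K,
                .dump_oops      = 1,
        };

        static struct platform_device ramoops_dev = {
                .name = "ramoops",
                .dev = {
                        .platform_data = &ramoops_data,
                },
        };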

mem_type just chooses between ioremap() and ioremap_wc() after the
request_mem_region():

        if (!request_mem_region(start, size, label ?: "ramoops")) {
                pr_err("request mem region (0x%llx@0x%llx) failed\n",
                        (unsigned long long)size, (unsigned long long)start);
                return NULL;
        }

        if (memtype)
                va = ioremap(start, size);      /* uncached */
        else
                va = ioremap_wc(start, size);   /* write-combined */

Is this feature "knowable" at probe time? Traditionally all these
details had to be stored separately, but if the nvdimm core knows the
right answer, it could just pass the correct memtype during the
ramoops probe.
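
Concretely, I'm imagining a handoff shaped like the sketch below. The
function names are made up just to show the shape of it; only
platform_device_register_data() is a real API:

        /*
         * Hypothetical sketch -- none of this exists in the nvdimm
         * core today. nd_pstore_probe() and nd_region_wants_uncached()
         * are stand-ins for wherever the core learns the region's
         * bounds and its safe mapping type.
         */
        static int nd_pstore_probe(struct device *dev, struct resource *res)
        {
                struct ramoops_platform_data pdata = {
                        .mem_address    = res->start,
                        .mem_size       = resource_size(res),
                        /* 1 selects ioremap(), 0 ioremap_wc() */
                        .mem_type       = nd_region_wants_uncached(dev) ? 1 : 0,
                };
                struct platform_device *pdev;

                pdev = platform_device_register_data(dev, "ramoops", -1,
                                                     &pdata, sizeof(pdata));
                return PTR_ERR_OR_ZERO(pdev);
        }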

> Several of the other modes (BTT and DAX) have space for additional
> metadata in their namespaces.  If we just need a single bit, though,
> maybe we can grab that out of the "flags" field of the namespace
> label.

This feels a bit like a hack? If I want something better than command
line args for ramoops sub-region sizing, I'll probably build a
"header" region starting at mem_address with a magic number, version,
etc.
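
Something like this made-up layout, purely to illustrate:

        /* Hypothetical header -- not a real on-media format. */
        struct ramoops_region_header {
                u32     magic;          /* identifies an initialized region */
                u32     version;        /* layout version, for compatibility */
                u32     record_size;    /* dmesg record zone size */
                u32     console_size;   /* console log zone size */
                u32     ftrace_size;    /* ftrace zone size */
                u32     pmsg_size;      /* pmsg zone size */
                u32     flags;          /* e.g. bit 0: uncached vs. wc map */
        } __packed;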

Thanks for your help!

-Kees

-- 
Kees Cook
Pixel Security
