Message-ID: <340596c9-d55d-5f8a-fa27-d95b0e10b20a@intel.com>
Date:   Thu, 28 Sep 2023 12:16:31 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>
Cc:     Baoquan He <bhe@...hat.com>, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
        hpa@...or.com, ebiederm@...ssion.com, akpm@...ux-foundation.org,
        stanislav.kinsburskii@...il.com, corbet@....net,
        linux-kernel@...r.kernel.org, kexec@...ts.infradead.org,
        linux-mm@...ck.org, kys@...rosoft.com, jgowans@...zon.com,
        wei.liu@...nel.org, arnd@...db.de, gregkh@...uxfoundation.org,
        graf@...zon.de, pbonzini@...hat.com,
        "Shutemov, Kirill" <kirill.shutemov@...el.com>
Subject: Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

On 9/27/23 17:38, Stanislav Kinsburskii wrote:
> On Thu, Sep 28, 2023 at 11:00:12AM -0700, Dave Hansen wrote:
>> On 9/27/23 17:02, Stanislav Kinsburskii wrote:
>>> On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
>> ...
>>> Well, not exactly. That's something I'd like to have indeed, but from my
>>> POV this goal is out of scope for the discussion at the moment.
>>> Let me try to express it the same way you did above:
>>>
>>> 1. Boot some kernel
>>> 2. Grow the deposited memory a bunch
>>> 3. Kexec
>>> 4. Kernel panic due to a GPF upon accessing the memory deposited to
>>> the hypervisor.
>>
>> I basically consider this a bug in the first kernel.  It *can't* kexec
>> when it's left RAM in shambles.  It doesn't know what features the new
>> kernel has and whether this is even safe.
>>
> 
> Could you elaborate more on why this is a bug in the first kernel?
> Say, kernel memory can be allocated in big physically contiguous
> chunks by the first kernel for depositing. The information about these
> chunks is then passed to the second kernel via FDT or even the command
> line, so the second kernel can reserve those regions during boot.
> What's wrong with this approach?
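>
> To illustrate, the second kernel's side could be as simple as this
> (just a sketch; the parameter name and the hook are made up, and in
> the FDT case the base/size would come from a node instead):
>
>     #include <linux/init.h>
>     #include <linux/kernel.h>
>     #include <linux/memblock.h>
>
>     /* Hypothetical parameter: deposited_mem=<size>@<base> */
>     static phys_addr_t deposited_base, deposited_size;
>
>     static int __init parse_deposited_mem(char *p)
>     {
>             deposited_size = memparse(p, &p);
>             if (*p == '@')
>                     deposited_base = memparse(p + 1, &p);
>             return 0;
>     }
>     early_param("deposited_mem", parse_deposited_mem);
>
>     /* Run early in boot, before the page allocator is up, so the
>      * deposited range is never handed out as free RAM: */
>     static void __init reserve_deposited_mem(void)
>     {
>             if (deposited_size)
>                     memblock_reserve(deposited_base, deposited_size);
>     }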

How do you know the second kernel can parse the FDT entry or the
command-line you pass to it?

>> Can the new kernel even read the new device tree data?
> 
> I'm not sure I understand the question, to be honest.
> Why can't it? This series contains the code for both the first and
> second kernels.

How do you know the second kernel isn't the version *before* this series
gets merged?

...
>> I still think the only way this will possibly work when kexec'ing both
>> old and new kernels is to do it with the memory maps that *all* kernels
>> can read.
> 
> Could you elaborate more on this?
> The available memory map actually stays the same for both kernels. The
> difference is in the list of memory regions to reserve: when the first
> kernel has allocated and deposited another chunk, the second kernel
> needs to reserve that memory as a new region upon booting.

Please take a step back from your implementation for a moment.  There
are two basic design points that need to be considered.

First, *must* "System RAM" (according to the memory map) be persisted
across kexec?  If no, then there's no problem to solve and we can stop
this thread.  If yes, then some mechanism must be used to tell the new
kernel that the "System RAM" in the memory map is not normal RAM.

Second, *if* we agree that some data must be communicated across kexec, then
what mechanism should be used?  You're arguing for a new mechanism that
only new kernels can use.  I'm arguing that you should likely reuse an
existing mechanism (probably the UEFI/e820 maps) so that *ALL* kernels
can consume the information, old and new.
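
To make that concrete (a sketch only, not the actual patch; these
helpers are __init, so in practice the range would be re-typed in the
boot_params/e820 copy handed to the kexec'ed kernel):

    /* Re-type the deposited range in the e820 table so that any
     * kernel, old or new, treats it as off-limits.  No new parsing
     * code is needed on the receiving side. */
    e820__range_update(deposited_base, deposited_size,
                       E820_TYPE_RAM, E820_TYPE_RESERVED);
    e820__update_table(e820_table);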

I'm not convinced that this series is going in the right direction on
either of those points.

> Can all this be considered as, say, the first kernel using the device
> tree to inform the second kernel about the memory regions to reserve?
> In this case the first kernel behaves a bit like firmware for the
> second one.
> 
>> Can the hypervisor be improved to make this release operation faster?
> 
> I guess it can, but shutting down the guests is the biggest contributor
> to downtime. And without shutting down the guests, the deposited memory
> can't be withdrawn.

Do you really need to fully shut down each guest?  Or do you just need
to get them to a quiescent state where the hypervisor and devices aren't
writing to the deposited memory?
