Date:   Thu, 28 Sep 2023 11:00:12 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>
Cc:     Baoquan He <bhe@...hat.com>, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
        hpa@...or.com, ebiederm@...ssion.com, akpm@...ux-foundation.org,
        stanislav.kinsburskii@...il.com, corbet@....net,
        linux-kernel@...r.kernel.org, kexec@...ts.infradead.org,
        linux-mm@...ck.org, kys@...rosoft.com, jgowans@...zon.com,
        wei.liu@...nel.org, arnd@...db.de, gregkh@...uxfoundation.org,
        graf@...zon.de, pbonzini@...hat.com,
        "Shutemov, Kirill" <kirill.shutemov@...el.com>
Subject: Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

On 9/27/23 17:02, Stanislav Kinsburskii wrote:
> On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
...
> Well, not exactly. That's something I'd like to have indeed, but from my
> POV that goal is out of scope for this discussion at the moment.
> Let me try to express it the same way you did above:
> 
> 1. Boot some kernel
> 2. Grow the deposited memory a bunch
> 3. Kexec
> 4. Kernel panic due to a GPF upon accessing the memory deposited to
> the hypervisor.

I basically consider this a bug in the first kernel.  It *can't* kexec
when it's left RAM in shambles.  It doesn't know what features the new
kernel has and whether this is even safe.

Can the new kernel even read the new device tree data?

>> Can't the deposited memory just be shrunk before kexec?  Surely there
>> aren't a bunch of pathological things consuming that memory right before
>> kexec, which is basically a reboot.
> 
> In general it can. But for that to happen the hypervisor needs to release
> this memory, and it can release the memory only if the guests are stopped.
> Stopping the guests during kexec isn't something we want to have in the
> long run.
> Also, even if we stop the guests before kexec, we need to restart them
> after boot, which means we have to deposit the pages once again.
> All of this (stopping the guests, withdrawing the pages upon kexec,
> allocating the pages after boot, and depositing them again) significantly
> affects guest downtime.
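
For reference, the cycle being objected to amounts to the sequence sketched
below. This is only an illustration with hypothetical stub names (not the
actual Hyper-V hypercall wrappers or guest-management paths); the point it
tries to make visible is that every step is serialized into the window
where the guests are stopped, so each one adds directly to guest downtime:

	/*
	 * Sketch of the release/re-deposit cycle described above.  All
	 * functions are hypothetical stubs, not real kernel or Hyper-V
	 * driver interfaces.
	 */
	#include <stdio.h>

	static int stop_all_guests(void)          { puts("stop guests");               return 0; }
	static int withdraw_deposited_pages(void) { puts("withdraw pages from hv");    return 0; }
	static int kexec_into_new_kernel(void)    { puts("kexec");                     return 0; }
	static int allocate_deposit_pool(void)    { puts("allocate pages after boot"); return 0; }
	static int deposit_pages_to_hv(void)      { puts("deposit pages to hv again"); return 0; }
	static int restart_all_guests(void)       { puts("restart guests");            return 0; }

	int main(void)
	{
		/* Everything between stop and restart is guest downtime. */
		stop_all_guests();
		withdraw_deposited_pages();
		kexec_into_new_kernel();
		allocate_deposit_pool();
		deposit_pages_to_hv();
		restart_all_guests();
		return 0;
	}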

Ahh, and you're presumably kexec'ing in the first place because you've
got a bug in the first kernel and you want a second kernel with fewer bugs.

I still think the only way this will possibly work when kexec'ing both
old and new kernels is to do it with the memory maps that *all* kernels
can read.
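
As a rough illustration of the "memory maps that all kernels can read"
point: the first kernel would mark the deposited range as reserved in a
firmware-style map (e820/EFI in practice) so that any kexec'd kernel, old
or new, simply stays off it. The toy table and range-splitting below are
purely illustrative userspace code under that assumption, not the kernel's
actual e820 handling:

	/*
	 * Toy "firmware" memory map: the first kernel reserves the
	 * hypervisor-deposited range so a kexec'd kernel never treats it
	 * as usable RAM.  Illustrative only.
	 */
	#include <stdio.h>
	#include <stdint.h>

	enum range_type { TYPE_RAM, TYPE_RESERVED };

	struct mem_range {
		uint64_t start;
		uint64_t size;
		enum range_type type;
	};

	/* One RAM range covering 0-4G. */
	static struct mem_range map[8] = {
		{ 0x0, 0x100000000ULL, TYPE_RAM },
	};
	static int nr_ranges = 1;

	/* Mark [start, start+size) reserved by splitting the containing RAM range. */
	static void reserve_range(uint64_t start, uint64_t size)
	{
		for (int i = 0; i < nr_ranges; i++) {
			struct mem_range *r = &map[i];

			if (r->type != TYPE_RAM ||
			    start < r->start || start + size > r->start + r->size)
				continue;

			uint64_t tail_start = start + size;
			uint64_t tail_size  = r->start + r->size - tail_start;

			/* Shrink the head, append the reserved chunk and the tail. */
			r->size = start - r->start;
			map[nr_ranges++] = (struct mem_range){ start, size, TYPE_RESERVED };
			if (tail_size)
				map[nr_ranges++] = (struct mem_range){ tail_start, tail_size, TYPE_RAM };
			return;
		}
	}

	int main(void)
	{
		/* Hypothetical deposited range; the real one comes from the hypervisor. */
		reserve_range(0x80000000ULL, 0x10000000ULL);

		for (int i = 0; i < nr_ranges; i++)
			printf("%#014llx-%#014llx %s\n",
			       (unsigned long long)map[i].start,
			       (unsigned long long)(map[i].start + map[i].size - 1),
			       map[i].type == TYPE_RAM ? "usable" : "reserved");
		return 0;
	}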

Can the hypervisor be improved to make this release operation faster?
