Message-ID: <ktwgnbsni5pt2cznxj2g6qyb3xwkhjrciym6lpk3uvsxgi4324@tllciap26vb5>
Date: Mon, 4 Nov 2024 10:35:53 +0200
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>, 
	Yan Zhao <yan.y.zhao@...el.com>, kexec@...ts.infradead.org, linux-kernel@...r.kernel.org, 
	linux-coco@...ts.linux.dev, x86@...nel.org, rick.p.edgecombe@...el.com
Subject: Re: [PATCH] kexec_core: Accept unaccepted kexec destination addresses

On Fri, Oct 25, 2024 at 04:56:41PM +0300, Kirill A. Shutemov wrote:
> On Wed, Oct 23, 2024 at 10:44:11AM -0500, Eric W. Biederman wrote:
> > "Kirill A. Shutemov" <kirill@...temov.name> writes:
> > 
> > > Waiting minutes to get a VM booted to a shell is not feasible for most
> > > deployments. Lazy is a sane default to me.
> > 
> > Huh?
> > 
> > Unless my guesses about what is happening are wrong, lazy is hiding
> > a serious implementation deficiency.  From all the hardware I have seen,
> > taking minutes is absolutely ridiculous.
> > 
> > Does writing to all of memory at full speed take minutes?  How can such
> > a system be functional?
> 
> It is not only the memory write (to encrypt the memory), but also a TDCALL,
> which is a TD-exit, on every page. That is costly in the TDX case.
> 
> On a single vCPU it takes about a minute to accept 90GiB of memory.
> 
> It improves a bit with the number of vCPUs: it is 40 seconds with 4 vCPUs,
> but it doesn't scale past that in my setup.
> 
> But it is all rather pathological: the VMM doesn't support huge pages yet,
> so all memory is accepted in 4K chunks. Bringing 2M support would cut the
> number of TDCALLs by a factor of 512.
> 
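For scale, a quick back-of-the-envelope from the numbers above (the
~1 minute single-vCPU figure is measured, the per-page cost below is
just derived from it):

	90 GiB / 4 KiB = 23,592,960 accept operations
	90 GiB / 2 MiB =     46,080 accept operations (512x fewer)
	~60 s / 23.6M  ~= 2.5 us per 4K page (TDCALL plus memory init)
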
> Once memory is accepted, memory access cost is comparable to bare metal,
> minus the usual virtualisation tax on page walks.
> 
> I don't know what the picture looks like in the AMD case.
> 
> > If you don't actually have to write to the pages and it is just some
> > accounting function, it is even more ridiculous.
> > 
> > 
> > I had previously thought that accept_memory was the firmware call.
> > Now that I see that it is just a wrapper for some hardware-specific
> > calls, I am even more perplexed.
> 
> It is basically a hypercall. The feature is only used in guests so far.
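
Roughly, the guest-side dispatch looks something like this; this is from
memory and simplified, so the exact names and signatures may differ from
what is in the tree:

	static inline void arch_accept_memory(phys_addr_t start, phys_addr_t end)
	{
		/*
		 * Platform-specific acceptance: on TDX this ends up as a
		 * TDCALL (TDG.MEM.PAGE.ACCEPT) for each page, on SEV-SNP as
		 * PVALIDATE plus a page-state change request to the host.
		 */
		if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
			if (!tdx_accept_memory(start, end))
				panic("TDX: Failed to accept memory\n");
		} else if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
			snp_accept_memory(start, end);
		} else {
			panic("Cannot accept memory: unknown platform\n");
		}
	}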

Eric, can we get the patch applied? It fixes a crash.
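
For reference, the idea is just to make sure the destination ranges are
accepted before kexec writes to them. Very roughly (illustrative only,
not the actual patch; the helper name is made up and it assumes the
generic accept_memory(start, end) interface):

	/* Sketch of the idea, not the real change. */
	static void kexec_accept_destinations(struct kimage *image)
	{
		unsigned long i;

		for (i = 0; i < image->nr_segments; i++) {
			struct kexec_segment *seg = &image->segment[i];

			/*
			 * Accept the destination range before anything is
			 * copied there; otherwise the first write to a
			 * still-unaccepted page crashes the guest.
			 */
			accept_memory(seg->mem, seg->mem + seg->memsz);
		}
	}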

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
