Message-ID: <alpine.DEB.2.21.1807140049200.2644@nanos.tec.linutronix.de>
Date: Sat, 14 Jul 2018 00:51:39 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Pavel Tatashin <pasha.tatashin@...cle.com>
cc: LKML <linux-kernel@...r.kernel.org>, pbonzini@...hat.com,
rkrcmar@...hat.com, peterz@...radead.org, jgross@...e.com,
Steven Sistare <steven.sistare@...cle.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>, x86@...nel.org,
kvm@...r.kernel.org
Subject: Re: [patch 0/7] x86/kvmclock: Remove memblock dependency and further
cleanups
On Wed, 11 Jul 2018, Pavel Tatashin wrote:
> > So this still will have some overhead when kvmclock is not in use, but
> > bringing it down to zero would be a massive trainwreck and would add
> > even more indirections.
>
> Hi Thomas,
>
> In my opinion, having the kvmclock page in __initdata for the boot CPU
> and setting it up in init_hypervisor_platform(), then switching to
> memblock-allocated memory in x86_init.hyper.guest_late_init() for all
> CPUs, would not be too bad and might even use fewer lines of code. In
> addition, it won't have any overhead when kvm is not used.
Why memblock? This can be switched when the allocator is up and
running. And you can use the per-CPU allocator for that.
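Something like this, perhaps (a rough sketch only, untested; the function
names kvmclock_early_init()/kvmclock_late_init(), the register helper and
the hook placement are made up for illustration, not taken from the
series):

#include <linux/init.h>
#include <linux/percpu.h>
#include <asm/msr.h>
#include <asm/kvm_para.h>
#include <asm/pvclock.h>

/* Static page for the boot CPU so kvmclock can be registered from
 * init_hypervisor_platform(), long before any allocator is available.
 * Page aligned so the structure cannot cross a page boundary. */
static struct pvclock_vsyscall_time_info boot_pvti
        __initdata __aligned(PAGE_SIZE);

static struct pvclock_vsyscall_time_info __percpu *pvti;

static void kvm_register_clock_page(phys_addr_t pa)
{
        /* Bit 0 tells the hypervisor to enable the clock for this vCPU */
        wrmsrl(MSR_KVM_SYSTEM_TIME_NEW, pa | 1);
}

/* Called from init_hypervisor_platform() */
void __init kvmclock_early_init(void)
{
        kvm_register_clock_page(__pa(&boot_pvti));
}

/* Called once the per-cpu allocator is up. The boot CPU is re-registered
 * on its per-cpu copy; secondary CPUs register theirs from their normal
 * bringup path. */
void __init kvmclock_late_init(void)
{
        pvti = alloc_percpu(struct pvclock_vsyscall_time_info);
        if (WARN_ON(!pvti))
                return;
        kvm_register_clock_page(per_cpu_ptr_to_phys(this_cpu_ptr(pvti)));
}

That way there is no memblock dependency at all, and the static page is
discarded along with the rest of __initdata once everything has moved
over.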
I don't have cycles at the moment to look at that, so feel free to pick
up the series and enhance it.
Thanks,
tglx