Message-ID: <a82c80d1-f2ff-779b-7f90-a6fe9c51b7a4@oracle.com>
Date: Fri, 13 Jul 2018 20:20:39 -0400
From: Pavel Tatashin <pasha.tatashin@...cle.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, pbonzini@...hat.com,
rkrcmar@...hat.com, peterz@...radead.org, jgross@...e.com,
Steven Sistare <steven.sistare@...cle.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>, x86@...nel.org,
kvm@...r.kernel.org
Subject: Re: [patch 0/7] x86/kvmclock: Remove memblock dependency and further
cleanups
On 07/13/2018 06:51 PM, Thomas Gleixner wrote:
> On Wed, 11 Jul 2018, Pavel Tatashin wrote:
>
>>> So this still will have some overhead when kvmclock is not in use, but
>>> bringing it down to zero would be a massive trainwreck and even more
>>> indirections.
>>
>> Hi Thomas,
>>
>> In my opinion, keeping the kvmclock page in __initdata for the boot
>> CPU, setting it up in init_hypervisor_platform(), and later switching
>> to memblock-allocated memory in x86_init.hyper.guest_late_init() for
>> all CPUs would not be too bad, and might even use fewer lines of
>> code. In addition, it would have no overhead when KVM is not in use.
>
> Why memblock? This can be switched when the allocator is up and
> running. And you can use the per cpu allocator for that.
>
> I don't have cycles at the moment to look at that, so feel free to pick
> the series up and enhance it.
OK, I will fold your series into mine and address all the comments
raised by the reviewers.
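
Something along these lines is what I have in mind. An untested sketch:
kvmclock_early_init(), kvmclock_late_init() and hv_clock_cpu are
made-up names, and the MSR write just mirrors the existing
kvm_register_clock() pattern:

#include <linux/percpu.h>
#include <linux/slab.h>
#include <asm/msr.h>
#include <asm/pvclock.h>

/* Static page for the boot CPU; discarded with the rest of __initdata. */
static struct pvclock_vsyscall_time_info
		kvmclock_boot_page __initdata __aligned(PAGE_SIZE);

static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_cpu);

/* Point the hypervisor at the given pvti page for the current CPU. */
static void kvmclock_register_cpu(struct pvclock_vsyscall_time_info *pvti)
{
	/* Bit 0 enables the clock; msr_kvm_system_time as in kvmclock.c. */
	wrmsrl(msr_kvm_system_time, slow_virt_to_phys(&pvti->pvti) | 1);
	this_cpu_write(hv_clock_cpu, pvti);
}

/* Called from init_hypervisor_platform(): no allocator is up yet. */
void __init kvmclock_early_init(void)
{
	kvmclock_register_cpu(&kvmclock_boot_page);
}

/*
 * Called from x86_init.hyper.guest_late_init(): the allocators are up,
 * so move the boot CPU off the __initdata page before it is discarded.
 * Secondary CPUs could get their pages here as well, e.g. from the
 * per-cpu allocator as you suggest.
 */
void __init kvmclock_late_init(void)
{
	struct pvclock_vsyscall_time_info *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (p)
		kvmclock_register_cpu(p);
}

That way nothing keeps pointing into __initdata once it is freed, and
when the kvm hooks are not installed none of this runs at all.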
Pavel