Message-ID: <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
Date: Tue, 25 May 2021 18:23:35 -0400
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Anchal Agarwal <anchalag@...zon.com>
Cc: "tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "hpa@...or.com" <hpa@...or.com>,
"jgross@...e.com" <jgross@...e.com>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"sstabellini@...nel.org" <sstabellini@...nel.org>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"roger.pau@...rix.com" <roger.pau@...rix.com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
"davem@...emloft.net" <davem@...emloft.net>,
"rjw@...ysocki.net" <rjw@...ysocki.net>,
"len.brown@...el.com" <len.brown@...el.com>,
"pavel@....cz" <pavel@....cz>,
"peterz@...radead.org" <peterz@...radead.org>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"David Woodhouse" <dwmw@...zon.co.uk>,
"benh@...nel.crashing.org" <benh@...nel.crashing.org>,
aams@...zon.com
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend
mode
On 5/21/21 1:26 AM, Anchal Agarwal wrote:
>>> What I meant there wrt VCPU info was that the VCPU info is not unregistered during
>>> hibernation, so Xen still remembers the old physical addresses for the VCPU info pages
>>> that the booting kernel registered. But the hibernation kernel may place the VCPU info at
>>> different physical addresses, and if the two mismatch, that can break resume.
>>> During hibernation, the VCPU info registration hypercall is not invoked again.
>>
>> I still don't think that's the cause but it's certainly worth having a look.
>>
> Hi Boris,
> Apologies for picking this up after last year.
> I did a deep dive on the above statement, and that is indeed what is happening.
> I did some debugging around KASLR and hibernation using reboot mode.
> My debug prints show that whenever the vcpu_info* address assigned to a secondary vcpu
> in xen_vcpu_setup() at boot differs from the one saved in the image, resume gets stuck for
> that vcpu in bringup_cpu(). In other words, &per_cpu(xen_vcpu_info, cpu) differs between
> boot and the point where control jumps into the image.
>
> I failed to get any prints after it got stuck in bringup_cpu(), and I have no way to send
> a sysrq signal to the guest or to capture a kdump.
xenctx and xen-hvmctx might be helpful.
> This mismatch is not observed in every hibernate-resume cycle, so I am not sure whether
> this is a bug or expected behavior.
> I am also contemplating the possibility that it is a bug in the Xen code that is triggered
> only when KASLR is enabled, but I do not have enough data to prove that.
> Is it a coincidence that this always happens for the 1st vcpu?
> Moreover, since the hypervisor is not aware that the guest is hibernated (in reboot mode
> the sequence looks like a regular shutdown to dom0), is re-registering vcpu_info for the
> secondary vcpus even plausible?
I think I am missing how this is supposed to work (maybe we've talked about this
but it's been many months since then). You hibernate the guest and it writes the
state to swap. The guest is then shut down? And what's next? How do you wake it up?
-boris
> I could definitely use some advice to debug this further.
>
>
> Some printk's from my debugging:
>
> At Boot:
>
> xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,
>
> Image Loads:
> It ends up in the condition:
>
> xen_vcpu_setup()
> {
> ...
>         if (xen_hvm_domain()) {
>                 if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
>                         return 0;
>         }
> ...
> }
>
> xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0
>
> This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
> in the guest.
>
> Thanks,
> Anchal.
>> -boris
>>
>>