Message-ID: <8d385bb6-fc30-a44d-a057-f23d89a0152e@gmail.com>
Date: Sat, 11 Mar 2023 00:19:49 +0800
From: Tianyu Lan <ltykernel@...il.com>
To: "Gupta, Pankaj" <pankaj.gupta@....com>, luto@...nel.org,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
seanjc@...gle.com, pbonzini@...hat.com, jgross@...e.com,
tiala@...rosoft.com, kirill@...temov.name,
jiangshan.ljs@...group.com, peterz@...radead.org,
ashish.kalra@....com, srutherford@...gle.com,
akpm@...ux-foundation.org, anshuman.khandual@....com,
pawan.kumar.gupta@...ux.intel.com, adrian.hunter@...el.com,
daniel.sneddon@...ux.intel.com, alexander.shishkin@...ux.intel.com,
sandipan.das@....com, ray.huang@....com, brijesh.singh@....com,
michael.roth@....com, thomas.lendacky@....com,
venu.busireddy@...cle.com, sterritt@...gle.com,
tony.luck@...el.com, samitolvanen@...gle.com, fenghua.yu@...el.com
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-hyperv@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: [RFC PATCH V3 00/16] x86/hyperv/sev: Add AMD sev-snp enlightened
guest support on hyperv
On 3/10/2023 11:35 PM, Gupta, Pankaj wrote:
>
>
> Hi Tianyu,
>
> While testing the guest patches on a KVM host, my guest kernel gets
> stuck at early bootup. It does not seem to be a hang but rather a loop
> where interrupts are processed repeatedly from the
> "pv_native_irq_enable" path and, IIUC, prevent the boot process from
> making progress. Did you face any such scenario in your testing?
>
> It seems to me that "native_irq_enable" enables interrupts and
> "check_hv_pending_irq_enable" starts handling the pending interrupts
> (after disabling irqs). But "check_hv_pending_irq_enable=>do_exc_hv"
> can again call "pv_native_irq_enable" in the interrupt handling path
> and repeat the same loop?
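>
> A rough sketch of the suspected flow (simplified and partly
> hypothetical; only the function names are taken from the series and
> the trace below):
>
>     /* paravirt irq_enable callback installed by the series */
>     static void pv_native_irq_enable(void)
>     {
>             native_irq_enable();
>             check_hv_pending_irq_enable();  /* drain pending #HV events */
>     }
>
>     static void check_hv_pending_irq_enable(void)
>     {
>             native_irq_disable();
>             /*
>              * do_exc_hv() dispatches the pending events; its handlers
>              * may re-enable irqs (e.g. __do_softirq(),
>              * spin_unlock_irqrestore()) and so re-enter
>              * pv_native_irq_enable() and the same drain path again.
>              */
>             do_exc_hv();
>             native_irq_enable();
>     }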
I haven't hit this issue in my testing. Thanks for the report. I will
double check and report back.
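
If it really is such re-entrancy, one hypothetical, untested idea (not
from the series) would be a per-CPU guard so that handlers invoked from
do_exc_hv() cannot recurse into the drain path:

    #include <linux/percpu.h>

    /* hypothetical flag, one per CPU */
    static DEFINE_PER_CPU(bool, hv_evt_drain_active);

    static void check_hv_pending_irq_enable(void)
    {
            /* already draining on this CPU: let the outer call finish */
            if (this_cpu_read(hv_evt_drain_active))
                    return;

            this_cpu_write(hv_evt_drain_active, true);
            native_irq_disable();
            do_exc_hv();
            native_irq_enable();
            this_cpu_write(hv_evt_drain_active, false);
    }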
> Also pasting below the stack dump [1].
>
> Thanks,
> Pankaj
>
> [1]
> [ 20.530786] Call Trace:
> [ 20.531099] <IRQ>
> [ 20.531360] dump_stack_lvl+0x4d/0x67
> [ 20.531820] dump_stack+0x14/0x1a
> [ 20.532235] do_exc_hv.cold+0x11/0xec
> [ 20.532792] check_hv_pending_irq_enable+0x64/0x80
> [ 20.533390] pv_native_irq_enable+0xe/0x20  ====> here
> [ 20.533902] __do_softirq+0x89/0x2f3
> [ 20.534352] __irq_exit_rcu+0x9f/0x110
> [ 20.534825] irq_exit_rcu+0x12/0x20
> [ 20.535267] common_interrupt+0xca/0xf0
> [ 20.535745] </IRQ>
> [ 20.536014] <TASK>
> [ 20.536286] do_exc_hv.cold+0xda/0xec
> [ 20.536826] check_hv_pending_irq_enable+0x64/0x80
> [ 20.537429] pv_native_irq_enable+0xe/0x20  ====> here
> [ 20.537942] _raw_spin_unlock_irqrestore+0x21/0x50
> [ 20.538539] __setup_irq+0x3be/0x740
> [ 20.538990] request_threaded_irq+0x116/0x180
> [ 20.539533] hpet_time_init+0x35/0x56
> [ 20.539994] x86_late_time_init+0x1f/0x3d
> [ 20.540556] start_kernel+0x8af/0x970
> [ 20.541033] x86_64_start_reservations+0x28/0x2e
> [ 20.541607] x86_64_start_kernel+0x96/0xa0
> [ 20.542126] secondary_startup_64_no_verify+0xe5/0xeb
> [ 20.542757] </TASK>