Message-ID: <9869257e-d257-da83-edd6-0c167f915829@de.ibm.com>
Date: Fri, 25 Feb 2022 14:29:31 +0100
From: Christian Borntraeger <borntraeger@...ibm.com>
To: Michael Mueller <mimu@...ux.ibm.com>, kvm@...r.kernel.org
Cc: cohuck@...hat.com, frankja@...ux.ibm.com, thuth@...hat.com,
pasic@...ux.ibm.com, david@...hat.com, linux-s390@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/1] KVM: s390: pv: make use of ultravisor AIV support
Am 24.02.22 um 16:47 schrieb Michael Mueller:
>
>
> On 22.02.22 09:13, Christian Borntraeger wrote:
>> Am 09.02.22 um 16:22 schrieb Michael Mueller:
>>> This patch enables the ultravisor adapter interruption virtualization
>>> support indicated by UV feature BIT_UV_FEAT_AIV. This allows ISC
>>> interruption injection directly into the GISA IPM for PV kvm guests.
>>>
>>> Hardware that does not support this feature will continue to use the
>>> UV interruption interception method to deliver ISC interruptions to
>>> PV kvm guests. For this purpose, the ECA_AIV bit for all guest cpus
>>> will be cleared and the GISA will be disabled during PV CPU setup.
>>>
>>> In addition, a check in __inject_io() has been removed. That reduces
>>> the required instructions for interruption handling for PV and
>>> traditional kvm guests.
>>>
>>> Signed-off-by: Michael Mueller <mimu@...ux.ibm.com>
>>
>> The CI said the following with gisa_disable in the calltrace.
>> Will drop from next for now.
>
> It turns out this is caused by kvm_s390_set_tod_clock(), which is
> triggered by a kvm-unit-test (sac_PV), and not directly related to
> this patch. Please re-apply.
Done. We need to fix the sck handler instead.
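For reference, one common way out of an inverted ordering like the one below is to take the outer lock with a trylock and have the caller back out and retry when it is contended. A minimal userspace sketch of that idea (pthread mutexes stand in for kvm->lock and vcpu->mutex; all names here are illustrative, not the actual kernel fix):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t kvm_lock  = PTHREAD_MUTEX_INITIALIZER; /* stands in for kvm->lock   */
static pthread_mutex_t vcpu_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for vcpu->mutex */

/*
 * Called with vcpu_lock already held (as in the SCK intercept path).
 * Instead of blocking on kvm_lock -- which would invert the
 * kvm_lock -> vcpu_lock order established elsewhere -- try to take it
 * and report failure so the caller can make the guest retry.
 */
static bool try_set_tod_clock(void)
{
	if (pthread_mutex_trylock(&kvm_lock) != 0)
		return false;	/* contended: caller retries the instruction */
	/* ... update the TOD clock under kvm_lock ... */
	pthread_mutex_unlock(&kvm_lock);
	return true;
}

static bool handle_sck(void)
{
	bool ok;

	pthread_mutex_lock(&vcpu_lock);
	ok = try_set_tod_clock();
	pthread_mutex_unlock(&vcpu_lock);
	return ok;	/* false: rewind the PSW and let the guest retry */
}
```

Because trylock never sleeps waiting for kvm_lock, no vcpu->mutex -> kvm->lock dependency edge is created, which breaks the cycle lockdep reports below.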
>
>>
>> LOCKDEP_CIRCULAR (suite: kvm-unit-tests-kvm, case: -)
>> WARNING: possible circular locking dependency detected
>> 5.17.0-20220221.rc5.git1.b8f0356a093a.300.fc35.s390x+debug #1 Not tainted
>> ------------------------------------------------------
>> qemu-system-s39/161139 is trying to acquire lock:
>> 0000000280dc0b98 (&kvm->lock){+.+.}-{3:3}, at: kvm_s390_set_tod_clock+0x36/0x220 [kvm]
>> but task is already holding lock:
>> 0000000280f4e4b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x9a/0xa40 [kvm]
>> which lock already depends on the new lock.
>> the existing dependency chain (in reverse order) is:
>> -> #1 (&vcpu->mutex){+.+.}-{3:3}:
>> __lock_acquire+0x604/0xbd8
>> lock_acquire.part.0+0xe2/0x250
>> lock_acquire+0xb0/0x200
>> __mutex_lock+0x9e/0x8a0
>> mutex_lock_nested+0x32/0x40
>> kvm_s390_gisa_disable+0xa4/0x130 [kvm]
>> kvm_s390_handle_pv+0x718/0x778 [kvm]
>> kvm_arch_vm_ioctl+0x4ac/0x5f8 [kvm]
>> kvm_vm_ioctl+0x336/0x530 [kvm]
>> __s390x_sys_ioctl+0xbe/0x100
>> __do_syscall+0x1da/0x208
>> system_call+0x82/0xb0
>> -> #0 (&kvm->lock){+.+.}-{3:3}:
>> check_prev_add+0xe0/0xed8
>> validate_chain+0x736/0xb20
>> __lock_acquire+0x604/0xbd8
>> lock_acquire.part.0+0xe2/0x250
>> lock_acquire+0xb0/0x200
>> __mutex_lock+0x9e/0x8a0
>> mutex_lock_nested+0x32/0x40
>> kvm_s390_set_tod_clock+0x36/0x220 [kvm]
>> kvm_s390_handle_b2+0x378/0x728 [kvm]
>> kvm_handle_sie_intercept+0x13a/0x448 [kvm]
>> vcpu_post_run+0x28e/0x560 [kvm]
>> __vcpu_run+0x266/0x388 [kvm]
>> kvm_arch_vcpu_ioctl_run+0x10a/0x270 [kvm]
>> kvm_vcpu_ioctl+0x27c/0xa40 [kvm]
>> __s390x_sys_ioctl+0xbe/0x100
>> __do_syscall+0x1da/0x208
>> system_call+0x82/0xb0
>> other info that might help us debug this:
>> Possible unsafe locking scenario:
>> CPU0 CPU1
>> ---- ----
>> lock(&vcpu->mutex);
>> lock(&kvm->lock);
>> lock(&vcpu->mutex);
>> lock(&kvm->lock);
>> *** DEADLOCK ***
>> 2 locks held by qemu-system-s39/161139:
>> #0: 0000000280f4e4b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x9a/0xa40 [kvm]
>> #1: 0000000280dc47c8 (&kvm->srcu){....}-{0:0}, at: __vcpu_run+0x1d4/0x388 [kvm]
>> stack backtrace:
>> CPU: 10 PID: 161139 Comm: qemu-system-s39 Not tainted 5.17.0-20220221.rc5.git1.b8f0356a093a.300.fc35.s390x+debug #1
>> Hardware name: IBM 8561 T01 701 (LPAR)
>> Call Trace:
>> [<00000001da4e89de>] dump_stack_lvl+0x8e/0xc8
>> [<00000001d9876c56>] check_noncircular+0x136/0x158
>> [<00000001d9877c70>] check_prev_add+0xe0/0xed8
>> [<00000001d987919e>] validate_chain+0x736/0xb20
>> [<00000001d987b23c>] __lock_acquire+0x604/0xbd8
>> [<00000001d987c432>] lock_acquire.part.0+0xe2/0x250
>> [<00000001d987c650>] lock_acquire+0xb0/0x200
>> [<00000001da4f72ae>] __mutex_lock+0x9e/0x8a0
>> [<00000001da4f7ae2>] mutex_lock_nested+0x32/0x40
>> [<000003ff8070cd6e>] kvm_s390_set_tod_clock+0x36/0x220 [kvm]
>> [<000003ff8071dd68>] kvm_s390_handle_b2+0x378/0x728 [kvm]
>> [<000003ff8071146a>] kvm_handle_sie_intercept+0x13a/0x448 [kvm]
>> [<000003ff8070dd46>] vcpu_post_run+0x28e/0x560 [kvm]
>> [<000003ff8070e27e>] __vcpu_run+0x266/0x388 [kvm]
>> [<000003ff8070eba2>] kvm_arch_vcpu_ioctl_run+0x10a/0x270 [kvm]
>> [<000003ff806f4044>] kvm_vcpu_ioctl+0x27c/0xa40 [kvm]
>> [<00000001d9b47ac6>] __s390x_sys_ioctl+0xbe/0x100
>> [<00000001da4ec152>] __do_syscall+0x1da/0x208
>> [<00000001da4fec42>] system_call+0x82/0xb0
>> INFO: lockdep is turned off.
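The two chains in the splat are a classic ABBA inversion: chain #1 (the gisa_disable path) established kvm->lock -> vcpu->mutex, and chain #0 (the SCK intercept path) takes them in the opposite order. A toy single-thread model of the ordering rule lockdep inferred (illustrative names, not kernel code):

```c
#include <stdbool.h>

/*
 * held_kvm/held_vcpu model one thread's currently held locks.
 * The established order from chain #1 is kvm->lock before vcpu->mutex,
 * so acquiring kvm->lock while vcpu->mutex is held is the violation.
 */
static bool held_kvm, held_vcpu;

/* returns false when the acquisition would invert the established order */
static bool acquire_kvm_lock(void)
{
	if (held_vcpu)
		return false;	/* kvm->lock after vcpu->mutex: the reported cycle */
	held_kvm = true;
	return true;
}

static bool acquire_vcpu_mutex(void)
{
	held_vcpu = true;	/* vcpu->mutex after kvm->lock is the allowed order */
	return true;
}

static void release_all(void)
{
	held_kvm = held_vcpu = false;
}
```

The gisa_disable chain (acquire_kvm_lock() then acquire_vcpu_mutex()) passes, while the SCK chain (acquire_vcpu_mutex() then acquire_kvm_lock()) is flagged, matching the scenario table above.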