Message-ID: <CANRm+CwQB_EBK_GSF-Nrm6kfpQoNZJmg+N382B+d4YVNjj_gOA@mail.gmail.com>
Date: Sat, 21 Apr 2018 08:38:16 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Cornelia Huck <cohuck@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Tonny Lu <tonnylu@...cent.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>
Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@...hat.com>:
> On Fri, 20 Apr 2018 21:51:13 +0800
> Wanpeng Li <kernellwp@...il.com> wrote:
>
>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@...hat.com>:
>> > On Thu, 19 Apr 2018 17:47:28 -0700
>> > Wanpeng Li <kernellwp@...il.com> wrote:
>> >
>> >> From: Wanpeng Li <wanpengli@...cent.com>
>> >>
>> >> Our virtual machines make use of device assignment by configuring
>> >> 12 NVMe disks for high I/O performance. Each NVMe device has 129
>> >> MSI-X Table entries:
>> >> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-
>> >> 	Vector table: BAR=0 offset=00002000
>> >> The Windows virtual machines fail to boot because they map all of the
>> >> MSI-X table entries that the NVMe hardware reports to the bus into the
>> >> MSI routing table, which exceeds the 1024-entry limit. This patch extends
>> >> KVM_MAX_IRQ_ROUTES to 4096 for all archs; in the future it might be
>> >> extended again if needed.
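>> >> (For scale: 12 disks * 129 vectors = 1548 MSI routing entries, well
>> >> above the old 1024-entry limit and comfortably below 4096.)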
>> >>
>> >> Cc: Paolo Bonzini <pbonzini@...hat.com>
>> >> Cc: Radim Krčmář <rkrcmar@...hat.com>
>> >> Cc: Tonny Lu <tonnylu@...cent.com>
>> >> Cc: Cornelia Huck <cohuck@...hat.com>
>> >> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
>> >> Signed-off-by: Tonny Lu <tonnylu@...cent.com>
>> >> ---
>> >> v1 -> v2:
>> >> * extend MAX_IRQ_ROUTES to 4096 for all archs
>> >>
>> >>  include/linux/kvm_host.h | 6 ------
>> >>  1 file changed, 6 deletions(-)
>> >>
>> >> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> >> index 6930c63..0a5c299 100644
>> >> --- a/include/linux/kvm_host.h
>> >> +++ b/include/linux/kvm_host.h
>> >> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>> >>
>> >>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>> >>
>> >> -#ifdef CONFIG_S390
>> >>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>> >
>> > What about /* might need extension/rework in the future */ instead of
>> > the FIXME?
>>
>> Yeah, I guess the maintainers can help to fix it when applying. :)
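>>
>> With the suggested comment, the post-patch block would collapse to a
>> single unconditional define, i.e. roughly:
>>
>> #define KVM_MAX_IRQ_ROUTES 4096 /* might need extension/rework in the future */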
>>
>> >
>> > As far as I understand, 4096 should cover most architectures and the
>> > sane end of s390 configurations, but will not be enough at the scarier
>> > end of s390. (I'm not sure how much it matters in practice.)
>> >
>> > Do we want to make this a tuneable in the future? Do some kind of
>> > dynamic allocation? Not sure whether it is worth the trouble.
>>
>> I think we should keep it as it is for now.
>
> My main question here is how long this will be enough... the number of
> virtqueues per device has gone up to 1K from the initial 64, which makes it
> possible to hit the 4K limit with fewer virtio devices than before (on
> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> giant tables everywhere just to accommodate s390.
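> (At 1K queues per device, four fully-populated virtio devices would
> already need a 4K table on their own.)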
I suspect there is no real scenario that requires extending this further
for s390, since nobody has reported one.
> If the s390 maintainers tell me that nobody is doing the really insane
> stuff, I'm happy as well :)
Christian, any thoughts?
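
For reference, a rough (untested) sketch of how a VMM ends up consuming one
routing entry per MSI-X vector via KVM_SET_GSI_ROUTING; the vm_fd handling
and the GSI numbering scheme here are purely illustrative:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: install one MSI routing entry per vector for a single device.
 * A real VMM builds one table covering every device and GSI it uses. */
static int set_msi_routes(int vm_fd, unsigned int nr_vectors)
{
	struct kvm_irq_routing *table;
	unsigned int i;
	int ret;

	table = calloc(1, sizeof(*table) +
			  nr_vectors * sizeof(struct kvm_irq_routing_entry));
	if (!table)
		return -1;
	table->nr = nr_vectors;

	for (i = 0; i < nr_vectors; i++) {
		struct kvm_irq_routing_entry *e = &table->entries[i];

		e->gsi  = i;			/* illustrative GSI numbering */
		e->type = KVM_IRQ_ROUTING_MSI;
		/* e->u.msi.address_lo/address_hi/data would be filled in
		 * from the device's MSI-X table entry. */
	}

	/* The kernel rejects the request once the table size exceeds
	 * KVM_MAX_IRQ_ROUTES, which is what the guests above ran into. */
	ret = ioctl(vm_fd, KVM_SET_GSI_ROUTING, table);
	free(table);
	return ret;
}

Since the ioctl replaces the whole table, it is the total across all
assigned devices that has to stay under KVM_MAX_IRQ_ROUTES.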
Regards,
Wanpeng Li