Message-ID: <20160819144907.GB1885@potion>
Date: Fri, 19 Aug 2016 16:49:08 +0200
From: Radim Krčmář <rkrcmar@...hat.com>
To: Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
Cc: joro@...tes.org, pbonzini@...hat.com, alex.williamson@...hat.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
sherry.hurwitz@....com
Subject: Re: [PART2 PATCH v6 12/12] svm: Implements update_pi_irte hook to
setup posted interrupt
2016-08-18 14:42-0500, Suravee Suthikulpanit:
> This patch implements the update_pi_irte function hook to allow SVM
> to communicate with the IOMMU driver regarding how to set up the IRTE
> for handling posted interrupts.
>
> In case AVIC is enabled, during vcpu_load/unload, SVM needs to update
> IOMMU IRTE with appropriate host physical APIC ID. Also, when
> vcpu_blocking/unblocking, SVM needs to update the is-running bit in
> the IOMMU IRTE. Both are achieved via calling amd_iommu_update_ga().
>
> However, if GA mode is not enabled for the pass-through device, the
> IOMMU driver will simply return when amd_iommu_update_ga() is called.
>
> Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
> ---
> diff --git a/include/linux/amd-iommu.h b/include/linux/amd-iommu.h
> @@ -34,6 +34,7 @@ struct amd_ir_data {
> struct msi_msg msi_entry;
> void *entry; /* Pointer to union irte or struct irte_ga */
> void *ref; /* Pointer to the actual irte */
> + struct list_head node; /* Used by SVM for per-vcpu ir_list */
Putting a list_head here requires all users of amd-iommu to cooperate,
which is dangerous, but it allows simpler SVM code.  The alternative is
to force wrappers in SVM, which would also allow the IOMMU driver to
keep struct amd_ir_data incomplete in public headers:

  struct amd_ir_data_wrapper {
          struct list_head node;
          struct amd_ir_data *ir_data;
  };
(The rant continues below.)
> +static int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
> + uint32_t guest_irq, bool set)
> +{
> + struct kvm_kernel_irq_routing_entry *e;
> + struct kvm_irq_routing_table *irq_rt;
[...]
> + hlist_for_each_entry(e, &irq_rt->map[guest_irq], link) {
> + struct kvm_lapic_irq irq;
> + struct vcpu_data vcpu_info;
[...]
> + kvm_set_msi_irq(e, &irq);
> + if (kvm_intr_is_single_vcpu(kvm, &irq, &vcpu)) {
> + svm = to_svm(vcpu);
> + vcpu_info.pi_desc_addr = page_to_phys(svm->avic_backing_page);
> + vcpu_info.vector = irq.vector;
[...]
> + struct amd_iommu_pi_data pi;
> +
> + /* Try to enable guest_mode in IRTE */
> + pi.ga_tag = AVIC_GATAG(kvm->arch.avic_vm_id,
> + vcpu->vcpu_id);
> + pi.is_guest_mode = true;
> + pi.vcpu_data = &vcpu_info;
> + ret = irq_set_vcpu_affinity(host_irq, &pi);
> + if (!ret && pi.is_guest_mode)
> + svm_ir_list_add(svm, pi.ir_data);
I missed a bug here the last time:
If ir_data is already on some VCPU's list and the VCPU changes, then we
don't remove ir_data from the previous list before adding it to a new
one.  This was not as bad when we only had wrappers (it only resulted
in duplication), but now adding the same node to a second list corrupts
the first one ...
The problem with wrappers is that we don't know what list we should
remove the "struct amd_ir_data" from; we would need to add another
tracking structure or go through all VCPUs.
Having "struct list_head" in "struct amd_ir_data" would allow us to know
the current list and remove it from there:
One "struct amd_ir_data" should never be used by more than one caller of
amd_iommu_update_ga(), because they would have to be cooperating anyway,
which would mean a single mediator, so we can add a "struct list_head"
into "struct amd_ir_data".
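A minimal userspace sketch of the idea, with stripped-down stand-ins
for the kernel's <linux/list.h> primitives and hypothetical slimmed
structures (so this is illustration, not kernel code): embedding the
node in the data itself lets the new VCPU unhook the entry from
whatever list currently holds it, without knowing which VCPU owned it.

```c
#include <assert.h>

/* Userspace stand-ins for the kernel's list primitives. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del_init(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
	INIT_LIST_HEAD(e);
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* Hypothetical slimmed-down structures, for illustration only. */
struct amd_ir_data { struct list_head node; };
struct vcpu_svm   { struct list_head ir_list; };

/* Move ir to the new VCPU's list, unhooking it from any previous one. */
static void svm_ir_move(struct vcpu_svm *to, struct amd_ir_data *ir)
{
	if (!list_empty(&ir->node))	/* still on some other VCPU's list */
		list_del_init(&ir->node);
	list_add(&ir->node, &to->ir_list);
}
```

With the wrapper approach, svm_ir_move() would instead have to search
every VCPU's list to find (and free) the stale wrapper first.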
Minor design note:
To make the usage of "struct amd_ir_data" safer, we could pass "struct
list_head" into irq_set_vcpu_affinity(), instead of returning "struct
amd_ir_data *".
irq_set_vcpu_affinity() would add "struct amd_ir_data" to the list only
if ir_data was not already in some list and report whether the list
was modified.
I think that adding "struct list_head" into "struct amd_ir_data" is
nicer than having wrappers.
Joerg, Paolo, what do you think?
Thanks.
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> @@ -4366,6 +4399,177 @@ static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
> +static int svm_ir_list_add(struct vcpu_svm *svm, struct amd_ir_data *ir)
> +{
> + spin_lock_irqsave(&svm->ir_list_lock, flags);
> + list_for_each_entry(cur, &svm->ir_list, node) {
> + if (cur != ir)
> + continue;
> + found = true;
> + break;
> + }
If we're using ir->node, then this loop can become a simple
!list_empty(&ir->node) check, because we should never add the entry to
a new list while it is still on another one.
> + spin_unlock_irqrestore(&svm->ir_list_lock, flags);
> +
> + if (found)
> + return 0;
> +
> + spin_lock_irqsave(&svm->ir_list_lock, flags);
> + list_add(&ir->node, &svm->ir_list);
> + spin_unlock_irqrestore(&svm->ir_list_lock, flags);
> + return 0;
> +}
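The suggested simplification could look roughly like this (again a
hypothetical userspace sketch with stand-ins for the kernel list
helpers; the real code would hold svm->ir_list_lock around the check
and the insertion, in a single critical section):

```c
#include <assert.h>

/* Userspace stand-ins for <linux/list.h>; not kernel code. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

/* Hypothetical slimmed-down structures, for illustration only. */
struct amd_ir_data { struct list_head node; };
struct vcpu_svm   { struct list_head ir_list; };

/*
 * Simplified svm_ir_list_add: the open-coded search loop is replaced
 * by a single list_empty() check, and the double lock/unlock sequence
 * of the original collapses into one critical section (locking elided
 * here).
 */
static int svm_ir_list_add(struct vcpu_svm *svm, struct amd_ir_data *ir)
{
	if (!list_empty(&ir->node))	/* already tracked somewhere */
		return 0;
	list_add(&ir->node, &svm->ir_list);
	return 0;
}
```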