Message-ID: <554CE3A5.7000101@redhat.com>
Date: Fri, 08 May 2015 18:26:13 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Joerg Roedel <joro@...tes.org>, Gleb Natapov <gleb@...nel.org>
CC: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Joerg Roedel <jroedel@...e.de>,
Christian Borntraeger <borntraeger@...ibm.com>
Subject: Re: [PATCH] kvm: irqchip: Break up high order allocations of kvm_irq_routing_table
On 08/05/2015 14:31, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@...e.de>
>
> The allocation size of the kvm_irq_routing_table depends on
> the number of irq routing entries, because the table and all
> of its entries are allocated with a single kzalloc call.
>
> When the irq routing table gets bigger, this requires
> high-order allocations, which fail from time to time:
>
> qemu-kvm: page allocation failure: order:4, mode:0xd0
>
> This patch fixes the issue by breaking up the allocation of
> the table and its entries into individual kzalloc calls.
> These can all be satisfied with order-0 allocations, which
> are less likely to fail.
>
> The downside of this change is lower performance, because
> of the additional calls to kzalloc. But given how rarely
> kvm_set_irq_routing is called in the lifetime of a guest, it
> doesn't really matter much.
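
For reference, here is a back-of-the-envelope sketch of how the single
kzalloc ends up as an order-4 request; the entry count and struct sizes
below are illustrative assumptions, not the real kernel values:

#include <stdio.h>

#define PAGE_SIZE	4096UL

/* Smallest order such that (1 << order) pages cover 'bytes'. */
static unsigned int alloc_order(unsigned long bytes)
{
	unsigned long pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
	unsigned int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	unsigned long nr_rt_entries = 768;	/* assumed number of GSIs */
	unsigned long head_size     = 8;	/* hlist_head on 64-bit */
	unsigned long entry_size    = 64;	/* assumed entry size */
	unsigned long total;

	/* One allocation for the table plus all entries, as before the patch. */
	total = nr_rt_entries * (head_size + entry_size);
	printf("%lu bytes -> order %u\n", total, alloc_order(total));
	/* Prints: 55296 bytes -> order 4, i.e. 16 contiguous pages. */
	return 0;
}

After the patch, every allocation on that path is far below a page,
i.e. order 0.
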
It probably doesn't matter much indeed, but can you time the difference?
kvm_set_irq_routing is not too frequent, but it happens often enough that
we had to use a separate SRCU instance just to speed it up (see commit
719d93cd5f5, "kvm/irqchip: Speed up KVM_SET_GSI_ROUTING", 2014-01-16).
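
Something along these lines from userspace would give a rough number
(a minimal sketch: the MSI entry contents, the entry count and the
iteration count are arbitrary assumptions, and error checking is
omitted for brevity):

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <time.h>
#include <unistd.h>

#define NR_ENTRIES	512
#define ITERATIONS	1000

int main(void)
{
	struct kvm_irq_routing *table;
	struct timespec t0, t1;
	int kvm, vm, i;

	kvm = open("/dev/kvm", O_RDWR);
	vm = ioctl(kvm, KVM_CREATE_VM, 0);
	ioctl(vm, KVM_CREATE_IRQCHIP, 0);	/* x86 in-kernel irqchip */

	table = calloc(1, sizeof(*table) +
			  NR_ENTRIES * sizeof(struct kvm_irq_routing_entry));
	table->nr = NR_ENTRIES;
	for (i = 0; i < NR_ENTRIES; i++) {
		struct kvm_irq_routing_entry *e = &table->entries[i];

		e->gsi  = i;
		e->type = KVM_IRQ_ROUTING_MSI;	/* MSI needs no pin wiring */
		e->u.msi.address_lo = 0xfee00000;
		e->u.msi.data = i;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERATIONS; i++)
		ioctl(vm, KVM_SET_GSI_ROUTING, table);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.1f us per KVM_SET_GSI_ROUTING\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
		(t1.tv_nsec - t0.tv_nsec)) / ITERATIONS / 1e3);

	free(table);
	close(vm);
	close(kvm);
	return 0;
}

Running it once on each kernel, before and after the patch, should show
whether the extra kzalloc calls are visible at all.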
Paolo
> Signed-off-by: Joerg Roedel <jroedel@...e.de>
> ---
>  virt/kvm/irqchip.c | 48 ++++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 38 insertions(+), 10 deletions(-)
>
> diff --git a/virt/kvm/irqchip.c b/virt/kvm/irqchip.c
> index 1d56a90..b56168f 100644
> --- a/virt/kvm/irqchip.c
> +++ b/virt/kvm/irqchip.c
> @@ -33,7 +33,6 @@
>  
>  struct kvm_irq_routing_table {
>  	int chip[KVM_NR_IRQCHIPS][KVM_IRQCHIP_NUM_PINS];
> -	struct kvm_kernel_irq_routing_entry *rt_entries;
>  	u32 nr_rt_entries;
>  	/*
>  	 * Array indexed by gsi. Each entry contains list of irq chips
> @@ -118,11 +117,31 @@ int kvm_set_irq(struct kvm *kvm, int irq_source_id, u32 irq, int level,
>  	return ret;
>  }
>  
> +static void free_irq_routing_table(struct kvm_irq_routing_table *rt)
> +{
> +	int i;
> +
> +	if (!rt)
> +		return;
> +
> +	for (i = 0; i < rt->nr_rt_entries; ++i) {
> +		struct kvm_kernel_irq_routing_entry *e;
> +		struct hlist_node *n;
> +
> +		hlist_for_each_entry_safe(e, n, &rt->map[i], link) {
> +			hlist_del(&e->link);
> +			kfree(e);
> +		}
> +	}
> +
> +	kfree(rt);
> +}
> +
>  void kvm_free_irq_routing(struct kvm *kvm)
>  {
>  	/* Called only during vm destruction. Nobody can use the pointer
>  	   at this stage */
> -	kfree(kvm->irq_routing);
> +	free_irq_routing_table(kvm->irq_routing);
>  }
>  
>  static int setup_routing_entry(struct kvm_irq_routing_table *rt,
> @@ -173,25 +192,33 @@ int kvm_set_irq_routing(struct kvm *kvm,
>  
>  	nr_rt_entries += 1;
>  
> -	new = kzalloc(sizeof(*new) + (nr_rt_entries * sizeof(struct hlist_head))
> -		      + (nr * sizeof(struct kvm_kernel_irq_routing_entry)),
> +	new = kzalloc(sizeof(*new) + (nr_rt_entries * sizeof(struct hlist_head)),
>  		      GFP_KERNEL);
>  
>  	if (!new)
>  		return -ENOMEM;
>  
> -	new->rt_entries = (void *)&new->map[nr_rt_entries];
> -
>  	new->nr_rt_entries = nr_rt_entries;
>  	for (i = 0; i < KVM_NR_IRQCHIPS; i++)
>  		for (j = 0; j < KVM_IRQCHIP_NUM_PINS; j++)
>  			new->chip[i][j] = -1;
>  
>  	for (i = 0; i < nr; ++i) {
> +		struct kvm_kernel_irq_routing_entry *e;
> +
> +		r = -ENOMEM;
> +		e = kzalloc(sizeof(*e), GFP_KERNEL);
> +		if (!e)
> +			goto out;
> +
>  		r = -EINVAL;
> -		if (ue->flags)
> +		if (ue->flags) {
> +			kfree(e);
>  			goto out;
> -		r = setup_routing_entry(new, &new->rt_entries[i], ue);
> -		if (r)
> +		}
> +		r = setup_routing_entry(new, e, ue);
> +		if (r) {
> +			kfree(e);
>  			goto out;
> +		}
>  		++ue;
> @@ -209,6 +236,7 @@ int kvm_set_irq_routing(struct kvm *kvm,
>  	r = 0;
>  
>  out:
> -	kfree(new);
> +	free_irq_routing_table(new);
> +
>  	return r;
>  }
>