Message-ID: <532191D4.3000501@citrix.com>
Date: Thu, 13 Mar 2014 11:09:08 +0000
From: David Vrabel <david.vrabel@...rix.com>
To: Ian Campbell <Ian.Campbell@...rix.com>
CC: Zoltan Kiss <zoltan.kiss@...rix.com>,
	<xen-devel@...ts.xenproject.org>, <jonathan.davies@...rix.com>,
	<wei.liu2@...rix.com>, <linux-kernel@...r.kernel.org>,
	<netdev@...r.kernel.org>
Subject: Re: [Xen-devel] [PATCH net-next v7 4/9] xen-netback: Introduce TX
	grant mapping

On 13/03/14 11:02, Ian Campbell wrote:
> On Thu, 2014-03-13 at 10:56 +0000, David Vrabel wrote:
>> On 13/03/14 10:33, Ian Campbell wrote:
>>> On Thu, 2014-03-06 at 21:48 +0000, Zoltan Kiss wrote:
>>>> @@ -135,13 +146,31 @@ struct xenvif {
>>>> pending_ring_idx_t pending_cons;
>>>> u16 pending_ring[MAX_PENDING_REQS];
>>>> struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
>>>> + grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
>>>>
>>>> /* Coalescing tx requests before copying makes number of grant
>>>> * copy ops greater or equal to number of slots required. In
>>>> * worst case a tx request consumes 2 gnttab_copy.
>>>> */
>>>> struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
>>>> -
>>>> + struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
>>>> + struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
>>>
>>> I wonder if we should break some of these arrays into separate
>>> allocations? Wasn't there a problem with sizeof(struct xenvif) at one
>>> point?
>>
>> alloc_netdev() falls back to vmalloc() if the kmalloc fails, so there's
>> no need to split these structures.
>
> Is vmalloc space in abundant supply? For some reason I thought it was
> limited (maybe that's a 32-bit only limitation?)

It is limited on 32-bit, but 64-bit has stupid amounts. From
/proc/meminfo on a 64-bit box:

VmallocTotal:   34359738367 kB
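
For reference, the fallback in question is roughly this pattern (a
sketch modelled on alloc_netdev_mqs() in net/core/dev.c, not the exact
mainline code; the helper name is made up for illustration):

/* Try a physically contiguous allocation first; if that fails, fall
 * back to vmalloc space, so a large driver-private struct (struct
 * xenvif here) only costs virtually contiguous memory in the worst
 * case.
 */
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *netdev_with_priv_alloc(size_t alloc_size)
{
	void *p;

	p = kzalloc(alloc_size, GFP_KERNEL | __GFP_NOWARN);
	if (!p)
		p = vzalloc(alloc_size);	/* virtually contiguous is fine */

	return p;	/* NULL only if both allocations fail */
}
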
David