Message-ID: <50A60D9D.80601@oracle.com>
Date: Fri, 16 Nov 2012 17:55:41 +0800
From: ANNIE LI <annie.li@...cle.com>
To: Ian Campbell <Ian.Campbell@...rix.com>
CC: "xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>
Subject: Re: [Xen-devel] [PATCH 1/4] xen/netback: implements persistent grant
with one page pool.
On 2012-11-16 17:27, Ian Campbell wrote:
> On Fri, 2012-11-16 at 02:18 +0000, ANNIE LI wrote:
>> In this patch, the maximum memory overhead is about
>>
>> (XEN_NETIF_TX_RING_SIZE+XEN_NETIF_RX_RING_SIZE)*PAGE_SIZE (plus the size of grant_ref_t and handle),
>> which is about 512*PAGE_SIZE. Normally, without heavy network offload, this maximum cannot be reached.
>>
>> In the next patch, which splits the tx/rx pool, the maximum is about
> "about" or just "is"?
For grant pages alone, it is exactly this value. I also took into account
the other elements: grant_ref_t and map (to be changed to handle in the future)....
>
>> (256+512)*PAGE_SIZE.
> IOW 3MB.
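For reference, a rough worked version of those numbers, assuming the usual
ring sizes of 256 entries each and 4KiB pages (those constants are an
assumption on my part, not spelled out in this thread):

	shared tx/rx pool: (256 + 256) * 4KiB = 512 pages = 2MiB per vif, worst case
	split tx/rx pools: (256 + 512) * 4KiB = 768 pages = 3MiB per vif, worst case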
>
>>>> +
>>>> + return NULL;
>>>> +}
>>>> +
>>>> @@ -1338,7 +1497,11 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
>>>> gop->source.domid = vif->domid;
>>>> gop->source.offset = txreq.offset;
>>>>
>>>> - gop->dest.u.gmfn = virt_to_mfn(page_address(page));
>>>> + if (!vif->persistent_grant)
>>>> + gop->dest.u.gmfn = virt_to_mfn(page_address(page));
>>>> + else
>>>> + gop->dest.u.gmfn = (unsigned long)page_address(page);
>>> page_address doesn't return any sort of frame number, does it? This is
>>> rather confusing...
>> Yes. I only use the dest.u.gmfn element to save the page_address here for
>> the later memcpy; it is not actually meant to hold a frame number. To
>> avoid confusion, I can use
>>
>> gop->dest.u.gmfn = virt_to_mfn(page_address(page));
>>
>> and then call mfn_to_virt when doing the memcpy.
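A minimal sketch of that copy path, purely illustrative and assuming the
persistently granted page is already mapped in the backend (persistent_copy
and src are made-up names, not from the patch):

	/* Sketch only: when dest.u.gmfn holds virt_to_mfn(page_address(page))
	 * for a persistently granted page, the copy path can recover the
	 * kernel virtual address with mfn_to_virt() and do a plain memcpy()
	 * instead of issuing a GNTTABOP_copy hypercall. */
	static void persistent_copy(const struct gnttab_copy *gop, const void *src)
	{
		void *dst = mfn_to_virt(gop->dest.u.gmfn) + gop->dest.offset;

		memcpy(dst, src, gop->len);
	}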
> It seems a bit odd to be using the gop structure in this way when you
> aren't actually doing a grant op on it.
>
> While investigating I noticed:
> +static int
> +grant_memory_copy_op(unsigned int cmd, void *vuop, unsigned int count,
> + struct xen_netbk *netbk, bool tx_pool)
> ...
> + struct gnttab_copy *uop = vuop;
>
> Why void *vuop? Why not struct gnttab_copy * in the parameter?
Sorry, my mistake.
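For clarity, the signature Ian is suggesting would look something like this
(a sketch only, keeping the other parameters as quoted above):

	static int
	grant_memory_copy_op(unsigned int cmd, struct gnttab_copy *uop,
			     unsigned int count, struct xen_netbk *netbk,
			     bool tx_pool);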
>
> I also noticed your new grant_memory_copy_op() seems to have unbatched
> the grant ops in the non-persistent case, which is going to suck for
> performance in non-persistent mode. You need to pull the conditional and
> the HYPERVISOR_grant_table_op outside the loop and pass it the full array
> instead of doing them one at a time.
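A minimal sketch of the batched shape being suggested, assuming the copy
descriptors for the non-persistent case have already been collected into an
array (flush_copy_ops is an illustrative name, not from the patch):

	/* Sketch only: issue all pending GNTTABOP_copy ops in a single
	 * hypercall instead of one hypercall per request inside the loop. */
	static void flush_copy_ops(struct gnttab_copy *ops, unsigned int nr_ops)
	{
		int ret;

		if (!nr_ops)
			return;

		ret = HYPERVISOR_grant_table_op(GNTTABOP_copy, ops, nr_ops);
		BUG_ON(ret);
	}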
This is still tied to the per-VIF netback implementation.
Currently these cannot be pulled outside the loop, since a netback queue
may contain persistent and non-persistent vifs at the same time. I did
consider implementing per-VIF first and then persistent grants, but since
that is part of Wei's patch combined with other patches, I finally decided
to implement per-VIF later.
But this does limit the implementation of persistent grants.
Thanks
Annie
>
> Ian
>