Message-ID: <e63b4b01-7468-af1e-345b-605ecb605579@redhat.com>
Date: Fri, 23 Nov 2018 10:30:54 +0800
From: Jason Wang <jasowang@...hat.com>
To: Li RongQing <lirongqing@...du.com>, netdev@...r.kernel.org
Subject: Re: [PATCH][net-next] vhost:net: allocate 32KB memory instead of 32K pages when page frag refill
On 2018/11/23 10:04 AM, Li RongQing wrote:
> when the page frag refills, 32K pages (128MB of memory) are requested,
> which hardly succeeds when the system is under memory stress
Looking at get_order(), it seems we get 3 from get_order(32768), since
it takes the size of the block in bytes, not a page count.
/**
* get_order - Determine the allocation order of a memory size
* @size: The size for which to get the order
...
#define get_order(n) \
( \
__builtin_constant_p(n) ? ( \
((n) == 0UL) ? BITS_PER_LONG - PAGE_SHIFT : \
(((n) < (1UL << PAGE_SHIFT)) ? 0 : \
ilog2((n) - 1) - PAGE_SHIFT + 1) \
^^^
) : \
__get_order(n) \
)
>
> And such a large memory size will cause the reference bias to
> underflow and make the page refcount chaotic, since the reference
> bias will be decremented below zero before the allocated memory
> is used up
Do you have a reproducer for this issue?
Thanks
>
> so 32KB of memory is a safe choice; meanwhile, remove an
> unnecessary check
>
> Fixes: e4dab1e6ea64 ("vhost_net: mitigate page reference counting during page frag refill")
> Signed-off-by: Zhang Yu <zhangyu31@...du.com>
> Signed-off-by: Li RongQing <lirongqing@...du.com>
> ---
> drivers/vhost/net.c | 22 +++++++++++-----------
> 1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index d919284f103b..b933a4a8e4ba 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -641,7 +641,7 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len)
> !vhost_vq_avail_empty(vq->dev, vq);
> }
>
> -#define SKB_FRAG_PAGE_ORDER get_order(32768)
> +#define SKB_FRAG_PAGE_ORDER 3
>
> static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
> struct page_frag *pfrag, gfp_t gfp)
> @@ -654,17 +654,17 @@ static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
>
> pfrag->offset = 0;
> net->refcnt_bias = 0;
> - if (SKB_FRAG_PAGE_ORDER) {
> - /* Avoid direct reclaim but allow kswapd to wake */
> - pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
> - __GFP_COMP | __GFP_NOWARN |
> - __GFP_NORETRY,
> - SKB_FRAG_PAGE_ORDER);
> - if (likely(pfrag->page)) {
> - pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
> - goto done;
> - }
> +
> + /* Avoid direct reclaim but allow kswapd to wake */
> + pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
> + __GFP_COMP | __GFP_NOWARN |
> + __GFP_NORETRY,
> + SKB_FRAG_PAGE_ORDER);
> + if (likely(pfrag->page)) {
> + pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
> + goto done;
> }
> +
> pfrag->page = alloc_page(gfp);
> if (likely(pfrag->page)) {
> pfrag->size = PAGE_SIZE;