Message-ID: <c5135749-4bf8-47b4-dec4-326324ab6e1d@itcare.pl>
Date: Sat, 3 Nov 2018 01:16:08 +0100
From: Paweł Staszewski <pstaszewski@...are.pl>
To: Aaron Lu <aaron.lu@...el.com>,
Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Saeed Mahameed <saeedm@...lanox.com>,
"eric.dumazet@...il.com" <eric.dumazet@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Tariq Toukan <tariqt@...lanox.com>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"yoel@...knet.dk" <yoel@...knet.dk>,
"mgorman@...hsingularity.net" <mgorman@...hsingularity.net>
Subject: Re: Kernel 4.19 network performance - forwarding/routing normal users
traffic
On 02.11.2018 at 20:02, Paweł Staszewski wrote:
>
>
>> On 02.11.2018 at 15:20, Aaron Lu wrote:
>> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
>>> On Fri, 2 Nov 2018 13:23:56 +0800
>>> Aaron Lu <aaron.lu@...el.com> wrote:
>>>
>>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
>>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
>>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
>>>>>> wrote:
>>>>>> ... ...
>>>>>>> Section copied out:
>>>>>>>
>>>>>>> mlx5e_poll_tx_cq
>>>>>>> |
>>>>>>> --16.34%--napi_consume_skb
>>>>>>>           |
>>>>>>>           |--12.65%--__free_pages_ok
>>>>>>>           |          |
>>>>>>>           |          --11.86%--free_one_page
>>>>>>>           |                    |
>>>>>>>           |                    |--10.10%--queued_spin_lock_slowpath
>>>>>>>           |                    |
>>>>>>>           |                    --0.65%--_raw_spin_lock
>>>>>> This callchain looks like it is freeing pages of order higher
>>>>>> than 0: __free_pages_ok() is only called for pages whose order
>>>>>> is bigger than 0.
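>>>>>>
>>>>>> In the common __free_pages() path that split is explicit - a
>>>>>> minimal sketch, roughly as in mm/page_alloc.c of that era:
>>>>>>
>>>>>> void __free_pages(struct page *page, unsigned int order)
>>>>>> {
>>>>>> 	if (put_page_testzero(page)) {
>>>>>> 		if (order == 0)
>>>>>> 			free_unref_page(page);	/* per-cpu lists */
>>>>>> 		else
>>>>>> 			__free_pages_ok(page, order); /* buddy, zone->lock */
>>>>>> 	}
>>>>>> }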
>>>>> mlx5 rx uses only order 0 pages, so I don't know where these
>>>>> high order tx SKBs are coming from...
>>>> Perhaps here:
>>>> __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
>>>> __napi_alloc_frag() all call page_frag_alloc(), which uses
>>>> __page_frag_cache_refill() to get an order-3 page if possible, or
>>>> falls back to an order-0 page if an order-3 page is not available.
>>>>
>>>> I'm not sure if your workload uses the above code path though.
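>>>>
>>>> For reference, a simplified sketch of that refill path (condensed
>>>> from mm/page_alloc.c of that era; error paths and the nc->size
>>>> bookkeeping are omitted):
>>>>
>>>> static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>>>> 					     gfp_t gfp_mask)
>>>> {
>>>> 	struct page *page = NULL;
>>>>
>>>> 	/* Opportunistically try an order-3 (32KB on x86) compound
>>>> 	 * page, without retrying or warning on failure... */
>>>> 	page = alloc_pages_node(NUMA_NO_NODE,
>>>> 				gfp_mask | __GFP_COMP | __GFP_NOWARN |
>>>> 				__GFP_NORETRY,
>>>> 				PAGE_FRAG_CACHE_MAX_ORDER);
>>>>
>>>> 	/* ...and fall back to a plain order-0 page. */
>>>> 	if (unlikely(!page))
>>>> 		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, 0);
>>>>
>>>> 	nc->va = page ? page_address(page) : NULL;
>>>> 	return page;
>>>> }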
>>> TL;DR: this is order-0 pages (code walk-through proof below)
>>>
>>> To Aaron, the network stack *can* call __free_pages_ok() with order-0
>>> pages, via:
>>>
>>> static void skb_free_head(struct sk_buff *skb)
>>> {
>>> 	unsigned char *head = skb->head;
>>>
>>> 	if (skb->head_frag)
>>> 		skb_free_frag(head);
>>> 	else
>>> 		kfree(head);
>>> }
>>>
>>> static inline void skb_free_frag(void *addr)
>>> {
>>> 	page_frag_free(addr);
>>> }
>>>
>>> /*
>>>  * Frees a page fragment allocated out of either a compound or
>>>  * order 0 page.
>>>  */
>>> void page_frag_free(void *addr)
>>> {
>>> 	struct page *page = virt_to_head_page(addr);
>>>
>>> 	if (unlikely(put_page_testzero(page)))
>>> 		__free_pages_ok(page, compound_order(page));
>>> }
>>> EXPORT_SYMBOL(page_frag_free);
>> I think here is the problem - order-0 pages are freed directly to the
>> buddy allocator, bypassing the per-cpu-pages lists. This might be the
>> reason lock contention appeared on the free path. Can someone apply
>> the diff below and see if the lock contention is gone?
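>>
>> Roughly why free_unref_page() is cheaper - a simplified sketch of the
>> 4.19-era free_unref_page_commit() (migratetype checks omitted):
>>
>> static void free_unref_page_commit(struct page *page, unsigned long pfn)
>> {
>> 	struct zone *zone = page_zone(page);
>> 	struct per_cpu_pages *pcp;
>>
>> 	/* Order-0 pages land on a per-cpu free list: no zone->lock. */
>> 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
>> 	list_add(&page->lru, &pcp->lists[get_pcppage_migratetype(page)]);
>> 	pcp->count++;
>>
>> 	/* zone->lock is only taken when the per-cpu list overflows,
>> 	 * and then a whole batch is freed under one acquisition. */
>> 	if (pcp->count >= pcp->high)
>> 		free_pcppages_bulk(zone, pcp->batch, pcp);
>> }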
> Will test it tonight
>
Patch applied.
perf report:
https://ufile.io/sytfh
But I also need to wait for more traffic - currently the CPUs are mostly idle.
>
>
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e2ef1c17942f..65c0ae13215a 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
>>  {
>>  	struct page *page = virt_to_head_page(addr);
>>  
>> -	if (unlikely(put_page_testzero(page)))
>> -		__free_pages_ok(page, compound_order(page));
>> +	if (unlikely(put_page_testzero(page))) {
>> +		unsigned int order = compound_order(page);
>> +
>> +		if (order == 0)
>> +			free_unref_page(page);
>> +		else
>> +			__free_pages_ok(page, order);
>> +	}
>>  }
>>  EXPORT_SYMBOL(page_frag_free);
>>> Notice the mlx5 driver supports several RX-memory models, so it can
>>> be hard to follow, but from the perf report output we can see that
>>> it uses mlx5e_skb_from_cqe_linear, which uses build_skb.
>>>
>>> --13.63%--mlx5e_skb_from_cqe_linear
>>>           |
>>>           --5.02%--build_skb
>>>                    |
>>>                    --1.85%--__build_skb
>>>                             |
>>>                             --1.00%--kmem_cache_alloc
>>>
>>> /* build_skb() is wrapper over __build_skb(), that specifically
>>>  * takes care of skb->head and skb->pfmemalloc
>>>  * This means that if @frag_size is not zero, then @data must be backed
>>>  * by a page fragment, not kmalloc() or vmalloc()
>>>  */
>>> struct sk_buff *build_skb(void *data, unsigned int frag_size)
>>> {
>>> 	struct sk_buff *skb = __build_skb(data, frag_size);
>>>
>>> 	if (skb && frag_size) {
>>> 		skb->head_frag = 1;
>>> 		if (page_is_pfmemalloc(virt_to_head_page(data)))
>>> 			skb->pfmemalloc = 1;
>>> 	}
>>> 	return skb;
>>> }
>>> EXPORT_SYMBOL(build_skb);
>>>
>>> It still doesn't prove that the @data is backed by an order-0 page.
>>> For the mlx5 driver, it uses mlx5e_page_alloc_mapped ->
>>> page_pool_dev_alloc_pages(), and I can see the perf report using
>>> __page_pool_alloc_pages_slow().
>>>
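>>> For reference, the page_pool allocation path looks roughly like this
>>> (a condensed sketch of net/core/page_pool.c from that era; DMA
>>> mapping and NUMA details omitted):
>>>
>>> struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
>>> {
>>> 	struct page *page;
>>>
>>> 	/* Fast path: recycle a page cached in the pool. */
>>> 	page = __page_pool_get_cached(pool);
>>> 	if (page)
>>> 		return page;
>>>
>>> 	/* Slow path: fall back to the page allocator; with
>>> 	 * pp_params.order = 0 this is an order-0 allocation. */
>>> 	return __page_pool_alloc_pages_slow(pool, gfp);
>>> }
>>>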
>>> The setup for page_pool in mlx5 uses order=0.
>>>
>>> /* Create a page_pool and register it with rxq */
>>> pp_params.order = 0;
>>> pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
>>> pp_params.pool_size = pool_size;
>>> pp_params.nid = cpu_to_node(c->cpu);
>>> pp_params.dev = c->pdev;
>>> pp_params.dma_dir = rq->buff.map_dir;
>>>
>>> /* page_pool can be used even when there is no rq->xdp_prog,
>>>  * given page_pool does not handle DMA mapping there is no
>>>  * required state to clear. And page_pool gracefully handle
>>>  * elevated refcnt.
>>>  */
>>> rq->page_pool = page_pool_create(&pp_params);
>>> if (IS_ERR(rq->page_pool)) {
>>> 	err = PTR_ERR(rq->page_pool);
>>> 	rq->page_pool = NULL;
>>> 	goto err_free;
>>> }
>>> err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
>>> 				 MEM_TYPE_PAGE_POOL, rq->page_pool);
>> Thanks for the detailed analysis, I'll need more time to understand the
>> whole picture :-)
>>
>
>