Message-ID: <20160719214935.GE64618@ast-mbp.thefacebook.com>
Date: Tue, 19 Jul 2016 14:49:37 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Brenden Blanco <bblanco@...mgrid.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org,
Jamal Hadi Salim <jhs@...atatu.com>,
Saeed Mahameed <saeedm@....mellanox.co.il>,
Martin KaFai Lau <kafai@...com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Ari Saha <as754m@....com>, Or Gerlitz <gerlitz.or@...il.com>,
john.fastabend@...il.com, hannes@...essinduktion.org,
Thomas Graf <tgraf@...g.ch>, Tom Herbert <tom@...bertland.com>,
Daniel Borkmann <daniel@...earbox.net>,
Tariq Toukan <ttoukan.linux@...il.com>
Subject: Re: [PATCH v10 07/12] net/mlx4_en: add page recycle to prepare rx
ring for tx support
On Tue, Jul 19, 2016 at 12:16:52PM -0700, Brenden Blanco wrote:
> The mlx4 driver by default allocates order-3 pages for the ring to
> consume in multiple fragments. When the device has an xdp program, this
> behavior will prevent tx actions since the page must be re-mapped in
> TODEVICE mode, which cannot be done if the page is still shared.
>
> Start by making the allocator configurable based on whether xdp is
> running, such that order-0 pages are always used and never shared.
>
> Since this will stress the page allocator, add a simple page cache to
> each rx ring. Pages in the cache are left dma-mapped, and in drop-only
> stress tests the page allocator is eliminated from the perf report.
>
> Note that setting an xdp program will now require the rings to be
> reconfigured.
>
> Before:
> 26.91% ksoftirqd/0 [mlx4_en] [k] mlx4_en_process_rx_cq
> 17.88% ksoftirqd/0 [mlx4_en] [k] mlx4_en_alloc_frags
> 6.00% ksoftirqd/0 [mlx4_en] [k] mlx4_en_free_frag
> 4.49% ksoftirqd/0 [kernel.vmlinux] [k] get_page_from_freelist
> 3.21% swapper [kernel.vmlinux] [k] intel_idle
> 2.73% ksoftirqd/0 [kernel.vmlinux] [k] bpf_map_lookup_elem
> 2.57% swapper [mlx4_en] [k] mlx4_en_process_rx_cq
>
> After:
> 31.72% swapper [kernel.vmlinux] [k] intel_idle
> 8.79% swapper [mlx4_en] [k] mlx4_en_process_rx_cq
> 7.54% swapper [kernel.vmlinux] [k] poll_idle
> 6.36% swapper [mlx4_core] [k] mlx4_eq_int
> 4.21% swapper [kernel.vmlinux] [k] tasklet_action
> 4.03% swapper [kernel.vmlinux] [k] cpuidle_enter_state
> 3.43% swapper [mlx4_en] [k] mlx4_en_prepare_rx_desc
> 2.18% swapper [kernel.vmlinux] [k] native_irq_return_iret
> 1.37% swapper [kernel.vmlinux] [k] menu_select
> 1.09% swapper [kernel.vmlinux] [k] bpf_map_lookup_elem
>
> Signed-off-by: Brenden Blanco <bblanco@...mgrid.com>
...
> +#define MLX4_EN_CACHE_SIZE (2 * NAPI_POLL_WEIGHT)
> +struct mlx4_en_page_cache {
> +	u32 index;
> +	struct mlx4_en_rx_alloc buf[MLX4_EN_CACHE_SIZE];
> +};
amazing that this tiny recycling pool makes such a huge difference.
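For anyone skimming: the cache is just a LIFO of still-dma-mapped pages, so the hot path replaces alloc_page()+dma_map with an array pop. A minimal user-space sketch of that push/pop logic (the struct names mirror the patch, but cache_put/cache_get and the faked dma field are illustrative, not the driver's actual helpers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* User-space stand-ins for the kernel structures in the patch. */
#define CACHE_SIZE 16

struct rx_alloc {            /* stands in for struct mlx4_en_rx_alloc */
	void     *page;
	uint64_t  dma;       /* dma mapping stays live while cached */
};

struct page_cache {          /* mirrors struct mlx4_en_page_cache */
	uint32_t        index;              /* number of cached entries */
	struct rx_alloc buf[CACHE_SIZE];
};

/* Recycle path: stash the still-mapped page if there is room;
 * on false the caller falls back to unmapping and freeing it. */
static bool cache_put(struct page_cache *c, const struct rx_alloc *frame)
{
	if (c->index >= CACHE_SIZE)
		return false;
	c->buf[c->index++] = *frame;
	return true;
}

/* Refill path: prefer a cached, already-mapped page; on false the
 * caller falls back to the page allocator plus a fresh dma mapping. */
static bool cache_get(struct page_cache *c, struct rx_alloc *frame)
{
	if (c->index == 0)
		return false;
	*frame = c->buf[--c->index];
	return true;
}
```

Since pages come back out still mapped, a cache hit skips both the buddy allocator and the dma_map call, which matches the two entries that vanish from the "After" profile above.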
Acked-by: Alexei Starovoitov <ast@...nel.org>