Message-ID: <20170313173432.GA31333@ast-mbp.thefacebook.com>
Date: Mon, 13 Mar 2017 10:34:35 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Tariq Toukan <tariqt@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>,
Willem de Bruijn <willemb@...gle.com>,
Alexei Starovoitov <ast@...nel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Alexander Duyck <alexander.duyck@...il.com>
Subject: Re: [PATCH net-next] mlx4: Better use of order-0 pages in RX path
On Sun, Mar 12, 2017 at 05:58:47PM -0700, Eric Dumazet wrote:
> @@ -767,10 +814,30 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
> 			case XDP_PASS:
> 				break;
> 			case XDP_TX:
> +				/* Make sure we have one page ready to replace this one */
> +				npage = NULL;
> +				if (!ring->page_cache.index) {
> +					npage = mlx4_alloc_page(priv, ring,
> +								&ndma, numa_mem_id(),
> +								GFP_ATOMIC | __GFP_MEMALLOC);
Did you test this with the xdp2 test?
Under what conditions does it allocate?
It looks dangerous from a security point of view to do allocations here.
Can it be exploited by an attacker?
We use XDP for DDoS mitigation and load balancing, and this is the fast path.
If 1 out of every few hundred XDP_TX packets hits this allocation, we will have
a serious perf regression.
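
To spell out the concern, here is how I read that branch (a rough sketch only;
the names come from the patch, the surrounding RX loop and error handling are
approximated):

		if (ring->page_cache.index) {
			/* a page recycled from a completed XDP_TX is available:
			 * no allocation, this is the path we need to stay on
			 */
			npage = NULL;
		} else {
			/* cache empty: this RX packet pays a GFP_ATOMIC page
			 * allocation inside the RX hot loop
			 */
			npage = mlx4_alloc_page(priv, ring, &ndma, numa_mem_id(),
						GFP_ATOMIC | __GFP_MEMALLOC);
			/* on failure the packet presumably has to be dropped */
		}
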
In general I don't think it's a good idea to penalize x86 in favor of powerpc.
Can you #ifdef this new code somehow, so we won't have these concerns on x86?
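
Roughly something like this is the shape I have in mind (illustrative only; the
PAGE_SHIFT test is just an example, a Kconfig option could serve as well, and
what the 4KB-page side keeps doing is hand-waved here):

		/* confine the new allocation fallback to arches where order-0
		 * pages are large (e.g. 64KB powerpc) and keep the existing
		 * recycle-only behaviour on 4KB-page arches like x86
		 */
	#if PAGE_SHIFT > 12
		if (!ring->page_cache.index)
			npage = mlx4_alloc_page(priv, ring, &ndma, numa_mem_id(),
						GFP_ATOMIC | __GFP_MEMALLOC);
	#endif
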