Message-ID: <20171208101836.GH24449@breakpoint.cc>
Date: Fri, 8 Dec 2017 11:18:36 +0100
From: Florian Westphal <fw@...len.de>
To: Pablo Neira Ayuso <pablo@...filter.org>
Cc: netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
f.fainelli@...il.com, simon.horman@...ronome.com,
ronye@...lanox.com, jiri@...lanox.com, nbd@....name,
john@...ozen.org, kubakici@...pl, fw@...len.de
Subject: Re: [PATCH nf-next RFC,v2 6/6] netfilter: nft_flow_offload: add ndo
hooks for hardware offload
Pablo Neira Ayuso <pablo@...filter.org> wrote:
> The software flow table garbage collector skips entries that reside in
> the hardware, so the hardware is also responsible for releasing these
> flow table entries via flow_offload_dead(). In the next garbage
> collector run, the entries are then removed from both the software and
> the hardware flow tables, from user context.
>
> Signed-off-by: Pablo Neira Ayuso <pablo@...filter.org>
> ---
> @Florian: I still owe you one here, you mentioned immediate scheduling
> of the workqueue thread, and I need to revisit this. The quick patch I
> made hits splats when calling queue_delayed_work() from the packet
> path, though that may be my fault.
OK. IIRC I had suggested to just use schedule_work() instead.
In most cases (assuming the system is busy) the work item will already
be pending anyway.
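i.e. from the packet path just do something like this (sketch only; it
assumes the delayed work further down is converted to a plain
work_struct called nft_flow_offload_work):

	/* cheap: a no-op when the work item is already pending */
	schedule_work(&nft_flow_offload_work);

schedule_work() is fine from softirq context, so that should not splat.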
> diff --git a/net/netfilter/nf_flow_table.c b/net/netfilter/nf_flow_table.c
> index ff27dad268c3..c578c3aec0e0 100644
> --- a/net/netfilter/nf_flow_table.c
> +++ b/net/netfilter/nf_flow_table.c
> @@ -212,6 +212,21 @@ int nf_flow_table_iterate(struct nf_flowtable *flow_table,
> }
> EXPORT_SYMBOL_GPL(nf_flow_table_iterate);
>
> +static void flow_offload_hw_del(struct flow_offload *flow)
> +{
> + struct net_device *indev;
> + int ret, ifindex;
> +
> + rtnl_lock();
> + ifindex = flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.iifidx;
> + indev = __dev_get_by_index(&init_net, ifindex);
I think this should pass a struct net * as an argument to
flow_offload_hw_del().
> + if (WARN_ON(!indev))
> + return;
> +
> + ret = indev->netdev_ops->ndo_flow_offload(FLOW_OFFLOAD_DEL, flow);
> + rtnl_unlock();
> +}
Please, no rtnl lock unless it is absolutely needed.
It seems this could even avoid the mutex completely by using
dev_get_by_index() + dev_put(), something like the sketch below.
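Untested, and it assumes struct net gets passed in as argued above
(ndo_flow_offload/FLOW_OFFLOAD_DEL as introduced by this patch):

static void flow_offload_hw_del(struct net *net, struct flow_offload *flow)
{
	struct net_device *indev;
	int ifindex;

	ifindex = flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.iifidx;

	/* dev_get_by_index() takes its own device reference, no rtnl needed */
	indev = dev_get_by_index(net, ifindex);
	if (WARN_ON(!indev))
		return;

	indev->netdev_ops->ndo_flow_offload(FLOW_OFFLOAD_DEL, flow);
	dev_put(indev);
}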
> +static int do_flow_offload(struct flow_offload *flow)
> +{
> + struct net_device *indev;
> + int ret, ifindex;
> +
> + rtnl_lock();
> + ifindex = flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.iifidx;
> + indev = __dev_get_by_index(&init_net, ifindex);
likewise.
> +#define FLOW_HW_WORK_TIMEOUT msecs_to_jiffies(100)
> +
> +static struct delayed_work nft_flow_offload_dwork;
I would go with a plain struct work_struct and no delay at all.
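i.e. something like this (untested; nft_flow_offload_work_fn is just a
placeholder for whatever worker function this patch ends up with):

static struct work_struct nft_flow_offload_work;
...
	INIT_WORK(&nft_flow_offload_work, nft_flow_offload_work_fn);

and then plain schedule_work() wherever queue_delayed_work() is called
now.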