Message-ID: <20190408152655.1891ee77@cakuba.netronome.com>
Date: Mon, 8 Apr 2019 15:26:55 -0700
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Vlad Buslov <vladbu@...lanox.com>
Cc: netdev@...r.kernel.org, jhs@...atatu.com, xiyou.wangcong@...il.com,
jiri@...nulli.us, davem@...emloft.net, john.hurley@...ronome.com
Subject: Re: [PATCH net-next] net: sched: flower: insert filter to ht before
offloading it to hw
On Fri, 5 Apr 2019 20:56:26 +0300, Vlad Buslov wrote:
> John reports:
>
> Recent refactoring of fl_change aims to use the classifier spinlock to
> avoid the need for rtnl lock. In doing so, the fl_hw_replace_filter()
> function was moved to before the lock is taken. This can create problems
> for drivers if duplicate filters are created (common in ovs tc offload
> due to filters being triggered by user-space matches).
>
> Drivers registered for such filters will now receive multiple copies of
> the same rule, each with a different cookie value. This means that the
> drivers would need to do a full match-field lookup to detect duplicates,
> repeating work that will happen later in flower's __fl_lookup().
> Currently, drivers do not expect to receive duplicate filters.
>
> To fix this, verify that a filter with the same key is not present in
> the flower classifier hash table and insert the new filter into the
> hash table before offloading it to hardware. Implement a helper
> function, fl_ht_insert_unique(), to atomically verify/insert a filter.
>
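For context, a minimal sketch of what such a helper can look like,
assuming it is built on rhashtable_lookup_insert_fast() (which fails
atomically with -EEXIST when an entry with the same key is already
present) and uses cls_flower's mask/filter structures; the fold and
in_ht parameters are assumptions about how fl_change() would drive it
on the replace and error paths:

	static int fl_ht_insert_unique(struct cls_fl_filter *fnew,
				       struct cls_fl_filter *fold,
				       bool *in_ht)
	{
		struct fl_flow_mask *mask = fnew->mask;
		int err;

		/* Atomic lookup-and-insert: refuses to add a duplicate,
		 * so there is no window between search and insertion.
		 */
		err = rhashtable_lookup_insert_fast(&mask->ht,
						    &fnew->ht_node,
						    mask->filter_ht_params);
		if (err) {
			*in_ht = false;
			/* A filter with the same key is expected when
			 * an existing filter is being overwritten.
			 */
			return fold && err == -EEXIST ? 0 : err;
		}

		*in_ht = true;
		return 0;
	}
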
> This change makes the filter visible to the fast path at the beginning
> of fl_change(), which means it can no longer be freed directly in case
> of error. Refactor the fl_change() error handling code to deallocate
> the filter after an RCU grace period.
>
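The deferred free this refers to can be sketched as below, assuming the
filter embeds a struct rcu_work (cls_flower's struct cls_fl_filter has
such a field, rwork); tcf_queue_work() runs the work callback only
after an RCU grace period, so a fast-path reader that found the filter
in the hash table cannot still be dereferencing it when it is freed.
The body of the work function is simplified here:

	static void fl_destroy_filter_work(struct work_struct *work)
	{
		struct cls_fl_filter *f = container_of(to_rcu_work(work),
						       struct cls_fl_filter,
						       rwork);

		/* Runs only after a grace period has elapsed, so no
		 * fast-path reader can still hold the pointer.
		 */
		kfree(f);
	}

	/* fl_change() error path: the filter may already be visible */
	tcf_queue_work(&fnew->rwork, fl_destroy_filter_work);
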
> Fixes: 620da4860827 ("net: sched: flower: refactor fl_change")
> Reported-by: John Hurley <john.hurley@...ronome.com>
> Signed-off-by: Vlad Buslov <vladbu@...lanox.com>
How is re-offload consistency guaranteed? IIUC the code is:

	insert into HT
	offload
	insert into IDR

What guarantees re-offload consistency if a new callback is added just
after the offload is requested but before the rule ends up in the IDR?
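
To make the window concrete, assuming the reoffload path replays rules
by walking the IDR (as fl_get_next_filter() does in this series) and
that callback registration replays existing rules via
tcf_block_playback_offloads(), the interleaving I am worried about is:

	CPU 0: fl_change()              CPU 1: new driver callback
	------------------              --------------------------
	insert into HT
	offload to existing callbacks
	                                register block callback
	                                replay rules from IDR
	                                  (new rule not present yet,
	                                   so this driver misses it)
	insert into IDR

The rule then sits in hardware for the existing drivers but was never
offered to the driver whose callback registration raced with it.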