Message-ID: <20190425082643.GA3951@nanopsycho>
Date: Thu, 25 Apr 2019 10:26:43 +0200
From: Jiri Pirko <jiri@...nulli.us>
To: Vlad Buslov <vladbu@...lanox.com>
Cc: netdev@...r.kernel.org, jhs@...atatu.com, xiyou.wangcong@...il.com,
davem@...emloft.net, jakub.kicinski@...ronome.com
Subject: Re: [PATCH net-next v2] net: sched: flower: refactor reoffload for
concurrent access
Wed, Apr 24, 2019 at 08:53:31AM CEST, vladbu@...lanox.com wrote:
>Recent changes that introduced unlocked flower did not properly account for
>the case when reoffload is initiated concurrently with filter updates. To
>fix the issue, extend flower with a 'hw_filters' list that is used to store
>filters that don't have the 'skip_hw' flag set. A filter is added to the
>list when it is inserted into hardware and only removed from it after being
>unoffloaded from all drivers that the parent block is attached to. This
>ensures that concurrent reoffload can still access a filter that is being
>deleted and prevents a race condition where a driver callback can be removed
>while the filter is no longer accessible through idr but is still present in
>hardware.
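
For reference, a minimal sketch of how such hw_filters bookkeeping could
look; the struct layout, lock and helper names below are illustrative
assumptions, not necessarily what the patch itself uses:

#include <linux/list.h>
#include <linux/refcount.h>
#include <linux/spinlock.h>

struct fl_flow_head {
	spinlock_t hw_lock;		/* protects hw_filters (illustrative) */
	struct list_head hw_filters;	/* filters currently offloaded to HW */
};

struct cls_fl_filter {
	struct list_head hw_list;	/* linkage on head->hw_filters */
	refcount_t refcnt;		/* concurrent users of the filter */
};

/* Add the filter once it has been inserted into hardware (only filters
 * without 'skip_hw' ever reach this point). */
static void fl_hw_filters_add(struct fl_flow_head *head,
			      struct cls_fl_filter *f)
{
	spin_lock(&head->hw_lock);
	list_add(&f->hw_list, &head->hw_filters);
	spin_unlock(&head->hw_lock);
}

/* Unlink the filter only after it has been unoffloaded from every driver
 * the parent block is attached to, so that a concurrent reoffload can
 * still find it while hardware state exists. */
static void fl_hw_filters_del(struct fl_flow_head *head,
			      struct cls_fl_filter *f)
{
	spin_lock(&head->hw_lock);
	if (!list_empty(&f->hw_list))
		list_del_init(&f->hw_list);
	spin_unlock(&head->hw_lock);
}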
>
>Refactor fl_change() to respect the new filter reference counter and to
>release the filter reference with __fl_put() in case of error, instead of
>directly deallocating filter memory. This allows concurrent access to the
>filter from fl_reoffload() and protects it with reference counting. Refactor
>fl_reoffload() to iterate over the hw_filters list instead of the idr.
>Implement the fl_get_next_hw_filter() helper function that iterates over the
>hw_filters list with reference counting and skips filters that are being
>concurrently deleted.
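
Purely as illustration of the iteration described above (reusing the sketch
structs from the previous note; the real helper, its arguments and locking
may differ), fl_get_next_hw_filter() could be shaped roughly like this:

/* Return the next offloaded filter after 'prev' (or the first one when
 * 'prev' is NULL) with a reference taken on it. A filter whose refcount
 * has already dropped to zero is being deleted concurrently and is
 * skipped. Assumes entries are unlinked only under hw_lock and that
 * 'prev', whose reference the caller still holds, is still linked when
 * passed back in; the caller releases each returned filter with a put
 * (e.g. __fl_put()) once done with it. */
static struct cls_fl_filter *
fl_get_next_hw_filter(struct fl_flow_head *head, struct cls_fl_filter *prev)
{
	struct list_head *pos = prev ? &prev->hw_list : &head->hw_filters;
	struct cls_fl_filter *found = NULL;

	spin_lock(&head->hw_lock);
	for (pos = pos->next; pos != &head->hw_filters; pos = pos->next) {
		struct cls_fl_filter *f;

		f = list_entry(pos, struct cls_fl_filter, hw_list);
		if (refcount_inc_not_zero(&f->refcnt)) {
			found = f;
			break;
		}
	}
	spin_unlock(&head->hw_lock);
	return found;
}

With that shape, fl_reoffload() can walk the hardware filters with repeated
calls to the helper, replaying each filter to the driver callback and
dropping its reference afterwards, without ever touching the idr.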
>
>Fixes: 92149190067d ("net: sched: flower: set unlocked flag for flower proto ops")
>Signed-off-by: Vlad Buslov <vladbu@...lanox.com>
Acked-by: Jiri Pirko <jiri@...lanox.com>