Message-ID: <vbf1s41ysm0.fsf@mellanox.com>
Date: Thu, 21 Feb 2019 17:11:07 +0000
From: Vlad Buslov <vladbu@...lanox.com>
To: Cong Wang <xiyou.wangcong@...il.com>
CC: Ido Schimmel <idosch@...sch.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"jhs@...atatu.com" <jhs@...atatu.com>,
"jiri@...nulli.us" <jiri@...nulli.us>,
"davem@...emloft.net" <davem@...emloft.net>,
"ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>
Subject: Re: [PATCH net-next v4 07/17] net: sched: protect filter_chain list
with filter_chain_lock mutex
On Wed 20 Feb 2019 at 23:00, Cong Wang <xiyou.wangcong@...il.com> wrote:
> On Tue, Feb 19, 2019 at 7:20 AM Vlad Buslov <vladbu@...lanox.com> wrote:
>>
>>
>> On Tue 19 Feb 2019 at 05:08, Cong Wang <xiyou.wangcong@...il.com> wrote:
>> > On Fri, Feb 15, 2019 at 2:02 AM Vlad Buslov <vladbu@...lanox.com> wrote:
>> >>
>> >> I looked at the code and the problem seems to be matchall classifier
>> >> specific. My implementation of the unlocked cls API assumes that
>> >> concurrent insertions are possible and checks for them when deleting an
>> >> "empty" tp. Since classifiers don't expose the number of elements, the
>> >> only way to test this is to do tp->walk() on them and assume that the
>> >> walk callback is called once per filter on every classifier. In your
>> >> example a new tp is created for the second filter, the filter insertion
>> >> fails, and the number of elements on the newly created tp is checked
>> >> with tp->walk() before deleting it. However, the matchall classifier
>> >> always calls the tp->walk() callback once, even when it doesn't have a
>> >> valid filter (in this case with a NULL filter pointer).
>> >
>> > Again, this can be eliminated by just switching to normal
>> > non-retry logic. This is yet another headache to review this
>> > kind of unlock-and-retry logic, I have no idea why you are such
>> > a big fan of it.
>>
>> The retry approach was suggested to me multiple times by Jiri on
>> previous code reviews, so I assumed it is the preferred approach in such
>> cases. I don't have a strong preference in this regard, but locking the
>> whole tp on filter update would remove any parallelism when updating the
>> same classifier instance concurrently. The goal of these changes is to
>> allow parallel rule updates, and to achieve that I had to introduce some
>> complexity into the code.
>
> Yeah, but with unlock-and-retry it would waste more time when a
> retry occurs. So it can't be better in the worst-case scenario.
>
> The question is essentially: do we want to waste CPU cycles when
> conflicts occur, or just block until it is safe to enter the
> critical section?
>
> And, is the retry bound? Is it possible that we would retry infinitely
> as long as we time it correctly?
>
>
>>
>> Now let me explain why these two approaches result in completely
>> different performance in this case. Let's start with a list of the
>> most CPU-consuming parts of the new-filter creation process, in
>> descending order (raw data at the end of this mail):
>>
>> 1) Hardware offload - if available and no skip_hw.
>> 2) Exts (actions) initialization - the most expensive part even with a
>> single action; CPU usage increases with the number of actions per filter.
>> 3) cls API.
>> 4) Flower classifier data structure initialization.
>>
>> Note that 1)+2) account for ~80% of the cost of creating a flower
>> filter. So if we just lock the whole flower classifier instance during
>> a rule update, we serialize 1, 2 and 4, and only the cls API (~13% of
>> CPU cost) can be executed concurrently. However, in the proposed flower
>> implementation the HW offloading and action initialization code is
>> called without any locks, and tp->lock is only obtained when modifying
>> flower data structures, which means that only 3) is serialized and
>> everything else (~87% of CPU cost) can be executed in parallel.
>
> What about when conflicts detected and retry the whole change?
> And, of course, how often do conflicts happen?
>
> Thanks.
I had similar concerns when designing this change. Let's look at the two
cases where this retry is needed.

First case: a process creates the first filter on a classifier and fails,
while other processes are concurrently trying to add filters to the same
block/chain/tp:
1) The process obtains filter_chain_lock, performs an unsuccessful tp
lookup and releases the lock.
2) It calls tcf_chain_tp_insert_unique(), which obtains filter_chain_lock,
inserts the new tp and releases the lock.
3) It calls tp->ops->change(), which returns an error.
4) It calls tcf_chain_tp_delete_empty(), which takes filter_chain_lock,
verifies that no filters were added to the tp concurrently, sets the
tp->deleting flag and removes the tp from the chain.
This should be a very rare occurrence because, for a retry to happen, it
is not enough that there are concurrent insertions into the same
block/chain/tp; a tp with the requested prio must also not have existed
before, and no concurrent process may have succeeded in adding at least
one filter to the tp during step 3, before it is marked for deletion in
step 4 (otherwise tcf_proto_check_delete() fails and the concurrent
threads don't retry).
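
To make the flow above concrete, here is a heavily condensed,
pseudocode-style sketch of the add path (loosely following what
tc_new_tfilter() in net/sched/cls_api.c does with this series applied;
arguments and error handling are simplified and not the exact upstream
code):

replay:
	mutex_lock(&chain->filter_chain_lock);
	tp = tcf_chain_tp_find(chain, &chain_info, protocol, prio, ...);
	if (!tp) {
		mutex_unlock(&chain->filter_chain_lock);
		/* Steps 1-2: allocate the new tp without holding the lock,
		 * then insert it. If a concurrent process already inserted
		 * a tp with the same prio, the existing tp is returned and
		 * the new allocation is destroyed.
		 */
		tp_new = tcf_proto_create(...);
		tp = tcf_chain_tp_insert_unique(chain, tp_new, protocol, prio, ...);
		tp_created = true;
	} else {
		mutex_unlock(&chain->filter_chain_lock);
	}

	/* Step 3: classifier-specific insert, runs without filter_chain_lock. */
	err = tp->ops->change(net, skb, tp, ...);

	if (err && tp_created) {
		/* Step 4: removes the tp only if it is still empty, i.e. no
		 * concurrent process managed to insert a filter into it.
		 */
		tcf_chain_tp_delete_empty(chain, tp, ...);
	}
	if (err == -EAGAIN)
		goto replay;	/* the retry being discussed */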

Second case: the last filter is being deleted while concurrent processes
are adding new filters to the same block/chain/tp:

1) tc_del_tfilter() gets the last filter with tp->ops->get().
2) It deletes the filter with tp->ops->delete()...
3) ... which returns the 'last' hint set to true.
4) It calls tcf_chain_tp_delete_empty(), which takes filter_chain_lock,
verifies that no filters were added to the tp concurrently, sets the
tp->deleting flag and removes the tp from the chain.
This case is also quite rare because it requires concurrent users to
successfully look up the tp before tp->deleting is set to true and the tp
is removed from the chain, but not to create any new filters on the tp
during that time.
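
For reference, a similarly condensed sketch of the relevant part of the
delete path (again simplified, not the exact tc_del_tfilter() code):

	/* Step 1: look up the last remaining filter. */
	fh = tp->ops->get(tp, handle);

	/* Steps 2-3: classifier-specific delete; 'last' is set to true
	 * when this was the last filter on the tp.
	 */
	err = tp->ops->delete(tp, fh, &last, ...);

	if (!err && last) {
		/* Step 4: drop the now-empty tp, but only if no filter was
		 * added to it concurrently (verified under filter_chain_lock).
		 */
		tcf_chain_tp_delete_empty(chain, tp, ...);
	}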

After considering this, I decided that it is not worth penalizing the
common case of filter updates by completely removing parallelism when
updates target the same tp instance, just to handle rare corner cases
such as the ones described above.

Now regarding forcing users to retry indefinitely: in the latter case no
more than one retry is possible, because the concurrent add processes
create a new tp on the first retry. In the former case multiple retries
are possible, but blocking concurrent users indefinitely would require a
malicious process to somehow always win the race for filter_chain_lock
during steps 1 and 2, then wait to allow all concurrent users to look up
the tp, then obtain filter_chain_lock in step 4 and initiate tp deletion
before any of the concurrent users that hold a reference to this new tp
instance can insert even a single filter on it, and then go back to step
1, obtain the lock first again, and repeat. I don't see how this can be
timed from userspace repeatedly, since creating the first filter on a new
tp involves multiple cycles of taking and releasing filter_chain_lock,
and each of them requires the attacker to "influence" the kernel
scheduler to behave in a very specific fashion.
Regards,
Vlad