Message-ID: <vbfftspjq48.fsf@mellanox.com>
Date:   Fri, 15 Feb 2019 10:38:04 +0000
From:   Vlad Buslov <vladbu@...lanox.com>
To:     Stefano Brivio <sbrivio@...hat.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "jhs@...atatu.com" <jhs@...atatu.com>,
        "xiyou.wangcong@...il.com" <xiyou.wangcong@...il.com>,
        "jiri@...nulli.us" <jiri@...nulli.us>,
        "davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH net-next 02/12] net: sched: flower: refactor fl_change

On Thu 14 Feb 2019 at 20:34, Stefano Brivio <sbrivio@...hat.com> wrote:
> On Thu, 14 Feb 2019 09:47:02 +0200
> Vlad Buslov <vladbu@...lanox.com> wrote:
>
>> As a preparation for using classifier spinlock instead of relying on
>> external rtnl lock, rearrange code in fl_change. The goal is to group the
>> code which changes classifier state in single block in order to allow
>> following commits in this set to protect it from parallel modification with
>> tp->lock. Data structures that require tp->lock protection are mask
>> hashtable and filters list, and classifier handle_idr.
>>
>> fl_hw_replace_filter() is a sleeping function and cannot be called while
>> holding a spinlock. In order to execute all sequence of changes to shared
>> classifier data structures atomically, call fl_hw_replace_filter() before
>> modifying them.
>>
>> Signed-off-by: Vlad Buslov <vladbu@...lanox.com>
>> Acked-by: Jiri Pirko <jiri@...lanox.com>
>> ---
>>  net/sched/cls_flower.c | 85 ++++++++++++++++++++++++++------------------------
>>  1 file changed, 44 insertions(+), 41 deletions(-)
>>
>> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
>> index 88d7af78ba7e..91596a6271f8 100644
>> --- a/net/sched/cls_flower.c
>> +++ b/net/sched/cls_flower.c
>> @@ -1354,90 +1354,93 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
>>  	if (err < 0)
>>  		goto errout;
>>
>> -	if (!handle) {
>> -		handle = 1;
>> -		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
>> -				    INT_MAX, GFP_KERNEL);
>> -	} else if (!fold) {
>> -		/* user specifies a handle and it doesn't exist */
>> -		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
>> -				    handle, GFP_KERNEL);
>> -	}
>> -	if (err)
>> -		goto errout;
>> -	fnew->handle = handle;
>> -
>>
>> [...]
>>
>>  	if (fold) {
>> +		fnew->handle = handle;
>
> I'm probably missing something, but what if fold is passed and the
> handle isn't specified? That can still happen, right? In that case we
> wouldn't be allocating the handle.

Hi Stefano,

Thank you for reviewing my code.

Cls API looks up fold by handle, so this pointer can only be non-NULL
when the user specified a handle and a filter with that handle exists on tp.

>
>> +
>> +		err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
>> +					     fnew->mask->filter_ht_params);
>> +		if (err)
>> +			goto errout_hw;
>> +
>>  		rhashtable_remove_fast(&fold->mask->ht,
>>  				       &fold->ht_node,
>>  				       fold->mask->filter_ht_params);
>> -		if (!tc_skip_hw(fold->flags))
>> -			fl_hw_destroy_filter(tp, fold, NULL);
>> -	}
>> -
>> -	*arg = fnew;
>> -
>> -	if (fold) {
>>  		idr_replace(&head->handle_idr, fnew, fnew->handle);
>>  		list_replace_rcu(&fold->list, &fnew->list);
>> +
>> +		if (!tc_skip_hw(fold->flags))
>> +			fl_hw_destroy_filter(tp, fold, NULL);
>>  		tcf_unbind_filter(tp, &fold->res);
>>  		tcf_exts_get_net(&fold->exts);
>>  		tcf_queue_work(&fold->rwork, fl_destroy_filter_work);
>>  	} else {
>> +		if (__fl_lookup(fnew->mask, &fnew->mkey)) {
>> +			err = -EEXIST;
>> +			goto errout_hw;
>> +		}
>> +
>> +		if (handle) {
>> +			/* user specifies a handle and it doesn't exist */
>> +			err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
>> +					    handle, GFP_ATOMIC);
>> +		} else {
>> +			handle = 1;
>> +			err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
>> +					    INT_MAX, GFP_ATOMIC);
>> +		}
>> +		if (err)
>> +			goto errout_hw;
>
> Just if you respin: a newline here would be nice to have.

Agree.

>
>> +		fnew->handle = handle;
>> +
>> +		err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
>> +					     fnew->mask->filter_ht_params);
>> +		if (err)
>> +			goto errout_idr;
>> +
>>  		list_add_tail_rcu(&fnew->list, &fnew->mask->filters);
>>  	}
>>
>> +	*arg = fnew;
>> +
>>  	kfree(tb);
>>  	kfree(mask);
>>  	return 0;
>>
>> -errout_mask_ht:
>> -	rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
>> -			       fnew->mask->filter_ht_params);
>> -
>> -errout_mask:
>> -	fl_mask_put(head, fnew->mask, false);
>> -
>>  errout_idr:
>>  	if (!fold)
>
> This check could go away, I guess (not a strong preference though).

Yes, after this change the errout_idr label is only reached from the
else branch of the if (fold) conditional, so the check is redundant.

>
>>  		idr_remove(&head->handle_idr, fnew->handle);
>> +errout_hw:
>> +	if (!tc_skip_hw(fnew->flags))
>> +		fl_hw_destroy_filter(tp, fnew, NULL);
>> +errout_mask:
>> +	fl_mask_put(head, fnew->mask, false);
>>  errout:
>>  	tcf_exts_destroy(&fnew->exts);
>>  	kfree(fnew);
