Date:   Fri, 15 Feb 2019 11:47:06 +0100
From:   Stefano Brivio <sbrivio@...hat.com>
To:     Vlad Buslov <vladbu@...lanox.com>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "jhs@...atatu.com" <jhs@...atatu.com>,
        "xiyou.wangcong@...il.com" <xiyou.wangcong@...il.com>,
        "jiri@...nulli.us" <jiri@...nulli.us>,
        "davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH net-next 02/12] net: sched: flower: refactor fl_change

On Fri, 15 Feb 2019 10:38:04 +0000
Vlad Buslov <vladbu@...lanox.com> wrote:

> On Thu 14 Feb 2019 at 20:34, Stefano Brivio <sbrivio@...hat.com> wrote:
> > On Thu, 14 Feb 2019 09:47:02 +0200
> > Vlad Buslov <vladbu@...lanox.com> wrote:
> >  
> >> As a preparation for using the classifier spinlock instead of relying on
> >> the external rtnl lock, rearrange the code in fl_change. The goal is to
> >> group the code that changes classifier state into a single block, so that
> >> following commits in this set can protect it from parallel modification
> >> with tp->lock. The data structures that require tp->lock protection are
> >> the mask hashtable, the filters list, and the classifier handle_idr.
> >>
> >> fl_hw_replace_filter() is a sleeping function and cannot be called while
> >> holding a spinlock. In order to execute the whole sequence of changes to
> >> shared classifier data structures atomically, call fl_hw_replace_filter()
> >> before modifying them.
> >>
> >> Signed-off-by: Vlad Buslov <vladbu@...lanox.com>
> >> Acked-by: Jiri Pirko <jiri@...lanox.com>
> >> ---
> >>  net/sched/cls_flower.c | 85 ++++++++++++++++++++++++++------------------------
> >>  1 file changed, 44 insertions(+), 41 deletions(-)
> >>
> >> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
> >> index 88d7af78ba7e..91596a6271f8 100644
> >> --- a/net/sched/cls_flower.c
> >> +++ b/net/sched/cls_flower.c
> >> @@ -1354,90 +1354,93 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
> >>  	if (err < 0)
> >>  		goto errout;
> >>
> >> -	if (!handle) {
> >> -		handle = 1;
> >> -		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
> >> -				    INT_MAX, GFP_KERNEL);
> >> -	} else if (!fold) {
> >> -		/* user specifies a handle and it doesn't exist */
> >> -		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
> >> -				    handle, GFP_KERNEL);
> >> -	}
> >> -	if (err)
> >> -		goto errout;
> >> -	fnew->handle = handle;
> >> -
> >>
> >> [...]
> >>
> >>  	if (fold) {
> >> +		fnew->handle = handle;  
> >
> > I'm probably missing something, but what if fold is passed and the
> > handle isn't specified? That can still happen, right? In that case we
> > wouldn't be allocating the handle.  
> 
> Hi Stefano,
> 
> Thank you for reviewing my code.
> 
> The cls API looks up fold by handle, so this pointer can only be
> non-NULL when the user specified a handle and a filter with that
> handle exists on tp.

Ah, of course. Thanks for clarifying. By the way, what tricked me here
was this check in fl_change():

	if (fold && handle && fold->handle != handle)
		...

which could be turned into:

	if (fold && fold->handle != handle)
		...

at this point.

-- 
Stefano
