Message-ID: <ZSl5OS7bFsg/ahCK@nanopsycho>
Date: Fri, 13 Oct 2023 19:07:05 +0200
From: Jiri Pirko <jiri@...nulli.us>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, pabeni@...hat.com, davem@...emloft.net,
	edumazet@...gle.com, gal@...dia.com
Subject: Re: [patch net-next] devlink: don't take instance lock for nested
 handle put

Fri, Oct 13, 2023 at 05:39:45PM CEST, kuba@...nel.org wrote:
>On Thu, 12 Oct 2023 08:14:03 +0200 Jiri Pirko wrote:
>> >The current code is a problem in itself. You added another xarray,
>> >with some mark, callbacks and unclear locking semantics. All of it
>> >completely undocumented.  
>> 
>> Okay, I will add the documentation. But I thought it was clear. The
>> parent instance lock needs to be taken outside of the child lock. The
>> problem this patch tries to fix is when the rtnl lock comes into the
>> picture in one flow, see the patch description.
>> 
>> >The RCU lock on top is just fixing one obvious bug I pointed out to you.  
>> 
>> Not sure what obvious bug you mean. If you mean the parent-child
>> lifetime change, I don't see how that would help here.
>> 
>> Plus it has performance implications. When a user removes an SF port
>> under the instance lock, the SF itself is removed asynchronously,
>> outside the lock. You suggest removing it synchronously while holding
>> the instance lock, correct?
>
>The SF is deleted by calling ->port_del() on the PF instance, correct?

That, or setting the opstate to "inactive".
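
A rough sketch of those two teardown paths, for context. Everything here
except ->port_del() and the "inactive" opstate is a hypothetical name,
not actual devlink or mlx5 code:

struct pf;
struct sf;

/* Hypothetical stand-in for the driver's actual SF teardown. */
static int sf_teardown(struct pf *pf, struct sf *sf)
{
	/* ... driver-specific teardown of the SF ... */
	return 0;
}

/* Path 1: "devlink port del" lands in the driver's ->port_del()
 * implementation, called on the parent PF instance. */
static int pf_port_del(struct pf *pf, struct sf *sf)
{
	return sf_teardown(pf, sf);
}

/* Path 2: setting the SF port function's opstate to "inactive"
 * ends in the same teardown. */
static int pf_opstate_set_inactive(struct pf *pf, struct sf *sf)
{
	return sf_teardown(pf, sf);
}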


>
>> SF removal does not need that lock. Removing thousands of SFs would
>> take much longer than it does now, since currently they are removed
>> in parallel. You would serialize the removals for no good reason.
>
>First of all, IDK what removal rate you're targeting or what is
>achievable under the PF's lock. Handwaving "we need parallelism"
>without data is not a serious argument.

Oh, there is data and there is a need. My colleagues are working on
parallel creation/removal within the mlx5 driver as we speak. What you
suggest would be a huge setback :/
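
To illustrate the parallelism point, a minimal sketch (hypothetical
names, not the mlx5 code) of per-SF removal work queued on an unbound
workqueue:

#include <linux/slab.h>
#include <linux/workqueue.h>

struct sf {
	struct work_struct del_work;
};

static void sf_del_work(struct work_struct *work)
{
	struct sf *sf = container_of(work, struct sf, del_work);

	/* Tear down this SF. No PF instance lock is held here, so
	 * thousands of these work items can run concurrently. */
	kfree(sf);
}

/* Called while the PF instance lock is held: only queue the work;
 * the actual removal happens asynchronously, outside the lock. */
static void sf_remove_async(struct sf *sf)
{
	INIT_WORK(&sf->del_work, sf_del_work);
	queue_work(system_unbound_wq, &sf->del_work);
}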


>
>> >Maybe this is completely unfair, but I feel like devlink locking has
>> >been haphazard and semi-broken since its inception. I had to step in
>> 
>> Well, it got broken over time. I appreciate that you helped fix it.
>> 
>> >to fix it. And now a year later we're back to weird locking and random
>> >dependencies. The only reason it was merged is because I was on PTO.  
>> 
>> Not sure what you mean by that. The locking is quite clear. What is
>> weird about it, exactly? What do you mean by "random dependencies"?
>> 
>> I have to say I feel we got a bit lost in the conversation.
>
>You have a rel object, which is refcounted, an xarray with a lock, and
>an async work for notifications.

Yes. The async work for notifications is something you would need
anyway, even with the object lifetime change you suggest. It's about
locking order. Please see the patchset I sent today (v3); I put in
documentation describing it (the last 3 patches). That should make it
clear.
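
A condensed sketch of the ordering problem the async work avoids; the
two mutexes stand in for the devlink instance locks, the rest is
illustrative only, not the actual devlink code:

#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(parent_lock);	/* nested-in (parent) instance lock */
static DEFINE_MUTEX(child_lock);	/* nested (child) instance lock */

static void nested_notify_work(struct work_struct *work);
static DECLARE_WORK(notify_work, nested_notify_work);

static void nested_notify_work(struct work_struct *work)
{
	/* Runs with no instance lock held, so taking the parent lock
	 * here cannot invert the parent-before-child order. */
	mutex_lock(&parent_lock);
	/* ... build and send the nested handle notification ... */
	mutex_unlock(&parent_lock);
}

/* Called with the child instance lock held. Taking parent_lock
 * directly here would invert the required order (parent outside
 * child), so the notification is deferred to the work item above. */
static void nested_handle_put_notify(void)
{
	lockdep_assert_held(&child_lock);
	schedule_work(&notify_work);
}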


>
>All you need is a list, and a requirement that the PF can't disappear
>before the SF (which your version also has, BTW).

It's not the PF, but an object contained in the PF, just to be clear.
That does not scale, as we discussed above :/

