Message-ID: <20211102170530.49f8ab4e@kicinski-fedora-PC1C0HJN>
Date:   Tue, 2 Nov 2021 17:05:30 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Leon Romanovsky <leon@...nel.org>
Cc:     idosch@...sch.org, edwin.peer@...adcom.com, jiri@...nulli.us,
        netdev@...r.kernel.org
Subject: Re: [RFC 0/5] devlink: add an explicit locking API

On Tue, 2 Nov 2021 20:14:22 +0200 Leon Romanovsky wrote:
> > > Thanks  
> > 
> > Not sure what you're thanking for. I still prefer two explicit APIs.
> > Allowing nesting is not really necessary here. Callers know whether
> > they hold the lock or not.  
> 
> I have doubts about that. It may be easy to tell in the reload flow, but it
> is much harder inside an eswitch mode change (as an example).

Hm, interesting counterexample. Why is eswitch mode change harder?
From the devlink side they should be locked the same, and I take the
devlink lock around all driver callbacks (probe, remove, sriov).
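
To make sure we're picturing the same thing, the shape I have in mind
is roughly the following. This is only a sketch -- the helper names
(devl_lock()/devl_unlock(), devl_port_register()) and the driver
structure are illustrative, not lifted verbatim from the RFC:

#include <net/devlink.h>

struct example_dev {
        struct devlink *devlink;
        struct devlink_port dl_port;
};

/* Path where the caller does not hold the devlink instance lock:
 * take it explicitly around the "locked" helper.
 */
int example_add_port(struct example_dev *dev)
{
        int err;

        devl_lock(dev->devlink);
        err = devl_port_register(dev->devlink, &dev->dl_port, 0);
        devl_unlock(dev->devlink);

        return err;
}

/* Path invoked from a context which already holds the instance lock
 * (a devlink op, or probe/remove/sriov after the lock was taken):
 * call the "locked" helper directly. No nesting, no trylock; the
 * caller always knows which case it is in.
 */
int example_add_port_locked(struct example_dev *dev)
{
        return devl_port_register(dev->devlink, &dev->dl_port, 0);
}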

> > > I need RW as a way to ensure "exclusive" access during a _set_ operation.
> > > It is not an optimization, but a simple way to understand if parallel
> > > access is possible at this specific point in time.
> > 
> > How is this not an optimization to allow parallel "reads"?  
> 
> You need to stop everything when a _set_ command is called. One way is to
> require all netlink devlink calls to take the lock; another solution is to
> use an RW semaphore. This is why it is not an optimization, but an
> implementation choice. Parallel "reads" are a nice bonus.

Sorry, I still don't understand. Why is the devlink instance lock not
enough? Are you saying parallel get is a hard requirement for the
rework?
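
Just so we're comparing the same things -- my reading of the scheme
you're describing is roughly this (a sketch only, names made up, not
taken from your code):

#include <linux/rwsem.h>

struct drv_dl_state {
        struct rw_semaphore state_sem;  /* init_rwsem() at probe time */
};

/* Commands which only look at state take the semaphore for reading,
 * so they can run in parallel with each other.
 */
static int drv_devlink_info_get(struct drv_dl_state *st)
{
        down_read(&st->state_sem);
        /* ... gather and report state ... */
        up_read(&st->state_sem);
        return 0;
}

/* A _set_ command (e.g. eswitch mode change) takes it for writing,
 * which excludes every other devlink call for the duration.
 */
static int drv_devlink_eswitch_mode_set(struct drv_dl_state *st)
{
        down_write(&st->state_sem);
        /* ... reconfigure; nothing else runs concurrently ... */
        up_write(&st->state_sem);
        return 0;
}

With the per-instance devlink lock (a plain mutex) held across all
callbacks you get the same exclusion for _set_, just without the
parallel readers -- which is why I keep calling that part an
optimization.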

> > > I don't know yet, because, as you wrote before, netdevsim is for
> > > prototyping and such an ABBA deadlock doesn't exist in real devices.
> > > 
> > > My current focus is real devices for now.  
> > 
> > I wrote it with nfp in mind as well. It has a delayed work which needs
> > to take the port lock. Sadly I don't have any nfps handy and I didn't
> > want to post an untested patch.  
> 
> Do you remember why port configuration was implemented with delayed work?

IIRC it was because of the FW API for link info: all ports would get
accessed at once, so we used a work item which would lock out devlink
port splitting and update the state of all ports.

Link state has to be read under rtnl_lock, yet port splitting needs
to take rtnl_lock to register new netdevs, so there was no prettier
way to solve this.
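
From memory it looked roughly like this (heavily simplified, not
actual nfp code -- names are made up):

#include <linux/mutex.h>
#include <linux/rtnetlink.h>
#include <linux/workqueue.h>

struct drv_pf {
        struct mutex port_lock;                 /* also taken by port split/unsplit */
        struct delayed_work port_refresh_work;
};

static void drv_port_refresh_work_fn(struct work_struct *work)
{
        struct drv_pf *pf = container_of(work, struct drv_pf,
                                         port_refresh_work.work);

        /* Lock out devlink port splitting for the duration ... */
        mutex_lock(&pf->port_lock);
        /* ... and hold rtnl_lock while the result of the single FW
         * query is applied to the link state of every port's netdev.
         */
        rtnl_lock();
        /* ... query FW once, update all ports ... */
        rtnl_unlock();
        mutex_unlock(&pf->port_lock);
}

Port splitting also needs that port lock plus rtnl_lock (to register
the new netdevs), so both paths end up juggling both locks -- hence
"no prettier way".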

> > > I clearly remember this patch and the sentence "...in
> > > some devices' resume function(igb_resum,igc_resume) they calls rtnl_lock()
> > > again". The word "... some ..." hints to me that maintainers have different
> > > opinion on how to use rtnl_lock.
> > > 
> > > https://lore.kernel.org/netdev/20210809032809.1224002-1-acelan.kao@canonical.com/  
> > 
> > Yes, using rtnl_lock for PM is certainly a bad idea, and I'm not sure
> > why Intel does it. There are 10s of drivers which take rtnl_lock
> > correctly and it greatly simplifies their lives.  
> 
> I would say that you are ignoring the fact that most of those drivers
> don't add new functionality.

You lost me again. You don't disagree that the ability to lock out
higher layers is useful for drivers, but...?

> Anyway, I got your point, please give me time to see what I can do.
> 
> In case we adopt your model, will you convert all drivers?

Yes, sure. The way this RFC is done, it should be possible to land
it without any driver changes and then go driver by driver. I find
that approach much more manageable.
