Message-ID: <73b994eb-c689-48e0-b09c-a414041a0525@paulmck-laptop>
Date: Thu, 13 Jun 2024 08:15:06 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Keith Busch <kbusch@...nel.org>
Cc: Nilay Shroff <nilay@...ux.ibm.com>, l@...sch-mbp.dhcp.thefacebook.com,
	Keith Busch <kbusch@...a.com>, linux-nvme@...ts.infradead.org,
	linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
	hch@....de, sagi@...mberg.me, davidgow@...gle.com,
	akpm@...ux-foundation.org, venkat88@...ux.vnet.ibm.com
Subject: Re: [PATCH 1/2] list: introduce a new cutting helper

On Thu, Jun 13, 2024 at 08:47:26AM -0600, Keith Busch wrote:
> On Thu, Jun 13, 2024 at 07:43:35AM -0700, Paul E. McKenney wrote:
> > On Thu, Jun 13, 2024 at 08:36:44AM -0600, Keith Busch wrote:
> > > On Thu, Jun 13, 2024 at 07:11:52PM +0530, Nilay Shroff wrote:
> > > > On 6/13/24 18:26, Keith Busch wrote:
> > > > > But that's not the problem for the rcu case. It's the last line that's
> > > > > the problem:
> > > > > 
> > > > >  	list->prev->next = list;
> > > > > 
> > > > > We can't change the forward pointers of any element being detached
> > > > > from @head, because a reader iterating the list may see the new
> > > > > pointer value and end up in the wrong list, breaking iteration. A
> > > > > synchronize_rcu() needs to happen before the forward pointers can be
> > > > > mucked with, so it still needs to be done in two steps. Oh bother...
> > > > 
> > > > Agreed, and we could probably break it down using an API like this:
> > > > 
> > > > static inline void list_cut_rcu(struct list_head *list,
> > > > 		struct list_head *head, struct list_head *entry,
> > > > 		void (*sync)(void))
> > > > {
> > > > 	/* Detach [entry, head->prev] from @head; concurrent readers
> > > > 	 * can still walk the detached tail back to @head. */
> > > > 	list->next = entry;
> > > > 	list->prev = head->prev;
> > > > 	__list_del(entry->prev, head);
> > > > 	sync();
> > > > 	/* All pre-existing readers are done; now it is safe to
> > > > 	 * redirect the detached elements' pointers to @list. */
> > > > 	entry->prev = list;
> > > > 	list->prev->next = list;
> > > > }
> > > 
> > > Yes, that's the pattern, but I think we need an _srcu() variant: the
> > > "sync" callback needs to know the srcu_struct.
> > 
> > Just make a helper function like this:
> > 
> > 	static void my_synchronize_srcu(void)
> > 	{
> > 		synchronize_srcu(&my_srcu_struct);
> > 	}
> > 
> > Or am I missing something subtle here?
> 
> That would work if we had a global srcu, but the intended usage
> dynamically allocates one per device the driver is attached to, so a
> void callback doesn't know which one to sync.

Ah, good point!  I suppose that a further suggestion to just JIT the
needed function would not be well-received?  ;-)

I cannot resist suggesting placing a pointer to the srcu_struct in
the task structure.  /me runs...
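
If per-device SRCU really is required, though, an _srcu variant that
takes the srcu_struct explicitly would sidestep the void-callback
problem.  A rough sketch along the lines of Nilay's helper above (the
name and signature are mine, purely illustrative):

	static inline void list_cut_srcu(struct list_head *list,
			struct list_head *head, struct list_head *entry,
			struct srcu_struct *ssp)
	{
		list->next = entry;
		list->prev = head->prev;
		__list_del(entry->prev, head);
		/* Wait for readers still traversing the detached tail
		 * via the old forward pointers. */
		synchronize_srcu(ssp);
		entry->prev = list;
		list->prev->next = list;
	}

The caller would pass the per-device srcu_struct directly, so no
global state or wrapper function would be needed.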

Perhaps somewhat more constructively, my usual question:  Is it really
necessary to have per-driver SRCU here?  What would break if there were
a global srcu_struct that applied to all drivers?
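
If a single srcu_struct would do, the wrapper approach above works
unchanged, for example (hypothetical names):

	/* Hypothetical: one srcu_struct shared by every instance. */
	DEFINE_STATIC_SRCU(driver_srcu);

	static void driver_synchronize_srcu(void)
	{
		synchronize_srcu(&driver_srcu);
	}

The trade-off is that synchronize_srcu() then waits on readers of all
devices, not just the one being torn down.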

						Thanx, Paul
