Date: Wed, 28 Feb 2024 09:08:32 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Kuniyuki Iwashima <kuniyu@...zon.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
 kuni1840@...il.com,  netdev@...r.kernel.org
Subject: Re: [PATCH v3 net-next 13/14] af_unix: Replace garbage collection
 algorithm.

On Tue, 2024-02-27 at 19:32 -0800, Kuniyuki Iwashima wrote:
> From: Paolo Abeni <pabeni@...hat.com>
> Date: Tue, 27 Feb 2024 12:36:51 +0100
> > On Fri, 2024-02-23 at 13:40 -0800, Kuniyuki Iwashima wrote:
> > > diff --git a/net/unix/garbage.c b/net/unix/garbage.c
> > > index 060e81be3614..59a87a997a4d 100644
> > > --- a/net/unix/garbage.c
> > > +++ b/net/unix/garbage.c
> > > @@ -314,6 +314,48 @@ static bool unix_vertex_dead(struct unix_vertex *vertex)
> > >  	return true;
> > >  }
> > >  
> > > +static struct sk_buff_head hitlist;
> > 
> > I *think* hitlist could be replaced with a local variable in
> > __unix_gc(), WDYT?
> 
> Actually it was a local variable in the first draft.
> 
> In the current GC impl, hitlist is passed down to functions,
> but only the leaf function uses it, and I thought the global
> variable would be easier to follow.
> 
> And now __unix_gc() is not called twice at the same time thanks
> to the workqueue, so hitlist can be a global variable.

My personal preference would be for a local variable, unless it makes
the code significantly more complex: I think it's clearer and avoids
possible false sharing issues in the data segment.
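
Something along these lines is what I have in mind (rough, untested
sketch; I'm only showing how the queue could be threaded through, the
rest of the walk is elided and the names just follow this patch):

static void unix_collect_skb(struct list_head *scc,
			     struct sk_buff_head *hitlist)
{
	struct unix_vertex *vertex;

	list_for_each_entry_reverse(vertex, scc, scc_entry) {
		/* ... as in the patch, but splice into *hitlist
		 * instead of the file-scope queue ...
		 */
	}
}

static void __unix_gc(struct work_struct *work)
{
	struct sk_buff_head hitlist;

	skb_queue_head_init(&hitlist);

	/* ... walk the SCCs, passing &hitlist down ... */

	__skb_queue_purge(&hitlist);
}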

> > > +
> > > +static void unix_collect_skb(struct list_head *scc)
> > > +{
> > > +	struct unix_vertex *vertex;
> > > +
> > > +	list_for_each_entry_reverse(vertex, scc, scc_entry) {
> > > +		struct sk_buff_head *queue;
> > > +		struct unix_edge *edge;
> > > +		struct unix_sock *u;
> > > +
> > > +		edge = list_first_entry(&vertex->edges, typeof(*edge), vertex_entry);
> > > +		u = edge->predecessor;
> > > +		queue = &u->sk.sk_receive_queue;
> > > +
> > > +		spin_lock(&queue->lock);
> > > +
> > > +		if (u->sk.sk_state == TCP_LISTEN) {
> > > +			struct sk_buff *skb;
> > > +
> > > +			skb_queue_walk(queue, skb) {
> > > +				struct sk_buff_head *embryo_queue = &skb->sk->sk_receive_queue;
> > > +
> > > +				spin_lock(&embryo_queue->lock);
> > 
> > I'm wondering if and why lockdep would be happy about the above. I
> > think this deserves at least a comment.
> 
> Ah, exactly, I guess lockdep is unhappy with it, but it would
> be a false positive anyway.  The lock inversion never happens.
> 
> I'll use spin_lock_nested() with a comment, or do
> 
>   - splice listener's list to local queue
>   - unlock listener's queue
>   - skb_queue_walk
>     - lock child queue
>     - splice
>     - unlock child queue
>   - lock listener's queue again
>   - splice the child list back (to call unix_release_sock() later)

Either way LGTM.
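
FWIW, the splice option could look roughly like this (untested, the
local names are mine); the point being that only one sk_receive_queue
lock is held at any time:

	if (u->sk.sk_state == TCP_LISTEN) {
		struct sk_buff_head embryos;
		struct sk_buff *skb;

		skb_queue_head_init(&embryos);

		/* move the listener's queue to a local list so that
		 * two sk_receive_queue locks are never held at once
		 */
		skb_queue_splice_init(queue, &embryos);
		spin_unlock(&queue->lock);

		skb_queue_walk(&embryos, skb) {
			struct sk_buff_head *eq = &skb->sk->sk_receive_queue;

			spin_lock(&eq->lock);
			skb_queue_splice_init(eq, &hitlist);
			spin_unlock(&eq->lock);
		}

		spin_lock(&queue->lock);
		/* put the listener's skbs (the embryo connections) back
		 * so unix_release_sock() can still purge them later
		 */
		skb_queue_splice(&embryos, queue);
	}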

> > > +				skb_queue_splice_init(embryo_queue, &hitlist);
> > > +				spin_unlock(&embryo_queue->lock);
> > > +			}
> > > +		} else {
> > > +			skb_queue_splice_init(queue, &hitlist);
> > > +
> > > +#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
> > > +			if (u->oob_skb) {
> > > +				kfree_skb(u->oob_skb);
> > 
> > Similar question here. This happens under the u receive queue lock;
> > could we hit some complex lock dependency? What about moving oob_skb
> > to the hitlist instead?
> 
> oob_skb is just a pointer to an skb that sits in the recv queue,
> so it's already in the hitlist here.
> 
> But oob_skb holds an additional refcount, so we need to call
> kfree_skb() to decrement it; we don't actually free the skb
> here, it is freed later in __unix_gc().

Understood, thanks, LGTM.
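
One more nit, feel free to ignore: a short comment above that
kfree_skb() would have saved me the question, something like (wording
is just a suggestion):

	/* u->oob_skb only holds an extra reference to an skb sitting
	 * in the receive queue, which was just spliced into hitlist,
	 * so this does not free the skb; it is freed later when
	 * __unix_gc() purges hitlist.
	 */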

Cheers,

Paolo

