Message-ID: <CADvbK_crn204Q5Ce6npx=zPuWfEb8NAV9gPveDUMHQgOB_tYeQ@mail.gmail.com>
Date: Thu, 23 Oct 2025 15:07:59 -0400
From: Xin Long <lucien.xin@...il.com>
To: Kuniyuki Iwashima <kuniyu@...gle.com>
Cc: Marcelo Ricardo Leitner <marcelo.leitner@...il.com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, Kuniyuki Iwashima <kuni1840@...il.com>, netdev@...r.kernel.org,
linux-sctp@...r.kernel.org
Subject: Re: [PATCH v2 net-next 4/8] net: Add sk_clone().
On Wed, Oct 22, 2025 at 6:57 PM Kuniyuki Iwashima <kuniyu@...gle.com> wrote:
>
> On Wed, Oct 22, 2025 at 3:04 PM Xin Long <lucien.xin@...il.com> wrote:
> >
> > On Wed, Oct 22, 2025 at 5:17 PM Kuniyuki Iwashima <kuniyu@...gle.com> wrote:
> > >
> > > sctp_accept() will use sk_clone_lock(), but there it would be
> > > called with the parent socket locked, while sctp_migrate()
> > > acquires the child lock only later.
> > >
> > > Let's add a no-lock version of sk_clone_lock().
> > >
> > > Note that lockdep complains if we simply use bh_lock_sock_nested().
> > >
> > > Signed-off-by: Kuniyuki Iwashima <kuniyu@...gle.com>
> > > ---
> > >  include/net/sock.h |  7 ++++++-
> > >  net/core/sock.c    | 21 ++++++++++++++-------
> > > 2 files changed, 20 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/include/net/sock.h b/include/net/sock.h
> > > index 01ce231603db..c7e58b8e8a90 100644
> > > --- a/include/net/sock.h
> > > +++ b/include/net/sock.h
> > > @@ -1822,7 +1822,12 @@ struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
> > > void sk_free(struct sock *sk);
> > > void sk_net_refcnt_upgrade(struct sock *sk);
> > > void sk_destruct(struct sock *sk);
> > > -struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority);
> > > +struct sock *sk_clone(const struct sock *sk, const gfp_t priority, bool lock);
> > > +
> > > +static inline struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
> > > +{
> > > +	return sk_clone(sk, priority, true);
> > > +}
> > >
> > >  struct sk_buff *sock_wmalloc(struct sock *sk, unsigned long size, int force,
> > >  			     gfp_t priority);
> > > diff --git a/net/core/sock.c b/net/core/sock.c
> > > index a99132cc0965..0a3021f8f8c1 100644
> > > --- a/net/core/sock.c
> > > +++ b/net/core/sock.c
> > > @@ -2462,13 +2462,16 @@ static void sk_init_common(struct sock *sk)
> > > }
> > >
> > > /**
> > > - * sk_clone_lock - clone a socket, and lock its clone
> > > - * @sk: the socket to clone
> > > - * @priority: for allocation (%GFP_KERNEL, %GFP_ATOMIC, etc)
> > > + * sk_clone - clone a socket
> > > + * @sk: the socket to clone
> > > + * @priority: for allocation (%GFP_KERNEL, %GFP_ATOMIC, etc)
> > > + * @lock: if true, lock the cloned sk
> > > *
> > > - * Caller must unlock socket even in error path (bh_unlock_sock(newsk))
> > > + * If @lock is true, the clone is locked by bh_lock_sock(), and
> > > + * caller must unlock socket even in error path by bh_unlock_sock().
> > > */
> > > -struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
> > > +struct sock *sk_clone(const struct sock *sk, const gfp_t priority,
> > > +		      bool lock)
> > >  {
> > >  	struct proto *prot = READ_ONCE(sk->sk_prot);
> > >  	struct sk_filter *filter;
> > > @@ -2497,9 +2500,13 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
> > >  		__netns_tracker_alloc(sock_net(newsk), &newsk->ns_tracker,
> > >  				      false, priority);
> > >  	}
> > > +
> > >  	sk_node_init(&newsk->sk_node);
> > >  	sock_lock_init(newsk);
> > > -	bh_lock_sock(newsk);
> > > +
> > > +	if (lock)
> > > +		bh_lock_sock(newsk);
> > > +
> > Does it really need bh_lock_sock() that early? If not, maybe we can
> > move it out of sk_clone_lock() and rename sk_clone_lock() back to
> > sk_clone()?
>
> I think sk_clone_lock() and its leaf functions do not check
> lockdep_sock_is_held(); probably the closest one is
> security_inet_csk_clone(), which requires lock_sock() for
> bpf_setsockopt(), but this can be easily adjusted.
> (see bpf_lsm_locked_sockopt_hooks)
>
Right.
> The only concern would be that moving bh_lock_sock() there will
> introduce one cache line miss.
I think it’s negligible, and it’s not even on the data path, though others
may have different opinions.
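
To be concrete, here is an untested sketch of what I had in mind. It
assumes nothing inside sk_clone() actually depends on the lock being
held (per your note about security_inet_csk_clone()), and that the
error-path unlock inside the current sk_clone_lock() gets the same
adjustment:

struct sock *sk_clone(const struct sock *sk, const gfp_t priority);

static inline struct sock *sk_clone_lock(const struct sock *sk,
					 const gfp_t priority)
{
	struct sock *newsk = sk_clone(sk, priority);

	/* Lock the clone here instead of inside sk_clone(), and only
	 * when the clone actually succeeded.
	 */
	if (newsk)
		bh_lock_sock(newsk);

	return newsk;
}

That would keep the bool out of the signature, and sctp_accept() could
call sk_clone() directly and take the child lock later where needed.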