Message-ID: <CANn89i+G_Zeqhjp24DMNXj32Z4_vCt8dTRiZ12ChNjFaYKvGDA@mail.gmail.com>
Date: Mon, 10 Feb 2025 16:13:35 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org, 
	Kuniyuki Iwashima <kuniyu@...zon.com>, "David S. Miller" <davem@...emloft.net>, 
	Jakub Kicinski <kuba@...nel.org>, Simon Horman <horms@...nel.org>, Neal Cardwell <ncardwell@...gle.com>, 
	David Ahern <dsahern@...nel.org>
Subject: Re: [RFC PATCH 0/2] udp: avoid false sharing on sk_tsflags

On Mon, Feb 10, 2025 at 5:00 AM Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
>
> Paolo Abeni wrote:
> > While benchmarking the recently shared page frag revert, I observed a
> > lot of cache misses in the UDP RX path due to false sharing between the
> > sk_tsflags and the sk_forward_alloc sk fields.
> >
> > Here comes a solution attempt for such a problem, inspired by commit
> > f796feabb9f5 ("udp: add local "peek offset enabled" flag").
> >
> > The first patch adds a new proto op allowing protocol specific operation
> > on tsflags updates, and the 2nd one leverages such operation to cache
> > the problematic field in a cache friendly manner.
> >
> > The need for a new operation is possibly suboptimal, hence the RFC tag,
> > but I could not find other good solutions. I considered:
> > - moving the sk_tsflags just before 'sk_policy', in the 'sock_read_rxtx'
> >   group. It arguably belongs to such group, but the change would create
> >   a couple of holes, increasing the 'struct sock' size and would have
> >   side effects on other protocols
> > - moving the sk_tsflags just before 'sk_stamp'; similar to the above,
> >   would possibly reduce the side effects, as most of 'struct sock'
> >   layout will be unchanged. Could increase the number of cacheline
> >   accessed in the TX path.
> >
> > I opted for the present solution as it should minimize the side effects
> > to other protocols.
>
> The code looks solid at a high level to me.
>
> But if the issue can be addressed by just moving a field, that is
> quite appealing. So I have not reviewed closely yet.
>

sk_tsflags was not placed in an optimal group; I would indeed move it,
even if this creates one hole.

Holes tend to be used quite fast anyway with new fields.

Perhaps the sock_read_tx group would be the best location,
because tcp_recv_timestamp() is not called in the fast path.

diff --git a/include/net/sock.h b/include/net/sock.h
index 8036b3b79cd8be64550dcfd6ce213039460acb1f..b54fbf2d9e72c3d3300e1f7638ecfbb99fdf409d 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -444,7 +444,6 @@ struct sock {
        socket_lock_t           sk_lock;
        u32                     sk_reserved_mem;
        int                     sk_forward_alloc;
-       u32                     sk_tsflags;
        __cacheline_group_end(sock_write_rxtx);

        __cacheline_group_begin(sock_write_tx);
@@ -474,6 +473,7 @@ struct sock {
        unsigned long           sk_max_pacing_rate;
        long                    sk_sndtimeo;
        u32                     sk_priority;
+       u32                     sk_tsflags;
        u32                     sk_mark;
        struct dst_entry __rcu  *sk_dst_cache;
        netdev_features_t       sk_route_caps;
diff --git a/net/core/sock.c b/net/core/sock.c
index eae2ae70a2e03df370d8ef7750a7bb13cc3b8d8f..4f855361b6c7fa74c449bf5ea3a0e88b7c0f33fb 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -4371,7 +4371,6 @@ static int __init sock_struct_check(void)
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_rxtx, sk_lock);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_rxtx, sk_reserved_mem);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_rxtx, sk_forward_alloc);
-       CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_rxtx, sk_tsflags);

        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_tx, sk_omem_alloc);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_tx, sk_omem_alloc);
@@ -4394,6 +4393,7 @@ static int __init sock_struct_check(void)
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_tx, sk_sndtimeo);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_tx, sk_priority);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_tx, sk_mark);
+       CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_tx, sk_tsflags);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_tx, sk_dst_cache);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_tx, sk_route_caps);
        CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_tx, sk_gso_type);
