Message-ID: <20251203100122.291550-1-mjguzik@gmail.com>
Date: Wed, 3 Dec 2025 11:01:22 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: kuniyu@...gle.com
Cc: linux-kernel@...r.kernel.org,
netdev@...r.kernel.org,
kuba@...nel.org,
oliver.sang@...el.com,
Mateusz Guzik <mjguzik@...il.com>
Subject: [PATCH] af_unix: annotate unix_gc_lock with __cacheline_aligned_in_smp
Otherwise the lock is susceptible to ever-changing false sharing caused by
unrelated changes. This popped up in particular in the report below, where
an unrelated change improved performance:
https://lore.kernel.org/oe-lkp/202511281306.51105b46-lkp@intel.com/
Stabilize it with an explicit annotation, which as a side effect also
further improves scalability:
> in our original report, 284922f4c5 has a 6.1% performance improvement comparing
> to parent 17d85f33a8.
> we applied your patch directly upon 284922f4c5. as below, now by
> "284922f4c5 + your patch"
> we observe a 12.8% performance improvement (still comparing to 17d85f33a8).
Note nothing was done for the other fields, so some fluctuation is still
possible.
Tested-by: kernel test robot <oliver.sang@...el.com>
Signed-off-by: Mateusz Guzik <mjguzik@...il.com>
---
net/unix/garbage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 78323d43e63e..25f65817faab 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -199,7 +199,7 @@ static void unix_free_vertices(struct scm_fp_list *fpl)
}
}
-static DEFINE_SPINLOCK(unix_gc_lock);
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(unix_gc_lock);
void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver)
{
--
2.48.1