Message-ID: <20250313221358.26e270db@pumpkin>
Date: Thu, 13 Mar 2025 22:13:58 +0000
From: David Laight <david.laight.linux@...il.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, Anna-Maria Behnsen
<anna-maria@...utronix.de>, Frederic Weisbecker <frederic@...nel.org>,
Benjamin Segall <bsegall@...gle.com>, Eric Dumazet <edumazet@...gle.com>,
Andrey Vagin <avagin@...nvz.org>, Pavel Tikhomirov
<ptikhomirov@...tuozzo.com>, Peter Zijlstra <peterz@...radead.org>, Cyrill
Gorcunov <gorcunov@...il.com>
Subject: Re: [patch V3 14/18] posix-timers: Avoid false cacheline sharing
On Sat, 8 Mar 2025 17:48:42 +0100 (CET)
Thomas Gleixner <tglx@...utronix.de> wrote:
> struct k_itimer has the hlist_node, which is used for lookup in the hash
> bucket, and the timer lock in the same cache line.
>
> That's obviously bad, if one CPU fiddles with a timer and the other is
> walking the hash bucket on which that timer is queued.
>
> Avoid this by restructuring struct k_itimer, so that the read mostly (only
> modified during setup and teardown) fields are in the first cache line and
> the lock and the rest of the fields which get written to are in cacheline
> 2-N.
How big is the structure?
If I count correctly, the first 'cacheline' group is 64 bytes on 64bit
(and somewhat smaller on 32bit - if anyone cares).
But some CPUs (notably ppc) have much larger cache lines.
In that case you either waste space by aligning the 2nd part of the
structure to an actual cache line boundary, or you just align the
structure itself to a 64 byte boundary.
David
>
> Reduces cacheline contention in a test case of 64 processes creating and
> accessing 20000 timers each by almost 30% according to perf.
>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
>
> ---
> V2: New patch
> ---
> include/linux/posix-timers.h | 21 ++++++++++++---------
> kernel/time/posix-timers.c | 4 ++--
> 2 files changed, 14 insertions(+), 11 deletions(-)
>
> --- a/include/linux/posix-timers.h
> +++ b/include/linux/posix-timers.h
> @@ -177,23 +177,26 @@ static inline void posix_cputimers_init_
> * @rcu: RCU head for freeing the timer.
> */
> struct k_itimer {
> - struct hlist_node list;
> - struct hlist_node ignored_list;
> + /* 1st cacheline contains read-mostly fields */
> struct hlist_node t_hash;
> - spinlock_t it_lock;
> - const struct k_clock *kclock;
> - clockid_t it_clock;
> + struct hlist_node list;
> timer_t it_id;
> + clockid_t it_clock;
> + int it_sigev_notify;
> + enum pid_type it_pid_type;
> + struct signal_struct *it_signal;
> + const struct k_clock *kclock;
> +
> + /* 2nd cacheline and above contain fields which are modified regularly */
> + spinlock_t it_lock;
> int it_status;
> bool it_sig_periodic;
> s64 it_overrun;
> s64 it_overrun_last;
> unsigned int it_signal_seq;
> unsigned int it_sigqueue_seq;
> - int it_sigev_notify;
> - enum pid_type it_pid_type;
> ktime_t it_interval;
> - struct signal_struct *it_signal;
> + struct hlist_node ignored_list;
> union {
> struct pid *it_pid;
> struct task_struct *it_process;
> @@ -210,7 +213,7 @@ struct k_itimer {
> } alarm;
> } it;
> struct rcu_head rcu;
> -};
> +} ____cacheline_aligned_in_smp;
>
> void run_posix_cpu_timers(void);
> void posix_cpu_timers_exit(struct task_struct *task);
> --- a/kernel/time/posix-timers.c
> +++ b/kernel/time/posix-timers.c
> @@ -260,8 +260,8 @@ static int posix_get_hrtimer_res(clockid
>
> static __init int init_posix_timers(void)
> {
> - posix_timers_cache = kmem_cache_create("posix_timers_cache", sizeof(struct k_itimer), 0,
> - SLAB_ACCOUNT, NULL);
> + posix_timers_cache = kmem_cache_create("posix_timers_cache", sizeof(struct k_itimer),
> + __alignof__(struct k_itimer), SLAB_ACCOUNT, NULL);
> return 0;
> }
> __initcall(init_posix_timers);
>
>