lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <6fbd26a5-615f-41bf-ad54-91114898cc2c@redhat.com>
Date: Tue, 17 Dec 2024 22:30:39 -0500
From: Waiman Long <llong@...hat.com>
To: Daniel Xu <dxu@...uu.xyz>, mingo@...hat.com, will@...nel.org,
 peterz@...radead.org
Cc: boqun.feng@...il.com, linux-kernel@...r.kernel.org, paulmck@...nel.org
Subject: Re: [PATCH] seqlock: Use WRITE_ONCE() when updating sequence

On 12/17/24 6:17 PM, Daniel Xu wrote:
> `sequence` is a concurrently accessed shared variable on the reader
> side. Therefore, it needs to be wrapped in WRITE_ONCE() in order to
> prevent unwanted compiler optimizations like store tearing.
>
> Signed-off-by: Daniel Xu <dxu@...uu.xyz>
> ---
>   include/linux/seqlock.h | 14 +++++++-------
>   1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> index 5298765d6ca4..f4c6f2507742 100644
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -45,7 +45,7 @@ static inline void __seqcount_init(seqcount_t *s, const char *name,
>   	 * Make sure we are not reinitializing a held lock:
>   	 */
>   	lockdep_init_map(&s->dep_map, name, key, 0);
> -	s->sequence = 0;
> +	WRITE_ONCE(s->sequence, 0);
>   }
>   
The init function certainly doesn't need to use WRITE_ONCE().
>   #ifdef CONFIG_DEBUG_LOCK_ALLOC
> @@ -405,7 +405,7 @@ do {									\
>   static inline void do_raw_write_seqcount_begin(seqcount_t *s)
>   {
>   	kcsan_nestable_atomic_begin();
> -	s->sequence++;
> +	WRITE_ONCE(s->sequence, READ_ONCE(s->sequence) + 1);
>   	smp_wmb();
>   }
>   
> @@ -426,7 +426,7 @@ do {									\
>   static inline void do_raw_write_seqcount_end(seqcount_t *s)
>   {
>   	smp_wmb();
> -	s->sequence++;
> +	WRITE_ONCE(s->sequence, READ_ONCE(s->sequence) + 1);
>   	kcsan_nestable_atomic_end();
>   }
>   
> @@ -548,9 +548,9 @@ static inline void do_write_seqcount_end(seqcount_t *s)
>   static inline void do_raw_write_seqcount_barrier(seqcount_t *s)
>   {
>   	kcsan_nestable_atomic_begin();
> -	s->sequence++;
> +	WRITE_ONCE(s->sequence, READ_ONCE(s->sequence) + 1);
>   	smp_wmb();
> -	s->sequence++;
> +	WRITE_ONCE(s->sequence, READ_ONCE(s->sequence) + 1);
>   	kcsan_nestable_atomic_end();
>   }
>   
> @@ -569,7 +569,7 @@ static inline void do_write_seqcount_invalidate(seqcount_t *s)
>   {
>   	smp_wmb();
>   	kcsan_nestable_atomic_begin();
> -	s->sequence+=2;
> +	WRITE_ONCE(s->sequence, READ_ONCE(s->sequence) + 2);
>   	kcsan_nestable_atomic_end();
>   }
>   
> @@ -673,7 +673,7 @@ read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
>   static __always_inline void raw_write_seqcount_latch(seqcount_latch_t *s)
>   {
>   	smp_wmb();	/* prior stores before incrementing "sequence" */
> -	s->seqcount.sequence++;
> +	WRITE_ONCE(s->seqcount.sequence, READ_ONCE(s->seqcount.sequence) + 1);
>   	smp_wmb();      /* increment "sequence" before following stores */
>   }
>   

For a seqcount, the actual value isn't important. What matters is 
whether the value changes and whether it is even or odd. So even if 
store tearing happens, it shouldn't affect correct operation. I doubt 
we need WRITE_ONCE() here. Could you come up with a scenario where 
store tearing would make it behave incorrectly?

Cheers,
Longman

