Date:   Fri, 22 May 2020 18:39:08 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     "Ahmed S. Darwish" <a.darwish@...utronix.de>
Cc:     Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        "Sebastian A. Siewior" <bigeasy@...utronix.de>,
        Steven Rostedt <rostedt@...dmis.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Jens Axboe <axboe@...nel.dk>, Phillip Susi <psusi@...ntu.com>,
        Vivek Goyal <vgoyal@...hat.com>, linux-block@...r.kernel.org
Subject: Re: [PATCH v1 04/25] block: nr_sects_write(): Disable preemption on
 seqcount write

On Tue, May 19, 2020 at 11:45:26PM +0200, Ahmed S. Darwish wrote:
> For optimized block readers not holding a mutex, the "number of sectors"
> 64-bit value is protected from tearing on 32-bit architectures by a
> sequence counter.
> 
> Disable preemption before entering that sequence counter's write side
> critical section. Otherwise, the read side can preempt the write side
> section and spin for the entire scheduler tick. If the reader belongs to
> a real-time scheduling class, it can spin forever and the kernel will
> livelock.
> 
> Fixes: c83f6bf98dc1 ("block: add partition resize function to blkpg ioctl")
> Cc: <stable@...r.kernel.org>
> Signed-off-by: Ahmed S. Darwish <a.darwish@...utronix.de>
> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>  block/blk.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/block/blk.h b/block/blk.h
> index 0a94ec68af32..151f86932547 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -470,9 +470,11 @@ static inline sector_t part_nr_sects_read(struct hd_struct *part)
>  static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
>  {
>  #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
> +	preempt_disable();
>  	write_seqcount_begin(&part->nr_sects_seq);
>  	part->nr_sects = size;
>  	write_seqcount_end(&part->nr_sects_seq);
> +	preempt_enable();
>  #elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
>  	preempt_disable();
>  	part->nr_sects = size;
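
[For context, the lockless read side that can spin is, roughly, the 32-bit SMP
branch of part_nr_sects_read() in the same file (sketch, other config branches
omitted):

	static inline sector_t part_nr_sects_read(struct hd_struct *part)
	{
		sector_t nr_sects;
		unsigned seq;

		/*
		 * read_seqcount_begin() spins while the sequence is odd, i.e.
		 * while a writer is inside its critical section. A writer
		 * preempted there leaves the sequence odd, so a higher-priority
		 * reader can spin forever -- the livelock described above.
		 */
		do {
			seq = read_seqcount_begin(&part->nr_sects_seq);
			nr_sects = part->nr_sects;
		} while (read_seqcount_retry(&part->nr_sects_seq, seq));

		return nr_sects;
	}
]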

This does look like something that include/linux/u64_stats_sync.h could
help with.
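
[For illustration only, using that API here might look something like the sketch
below. It assumes struct hd_struct carried a 'struct u64_stats_sync nr_sects_sync'
member (a made-up name) in place of the raw seqcount_t, initialised with
u64_stats_init(); on 64-bit the helpers compile away, while on 32-bit SMP they
wrap roughly the same seqcount pattern:

	#include <linux/u64_stats_sync.h>

	static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
	{
		u64_stats_update_begin(&part->nr_sects_sync);
		part->nr_sects = size;
		u64_stats_update_end(&part->nr_sects_sync);
	}

	static inline sector_t part_nr_sects_read(struct hd_struct *part)
	{
		sector_t nr_sects;
		unsigned int start;

		do {
			start = u64_stats_fetch_begin(&part->nr_sects_sync);
			nr_sects = part->nr_sects;
		} while (u64_stats_fetch_retry(&part->nr_sects_sync, start));

		return nr_sects;
	}

Whether the update side would still need explicit preemption disabling depends on
how the u64_stats helpers are defined for a given config, so this is only an
outline of the direction, not a drop-in replacement.]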
