Message-ID: <1540499971.66186.51.camel@acm.org>
Date:   Thu, 25 Oct 2018 13:39:31 -0700
From:   Bart Van Assche <bvanassche@....org>
To:     Johannes Berg <johannes@...solutions.net>,
        Tejun Heo <tj@...nel.org>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Christoph Hellwig <hch@....de>,
        Sagi Grimberg <sagi@...mberg.me>,
        "tytso@....edu" <tytso@....edu>
Subject: Re: [PATCH 3/3] kernel/workqueue: Suppress a false positive lockdep
 complaint

On Thu, 2018-10-25 at 21:51 +0200, Johannes Berg wrote:
> [ ... ]
> diff --git a/fs/direct-io.c b/fs/direct-io.c
> index 093fb54cd316..9ef33d6cba56 100644
> --- a/fs/direct-io.c
> +++ b/fs/direct-io.c
> @@ -629,9 +629,16 @@ int sb_init_dio_done_wq(struct super_block *sb)
>  	 * This has to be atomic as more DIOs can race to create the workqueue
>  	 */
>  	old = cmpxchg(&sb->s_dio_done_wq, NULL, wq);
> -	/* Someone created workqueue before us? Free ours... */
> +	/*
> +	 * Someone created workqueue before us? Free ours...
> +	 * Note the _nested(), which pushes the subclass down to the (in this
> +	 * case actually pointless) flush_workqueue() happening inside, since
> +	 * this function might be called in contexts that hold the same locks
> +	 * that an fs may take while being called from dio_aio_complete_work()
> +	 * from another instance of the workqueue we allocate here.
> +	 */
>  	if (old)
> -		destroy_workqueue(wq);
> +		destroy_workqueue_nested(wq, SINGLE_DEPTH_NESTING);
>  	return 0;
>  }
> [ ... ]
>  /**
> - * flush_workqueue - ensure that any scheduled work has run to completion.
> + * flush_workqueue_nested - ensure that any scheduled work has run to completion.
>   * @wq: workqueue to flush
> + * @subclass: subclass for lockdep
>   *
>   * This function sleeps until all work items which were queued on entry
>   * have finished execution, but it is not livelocked by new incoming ones.
>   */
> -void flush_workqueue(struct workqueue_struct *wq)
> +void flush_workqueue_nested(struct workqueue_struct *wq, int subclass)
>  {
>  	struct wq_flusher this_flusher = {
>  		.list = LIST_HEAD_INIT(this_flusher.list),
> @@ -2652,7 +2653,7 @@ void flush_workqueue(struct workqueue_struct *wq)
>  	if (WARN_ON(!wq_online))
>  		return;
>  
> -	lock_map_acquire(&wq->lockdep_map);
> +	lock_acquire_exclusive(&wq->lockdep_map, subclass, 0, NULL, _THIS_IP_);
>  	lock_map_release(&wq->lockdep_map);
>  
>  	mutex_lock(&wq->mutex);
> [ ... ]

I don't like this approach because it doesn't match how other kernel code uses
lockdep annotations. All other kernel code I know of only annotates lockdep maps
as nested if the same kernel thread calls lock_acquire() twice for the same map
without an intervening lock_release(). My understanding is that with your patch
applied, flush_workqueue_nested(wq, 1) calls lock_acquire() only once, with the
subclass argument set to one. I think this will confuse readers of the workqueue
implementation who have not followed this conversation.
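For reference, the conventional pattern I have in mind looks roughly like this
(a made-up sketch; struct foo and lock_two_foos() are hypothetical and not
taken from any actual kernel code):

#include <linux/kernel.h>	/* swap() */
#include <linux/lockdep.h>	/* SINGLE_DEPTH_NESTING */
#include <linux/mutex.h>	/* mutex_lock(), mutex_lock_nested() */

struct foo {
	struct mutex lock;
};

/*
 * The same thread acquires two locks of the same lockdep class back to
 * back, so the second acquisition is annotated with SINGLE_DEPTH_NESTING
 * to tell lockdep that the apparent recursion is intentional.
 */
static void lock_two_foos(struct foo *a, struct foo *b)
{
	/* Establish a stable locking order, e.g. by address. */
	if (a > b)
		swap(a, b);

	mutex_lock(&a->lock);					/* subclass 0 */
	mutex_lock_nested(&b->lock, SINGLE_DEPTH_NESTING);	/* subclass 1 */
}

Here both acquisitions happen in the same thread without an intervening
release, which is what the subclass mechanism was designed for.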

I like Tejun's proposal much better than the above one.

Bart.
