Message-ID: <20171030090640.GC23278@quack2.suse.cz>
Date: Mon, 30 Oct 2017 10:06:40 +0100
From: Jan Kara <jack@...e.cz>
To: Waiman Long <longman@...hat.com>
Cc: Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.com>,
Jeff Layton <jlayton@...chiereds.net>,
"J. Bruce Fields" <bfields@...ldses.org>,
Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andi Kleen <andi@...stfloor.org>,
Dave Chinner <dchinner@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH v7 10/10] lib/dlock-list: Fix use-after-unlock problem in
dlist_for_each_entry_safe()
On Fri 27-10-17 16:10:53, Waiman Long wrote:
> The dlist_for_each_entry_safe() macro in include/linux/dlock-list has
> a use-after-unlock problem where a race condition can happen because
> of a lack of spinlock protection. Fortunately, this macro is not
> currently being used in the kernel.
>
> This patch changes the dlist_for_each_entry_safe() macro so that the
> call to __dlock_list_next_list() is deferred until the next entry is
> being used. That should eliminate the use-after-unlock problem.
>
> Reported-by: Boqun Feng <boqun.feng@...il.com>
> Signed-off-by: Waiman Long <longman@...hat.com>
Looks good to me. You can add:
Reviewed-by: Jan Kara <jack@...e.cz>
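
For the archives, the failure mode in caller terms, as I understand it
(do_something() is just a stand-in):

	dlist_for_each_entry_safe(pos, n, &iter, member) {
		/*
		 * With the old macro, n was prefetched before the loop
		 * body ran.  When pos was the last entry of its sublist,
		 * prefetching n crossed into the next sublist via
		 * __dlock_list_next_list(), which drops the spinlock
		 * still protecting pos ...
		 */
		do_something(pos);	/* ... so this ran after unlock */
	}

With the deferred version, the sublist switch only happens once the
iteration has actually moved past the last entry, so the loop body
always runs with pos's sublist lock held.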
Honza
> ---
> include/linux/dlock-list.h | 28 +++++++++++++++++-----------
> 1 file changed, 17 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/dlock-list.h b/include/linux/dlock-list.h
> index 02c5f4d..f4b7657 100644
> --- a/include/linux/dlock-list.h
> +++ b/include/linux/dlock-list.h
> @@ -191,17 +191,17 @@ extern void dlock_list_add(struct dlock_list_node *node,
> }
>
> /**
> - * dlock_list_first_entry - get the first element from a list
> + * dlock_list_next_list_entry - get first element from next list in iterator
> * @iter : The dlock list iterator.
> - * @type : The type of the struct this is embedded in.
> + * @pos : A variable of the struct that is embedded in.
> * @member: The name of the dlock_list_node within the struct.
> - * Return : Pointer to the next entry or NULL if all the entries are iterated.
> + * Return : Pointer to first entry or NULL if all the lists are iterated.
> */
> -#define dlock_list_first_entry(iter, type, member) \
> +#define dlock_list_next_list_entry(iter, pos, member) \
> ({ \
> struct dlock_list_node *_n; \
> _n = __dlock_list_next_entry(NULL, iter); \
> - _n ? list_entry(_n, type, member) : NULL; \
> + _n ? list_entry(_n, typeof(*pos), member) : NULL; \
> })
>
> /**
> @@ -231,7 +231,7 @@ extern void dlock_list_add(struct dlock_list_node *node,
> * This iteration function is designed to be used in a while loop.
> */
> #define dlist_for_each_entry(pos, iter, member) \
> - for (pos = dlock_list_first_entry(iter, typeof(*(pos)), member);\
> + for (pos = dlock_list_next_list_entry(iter, pos, member); \
> pos != NULL; \
> pos = dlock_list_next_entry(pos, iter, member))
>
> @@ -245,14 +245,20 @@ extern void dlock_list_add(struct dlock_list_node *node,
> * This iteration macro is safe with respect to list entry removal.
> * However, it cannot correctly iterate newly added entries right after the
> * current one.
> + *
> + * The call to __dlock_list_next_list() is deferred until the next entry
> + * is being iterated to avoid use-after-unlock problem.
> */
> #define dlist_for_each_entry_safe(pos, n, iter, member) \
> - for (pos = dlock_list_first_entry(iter, typeof(*(pos)), member);\
> + for (pos = NULL; \
> ({ \
> - bool _b = (pos != NULL); \
> - if (_b) \
> - n = dlock_list_next_entry(pos, iter, member); \
> - _b; \
> + if (!pos || \
> + (&(pos)->member.list == &(iter)->entry->list)) \
> + pos = dlock_list_next_list_entry(iter, pos, \
> + member); \
> + if (pos) \
> + n = list_next_entry(pos, member.list); \
> + pos; \
> }); \
> pos = n)
>
> --
> 1.8.3.1
>
>
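For anyone picking this up later, a minimal usage sketch of the fixed
iterator (struct foo, its fields, and the exact init/removal helper
spellings are from memory, so double-check them against the series):

	struct foo {
		int data;
		struct dlock_list_node node;	/* hypothetical embedding */
	};

	static void prune_foos(struct dlock_list_heads *dlist)
	{
		struct dlock_list_iter iter;
		struct foo *pos, *n;

		init_dlock_list_iter(&iter, dlist);
		dlist_for_each_entry_safe(pos, n, &iter, node) {
			/*
			 * pos's sublist lock is held here even when pos
			 * is the tail of that sublist, since the switch
			 * to the next sublist is now deferred.
			 */
			if (pos->data < 0) {
				dlock_list_del(&pos->node);
				kfree(pos);
			}
		}
	}
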
--
Jan Kara <jack@...e.com>
SUSE Labs, CR