Message-ID: <575720A6.4000700@hpe.com>
Date: Tue, 7 Jun 2016 15:29:42 -0400
From: Waiman Long <waiman.long@....com>
To: Waiman Long <Waiman.Long@....com>
CC: Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.com>,
Jeff Layton <jlayton@...chiereds.net>,
"J. Bruce Fields" <bfields@...ldses.org>,
Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux-foundation.org>,
<linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andi Kleen <andi@...stfloor.org>,
Dave Chinner <dchinner@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>
Subject: Re: [RESEND PATCH v7 0/4] vfs: Use per-cpu list for SB's s_inodes list
On 06/07/2016 03:24 PM, Waiman Long wrote:
> v6->v7:
> - Fix the race condition in __pcpu_list_next_cpu() as reported by
> Jan Kara.
> - No changes in patches 2-4.
>
> v5->v6:
> - Remove patch 5 which can increase the kernel testing matrix.
> - Disable preemption in pcpu_list_add() because the 0-day test
> complained about it, even though it is not technically necessary.
> - Add a PERCPU_LIST_WARN_ON() macro to simplify code.
> - No changes in patches 2-4.
>
> v4->v5:
> - Fix the UP panic problem reported by 0day test by unifying the SMP
> and UP code.
> - Add patch 5 to add a new kernel config parameter to allow disabling
> per-cpu list for small systems that won't benefit much from this
> feature.
>
> v3->v4:
> - Fix some race conditions in the code.
> - Add another patch from Jan to replace list_for_each_entry_safe()
> by list_for_each_entry().
> - Add lockdep annotation.
>
> v2->v3:
> - Directly replace list_for_each_entry() and
> list_for_each_entry_safe() by pcpu_list_iterate() and
> pcpu_list_iterate_safe() respectively instead. Those 2 functions
> provide a stateful per-cpu list iteration interface.
> - Include Jan Kara's patch to clean up the fsnotify_unmount_inodes()
> function.
>
> v1->v2:
> - Use separate structures for list head and nodes and provide a
> cleaner interface.
> - Use existing list_for_each_entry() or list_for_each_entry_safe()
> macros for each of the sb's s_inodes iteration functions instead
> of using list_for_each_entry_safe() for all of them which may not
> be safe in some cases.
> - Use an iterator interface to access all the nodes of a group of
> per-cpu lists. This approach is cleaner than the previous double-for
> macro which is kind of hacky. However, it does require more lines
> of code changes.
> - Add a preparatory patch 2 to extract out the per-inode codes from
> the superblock s_inodes list iteration functions to minimize code
> changes needed in the patch 3.
>
> This patchset is a replacement of my previous list batching patch -
> https://lwn.net/Articles/674105/. Compared with the previous patch,
> this one provides better performance and fairness. However, it also
> requires a bit more changes in the VFS layer.
>
> This patchset is a derivative of Andi Kleen's patch on "Initial per
> cpu list for the per sb inode list"
>
> https://git.kernel.org/cgit/linux/kernel/git/ak/linux-misc.git/commit/?h=hle315/combined&id=f1cf9e715a40f44086662ae3b29f123cf059cbf4
>
> Patch 1 introduces the per-cpu list.
>
> Patch 2 cleans up the fsnotify_unmount_inodes() function by making
> the code simpler and more standard.
>
> Patch 3 replaces the use of list_for_each_entry_safe() in
> evict_inodes() and invalidate_inodes() by list_for_each_entry().
>
> Patch 4 modifies the superblock and inode structures to use the per-cpu
> list. The corresponding functions that reference those structures
> are modified.
>
> Jan Kara (2):
> fsnotify: Simplify inode iteration on umount
> vfs: Remove unnecessary list_for_each_entry_safe() variants
>
> Waiman Long (2):
> lib/percpu-list: Per-cpu list with associated per-cpu locks
> vfs: Use per-cpu list for superblock's inode list
>
>
Hi, I am resending this patch series as I haven't received any feedback
on whether further changes are needed or whether it is good enough to be
merged.

Cheers,
Longman