Message-ID: <SN1PR21MB0077D7FE61A476EBD6529726CBB00@SN1PR21MB0077.namprd21.prod.outlook.com>
Date: Fri, 18 Nov 2016 20:23:10 +0000
From: Matthew Wilcox <mawilcox@...rosoft.com>
To: Konstantin Khlebnikov <koct9i@...il.com>
CC: Matthew Wilcox <mawilcox@...uxonhyperv.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ross Zwisler <ross.zwisler@...ux.intel.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Huang Ying <ying.huang@...el.com>
Subject: RE: [PATCH 20/29] radix tree: Improve multiorder iterators

From: Konstantin Khlebnikov [mailto:koct9i@...il.com]
> On Fri, Nov 18, 2016 at 7:31 PM, Matthew Wilcox <mawilcox@...rosoft.com>
> wrote:
> > I think what you're suggesting is that we introduce a new API:
> >
> > slot = radix_tree_iter_save(&iter, order);
> >
> > where the caller tells us the order of the entry it just consumed. Or maybe
> > you're suggesting
> >
> > slot = radix_tree_iter_advance(&iter, newindex)
>
> Yes, something like that.
>
> >
> > which would allow us to skip to any index. Although ... isn't that just
> > radix_tree_iter_init()?
>
> The iterator could keep a pointer to the current node and reuse it for the
> next iteration if possible.

The point of this API is that reuse is never possible, because we're about to drop the lock and allow other users to modify the tree.  Actually, it is different from radix_tree_iter_init(): it has to set ->tags to 0 and make ->index equal to ->next_index in order to get through the next call to radix_tree_next_slot().

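To make that concrete, here is a rough sketch of what such a
radix_tree_iter_save() could look like.  Take it as a sketch rather than a
finished implementation; the round_down() step in particular is just one way
to do the rounding:

static inline void **radix_tree_iter_save(struct radix_tree_iter *iter,
					  unsigned int order)
{
	/* Step past every index covered by the entry we just handled. */
	iter->index = round_down(iter->index, 1UL << order) + (1UL << order);
	/*
	 * With ->index == ->next_index and ->tags cleared, the next call to
	 * radix_tree_next_slot() falls through and the loop restarts the
	 * walk from ->next_index once the lock has been retaken.
	 */
	iter->next_index = iter->index;
	iter->tags = 0;
	return NULL;
}
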
> > It does push a bit of complexity onto the callers. We have 7 callers of
> > radix_tree_iter_next() in my current tree (after applying this patch set, so
> > range_tag_if_tagged and locate_item have been pushed into their callers):
> > btrfs, khugepaged, page-writeback and shmem. btrfs knows its objects occupy
> > one slot. khugepaged knows that its page is order 0 at the time it calls
> > radix_tree_iter_next(). Page-writeback has a struct page and can simply use
> > compound_order(). It's shmem where things get sticky, although it's all
> > solvable with some temporary variables.
>
> Users who work only with single-slot entries don't get any complications;
> all others already manage these multiorder entries somehow.

If you read the patch below, you'll see that most callers don't need to be concerned with the size of the entry they're looking at.  I'll trim away the trivial ones so it's easier to see my point.

It's not a huge amount of code in each caller, but is this a burden we really want to push onto the callers when we could handle it behind the interface?
diff --git a/mm/shmem.c b/mm/shmem.c
index 8f9c9aa..90dd18d9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -658,7 +658,10 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 			swapped++;
 
 		if (need_resched()) {
-			slot = radix_tree_iter_next(slot, &iter);
+			unsigned int order = 0;
+			if (!radix_tree_exceptional_entry(page))
+				order = compound_order(page);
+			slot = radix_tree_iter_save(&iter, order);
 			cond_resched_rcu();
 		}
 	}
@@ -2450,6 +2453,7 @@ static void shmem_tag_pins(struct address_space *mapping)
 				slot = radix_tree_iter_retry(&iter);
 				continue;
 			}
+			page = NULL;
 		} else if (page_count(page) - page_mapcount(page) > 1) {
 			spin_lock_irq(&mapping->tree_lock);
 			radix_tree_tag_set(&mapping->page_tree, iter.index,
@@ -2458,7 +2462,8 @@ static void shmem_tag_pins(struct address_space *mapping)
 		}
 
 		if (need_resched()) {
-			slot = radix_tree_iter_next(slot, &iter);
+			unsigned int order = page ? compound_order(page) : 0;
+			slot = radix_tree_iter_save(&iter, order);
 			cond_resched_rcu();
 		}
 	}
@@ -2528,7 +2533,10 @@ static int shmem_wait_for_pins(struct address_space *mapping)
 			spin_unlock_irq(&mapping->tree_lock);
 continue_resched:
 		if (need_resched()) {
-			slot = radix_tree_iter_next(slot, &iter);
+			unsigned int order = 0;
+			if (page)
+				order = compound_order(page);
+			slot = radix_tree_iter_save(&iter, order);
 			cond_resched_rcu();
 		}
 	}
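
For comparison, if we handled the entry size behind the interface (say, by
having the iterator remember the order of the entry it just returned), every
caller above could keep a one-liner, much as the old radix_tree_iter_next()
calls were.  The no-argument form below is hypothetical, not part of this
series:

	if (need_resched()) {
		/* the iterator already knows how big the current entry is */
		slot = radix_tree_iter_save(&iter);
		cond_resched_rcu();
	}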