Date: Tue, 08 Aug 2017 14:12:56 +0800
From: Wei Wang <wei.w.wang@...el.com>
To: Michal Hocko <mhocko@...nel.org>
CC: linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
linux-mm@...ck.org, mst@...hat.com, mawilcox@...rosoft.com,
akpm@...ux-foundation.org, virtio-dev@...ts.oasis-open.org,
david@...hat.com, cornelia.huck@...ibm.com,
mgorman@...hsingularity.net, aarcange@...hat.com,
amit.shah@...hat.com, pbonzini@...hat.com,
liliang.opensource@...il.com, yang.zhang.wz@...il.com,
quan.xu@...yun.com
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks
On 08/03/2017 05:11 PM, Michal Hocko wrote:
> On Thu 03-08-17 14:38:18, Wei Wang wrote:
> This is just too ugly and wrong actually. Never provide struct page
> pointers outside of the zone->lock. What I've had in mind was to simply
> walk free lists of the suitable order and call the callback for each one.
> Something as simple as
>
> 	for (i = 0; i < MAX_NR_ZONES; i++) {
> 		struct zone *zone = &pgdat->node_zones[i];
>
> 		if (!populated_zone(zone))
> 			continue;
Can we directly use for_each_populated_zone(zone) here?
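For reference, for_each_populated_zone() in include/linux/mmzone.h already folds in the populated_zone() check, though note it walks the zones of every online node via next_zone() rather than only one pgdat's node_zones[] as in the loop above (quoted from memory, so treat it as a sketch of the macro's shape):

```c
#define for_each_populated_zone(zone)			\
	for (zone = (first_online_pgdat())->node_zones;	\
	     zone;					\
	     zone = next_zone(zone))			\
		if (!populated_zone(zone))		\
			; /* skip empty zones */	\
		else
```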
> 	spin_lock_irqsave(&zone->lock, flags);
> 	for (order = min_order; order < MAX_ORDER; ++order) {
This appears to be covered by for_each_migratetype_order(order, mt) below.
> 		struct free_area *free_area = &zone->free_area[order];
> 		enum migratetype mt;
> 		struct page *page;
>
> 		if (!free_area->nr_free)
> 			continue;
>
> 		for_each_migratetype_order(order, mt) {
> 			list_for_each_entry(page,
> 					&free_area->free_list[mt], lru) {
>
> 				pfn = page_to_pfn(page);
> 				visit(opaque2, pfn, 1 << order);
> 			}
> 		}
> 	}
>
> 	spin_unlock_irqrestore(&zone->lock, flags);
> }
>
> [...]
>
What do you think if we further simplify the above implementation like this:
	for_each_populated_zone(zone) {
		for_each_migratetype_order_decend(1, order, mt) {
			spin_lock_irqsave(&zone->lock, flags);
			list_for_each_entry(page,
					&zone->free_area[order].free_list[mt],
					lru) {
				pfn = page_to_pfn(page);
				visit(opaque1, pfn, 1 << order);
			}
			spin_unlock_irqrestore(&zone->lock, flags);
		}
	}
Best,
Wei