Message-ID: <20170819100721.GA18859@yu-desktop-1.sh.intel.com>
Date: Sat, 19 Aug 2017 18:07:21 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Len Brown <lenb@...nel.org>,
Dan Williams <dan.j.williams@...el.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH][RFC v3] PM / Hibernate: Feed the watchdog when creating
snapshot
On Thu, Aug 17, 2017 at 01:28:06PM +0200, Michal Hocko wrote:
> On Thu 17-08-17 12:04:34, Chen Yu wrote:
> [...]
> > #ifdef CONFIG_HIBERNATION
> >
> > +/* Touch watchdog for every WD_INTERVAL_PAGE pages. */
> > +#define WD_INTERVAL_PAGE 1000
>
> traversing 1000 pages should never take too much time so this could be
> overly aggressive. 100k pages could be acceptable as well. But I haven't
> measured that so I might easily be wrong here. So this is just my 2c
>
After checking the log:
[ 1144.690405] done (allocated 6590003 pages)
[ 1144.694971] PM: Allocated 26360012 kbytes in 19.89 seconds (1325.28 MB/s)
The default NMI watchdog timeout is 10 seconds AFAIK. In case the user lowers
it to 1 second, a safe interval would be about 6590003/20 ~= 330k pages, but
other machines might run at a lower frequency, so 100k should be more robust.
I'll change it to 100k.
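For reference, the rough arithmetic behind 100k, using the numbers from the
log above:

    6590003 pages / 19.89 s  ~= 330k pages per second
    1 second timeout         -> interval must stay below ~330k pages
    WD_INTERVAL_PAGE = 100k  -> ~3 touches per second on this box, and still
                                at least one touch per second on a machine
                                roughly 3x slower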
> > +
> > void mark_free_pages(struct zone *zone)
> > {
> > unsigned long pfn, max_zone_pfn;
> > unsigned long flags;
> > - unsigned int order, t;
> > + unsigned int order, t, page_num = 0;
> > struct page *page;
> >
> > if (zone_is_empty(zone))
> > @@ -2548,6 +2552,9 @@ void mark_free_pages(struct zone *zone)
> > if (pfn_valid(pfn)) {
> > page = pfn_to_page(pfn);
> >
> > + if (!((page_num++) % WD_INTERVAL_PAGE))
> > + touch_nmi_watchdog();
> > +
> > if (page_zone(page) != zone)
> > continue;
> >
> > @@ -2555,14 +2562,19 @@ void mark_free_pages(struct zone *zone)
> > swsusp_unset_page_free(page);
> > }
> >
> > + page_num = 0;
> > +
>
> this part doesn't make much sense to me. You are still inside the same
> IRQ disabled section. So why would you want to start counting from 0
> again. Not that this would make any difference in real life but the code
> is not logical
>
Ok, will delete this.
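So the single page_num counter will just keep running across both loops;
roughly (a sketch of how the v4 hunk would read, not the final patch):

	unsigned int order, t, page_num = 0;
	...
	/* pfn walk over the zone */
	if (!((page_num++) % WD_INTERVAL_PAGE))
		touch_nmi_watchdog();
	...
	/* free-list walk: same counter, no reset in between */
	for (i = 0; i < (1UL << order); i++) {
		if (!((page_num++) % WD_INTERVAL_PAGE))
			touch_nmi_watchdog();
		swsusp_set_page_free(pfn_to_page(pfn + i));
	}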
> > for_each_migratetype_order(order, t) {
> > list_for_each_entry(page,
> > &zone->free_area[order].free_list[t], lru) {
> > unsigned long i;
> >
> > pfn = page_to_pfn(page);
> > - for (i = 0; i < (1UL << order); i++)
> > + for (i = 0; i < (1UL << order); i++) {
> > + if (!((page_num++) % WD_INTERVAL_PAGE))
> > + touch_nmi_watchdog();
> > swsusp_set_page_free(pfn_to_page(pfn + i));
> > + }
> > }
> > }
> > spin_unlock_irqrestore(&zone->lock, flags);
> > --
> > 2.7.4
>
> --
> Michal Hocko
> SUSE Labs
Thanks,
Yu