Message-ID: <20170817112806.GD17781@dhcp22.suse.cz>
Date:   Thu, 17 Aug 2017 13:28:06 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Chen Yu <yu.c.chen@...el.com>
Cc:     linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Len Brown <lenb@...nel.org>,
        Dan Williams <dan.j.williams@...el.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH][RFC v3] PM / Hibernate: Feed the watchdog when creating
 snapshot

On Thu 17-08-17 12:04:34, Chen Yu wrote:
[...]
>  #ifdef CONFIG_HIBERNATION
>  
> +/* Touch watchdog for every WD_INTERVAL_PAGE pages. */
> +#define WD_INTERVAL_PAGE	1000

traversing 1000 pages should never take too much time, so this could be
overly aggressive. 100k pages might be acceptable as well. But I haven't
measured that, so I might easily be wrong here. This is just my 2c.
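Just as an untested illustration of what I mean (the value is a guess,
not something I have measured):

	/* Touch the watchdog every WD_INTERVAL_PAGE pages; 100k is an
	 * unmeasured guess at a less aggressive interval. */
	#define WD_INTERVAL_PAGE	(100 * 1000)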

> +
>  void mark_free_pages(struct zone *zone)
>  {
>  	unsigned long pfn, max_zone_pfn;
>  	unsigned long flags;
> -	unsigned int order, t;
> +	unsigned int order, t, page_num = 0;
>  	struct page *page;
>  
>  	if (zone_is_empty(zone))
> @@ -2548,6 +2552,9 @@ void mark_free_pages(struct zone *zone)
>  		if (pfn_valid(pfn)) {
>  			page = pfn_to_page(pfn);
>  
> +			if (!((page_num++) % WD_INTERVAL_PAGE))
> +				touch_nmi_watchdog();
> +
>  			if (page_zone(page) != zone)
>  				continue;
>  
> @@ -2555,14 +2562,19 @@ void mark_free_pages(struct zone *zone)
>  				swsusp_unset_page_free(page);
>  		}
>  
> +	page_num = 0;
> +

this part doesn't make much sense to me. You are still inside the same
IRQ-disabled section, so why would you want to start counting from 0
again? Not that this would make any difference in real life, but the
code is not logical.
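
Purely as an untested sketch of what I have in mind: keep one running
counter for the whole locked section and drop the reset:

	spin_lock_irqsave(&zone->lock, flags);

	max_zone_pfn = zone_end_pfn(zone);
	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
		if (pfn_valid(pfn)) {
			page = pfn_to_page(pfn);

			/* one counter for the whole IRQ-disabled section */
			if (!(page_num++ % WD_INTERVAL_PAGE))
				touch_nmi_watchdog();

			if (page_zone(page) != zone)
				continue;

			if (!swsusp_page_is_forbidden(page))
				swsusp_unset_page_free(page);
		}

	/* no page_num = 0 here; still the same IRQ-disabled section */

	for_each_migratetype_order(order, t) {
		list_for_each_entry(page,
				&zone->free_area[order].free_list[t], lru) {
			unsigned long i;

			pfn = page_to_pfn(page);
			for (i = 0; i < (1UL << order); i++) {
				if (!(page_num++ % WD_INTERVAL_PAGE))
					touch_nmi_watchdog();
				swsusp_set_page_free(pfn_to_page(pfn + i));
			}
		}
	}
	spin_unlock_irqrestore(&zone->lock, flags);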

>  	for_each_migratetype_order(order, t) {
>  		list_for_each_entry(page,
>  				&zone->free_area[order].free_list[t], lru) {
>  			unsigned long i;
>  
>  			pfn = page_to_pfn(page);
> -			for (i = 0; i < (1UL << order); i++)
> +			for (i = 0; i < (1UL << order); i++) {
> +				if (!((page_num++) % WD_INTERVAL_PAGE))
> +					touch_nmi_watchdog();
>  				swsusp_set_page_free(pfn_to_page(pfn + i));
> +			}
>  		}
>  	}
>  	spin_unlock_irqrestore(&zone->lock, flags);
> -- 
> 2.7.4

-- 
Michal Hocko
SUSE Labs
