Message-ID: <20160623131353.GJ30077@dhcp22.suse.cz>
Date:	Thu, 23 Jun 2016 15:13:53 +0200
From:	Michal Hocko <mhocko@...nel.org>
To:	Mel Gorman <mgorman@...hsingularity.net>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Linux-MM <linux-mm@...ck.org>, Rik van Riel <riel@...riel.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	Johannes Weiner <hannes@...xchg.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 15/27] mm, page_alloc: Consider dirtyable memory in terms of nodes

On Thu 23-06-16 13:53:12, Mel Gorman wrote:
> On Wed, Jun 22, 2016 at 04:27:57PM +0200, Michal Hocko wrote:
> > > which can use it (e.g. vmalloc). I understand how this is both an
> > > inherent problem of 32b with a larger high:low ratio and why it is hard
> > > to even pretend we can cope with it with a node-based approach, but we
> > > should at least document it.
> > > 
> > > A workaround would be to enable highmem_is_dirtyable, which can lead
> > > to premature OOM killer invocations for some workloads AFAIR.
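
(For reference, that is the vm.highmem_is_dirtyable sysctl; assuming the
usual proc interface it would be enabled with

	echo 1 > /proc/sys/vm/highmem_is_dirtyable

which lets highmem count towards the dirtyable total, with the OOM risk
mentioned above.)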
> > [...]
> > > >  static unsigned long highmem_dirtyable_memory(unsigned long total)
> > > >  {
> > > >  #ifdef CONFIG_HIGHMEM
> > > > -	int node;
> > > >  	unsigned long x = 0;
> > > > -	int i;
> > > > -
> > > > -	for_each_node_state(node, N_HIGH_MEMORY) {
> > > > -		for (i = 0; i < MAX_NR_ZONES; i++) {
> > > > -			struct zone *z = &NODE_DATA(node)->node_zones[i];
> > > >  
> > > > -			if (is_highmem(z))
> > > > -				x += zone_dirtyable_memory(z);
> > > > -		}
> > > > -	}
> > 
> > Hmm, I have just noticed that we have NR_ZONE_LRU_ANON and
> > NR_ZONE_LRU_FILE, so we can estimate the highmem contribution to the
> > global counters by the following or similar:
> > 
> > 	for_each_node_state(node, N_HIGH_MEMORY) {
> > 		for (i = 0; i < MAX_NR_ZONES; i++) {
> > 			struct zone *z = &NODE_DATA(node)->node_zones[i];
> > 
> > 			if (!is_highmem(z))
> > 				continue;
> > 
> > 			x += zone_page_state(z, NR_FREE_PAGES) +
> > 				zone_page_state(z, NR_ZONE_LRU_FILE) -
> > 				high_wmark_pages(z);
> > 		}
> > 
> > The high wmark reduction is there to emulate the reserve. What do you
> > think?
> 
> Agreed with minor modifications. Went with this
> 
>         for_each_node_state(node, N_HIGH_MEMORY) {
>                 for (i = ZONE_NORMAL + 1; i < MAX_NR_ZONES; i++) {
>                         struct zone *z;
> 
>                         if (!is_highmem_idx(i))
>                                 continue;
> 
>                         z = &NODE_DATA(node)->node_zones[i];
>                         x += zone_page_state(z, NR_FREE_PAGES) +
>                                 zone_page_state(z, NR_ZONE_LRU_FILE) -
>                                 high_wmark_pages(z);

I guess you will still need underflow protection, because free + LRU
pages together might be below the high wmark, and the unsigned
subtraction would then wrap around to a huge value:

			dirtyable = zone_page_state(z, NR_FREE_PAGES) +
					zone_page_state(z, NR_ZONE_LRU_FILE);
			if (dirtyable > high_wmark_pages(z))
				dirtyable -= high_wmark_pages(z);

			x += dirtyable;
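
Folded into the existing helper, the whole thing would then read
something like this (just a sketch using the counter names from this
series; the else branch clamping to zero is my addition, and the final
min() against total follows what the current helper does IIRC):

static unsigned long highmem_dirtyable_memory(unsigned long total)
{
#ifdef CONFIG_HIGHMEM
	int node;
	unsigned long x = 0;
	int i;

	for_each_node_state(node, N_HIGH_MEMORY) {
		for (i = ZONE_NORMAL + 1; i < MAX_NR_ZONES; i++) {
			struct zone *z;
			unsigned long dirtyable;

			if (!is_highmem_idx(i))
				continue;

			z = &NODE_DATA(node)->node_zones[i];
			dirtyable = zone_page_state(z, NR_FREE_PAGES) +
				zone_page_state(z, NR_ZONE_LRU_FILE);

			/*
			 * Emulate the per-zone reserve by subtracting the
			 * high watermark, but do not let the unsigned value
			 * wrap when free + file LRU is already below the
			 * watermark.
			 */
			if (dirtyable > high_wmark_pages(z))
				dirtyable -= high_wmark_pages(z);
			else
				dirtyable = 0;

			x += dirtyable;
		}
	}

	/* Never report more highmem than there is dirtyable memory. */
	return min(x, total);
#else
	return 0;
#endif
}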
-- 
Michal Hocko
SUSE Labs
