Message-ID: <20140108104713.GB8256@quack.suse.cz>
Date:	Wed, 8 Jan 2014 11:47:13 +0100
From:	Jan Kara <jack@...e.cz>
To:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Fengguang Wu <fengguang.wu@...el.com>,
	David Cohen <david.a.cohen@...ux.intel.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	Damien Ramonda <damien.ramonda@...el.com>, jack@...e.cz,
	Linus <torvalds@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH V3] mm readahead: Fix the readahead fail in case of
 empty numa node

On Wed 08-01-14 14:19:23, Raghavendra K T wrote:
> On 01/07/2014 03:43 AM, Andrew Morton wrote:
> >On Mon,  6 Jan 2014 15:51:55 +0530 Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com> wrote:
> >
> >>+	/*
> >>+	 * Readahead onto remote memory is better than no readahead when local
> >>+	 * numa node does not have memory. We sanitize readahead size depending
> >>+	 * on free memory in the local node but limiting to 4k pages.
> >>+	 */
> >>+	return local_free_page ? min(sane_nr, local_free_page / 2) : sane_nr;
> >>  }
> >
> >So if the local node has two free pages, we do just one page of
> >readahead.
> >
> >Then the local node has one free page and we do zero pages readahead.
> >
> >Assuming that bug(!) is fixed, the local node now has zero free pages
> >and we suddenly resume doing large readahead.
> >
> >This transition from large readahead to very small readahead then back
> >to large readahead is illogical, surely?
> >
> >
> 
> You are correct that there is a transition from small readahead to
> large once we reach zero free pages.
> I am not sure I can defend it well, but I'll give it a try :).
> 
> Assuming the cpu/memory load is evenly distributed, if we have very
> little free+inactive memory we are probably in really bad shape already.
> 
> But in a situation like the one below [1], which I mentioned earlier,
> where a cpu has no local memory node populated and so always has to
> depend on a remote node, isn't a sanitized readahead onto remote memory
> still better than none?
> 
> Having said that, I have not been able to come up with a sane
> implementation that fixes this readahead failure bug while avoiding the
> anomaly you pointed out :(.  Hints/ideas?  Please let me know.
  So if we would be happy with just fixing corner cases like this, we could
use the total node memory size to detect them, couldn't we? If the total
node memory size is 0, we can use 16 MB (or half the global number of free
pages, if we are uneasy with a fixed 16 MB limit) as an upper bound...
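
For illustration, here is a minimal sketch of that corner-case check, assuming
the 3.13-era helpers node_present_pages(), numa_node_id(), node_page_state()
and PAGE_SHIFT. It is not the posted patch, just one way the suggestion above
could look inside max_sane_readahead():

/*
 * Illustrative sketch only -- not the posted patch.  Special-case CPUs
 * sitting on memoryless nodes using the node's total (present) memory,
 * so the readahead limit does not jump around as local free pages come
 * and go.
 */
unsigned long max_sane_readahead(unsigned long nr)
{
	int nid = numa_node_id();

	if (!node_present_pages(nid)) {
		/*
		 * The local node has no memory at all: readahead goes to
		 * a remote node anyway, so cap it at a fixed 16 MB (half
		 * the globally free pages would be the alternative bound
		 * mentioned above).
		 */
		return min(nr, (16UL * 1024 * 1024) >> PAGE_SHIFT);
	}

	/* Existing behaviour for nodes that do have memory. */
	return min(nr, (node_page_state(nid, NR_INACTIVE_FILE) +
			node_page_state(nid, NR_FREE_PAGES)) / 2);
}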

								Honza
> 
> 
> [1]: IBM P730
> ----------------------------------
> # numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
> 22 23 24 25 26 27 28 29 30 31
> node 0 size: 0 MB
> node 0 free: 0 MB
> node 1 cpus:
> node 1 size: 12288 MB
> node 1 free: 10440 MB
> node distances:
> node   0   1
> 0:  10  40
> 1:  40  10
> 
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
