Date:	Mon, 10 Feb 2014 13:51:42 +0530
From:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To:	David Rientjes <rientjes@...gle.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Fengguang Wu <fengguang.wu@...el.com>,
	David Cohen <david.a.cohen@...ux.intel.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	Damien Ramonda <damien.ramonda@...el.com>,
	Jan Kara <jack@...e.cz>, Linus <torvalds@...ux-foundation.org>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH V5] mm readahead: Fix readahead fail for no local
 memory and limit readahead pages

On 02/08/2014 02:11 AM, David Rientjes wrote:
> On Fri, 7 Feb 2014, Raghavendra K T wrote:
>> 3) Change the "readahead into remote memory" part of the documentation
>> which is misleading.
>>
>> ( I feel no need to add numa_mem_id() since we would specifically limit
>> the readahead with MAX_REMOTE_READAHEAD in memoryless cpu cases).
>>
>
> I don't understand what you're saying, numa_mem_id() is local memory to
> current's cpu.  When a node consists only of cpus and not memory it is not
> true that all memory is then considered remote, you won't find that in any
> specification that defines memory affinity including the ACPI spec.  I can
> trivially define all cpus on my system to be on memoryless nodes and
> having that affect readahead behavior when, in fact, there is affinity
> would be ridiculous.
>
As you rightly pointed out, I'll drop the "remote memory" term and use
something like:

"* Ensure readahead success on a memoryless node cpu. But we limit
  * the readahead to 4k pages to avoid thrashing the page cache."
Regarding ACCESS_ONCE, since we would have to add it inside the
function, and there is still nothing preventing us from being migrated
to a different cpu on a different node (as Andrew pointed out), I have
not included it in the current patch that I am posting.
Moreover, this case is hopefully not fatal, since it is only a hint for
how much readahead we can do.

So there are several possible implementations:

(1) Use numa_mem_id(), apply the free-page limit, and use the 4k page
    limit in all cases.
    (Jan had reservations about this case.)

(2) Normal case: use the free-memory calculation and do not apply the
    4k limit (no change).
    Memoryless cpu case: use numa_mem_id() for a more accurate
    calculation of the limit, and also apply the 4k limit.

(3) Normal case: use the free-memory calculation and do not apply the
    4k limit (no change).
    Memoryless case: apply the 4k page limit.

(4) Use numa_mem_id() and apply only the free-page limit.

So, I'll be resending the patch with changelog and comment changes
based on your and Andrew's feedback (the type (3) implementation).




