Date:	Thu, 14 Jun 2007 14:20:04 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	linux-kernel@...r.kernel.org, Christoph Hellwig <hch@...radead.org>
Subject: Re: [patch 00/14] Page cache cleanup in anticipation of Large
 Blocksize support

On Thu, 14 Jun 2007, Andrew Morton wrote:

> If we never inflict variable PAGE_CACHE_SIZE upon the kernel, these changes
> become pointless obfuscation.

But there is no such reasonable scenario that I am aware of, unless we 
keep adding workarounds to the VM for the issues covered here.

And it has already been pointed out to you that such an approach can never 
substitute for the various uses of a larger page cache size.
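
To spell out what "variable PAGE_CACHE_SIZE" means in practice, here is a 
rough user-space model (the struct and helper names below are made up for 
illustration only, not lifted from the patch series): instead of every page 
cache user assuming the compile-time constant, the cache unit and the index 
calculation are derived from the mapping itself.

/* Illustrative user-space model only; all names are hypothetical. */
#include <stdio.h>

#define PAGE_SHIFT	12			/* base page: 4 KiB */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Stand-in for a mapping that carries its own page cache order. */
struct mapping {
	unsigned int order;	/* 0 = 4 KiB, 2 = 16 KiB, 4 = 64 KiB, ... */
};

/* Variable scheme: the page cache unit depends on the mapping. */
static unsigned long page_cache_size(const struct mapping *m)
{
	return PAGE_SIZE << m->order;
}

/* Page cache index of a file offset in that mapping. */
static unsigned long page_cache_index(const struct mapping *m,
				      unsigned long long pos)
{
	return pos >> (PAGE_SHIFT + m->order);
}

int main(void)
{
	struct mapping small = { .order = 0 };	/* classic 4 KiB pages */
	struct mapping large = { .order = 4 };	/* 64 KiB cache units  */
	unsigned long long pos = 1ULL << 20;	/* file offset of 1 MiB */

	printf("unit %lu, index %lu\n",
	       page_cache_size(&small), page_cache_index(&small, pos));
	printf("unit %lu, index %lu\n",
	       page_cache_size(&large), page_cache_index(&large, pos));
	return 0;
}

The point of funneling these calculations through helpers is that a 
per-mapping unit then becomes a local change instead of a tree-wide one.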

> I think the best way to proceed would be to investigate that _general_
> optimisation and then, based upon the results of that work, decide whether
> further _specialised_ changes such as variable PAGE_CACHE_SIZE are needed,
> and if so, what they should be.

As has been pointed out, performance is only one benefit of a larger page 
cache size. It is doubtful in principle that the proposed alternative can 
work, given that the locking and management overhead in the VM is not 
reduced but made more complex by the solution you envision.
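
To put a rough number on the management overhead (back-of-the-envelope 
arithmetic, not a measurement): with 4 KiB base pages a 1 GiB file is 
tracked as 262,144 separate page cache pages, each with its own radix tree 
slot, flags and locking; with 64 KiB units the same file needs only 16,384 
entries, a 16x reduction in the number of objects the VM has to look up, 
lock and write back.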

The solution here significantly cleans up the page cache even if we never 
go to a variable page cache size. If we do get there, then numerous 
workarounds that we carry in the tree because we do not support larger I/O 
go away, cleaning up the VM further. Large disks can be handled in a 
reasonable way (e.g. fsck times would decrease) since we can handle large 
contiguous chunks of memory. This is a necessary strategic move for the 
Linux kernel. It would also pave the way for managing large contiguous 
chunks of memory for other uses and has the potential to get rid of such 
sore spots as the hugetlb filesystem.



