Message-Id: <1156511810.5575.33.camel@localhost>
Date:	Fri, 25 Aug 2006 09:16:50 -0400
From:	Trond Myklebust <trond.myklebust@....uio.no>
To:	Neil Brown <neilb@...e.de>
Cc:	Jens Axboe <axboe@...e.de>, David Chinner <dgc@....com>,
	Andi Kleen <ak@...e.de>, linux-kernel@...r.kernel.org,
	akpm@...l.org
Subject: Re: RFC - how to balance Dirty+Writeback in the face of slow 
	writeback.

On Fri, 2006-08-25 at 14:36 +1000, Neil Brown wrote:
> The 'bugs' I am currently aware of are:
>  - nfs doesn't put a limit on the request queue
>  - the ext3 journal often writes out dirty data without clearing
>    the Dirty flag on the page - so the nr_dirty count ends up wrong.
>    ext3 writes the buffers out and marks them clean.  So when
>    the VM tries to flush such a page, it finds all the buffers are clean
>    and so marks the page clean; the nr_dirty count thus eventually
>    becomes correct again, but I think this can cause write throttling to
>    be very unfair at times.
> 
> I think we need a queue limit on NFS requests.....

That is simply not happening until someone can give a cogent argument
for _why_ it is necessary. Such a cogent argument must, among other
things, allow us to determine what would be a sensible queue limit. It
should also point out _why_ the filesystem should be doing this instead
of the VM.

Furthermore, I'd like to point out that NFS has a "third" state for
pages: following an UNSTABLE write the data on them is marked as
'uncommitted'. Such pages are tracked using the NR_UNSTABLE_NFS counter.
The question is: if we want to set limits on the write queue, what does
that imply for the uncommitted writes?
If you go back and look at the 2.4 NFS client, we actually had an
arbitrary queue limit. That limit covered the sum of writes+uncommitted
pages. Performance sucked, because we were not able to use server-side
caching efficiently. The number of COMMIT requests (which cause the
server to fsync() the client's data to disk) on the wire kept going
through the roof as we tried to free up pages in order to satisfy the
hard limit.
For those reasons and others, the filesystem queue limit was removed for
2.6 in favour of allowing the VM to control the limits based on its
extra knowledge of the state of global resources.

Trond

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
