Date:	Thu, 2 Apr 2009 21:13:28 -0600
From:	"Trenton D. Adams" <trenton.d.adams@...il.com>
To:	David Rees <drees76@...il.com>
Cc:	Christian Kujau <lists@...dbynature.de>,
	Artem Bityutskiy <Artem.Bityutskiy@...ia.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: EXT4-ish "fixes" in UBIFS

On Thu, Apr 2, 2009 at 8:58 PM, David Rees <drees76@...il.com> wrote:
> On Thu, Apr 2, 2009 at 7:28 PM, Trenton D. Adams
>> That's the odd thing: I was setting them to 2 and 1.  I was just
>> looking at the 2.6.29 code, and it should have made a difference.  I
>> don't know what version of the kernel I was using at the time, and
>> I'm not sure I had the 1M fsync tests in place yet either, so I
>> can't be certain what I was actually testing.  It could be that I
>> wasn't being very scientific about it at the time.  Thanks though,
>> that setting makes a huge difference.
>
> Well, it depends on how much memory you have.  Keep in mind that those
> are percentages - so if you have 2GB RAM, that's the same as setting
> them to 40MB and 20MB respectively - both are a lot larger than the 1M
> you were setting the dirty*bytes vm knobs to.
>
> I've got a problematic server with 8GB RAM.  Even if I set both to 1,
> that's 80MB, and the crappy disks I have in it will often only write
> 10-20MB/s or less due to the seekiness of the workload.  That means
> delays of 5-10 seconds worst case, which isn't fun.
>
> -Dave
>

Yeah, I just finished doing the calculation. :P  40M is what I'm
seeing, and that sounds like the same problem I'm having.  Even
setting dirty_bytes to 10M still gives very serious latency.  I'm
glad that option was added, because 1M works much better.  I'll have
to change my shell script to tune that dynamically, because under
normal load I want the 40M+ of queueing.  It's just when things get
really heavy and stuff starts getting flushed that this problem
starts happening.
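
For reference, vm.dirty_ratio / vm.dirty_background_ratio are percentages
of RAM (so 2/1 on a 2GB box is roughly 40M/20M, and even 1 on an 8GB box
is 80M), while vm.dirty_bytes / vm.dirty_background_bytes are absolute
limits.  A minimal sketch of the "dynamically tune" idea might look like
the shell fragment below.  The load check (nr_dirty from /proc/vmstat)
and its 5000-page threshold are invented here for illustration, not
something from this thread; note that writing a non-zero value to one of
the *_bytes knobs clears the matching *_ratio knob, and vice versa.

#!/bin/sh
# Sketch only: flip the VM writeback limits between a small absolute
# byte cap (low latency) and the percentage settings (more queueing).
# The heavy_load() heuristic and its threshold are placeholders.

LOW_BYTES=1048576        # 1M hard cap on dirty data
LOW_BG_BYTES=524288      # start background writeback at 512K

heavy_load() {
    # crude proxy for a heavy writeback load: pages already dirty
    dirty=$(awk '/^nr_dirty / {print $2}' /proc/vmstat)
    [ "$dirty" -gt 5000 ]
}

if heavy_load; then
    # Absolute limits; setting *_bytes zeroes the matching *_ratio knob.
    sysctl -w vm.dirty_background_bytes=$LOW_BG_BYTES
    sysctl -w vm.dirty_bytes=$LOW_BYTES
else
    # Back to percentages: 2%/1% of RAM, i.e. about 40M/20M with 2GB.
    sysctl -w vm.dirty_background_ratio=1
    sysctl -w vm.dirty_ratio=2
fi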
