Message-ID: <20090406165753.GG5178@kernel.dk>
Date:	Mon, 6 Apr 2009 18:57:53 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	tytso@....edu
Subject: Re: [PATCH 0/8][RFC] IO latency/throughput fixes

On Mon, Apr 06 2009, Linus Torvalds wrote:
> 
> 
> On Mon, 6 Apr 2009, Jens Axboe wrote:
> > 
> > Ran the fsync-tester [1]. Drive is a 3-4 year old SATA drive, fs is
> > ext3/writeback. IO scheduler is CFQ.
> > 
> > fsync time: 0.2785s
> > fsync time: 0.2640s
> > 
> > And with Linus' torture dd running in the background:
> > 
> > fsync time: 0.0109s
> > fsync time: 0.5236s
> > fsync time: 1.2108s
> 
> Ok, it's definitely better for me too. CFQ used to be the problem case 
> (with the previous patches); now I've been trying with CFQ for a while, 
> and it seems ok.
> 
> Not wonderful, by any means, but I haven't seen a 5+ second delay yet. 
> I've come close (I have a few 2+s hiccups in my trace), but it's 
> clearly more responsive, even if I'd wish it to be better still.

OK, that's good. I'll run some testing with this as well, and perhaps we
can do better still.
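
(For reference, a minimal sketch of the kind of loop a tool like
fsync-tester runs to produce the numbers above -- this is not the actual
tool; the file name, the ~1MB write size and the 1 second pause are made
up for illustration.)

    /* Illustrative fsync latency loop: dirty some data, time fsync(). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[65536];
            struct timeval t1, t2;
            int fd = open("fsync-test.tmp",
                          O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            memset(buf, 'a', sizeof(buf));

            for (;;) {
                    /* dirty ~1MB, then measure how long fsync takes */
                    for (int i = 0; i < 16; i++) {
                            if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                                    perror("write");
                                    return 1;
                            }
                    }
                    gettimeofday(&t1, NULL);
                    if (fsync(fd)) {
                            perror("fsync");
                            return 1;
                    }
                    gettimeofday(&t2, NULL);
                    printf("fsync time: %.4fs\n",
                           (t2.tv_sec - t1.tv_sec) +
                           (t2.tv_usec - t1.tv_usec) / 1e6);
                    if (lseek(fd, 0, SEEK_SET) < 0) {
                            perror("lseek");
                            return 1;
                    }
                    sleep(1);
            }
    }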

> One thing that I find intriguing is how the fsync time seems so 
> _consistent_ across a wild variety of drives. It's interesting how you see 
> delays that are roughly the same order of magnitude, even though you are 
> using an old SATA drive, and I'm using the Intel SSD. And when you turn 
> off TCQ, your numbers go down even more.
> 
> That just makes me suspect that there is something other than pure IO going 
> on. There shouldn't be any idling by the IO scheduler in my setup 
> ("rotational" is zero for me), and quite frankly, I should not see 
> latencies in the seconds even _with_ TCQ, since it should be limited to 
> just 32 tags. Of course, maybe some of those requests just grow humongous. 
> 
> So maybe one reason the "sync()" workload is so horrible is that we get 
> insanely big single requests. I see
> 
> 	[root@...alem queue]# cat max_sectors_kb 
> 	512
> 
> so we should be limited to half a meg per request, but I guess 32 of those 
> will take some time even on the Intel SSD. In fact, I guess the SSD is not 
> really any faster than your 2-3 year old SATA disk when it comes to pure 
> linear throughput.

It's probably close, for your MLC version. This drive does around
60MB/sec sequential writes, which is in the ballpark of the ~70MB/sec
yours should be doing.
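
(Rough arithmetic on the queue depth point: 32 tags x 512KB is ~16MB of
writes in flight, and at 60-70MB/sec that is only ~0.25s to drain a
completely full queue of max-size requests -- the same order of
magnitude as the fsync times above, but nowhere near the multi-second
stalls, which supports the suspicion that something other than pure IO
is going on.)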

> Hmm. Doing an "echo 64 > max_sectors_kb" does seem to make my experience 
> nicer, at no really noticeable downside in throughput that I can see: the 
> "dd+sync" still tends to fluctuate around 30-40s. But maybe I'm fooling 
> myself. My 'strace' seems to agree, though: I'm having a hard time 
> triggering anything even close to a second of latency now.
> 
> I wonder if we could limit the tag usage by request _size_, i.e. not let big 
> requests fill up all the tags (by all means allow writes to fill them up 
> if they are small - it's with many small requests that you get the biggest 
> advantage, after all, and with many _big_ requests that the downside is 
> the biggest too).

I think we are doing OK with the async vs sync tag allocation; there
should be plenty of room for sync IO still. The problem is likely
earlier, before we even assign a tag. The IO schedulers move IO to the
dispatch list and the driver grabs it from there, assigns a tag, starts
it, etc. Perhaps we end up moving too much to the dispatch list. That
would increase latencies, even if a sync queue preempts the current
async queue and dispatches a request (which goes to the dispatch list,
ordered, and thus may have to wait for others to be serviced first).

Just speculating; I'll test and probe and see what comes up.
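
(Purely to illustrate the idea Linus floated above, here is a user-space
sketch of a size-aware tag allocation policy. The names, the 8-tag cap
and the 128KB "big" threshold are all made up; it is not the actual
block layer code, just the shape of the heuristic.)

    /* Illustrative size-aware tag allocator: big requests may only hold a
     * bounded share of the tags, so small/sync requests always find one.
     * All names and limits here are invented for the example. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_TAGS         32              /* total TCQ/NCQ tags */
    #define BIG_TAG_LIMIT   8               /* cap on tags held by big requests */
    #define BIG_RQ_BYTES    (128 * 1024)    /* "big" means >= 128KB here */

    struct tag_map {
            int in_use;       /* tags currently allocated */
            int big_in_use;   /* of those, how many hold big requests */
    };

    static bool alloc_tag(struct tag_map *tm, unsigned int rq_bytes)
    {
            bool big = rq_bytes >= BIG_RQ_BYTES;

            if (tm->in_use >= NR_TAGS)
                    return false;           /* queue full */
            if (big && tm->big_in_use >= BIG_TAG_LIMIT)
                    return false;           /* big requests capped, must wait */

            tm->in_use++;
            if (big)
                    tm->big_in_use++;
            return true;
    }

    int main(void)
    {
            struct tag_map tm = { 0, 0 };
            int granted = 0;

            /* 20 max-size (512KB) writes arrive: only BIG_TAG_LIMIT get tags */
            for (int i = 0; i < 20; i++)
                    granted += alloc_tag(&tm, 512 * 1024);
            printf("big writes granted: %d of 20\n", granted);

            /* a small sync read still gets a tag immediately */
            printf("small read granted: %d\n", alloc_tag(&tm, 4096));
            return 0;
    }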

-- 
Jens Axboe
