lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Wed, 08 Apr 2015 13:56:50 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Jens Axboe <axboe@...nel.dk>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] blk-plug: don't flush nested plug lists

Jens Axboe <axboe@...nel.dk> writes:

>> Comments would be greatly appreciated.
>
> It's hard to argue with the increased merging for your case. The task
> plugs did originally work like you changed them to, not flushing until
> the outermost plug was flushed. Unfortunately I don't quite remember
> why I changed them, will have to do a bit of digging to refresh my
> memory.

Let me know what you dig up.  

> For cases where we don't do any merging (like nvme), we always want to
> flush. Well almost, if we start do utilize the batched submission,
> then the plug would still potentially help (just for other reasons
> than merging).

It's never straight-forward.  :)

> And agree with Ming, this can be cleaned up substantially. I'd also

Let me know if you have any issues with the v2 posting.

> like to see some test results from the other end of the spectrum. Your
> posted cased is clearly based case (we missed tons of merging, now we
> don't), I'd like to see a normal case and a worst case result as well
> so we have an idea of what this would do to latencies.

As for other benchmark numbers, this work was inspired by two reports of
lower performance after converting virtio_blk to use blk-mq in RHEL.
The workloads were both synthetic, though.  I'd be happy to run a
battery of tests on the patch set, but if there is anything specific you
want to see (besides a workload that will have no merges), let me know.

I guess I should also mention that another solution to the problem might
be the mythical mq I/O scheduler.  :)

Cheers,
Jeff
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ