Message-ID: <94D0CD8314A33A4D9D801C0FE68B402958C67C0E@G9W0745.americas.hpqcorp.net>
Date:	Mon, 8 Sep 2014 22:53:05 +0000
From:	"Elliott, Robert (Server Storage)" <Elliott@...com>
To:	Ming Lei <ming.lei@...onical.com>, Christoph Hellwig <hch@....de>
CC:	Jens Axboe <axboe@...nel.dk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux SCSI List <linux-scsi@...r.kernel.org>
Subject: RE: [PATCH 0/6] blk-mq: initialize pdu of flush req explicitly



> -----Original Message-----
> From: linux-scsi-owner@...r.kernel.org [mailto:linux-scsi-
> owner@...r.kernel.org] On Behalf Of Ming Lei
> Sent: Monday, 08 September, 2014 11:55 AM
> To: Christoph Hellwig
> Cc: Jens Axboe; Linux Kernel Mailing List; Linux SCSI List
> Subject: Re: [PATCH 0/6] blk-mq: initialize pdu of flush req
> explicitly
> 
> On Mon, Sep 8, 2014 at 2:49 AM, Christoph Hellwig <hch@....de> wrote:
> > This works fine for me, although I still don't really like it very
> much.
> >
> > If you really want to go down the path of major surgery in this
> > area we should probably allocate a flush request per hw_ctx, and
> > initialize it using the normal init/exit functions.  If we want
> > to have proper multiqueue performance on devices needing flushes
> > we'll need something like that anyway.
> 
> Yes, that should be the final solution for the problem, and it looks
> like the whole flush machinery needs to move into hctx. I will try to
> put together a patch to do that.

Please change the flush_rq allocation from kzalloc to kzalloc_node
while operating on that code (this would have affected PATCH 1/6).

blk_mq_init_queue currently has this for q->flush_rq:
        q->flush_rq = kzalloc(round_up(sizeof(struct request) +
                                set->cmd_size, cache_line_size()),
                                GFP_KERNEL);
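
A minimal version of that change would pass the tag set's home node as
the final argument (sketch only; this mirrors the existing call, with
kzalloc_node's node parameter added):

        q->flush_rq = kzalloc_node(round_up(sizeof(struct request) +
                                set->cmd_size, cache_line_size()),
                                GFP_KERNEL, set->numa_node);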

while all its other allocations are tied to set->numa_node:
        hctxs = kmalloc_node(set->nr_hw_queues * sizeof(*hctxs), GFP_KERNEL,
                        set->numa_node);
        q = blk_alloc_queue_node(GFP_KERNEL, set->numa_node);

or, for per-CPU structures, tied to the appropriate node:
        for (i = 0; i < set->nr_hw_queues; i++) {
                int node = blk_mq_hw_queue_to_node(map, i);

                hctxs[i] = kzalloc_node(sizeof(struct blk_mq_hw_ctx),
                                        GFP_KERNEL, node);

Per-hctx flush requests would mean following the hctxs[i] approach,
so each flush request lands on its hardware queue's home node.
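
Under that per-hctx proposal, the allocation would presumably move into
the per-queue loop, something like the following sketch (hypothetical:
it assumes flush_rq has been moved from struct request_queue into
struct blk_mq_hw_ctx, which is not in the current code):

        for (i = 0; i < set->nr_hw_queues; i++) {
                int node = blk_mq_hw_queue_to_node(map, i);

                /* one flush request per hw queue, on that queue's node */
                hctxs[i]->flush_rq = kzalloc_node(round_up(
                                sizeof(struct request) + set->cmd_size,
                                cache_line_size()),
                                GFP_KERNEL, node);
        }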


---
Rob Elliott    HP Server Storage


