Message-ID: <4E4C4FAA.70409@micron.com>
Date:	Wed, 17 Aug 2011 17:32:58 -0600
From:	Asai Thambi S P <asamymuthupa@...ron.com>
To:	Jens Axboe <jaxboe@...ionio.com>
CC:	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Jeff Moyer <jmoyer@...hat.com>,
	"linux-ide@...r.kernel.org" <linux-ide@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Jeff Garzik <jgarzik@...ox.com>,
	Christoph Hellwig <hch@...radead.org>,
	"Sam Bradshaw (sbradshaw)" <sbradshaw@...ron.com>
Subject: Re: [PATCH v3 1/3] drivers/block/mtip32xx: Adding header file and
  source for pci and block related operation

On 8/12/2011 2:04 AM, Jens Axboe wrote:
> On 2011-08-11 20:52, Asai Thambi Samymuthu Pattrayasamy (asamymuthupa) [CONTRACTOR] wrote:
>> +       /*
>> +        * Semaphore used to lock out read/write commands during the
>> +        * execution of an internal command.
>> +        */
>> +       struct rw_semaphore internal_sem;
> 
> I hope you are not using that in a hot path...

As we don't use a queue, we can't inject the IOCTL/PIO commands at the
head of the queue as the ahci stack does. So we have to wait for all NCQ
commands to complete while preventing more from being issued, issue our
'internal' command, wait for it to complete, and then resume NCQ
submissions. Ideally, we'd like to do this without the overhead of
managing a queue, for performance reasons. Do you have any suggestions
for this problem?
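
To make that concrete, here is a rough sketch of the scheme (the struct
and function names below are illustrative only, not the driver's actual
symbols; only internal_sem comes from the patch):

#include <linux/rwsem.h>

/* Illustrative only -- not the driver's actual symbols. */
struct example_port {
        struct rw_semaphore internal_sem;
};

/* Fast path: normal NCQ read/write submission. */
static void example_submit_ncq(struct example_port *port)
{
        /* Blocks only while an internal command holds the write side. */
        down_read(&port->internal_sem);
        /* ... build and issue the NCQ command to the hardware ... */
        up_read(&port->internal_sem);
}

/* Slow path: internal (IOCTL/PIO) command. */
static void example_exec_internal(struct example_port *port)
{
        /* Waits for in-flight NCQ commands and fences off new ones. */
        down_write(&port->internal_sem);
        /* ... issue the internal command and wait for completion ... */
        up_write(&port->internal_sem);
}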

>> +int mtip_block_initialize(struct driver_data *dd)
>> +{
>> +       int rv = 0;
>> +       sector_t capacity;
>> +       unsigned int index = 0;
>> +       struct kobject *kobj;
>> +
>> +       /* Initialize the protocol layer. */
>> +       rv = mtip_hw_init(dd);
>> +       if (rv < 0) {
>> +               dev_err(&dd->pdev->dev,
>> +                       "Protocol layer initialization failed\n");
>> +               rv = -EINVAL;
>> +               goto protocol_init_error;
>> +       }
>> +
>> +       /* Allocate the request queue. */
>> +       dd->queue = blk_alloc_queue(GFP_KERNEL);
> 
> It'd be nice for a high perf device like this to allocate the queue node
> local.

We thought it best not to mess with the block layer's housekeeping for
the request queue. Will there be any performance gain if the driver
allocates the request queue node-local? Is there any other benefit?
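
If the suggestion is to pass the device's NUMA node to the allocation,
I assume the change would look roughly like this (error handling
abbreviated, not tested):

        dd->queue = blk_alloc_queue_node(GFP_KERNEL,
                                         dev_to_node(&dd->pdev->dev));
        if (!dd->queue) {
                dev_err(&dd->pdev->dev,
                        "Unable to allocate request queue\n");
                rv = -ENOMEM;
                /* ... same cleanup as the original error path ... */
        }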


-- 
Regards,
Asai Thambi
