Message-ID: <20090302183649.GB11787@kernel.dk>
Date:	Mon, 2 Mar 2009 19:36:49 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Mike Christie <michaelc@...wisc.edu>
Cc:	Grant Grundler <grundler@...gle.com>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	mike.miller@...com, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
	hare@...ell.com, iss_storagedev@...com, iss.sbteam@...com
Subject: Re: [PATCH] hpsa: SCSI driver for HP Smart Array controllers

On Mon, Mar 02 2009, Mike Christie wrote:
> Grant Grundler wrote:
>> On Sun, Mar 1, 2009 at 10:32 PM, FUJITA Tomonori
>> <fujita.tomonori@....ntt.co.jp> wrote:
>> ...
>>>> +/*
>>>> + * For operations that cannot sleep, a command block is allocated at init,
>>>> + * and managed by cmd_alloc() and cmd_free() using a simple bitmap to track
>>>> + * which ones are free or in use.  Lock must be held when calling this.
>>>> + * cmd_free() is the complement.
>>>> + */
>>>> +static struct CommandList_struct *cmd_alloc(struct ctlr_info *h)
>>>> +{
>>>> +     struct CommandList_struct *c;
>>>> +     int i;
>>>> +     union u64bit temp64;
>>>> +     dma_addr_t cmd_dma_handle, err_dma_handle;
>>>> +
>>>> +     do {
>>>> +             i = find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds);
>>>> +             if (i == h->nr_cmds)
>>>> +                     return NULL;
>>>> +     } while (test_and_set_bit
>>>> +              (i & (BITS_PER_LONG - 1),
>>>> +               h->cmd_pool_bits + (i / BITS_PER_LONG)) != 0);
>>> Using a bitmap to manage free commands looks a bit too complicated to
>>> me. Can we just use lists for command management?
>>
>> Bitmaps are generally more efficient than lists, since we touch less
>> data both when searching and when moving elements between the free and
>> busy lists. This probably won't matter if we are talking less than 10K
>> IOPS. And willy demonstrated that the other layers (block, libata and
>> the SCSI midlayer) have pretty high overhead at high transaction rates.
>>
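For comparison, the list-based scheme suggested above could look roughly
like the sketch below. This is only an illustration: the free_list and
pool_lock fields, the "list" member of CommandList_struct and the
function names are assumed here, they are not part of the posted patch.

/*
 * Hypothetical list-based free pool (sketch, not from the patch).
 * Assumes <linux/list.h> and <linux/spinlock.h>, a free_list/pool_lock
 * pair in ctlr_info and a struct list_head "list" member in
 * CommandList_struct.
 */
static struct CommandList_struct *cmd_alloc_from_list(struct ctlr_info *h)
{
	struct CommandList_struct *c = NULL;
	unsigned long flags;

	spin_lock_irqsave(&h->pool_lock, flags);
	if (!list_empty(&h->free_list)) {
		c = list_first_entry(&h->free_list,
				     struct CommandList_struct, list);
		list_del(&c->list);
	}
	spin_unlock_irqrestore(&h->pool_lock, flags);
	return c;
}

static void cmd_free_to_list(struct ctlr_info *h,
			     struct CommandList_struct *c)
{
	unsigned long flags;

	spin_lock_irqsave(&h->pool_lock, flags);
	list_add_tail(&c->list, &h->free_list);
	spin_unlock_irqrestore(&h->pool_lock, flags);
}

Either way the command blocks stay preallocated; the difference Grant is
pointing at is how the free/busy bookkeeping is done and how much data
it touches.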
>
> If this were only needed for the queuecommand path it would be simple:
> there we could just use the scsi host tagging code for the index. You
> do not need a lock in the queuecommand path to get an index and a
> command, and you do not need to duplicate the tag/index allocation code
> that already lives in the block/scsi code.
>
> A problem with the host tagging is what to do when you need a tag/index
> for an internal command. In slow paths like device reset and cache
> flush you could use a list, a preallocated command, or whatever other
> drivers are using that makes you happy.
>
> Or, for the reset/shutdown/internal path, could we come up with an
> extension to the existing API? Maybe just add a wrapper around part of
> blk_queue_start_tag that takes the bqt (the bqt would come from the
> host-wide one) and allocates the tag (we would need something similar
> for the release side).

This is precisely what I did for libata; here it is, interleaved with
some other stuff:

http://git.kernel.dk/?p=linux-2.6-block.git;a=commitdiff;h=f557570ec6042370333b6b9c33bbbae175120a89

It needs a little more polish and so on, but the concept is identical to
what you describe for this case. And I agree, it's much better to use
the same index instead of generating/maintaining separate bitmaps for
this type of thing.
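FWIW, the wrapper you describe could be as small as the sketch below.
The function names are made up for illustration; only the blk_queue_tag
fields and the bitops are taken from the existing block layer tagging
code.

/*
 * Hypothetical helpers: hand out and put back a tag directly from a
 * (host-wide) struct blk_queue_tag, so internal commands can share the
 * same index space as the fast path.  Sketch only.
 */
static int blk_map_queue_alloc_tag(struct blk_queue_tag *bqt)
{
	int tag;

	do {
		tag = find_first_zero_bit(bqt->tag_map, bqt->max_depth);
		if (tag >= bqt->max_depth)
			return -1;		/* map exhausted */
	} while (test_and_set_bit_lock(tag, bqt->tag_map));

	return tag;
}

static void blk_map_queue_free_tag(struct blk_queue_tag *bqt, int tag)
{
	clear_bit_unlock(tag, bqt->tag_map);
}

blk_queue_start_tag() additionally stores the request in bqt->tag_index[]
and bumps bqt->busy, so an internal-command user would have to decide
whether it needs that bookkeeping as well.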

-- 
Jens Axboe

