Date:	Thu, 21 Jan 2016 17:47:12 +0800
From:	Wenwei Tao <ww.tao0320@...il.com>
To:	Matias Bjørling <mb@...htnvm.io>
Cc:	linux-kernel@...r.kernel.org, linux-block@...r.kernel.org
Subject: Re: [RFC PATCH 2/2] lightnvm: add non-continuous lun target creation support

2016-01-21 15:53 GMT+08:00 Matias Bjørling <mb@...htnvm.io>:
> On 01/21/2016 08:44 AM, Wenwei Tao wrote:
>> 2016-01-20 21:19 GMT+08:00 Matias Bjørling <mb@...htnvm.io>:
>>> On 01/15/2016 12:44 PM, Wenwei Tao wrote:
>>>> When creating a target, we specify the begin lun id and
>>>> the end lun id, and get the corresponding continuous
>>>> luns from the media manager. If one of those luns is not
>>>> free, the target creation fails, even if the device has
>>>> enough free luns in total.
>>>>
>>>> So add non-continuous lun target creation support; this
>>>> improves the backend device's space utilization.
>>>
>>> A couple of questions:
>>>
>>> A user inits luns 3-4 and afterwards another user inits 1-6; then only
>>> 1, 2, 5, 6 would be initialized?
>>>
>>> What about the case where init0 uses 3-4, and init1 uses 1-6 and so
>>> would share 3-4 with init0?
>>>
>>> Would it be better to give a list of LUNs as a bitmap, and then try to
>>> initialize on top of that? With the added functionality that the user
>>> may reserve luns (and thereby reject others attempting to use them).
>>>
>>
>> I don't quite understand the bitmap you mentioned.
>> This patch does have a bitmap, dev->lun_map, and target creation is
>> done on top of this bitmap.
>>
>> How a target gets its LUNs is based on its creation flags.
>> If NVM_C_FIXED is set, the target wants to get its LUNs exactly as it
>> specifies, from lun_begin to lun_end; if any of them are occupied by
>> others, the creation fails.
>> If NVM_C_FIXED is not set, the target will get its LUNs from the free
>> LUNs between 0 and dev->nr_luns, and there is no guarantee that the
>> final LUNs are continuous.
>>
>> For the first question: if NVM_C_FIXED is used, the second creation
>> would fail since 3-4 are already used; otherwise it will succeed if we
>> have enough free LUNs left, but the final LUNs may not be 1 to 6,
>> e.g. 1, 2, 5, 6, 7, 11.
>>
>> For the second question: from the explanation above we know that
>> sharing LUNs cannot happen in the current design.
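
To make the semantics described above concrete, here is a rough sketch of
the selection logic. Only NVM_C_FIXED, dev->lun_map and dev->nr_luns come
from the patch and this discussion; the function and variable names are
purely illustrative, and locking around dev->lun_map is left out:

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/lightnvm.h>

/* Illustrative sketch only -- not the exact code in the patch. */
static int nvm_reserve_luns(struct nvm_dev *dev, unsigned long *tgt_luns,
			    int lun_begin, int lun_end, int nr_luns,
			    unsigned long flags)
{
	int lun, found = 0;

	if (flags & NVM_C_FIXED) {
		/* Want exactly lun_begin..lun_end; fail if any is taken. */
		for (lun = lun_begin; lun <= lun_end; lun++) {
			if (test_and_set_bit(lun, dev->lun_map))
				goto err_release;
			set_bit(lun, tgt_luns);
		}
		return 0;
	}

	/*
	 * Otherwise take any free luns in [0, dev->nr_luns). The result
	 * may be non-continuous, e.g. 1, 2, 5, 6, 7, 11.
	 */
	for (lun = find_next_zero_bit(dev->lun_map, dev->nr_luns, 0);
	     lun < dev->nr_luns && found < nr_luns;
	     lun = find_next_zero_bit(dev->lun_map, dev->nr_luns, lun + 1)) {
		if (test_and_set_bit(lun, dev->lun_map))
			continue;
		set_bit(lun, tgt_luns);
		found++;
	}
	if (found == nr_luns)
		return 0;

err_release:
	/* Roll back anything we already reserved. */
	for_each_set_bit(lun, tgt_luns, dev->nr_luns)
		clear_bit(lun, dev->lun_map);
	return -EBUSY;
}
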
>
> This is an interesting discussion. This could boil down to a device
> supporting either a dense or sparse translation map (or none).
>
> With a dense translation map, there is a 1-to-1 relationship between
> lbas and ppas.
>
> With a sparse translation map (or no translation map, handled completely
> by the host), we may share luns.
>
> For current implementations, a dense mapping is supported. I wonder
> whether the cost of implementing a sparse map (e.g. a b-tree structure)
> on a device makes it a good design choice.
>
> If the device supports sparse mapping, then we should add another bit to
> the extension bitmap, and then allow luns to be shared. In the current
> case, we should probably just deny sharing of luns between targets.
>
> How about extending the functionality to take a bitmap of luns, which
> defines the luns that we would like to map? Do the necessary checks to
> see if any of them are in use, and then proceed only if all are available.
>

Currently a bitmap of luns is already added to nvm_dev, and every time we
map luns we check that bitmap.
I don't quite understand why we need to add another bitmap?
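
If the create interface were extended to take such a bitmap of requested
luns, checking it against the existing dev->lun_map would be roughly the
following (the function name is only for illustration, and the caller is
assumed to hold whatever lock serializes updates to dev->lun_map):

#include <linux/bitmap.h>
#include <linux/lightnvm.h>

/* Illustrative sketch: claim a requested set of luns if all are free. */
static int nvm_claim_lun_set(struct nvm_dev *dev,
			     const unsigned long *requested)
{
	/* Any overlap with already-used luns means at least one of the
	 * requested luns is taken. */
	if (bitmap_intersects(requested, dev->lun_map, dev->nr_luns))
		return -EBUSY;

	/* All requested luns are free; mark them as used. */
	bitmap_or(dev->lun_map, dev->lun_map, requested, dev->nr_luns);
	return 0;
}
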

> That'll remove the ambiguity from selecting luns, and instead enable the
> user to make the correct decision up front?
>
