Message-ID: <55A7A250.5050307@bjorling.me>
Date:	Thu, 16 Jul 2015 14:23:44 +0200
From:	Matias Bjørling <m@...rling.me>
To:	Christoph Hellwig <hch@...radead.org>
CC:	Stephen.Bates@...s.com, keith.busch@...el.com, javier@...htnvm.io,
	linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
	axboe@...com, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v4 0/8] Support for Open-Channel SSDs

> As a start add a new submit_io method to the nvm_dev_ops, and add
> an implementation similar to pscsi_execute_cmd in
> drivers/target/target_core_pscsi.c for nvme, and a trivial no op
> for a null-nvm driver replacing the null-blk additions.  This
> will give you very similar behavior to your current code, while
> allowing to drop all the hacks in the block code.  Note that simple
> plugging will work just fine which should be all you'll need.
>
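To make sure I read that right, here is roughly how I picture the hook --
the names and the signature below are my guesses, not from a posted patch:

	#include <linux/blkdev.h>

	struct nvm_rq;		/* per-I/O descriptor: target ppas, bio, ... */

	struct nvm_dev_ops {
		/* ... existing identify/feature ops ... */
		int (*submit_io)(struct request_queue *q, struct nvm_rq *rqd);
	};

	/* trivial no-op for a null-nvm test driver */
	static int null_nvm_submit_io(struct request_queue *q,
				      struct nvm_rq *rqd)
	{
		return 0;	/* complete immediately, no media behind it */
	}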

A quick question. The flow is getting into place and it is looking good.

However, the code path still keeps a per-device flash block management 
data structure in gendisk->nvm. ->nvm holds the device geometry (number 
of flash chips, channels, flash page sizes, etc.), the free/used block 
lists for the media, and a few other small structures. In short, it 
tracks the state of every block on the media.
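
Abridged, the structure looks something like this (field names are 
illustrative, not verbatim from the patch set):

	#include <linux/list.h>
	#include <linux/spinlock.h>

	struct nvm_dev {
		/* media geometry reported by the device */
		int nr_chnls;		/* channels */
		int nr_luns;		/* flash chips (LUNs) per channel */
		int fpg_size;		/* flash page size in bytes */

		/* block state tracking */
		struct list_head free_list;	/* blocks free for allocation */
		struct list_head used_list;	/* blocks holding valid data */
		spinlock_t lock;		/* protects the lists */
	};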

It is nice to have it associated with the gendisk, as the lightnvm code 
can then reach it easily without knowing which device driver sits 
underneath.

If it moves out of the gendisk, one approach would be to create a 
separate block device for each open-channel SSD that is initialized. 
E.g. /dev/nvme0n1 would have its block management information exposed 
through /dev/lnvm/nvme0n1_bm. For each *_bm device, the private field 
would hold a map between the request_queue and its bm, effectively 
using a gendisk as the link between the real device and any FTL target. 
That seems just as hacky as the gendisk approach.
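
Something along these lines (types and names are hypothetical):

	struct nvm_bm_link {
		struct request_queue *q;	/* queue of the real device */
		struct nvm_bm *bm;		/* its block management state */
	};

	/* resolve a *_bm gendisk back to the block manager it fronts */
	static struct nvm_bm *nvm_bm_lookup(struct gendisk *bm_disk)
	{
		struct nvm_bm_link *link = bm_disk->private_data;

		return link->bm;
	}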

Any other approaches, or is gendisk good enough for now?

Thanks, Matias




