Message-ID: <586510be-5a56-5e99-6ee6-ee20031f166b@lightnvm.io>
Date:   Thu, 21 Jan 2021 21:14:35 +0100
From:   Matias Bjørling <mb@...htnvm.io>
To:     Heiner Litz <hlitz@...c.edu>
Cc:     Jens Axboe <axboe@...nel.dk>, Pan Bian <bianpan2016@....com>,
        linux-block@...r.kernel.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] lightnvm: fix memory leak when submit fails

On 21/01/2021 20.49, Heiner Litz wrote:
> there are a couple more, but again I would understand if those are
> deemed not important enough to keep it.
>
> device emulation of (non-ZNS) SSD block device

That'll soon be available. We will be open-sourcing a new device mapper
target (dm-zap), which implements an indirection layer that enables ZNS
SSDs to be exposed as a conventional block device.
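
For reference, the core of such an indirection layer is small. Below is
a rough sketch (all names and structures are hypothetical illustrations,
not dm-zap's actual code) of a host-side L2P table that turns random
writes into the sequential writes ZNS zones require:

  /* Hypothetical zone-translation-layer state; not dm-zap's code. */
  #include <stdint.h>
  #include <stdlib.h>

  #define INVALID_PBA UINT64_MAX

  struct ztl {
          uint64_t *l2p;      /* logical block -> physical block */
          uint64_t nr_blocks; /* conventional device capacity, in blocks */
          uint64_t wp;        /* write pointer into the open zone */
  };

  static struct ztl *ztl_init(uint64_t nr_blocks)
  {
          struct ztl *z = calloc(1, sizeof(*z));
          z->l2p = malloc(nr_blocks * sizeof(*z->l2p));
          for (uint64_t i = 0; i < nr_blocks; i++)
                  z->l2p[i] = INVALID_PBA;
          z->nr_blocks = nr_blocks;
          return z;
  }

  /* Writes append at the write pointer, as ZNS requires; the old
   * mapping is superseded, leaving garbage for later reclamation. */
  static uint64_t ztl_write(struct ztl *z, uint64_t lba)
  {
          uint64_t pba = z->wp++;
          z->l2p[lba] = pba;
          return pba;
  }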

> die control: yes endurance groups would help but I am not aware of any
> vendor supporting it

It is out there. Although, is it still important in 2021? OCSSD was
made back in the days when media program/erase suspend wasn't commonly
available and SSD controllers were simpler. With today's media and SSD
controllers, it is hard to compete without leaving media throughput on
the table. If needed, splitting a drive into a few partitions should be
sufficient for many types of workloads.
> finer-grained control: 1000's of open blocks vs. a handful of
> concurrently open zones

It depends on the implementation; ZNS SSDs can also support thousands
of open zones.
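
The limit is also directly visible to applications, e.g., through the
standard sysfs attribute for zoned block devices. A minimal check
(device name illustrative):

  #include <stdio.h>

  int main(void)
  {
          unsigned int max_open;
          FILE *f = fopen("/sys/block/nvme0n1/queue/max_open_zones", "r");

          if (!f || fscanf(f, "%u", &max_open) != 1)
                  return 1;
          fclose(f);
          printf("max open zones: %u\n", max_open); /* 0 = no limit reported */
          return 0;
  }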

With regard to available OCSSD hardware: to my knowledge, there are no
proper implementations available in which media reliability is taken
into account.

Generally, the OCSSD hardware implementations have an extremely high
UBER (uncorrectable bit error rate), and as such RAID or similar
schemes must be implemented on the host. pblk does not implement this,
so one should not store data on it that one wants to get back at some
point. It also makes for an unfair SSD comparison, as there is much
more to an SSD than what OCSSD + pblk implements. At worst, it'll lead
to a false understanding of the challenges of making SSDs; at best, the
work can be used as the foundation for an actual SSD implementation.
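
For illustration, the kind of host-side protection pblk lacks is, at
its simplest, RAID-5-style XOR parity across a stripe of pages, so that
a single failed page per stripe can be rebuilt from the survivors. A
minimal sketch (page size and stripe geometry assumed):

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define PAGE_SZ 4096

  /* parity = page[0] ^ page[1] ^ ... ^ page[n-1] */
  static void stripe_parity(uint8_t *parity, uint8_t pages[][PAGE_SZ],
                            size_t n)
  {
          memset(parity, 0, PAGE_SZ);
          for (size_t i = 0; i < n; i++)
                  for (size_t j = 0; j < PAGE_SZ; j++)
                          parity[j] ^= pages[i][j];
  }

Recovering a lost page is the same XOR, with the failed page excluded
and the parity page included.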

> OOB area: helpful for L2P recovery

It is known as LBA metadata in NVMe. It is commonly available in many
of today's SSDs.
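
As a rough illustration, per-LBA metadata can be attached to an I/O
through the legacy NVMe ioctl. The sketch below (the device path and
the 8-byte metadata format are assumptions; the namespace must be
formatted with a metadata-capable LBA format) writes one block together
with an out-of-band reverse-map entry of the kind used for L2P
recovery:

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/nvme_ioctl.h>

  int main(void)
  {
          uint8_t data[4096] = { 0 };
          uint64_t oob = 0x1234;  /* hypothetical reverse-map entry */
          int fd = open("/dev/nvme0n1", O_RDWR);

          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          struct nvme_user_io io = {
                  .opcode   = 0x01,            /* NVMe write */
                  .nblocks  = 0,               /* zero-based: one block */
                  .slba     = 0,
                  .addr     = (uintptr_t)data,
                  .metadata = (uintptr_t)&oob, /* per-LBA metadata buffer */
          };

          if (ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io) < 0)
                  perror("NVME_IOCTL_SUBMIT_IO");
          return 0;
  }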

I understand your point that there is a lot of flexibility, but my
counterpoint is that there isn't anything in OCSSD that is not
implementable or commonly available using today's NVMe concepts.
Furthermore, the known OCSSD research platforms can easily be updated
to expose the OCSSD characteristics through standardized NVMe concepts.
That would probably make for a good research paper.

