Message-ID: <20140721192825.GA25962@kmo-pixel>
Date:	Mon, 21 Jul 2014 12:28:25 -0700
From:	Kent Overstreet <kmo@...erainc.com>
To:	Hannes Reinecke <hare@...e.de>
Cc:	John Utz <John.Utz@....com>, Mike Snitzer <snitzer@...hat.com>,
	"dm-devel@...hat.com" <dm-devel@...hat.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>, tytso@....edu
Subject: Re: ZAC target (Was: Re: dm-multipath: Accept failed paths for
 multipath maps)

On Mon, Jul 21, 2014 at 04:23:41PM +0200, Hannes Reinecke wrote:
> On 07/18/2014 07:04 PM, John Utz wrote:
> >>On 07/18/2014 05:31 AM, John Utz wrote:
> >>>Thank you very much for the exhaustive answer! I forwarded it on to my
> >>>project peers because I don't think any of us were aware of the
> >>>existing infrastructure.
> >>>
> >>>Of course, said infrastructure would have to be taught about ZAC,
> >>>but it seems like it would be a nice place to start testing from....
> >>>
> >>ZAC is a different beast altogether; I've posted an initial set of
> >>patches a while back on linux-scsi.
> >>But I don't think multipath needs to be changed for that.
> >>Other areas of device-mapper most certainly do.
> >
> >Pretty sure John is working on a new ZAC-oriented DM target.
> >
> >YUP.
> >
> >Per Ted Ts'o's suggestion several months ago, the goal is to create
> > a new DM target that implements the ZAC/ZBC command set and the SMR
> > write pointer architecture so that FS folks can try their hand at
> > porting their stuff to it.
> >
> >It's in the very early stages so there is nothing to show yet, but
> > development is ongoing. There are a few unknowns about how to surface
> > some specific behaviors (new verbs and errors, particularly errors
> > with sense codes that return a write pointer) but I have not gotten
> > far enough along in development to be able to construct succinct and
> > specific questions on the topic, so that will have to wait for a bit.
> >
> I was pondering the 'best' ZAC implementation, too, and found the
> 'report zones' command _very_ cumbersome to use.
> Especially the fact that in theory each zone could have a different size
> _and_ plenty of zones could be present will make zone lookup hellish.
> 
> However: it seems to me that we might benefit from a generic
> 'block boundaries' implementation.
> Reasoning here is that several subsystems (RAID, ZAC/ZBC, and things like
> referrals) impose I/O scheduling boundaries which must not be crossed when
> assembling requests.
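On the lookup point: even with variable zone sizes, a sorted table of zone
start LBAs keeps lookup at O(log n). A rough sketch (every name here is
invented for illustration; it assumes zones are contiguous and sorted, which
ZBC guarantees):

```c
/* Sketch: zone lookup for variable-size zones via binary search
 * over zone start LBAs. Hypothetical names throughout -- nothing
 * here matches any proposed kernel interface. */
#include <stddef.h>

struct zone_map {
	const unsigned long long *start; /* start LBA of zone i, ascending */
	size_t nr_zones;                 /* number of zones */
};

/* Return the index of the zone containing lba, or (size_t)-1 if
 * lba precedes the first zone. */
static size_t zone_lookup(const struct zone_map *map, unsigned long long lba)
{
	size_t lo = 0, hi = map->nr_zones;

	/* Find the last zone whose start LBA is <= lba. */
	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (map->start[mid] <= lba)
			lo = mid + 1;
		else
			hi = mid;
	}
	return lo ? lo - 1 : (size_t)-1;
}
```

The cost of REPORT ZONES being cumbersome is paid once at discovery time to
build the table; per-I/O lookup stays cheap.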

Wasn't Ted working on such a thing?

> Seeing that we already have some block limitations I was wondering if we
> couldn't have some set of 'I/O scheduling boundaries' as part
> of the request_queue structure.

I'd prefer not to dump yet more crap in request_queue, but that's a fairly minor
quibble :)
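For the simple fixed-stride case (RAID chunks, equal-size zones), the "must
not cross" rule when assembling a request is just a clamp. A toy version, with
made-up names, to make the semantics concrete:

```c
/* Sketch: clamp an I/O so it never crosses a scheduling boundary.
 * Hypothetical names; assumes boundaries repeat every
 * boundary_sectors (the RAID-chunk / equal-size-zone case). */
static unsigned int limit_to_boundary(unsigned long long sector,
				      unsigned int nr_sectors,
				      unsigned int boundary_sectors)
{
	/* Sectors remaining before the next boundary. */
	unsigned int left = boundary_sectors - (sector % boundary_sectors);

	/* Issue at most up to the boundary; the caller submits the
	 * remainder as a separate request. */
	return nr_sectors < left ? nr_sectors : left;
}
```

Variable-size zones would need a table lookup for the next boundary instead of
the modulo, but the clamp itself is the same.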

I also tend to think having different size zones is crazy, and I would avoid
making any effort to support that in practice; OTOH there's good reason for
wanting one or two "normal" zones and the rest append-only, so the interface is
going to have to accommodate some differences between zones.
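Accommodating that mix is mostly a per-zone type plus a write pointer. A
sketch of what the per-zone state and the write-admissibility check might look
like (names invented; this mirrors the conventional vs. sequential-write-
required split in the ZBC drafts, not any actual patch):

```c
/* Sketch: per-zone state for a device with a couple of
 * conventional (random-write) zones and the rest append-only.
 * All names are hypothetical. */
enum zone_type {
	ZONE_CONVENTIONAL,	/* random writes allowed anywhere */
	ZONE_SEQ_REQUIRED,	/* writes accepted only at the write pointer */
};

struct zone_state {
	enum zone_type type;
	unsigned long long start;	/* first LBA of the zone */
	unsigned long long len;		/* zone length in LBAs */
	unsigned long long wp;		/* write pointer (seq zones only) */
};

/* May a write at lba be issued to this zone right now? */
static int zone_write_ok(const struct zone_state *z, unsigned long long lba)
{
	if (lba < z->start || lba >= z->start + z->len)
		return 0;			/* outside the zone */
	if (z->type == ZONE_CONVENTIONAL)
		return 1;
	return lba == z->wp;			/* seq zone: WP only */
}
```

A write refused here is exactly the case where the drive would return the
sense-code-with-write-pointer error John mentioned.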

Also, depending on the approach, supporting different size zones might not
actually be problematic. If you're starting with something that's pure COW and
you're just plugging in this "ZAC allocation" stuff (which I think is what I'm
going to do in bcache) then it might not be an issue at all.
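To illustrate why: in a pure COW layer, new data always lands at the write
pointer of whatever zone is currently open, so the allocator never cares that
zones differ in size. A toy sketch of that "ZAC allocation" idea (all names
invented, not bcache code):

```c
/* Sketch: append-only COW allocation over sequential zones.
 * New data goes at the open zone's write pointer; when the zone
 * fills, move to the next. Per-zone sizes may differ freely.
 * Hypothetical names throughout. */
#include <stddef.h>

struct seq_zone {
	unsigned long long start, len, wp;
};

struct zac_alloc {
	struct seq_zone *zones;
	size_t nr_zones;
	size_t open;		/* index of the zone being filled */
};

/* Allocate nr contiguous LBAs; returns the LBA to write at,
 * or ~0ULL when no zone has room. */
static unsigned long long zac_alloc_lbas(struct zac_alloc *a,
					 unsigned long long nr)
{
	while (a->open < a->nr_zones) {
		struct seq_zone *z = &a->zones[a->open];

		if (z->wp + nr <= z->start + z->len) {
			unsigned long long lba = z->wp;

			z->wp += nr;	/* advance the write pointer */
			return lba;
		}
		a->open++;		/* zone full: open the next one */
	}
	return ~0ULL;			/* no space left */
}
```

Everything above the allocator just sees LBAs coming back; the zone geometry
is entirely contained in this one spot.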
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
