Message-ID: <20120824184034.GA29932@redhat.com>
Date:	Fri, 24 Aug 2012 14:40:34 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Kent Overstreet <koverstreet@...gle.com>
Cc:	linux-bcache@...r.kernel.org, linux-kernel@...r.kernel.org,
	dm-devel@...hat.com, tj@...nel.org, mpatocka@...hat.com,
	bharrosh@...asas.com, Alasdair Kergon <agk@...hat.com>
Subject: Re: [PATCH v6 02/13] dm: Use bioset's front_pad for
 dm_rq_clone_bio_info

On Fri, Aug 24, 2012 at 12:14:48AM -0700, Kent Overstreet wrote:

[..]
> > > -static struct dm_rq_clone_bio_info *alloc_bio_info(struct mapped_device *md)
> > > -{
> > > -	return mempool_alloc(md->io_pool, GFP_ATOMIC);
> > > -}
> > > -
> > > -static void free_bio_info(struct dm_rq_clone_bio_info *info)
> > > -{
> > > -	mempool_free(info, info->tio->md->io_pool);
> > > -}
> > > -
> > 
> > With this change, do you still need "_rq_bio_info_cache" slab cache? I would
> > think that it can be cleaned up now?
> 
> It looks like it, but I'm hesitant to make more extensive changes to the
> dm code given that I'm unfamiliar with it and I haven't been able to
> personally test the request type dm target code.
> 
> That and the way io_pool is overloaded. I see too many ways I could
> screw things up.

I understand your concern, but if you leave it behind, the job is only
half done. You moved the rq_bio_info into the bio's front padding but
left the associated slab cache and mempool behind. I would say we need
to clean those up and then get an ACK from the dm/md folks.
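
For anyone following along, the front_pad trick looks roughly like this
(a sketch only, not the exact patch; create_clone_bioset() and
info_from_bio() are made-up helper names, and this assumes the usual
dm.c context for struct dm_rq_target_io):

struct dm_rq_clone_bio_info {
	struct bio *orig;
	struct dm_rq_target_io *tio;
	struct bio clone;	/* keep last: the bio sits at the end of the padded allocation */
};

/* Size the bioset's front padding so every bio allocated from it is
 * preceded by a dm_rq_clone_bio_info within the same allocation. */
static struct bio_set *create_clone_bioset(unsigned int pool_size)
{
	return bioset_create(pool_size,
			     offsetof(struct dm_rq_clone_bio_info, clone));
}

/* Recover the per-clone info from the bio pointer; no separate
 * mempool_alloc()/mempool_free() pair is needed any more. */
static struct dm_rq_clone_bio_info *info_from_bio(struct bio *bio)
{
	return container_of(bio, struct dm_rq_clone_bio_info, clone);
}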

I am looking at the code, and one thing that is not clear to me is
__bind_mempools(), which assumes that md->io_pool is always set. With
your change, md->io_pool is set only for bio-based targets and not for
request-based targets, so that will need some tidying up.
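
Something along these lines is what I have in mind for the tidy-up
(rough sketch only, other pools trimmed for brevity, assuming
__bind_mempools() keeps transferring pools from dm_md_mempools into the
mapped_device):

static void __bind_mempools(struct mapped_device *md,
			    struct dm_md_mempools *p)
{
	/* io_pool now exists only for bio-based targets; don't assume it */
	if (p->io_pool) {
		BUG_ON(md->io_pool);
		md->io_pool = p->io_pool;
		p->io_pool = NULL;
	}

	BUG_ON(md->bs);
	md->bs = p->bs;
	p->bs = NULL;
}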

Testing a request-based target should be easy: just enable multipath
for your SATA disk.


> 
> Also it looks like the equivalent change ought to be done with struct
> dm_io first (then we'd have removed all the users of io_pool), but
> honestly it takes me forever to do anything in the dm code so I'd rather
> leave that to someone else.

I think we can leave io_pool behind; it will just remain NULL for
request-based targets.
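
I.e. on the allocation side, something like this should be enough
(hedged sketch; alloc_io_pool() is a hypothetical helper, the real
logic would live in dm_alloc_md_mempools()):

/* Create the dm_io mempool only for bio-based targets; request-based
 * targets simply leave it NULL, since their per-bio info now lives in
 * the bioset's front padding. */
static mempool_t *alloc_io_pool(unsigned type, unsigned pool_size,
				struct kmem_cache *io_cache)
{
	if (type != DM_TYPE_BIO_BASED)
		return NULL;

	return mempool_create_slab_pool(pool_size, io_cache);
}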

Thanks
Vivek
