Message-ID: <20130925224755.GJ30372@lenny.home.zabbo.net>
Date: Wed, 25 Sep 2013 15:47:55 -0700
From: Zach Brown <zab@...hat.com>
To: Kent Overstreet <kmo@...erainc.com>
Cc: hch@...radead.org, axboe@...nel.dk, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/6] block: Introduce bio_for_each_page()
On Wed, Sep 25, 2013 at 02:49:10PM -0700, Kent Overstreet wrote:
> On Wed, Sep 25, 2013 at 02:17:02PM -0700, Zach Brown wrote:
> > > void zero_fill_bio(struct bio *bio)
> > > {
> > > - unsigned long flags;
> > > struct bio_vec bv;
> > > struct bvec_iter iter;
> > >
> > > - bio_for_each_segment(bv, bio, iter) {
> > > +#if defined(CONFIG_HIGHMEM) || defined(ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
> > > + bio_for_each_page(bv, bio, iter) {
> > > + unsigned long flags;
> > > char *data = bvec_kmap_irq(&bv, &flags);
> > > memset(data, 0, bv.bv_len);
> > > flush_dcache_page(bv.bv_page);
> > > bvec_kunmap_irq(data, &flags);
> > > }
> > > +#else
> > > + bio_for_each_segment(bv, bio, iter)
> > > + memset(page_address(bv.bv_page) + bv.bv_offset,
> > > + 0, bv.bv_len);
> > > +#endif
> >
> > This looks pretty sketchy. I'd expect this to be doable with one loop
> > and that bvec_kmap_irq() and flush_dcache_page() would fall back to
> > page_address() and nops when they're not needed.
> >
> > Where did this come from?
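
To be concrete, the one loop I'd expect is just the mapping version by
itself. Untested sketch, assuming bvec_kmap_irq() degrades to
page_address() + bv_offset without CONFIG_HIGHMEM and that
flush_dcache_page() compiles away on arches that don't define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

void zero_fill_bio(struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;

	bio_for_each_page(bv, bio, iter) {
		unsigned long flags;
		/* plain page_address() + bv_offset on !CONFIG_HIGHMEM */
		char *data = bvec_kmap_irq(&bv, &flags);

		memset(data, 0, bv.bv_len);
		flush_dcache_page(bv.bv_page);	/* no-op on most arches */
		bvec_kunmap_irq(data, &flags);
	}
}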
>
> It's just that if we need the kmap or the flush_dcache_page we have to
> process the bio one 4k page at a time - if not, we can process 64k (or
> whatever) bvecs all at once. That doesn't just save us memset calls; we
> can also avoid all the machinery in bio_for_each_page() for chunking up
> large bvecs into single page bvecs.
Understood. A comment would probably be wise, as that ifdefery is going
to raise all the eyebrows.
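
Something like this, maybe -- untested, just sketching the shape of the
comment over your ifdef:

	/*
	 * With highmem or an arch that implements flush_dcache_page()
	 * we have no choice but to map and flush one page at a time.
	 * Otherwise a single memset per (possibly multi-page) bvec does
	 * the job, and we skip bio_for_each_page()'s page-splitting
	 * machinery entirely.
	 */
#if defined(CONFIG_HIGHMEM) || defined(ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
	bio_for_each_page(bv, bio, iter) {
		unsigned long flags;
		char *data = bvec_kmap_irq(&bv, &flags);

		memset(data, 0, bv.bv_len);
		flush_dcache_page(bv.bv_page);
		bvec_kunmap_irq(data, &flags);
	}
#else
	bio_for_each_segment(bv, bio, iter)
		memset(page_address(bv.bv_page) + bv.bv_offset,
		       0, bv.bv_len);
#endif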
- z