Date:	Fri, 15 Aug 2008 17:16:37 -0700 (PDT)
From:	Tech Yes <techoyes@...oo.com>
To:	Kernel Linux <linux-kernel@...r.kernel.org>
Cc:	mbroz@...hat.com, agk@...hat.com
Subject: dm_io_async_bvec API in 2.6.26

Hi All,

I am in the process of porting a driver written for 2.6.21 that uses
dm_io_async_bvec and dm_io_sync_vm. It looks like both of these APIs
have been removed in the latest kernel.

After looking at the patch that removed them, I created two wrapper functions as follows.

static int _my_dm_io_async_bvec(unsigned int num_regions,
				struct io_region *where, int rw,
				struct bio_vec *bvec, io_notify_fn fn,
				void *context)
{
	struct io_job *job = (struct io_job *)context;
	struct io_job_owner *cctx = job->owner;
	struct dm_io_request iorq;

	iorq.bi_rw = (rw | (1 << BIO_RW_SYNC));
	iorq.mem.type = DM_IO_BVEC;
	iorq.mem.ptr.bvec = bvec;
	iorq.notify.fn = fn;		/* async: dm_io returns immediately */
	iorq.notify.context = context;
	iorq.client = cctx->dm_ioclnt;
	return dm_io(&iorq, num_regions, where, NULL);
}



static int _my_dm_io_sync_vm(unsigned int num_regions,
			     struct io_region *where, int rw, void *data,
			     unsigned long *error_bits,
			     struct io_job_owner *cctx)
{
	struct dm_io_request iorq;

	iorq.bi_rw = (rw | (1 << BIO_RW_SYNC));
	iorq.mem.type = DM_IO_VMA;
	iorq.mem.ptr.vma = data;
	iorq.notify.fn = NULL;		/* no notify fn: dm_io runs synchronously */
	iorq.notify.context = NULL;
	iorq.client = cctx->mdx_ioclnt;
	/* pass error_bits through so the caller still sees per-region errors */
	return dm_io(&iorq, num_regions, where, error_bits);
}


Except for passing the owner of the io job into dm_io_sync_vm, the
calling convention is the same as what we had in 2.6.21. I create the
dm-io clients in the constructor of the dm target.
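For completeness, the client setup in my constructor looks roughly like the sketch below. The function name, the io_job_owner fields, and the MIN_PAGES pool size are my own; only dm_io_client_create()/dm_io_client_destroy() are from the 2.6.26 dm-io API.

```c
#include <linux/err.h>
#include <linux/dm-io.h>

#define MIN_PAGES 16	/* pages reserved for the dm-io mempool; tunable */

/* Sketch: create the two dm-io clients used by the wrappers above. */
static int my_create_io_clients(struct io_job_owner *cctx)
{
	cctx->dm_ioclnt = dm_io_client_create(MIN_PAGES);
	if (IS_ERR(cctx->dm_ioclnt))
		return PTR_ERR(cctx->dm_ioclnt);

	cctx->mdx_ioclnt = dm_io_client_create(MIN_PAGES);
	if (IS_ERR(cctx->mdx_ioclnt)) {
		dm_io_client_destroy(cctx->dm_ioclnt);
		return PTR_ERR(cctx->mdx_ioclnt);
	}
	return 0;
}
```

The destructor would call dm_io_client_destroy() on both clients in the reverse order.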

Here is the question: as advised by the comment in dm_io, I use the BIO_RW_SYNC flag, but with it my performance tanks. If I don't use it, I get data corruption within my dm target. How do I get around this? Is there an example you could direct me to that previously used these APIs and now uses dm-io?

My porting was based on the dm-raid1 port.

Interestingly, dm-raid1 doesn't use the BIO_RW_SYNC flag, and I cannot
find a place in that code where blk_unplug() is called with regard to
dm-io.
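The alternative I am considering is to drop BIO_RW_SYNC on the async path and unplug the underlying queue explicitly before waiting on completion, roughly as below. The helper name and the way I reach the block device are mine, not from dm-raid1; only blk_unplug() and bdev_get_queue() are existing kernel interfaces.

```c
#include <linux/blkdev.h>

/*
 * Sketch: after submitting async dm-io without BIO_RW_SYNC, kick the
 * queue so requests don't sit plugged until the unplug timer fires.
 */
static void my_kick_queue(struct block_device *bdev)
{
	struct request_queue *q = bdev_get_queue(bdev);

	if (q)
		blk_unplug(q);
}
```

Whether this is the intended replacement for BIO_RW_SYNC in this situation is exactly what I am unsure about.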

Any suggestions are very welcome.

Thanks in advance

Yes2Tech



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
