Message-Id: <1229174086.4153.1003.camel@haakon2.linux-iscsi.org>
Date:	Sat, 13 Dec 2008 05:14:46 -0800
From:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
To:	Bart Van Assche <bart.vanassche@...il.com>
Cc:	linux-iscsi-target-dev@...glegroups.com,
	LKML <linux-kernel@...r.kernel.org>,
	linux-scsi <linux-scsi@...r.kernel.org>
Subject: Re: [Announce]: Target_Core_Mod/ConfigFS and LIO-Target v3.0 work

On Sat, 2008-12-13 at 13:50 +0100, Bart Van Assche wrote:
> On Sat, Dec 13, 2008 at 1:33 PM, Nicholas A. Bellinger
> <nab@...ux-iscsi.org> wrote:
> > The point is that neither you nor Vlad would acknowledge any of the
> > issues on that thread.
> 
> What that thread started with is whether or not higher-order
> allocations would help the performance of a storage target. I replied
> that the final argument in any discussion about performance is
> performance measurements. You failed to publish any performance
> numbers in that thread, which is why I stopped replying.
> 

This was just one of the items I mentioned that are implemented in
Target_Core_Mod/ConfigFS v3.0 but lacking in SCST core.  The list
(which has not changed) is the following:

<SNIP>
        The fundamental limitation I ran into wrt SCST core has to do with
        memory allocation (or rather, the lack thereof).  The problem is that
        for the upstream generic kernel target, the requirements are the
        following:
        
        A single codepath memory allocating *AND* mapping for:
        
        I) Every type of device_type
        II) Every combination of max_sectors and sector_size with per-PAGE_SIZE
        segments and multiple contiguous PAGE_SIZE memory segments
        III) Every combination of I and II while receiving a CDB with
        sector_count > $STORAGE_OBJECT->max_sectors
        IV) Allocating multiple contiguous struct page from the memory
        allocator for I, II, and III (a hypothetical sketch follows after
        this quote).
        
        So, if we are talking about the first two, both target_core_mod and
        SCST core have them (at least I think SCST has II with PAGE_SIZE
        memory segments).  I know that target_core_mod has III (because I
        explicitly designed it this way), and you have some patch for SCST to
        do this, great!  However, your algorithms currently assume PAGE_SIZE
        by default, which is a problem not just for II, but also for III and
        IV above.  :-(
        
        V) Accept pre-registered memory segments being passed into
        target_core_mod that are then mapped (NOT MEMCPY!!) to struct
        scatterlist->page_link, handling all cases (I, II, III, and IV) for
        zero-copy DMA with multiple contiguous PAGE_SIZE memory segments
        (very fast!!) using a *SINGLE* codepath down to every Linux storage
        subsystem past, present and future.  This is the target_core_mod
        design that has existed since 2006.

</SNIP>
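
To make II) through IV) concrete, here is a minimal, hypothetical C
sketch (my illustration only, not actual target_core_mod or SCST code)
of allocating a physically contiguous multi-page segment with a
fallback to single PAGE_SIZE pages, plus the task split implied by III):

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>

/*
 * Hypothetical helper for II)/IV): try to allocate nr_pages physically
 * contiguous pages in one higher-order allocation; on failure fall
 * back to a single order-0 page, in which case the caller must build
 * the segment out of per-PAGE_SIZE allocations instead.
 */
static struct page *alloc_contig_segment(unsigned int nr_pages,
                                         unsigned int *got_order)
{
        unsigned int order = get_order((unsigned long)nr_pages << PAGE_SHIFT);
        struct page *page = alloc_pages(GFP_KERNEL, order);

        if (page) {
                *got_order = order;
                return page;
        }
        *got_order = 0;
        return alloc_pages(GFP_KERNEL, 0);
}

/*
 * Hypothetical helper for III): how many backend tasks a CDB must be
 * split into when its sector_count exceeds the storage object's
 * max_sectors.
 */
static u32 cdb_task_count(u32 sector_count, u32 max_sectors)
{
        return DIV_ROUND_UP(sector_count, max_sectors);
}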

So, you are talking about IV) above, which is just one of the items.  As
mentioned, the big item for me is V), which means you are going to have
to make some fundamental changes to SCST core for this to work; a sketch
of that zero-copy mapping follows below.  As previously mentioned, these
five design requirements have been part of LIO-Target v2.x and
Target_Core_Mod/ConfigFS v3.0 from the start.
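
For V), the zero-copy path amounts to pointing struct scatterlist
entries at the pre-registered pages instead of copying payload.  A
minimal hypothetical sketch (the helper name and layout are mine, not
LIO code):

#include <linux/scatterlist.h>

/*
 * Hypothetical: map nr_pages pre-registered pages (e.g. posted by an
 * RDMA-capable fabric driver) into a scatterlist for zero-copy
 * submission to the backend.  sg_set_page() stores the page in
 * sg->page_link; the payload itself is never memcpy'd.
 */
static void map_preregistered_pages(struct scatterlist *sgl,
                                    struct page **pages,
                                    unsigned int nr_pages,
                                    unsigned int len_per_page)
{
        unsigned int i;

        sg_init_table(sgl, nr_pages);
        for (i = 0; i < nr_pages; i++)
                sg_set_page(&sgl[i], pages[i], len_per_page, 0);
        /* sgl can now travel a single codepath down into Linux/SCSI,
         * Linux/BLOCK or Linux/VFS backends. */
}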

> > Let's not even get into how you claimed RDMA
> > meant only userspace ops on virtual memory addresses using a
> > vendor-specific API, or that RDMA using virtual addresses would be
> > communicating with drivers/scsi or block/ (which obviously use
> > struct page).
> 
> I never claimed that RDMA is only possible from user space -- that was
> a misinterpretation on your side.
> 
> I never referred to any vendor specific RDMA API.
> 
> But I agree that the following paragraph I cited from Intel's VIA
> architecture document may be misleading:
> 
> The VI provider is directly responsible for a number of functions
> normally supplied by the operating system.  The VI provider manages
> the protected sharing of the network controller, virtual to physical
> translation of buffer addresses, and the synchronization of completed
> work via interrupts.  The VI provider also provides a reliable
> transport service, with the level of reliability depending upon the
> capabilities of the underlying network.
> 
> I guess the above paragraph means that RDMA hardware must have
> scatter/gather support.

I have no idea why you keep mentioning Intel's VIA in the context of
RDMA and generic target mode.  This API has *NOTHING* to do with a
target mode engine using generic algorithms for zero-copy struct page
mapping from RDMA-capable hardware into Linux/SCSI, Linux/BLOCK or
Linux/VFS subsystems.

--nab


> 
> Bart.
> 

