Message-ID: <MWHPR2101MB072909460F3898C6D13B4D4CCEB60@MWHPR2101MB0729.namprd21.prod.outlook.com>
Date:   Wed, 18 Apr 2018 17:11:14 +0000
From:   Long Li <longli@...rosoft.com>
To:     Tom Talpey <tom@...pey.com>,
        David Laight <David.Laight@...LAB.COM>,
        Steve French <sfrench@...ba.org>,
        "linux-cifs@...r.kernel.org" <linux-cifs@...r.kernel.org>,
        "samba-technical@...ts.samba.org" <samba-technical@...ts.samba.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
CC:     "stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: RE: [Patch v3 2/6] cifs: Allocate validate negotiation request
 through kmalloc

> Subject: Re: [Patch v3 2/6] cifs: Allocate validate negotiation request through
> kmalloc
> 
> On 4/18/2018 9:08 AM, David Laight wrote:
> > From: Tom Talpey
> >> Sent: 18 April 2018 12:32
> > ...
> >> On 4/17/2018 8:33 PM, Long Li wrote:
> >>> From: Long Li <longli@...rosoft.com>
> >>>
> >>> The data buffer allocated on the stack can't be DMA'ed, and hence
> >>> can't be sent through RDMA via SMB Direct.
> >>
> >> This comment is confusing. Any registered memory can be DMA'd; the
> >> reason for this choice needs to be stated more clearly.
> >
> > The stack could be allocated with vmalloc(), in which case the pages
> > might not be physically contiguous and there is no (sensible) call to
> > get the physical address required by the DMA controller (or other bus
> > master).
> 
> Memory registration does not require pages to be physically contiguous.
> RDMA Regions can and do support very large physical page scatter/gather,
> and the adapter DMA's them readily. Is this the only reason?

ib_dma_map_page() will return an invalid DMA address for a buffer on the stack. Even worse, this incorrect address can't be detected by ib_dma_mapping_error(). Sending data from this address to the hardware will not fail, but the remote peer will receive junk data.
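
To make the failure mode concrete, here is a minimal sketch, not the actual
cifs/smbdirect code: 'dev', 'buf' and the helper name are made up for
illustration. With CONFIG_VMAP_STACK the stack lives in vmalloc space, so
virt_to_page() on a stack address hands back a meaningless struct page, yet
the mapping calls still report success:

#include <linux/mm.h>
#include <linux/dma-direction.h>
#include <rdma/ib_verbs.h>

/* Illustration only: maps a caller-supplied buffer for an RDMA send. */
static int map_buf_for_send(struct ib_device *dev, void *buf, size_t len,
			    u64 *addr)
{
	/*
	 * If 'buf' points into a vmalloc-backed stack, virt_to_page() is
	 * not valid for it and the returned DMA address is garbage...
	 */
	*addr = ib_dma_map_page(dev, virt_to_page(buf), offset_in_page(buf),
				len, DMA_TO_DEVICE);

	/* ...but this check still passes, so the bogus address reaches the HCA. */
	if (ib_dma_mapping_error(dev, *addr))
		return -ENOMEM;

	return 0;
}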

I think this makes sense: the stack is dynamic and can shrink as the I/O proceeds, so the buffer may be gone before the hardware is done with it. Other kernel code uses only heap data for DMA; for example, the block/SCSI layers never send data from a stack buffer.
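
For reference, a rough sketch of the allocation pattern the patch moves to
(this is not the exact smb2pdu.c hunk; the connection type and send helper
below are placeholders): take the request off the stack and kmalloc() it, so
the RDMA path sees ordinary heap memory that stays valid until kfree():

#include <linux/slab.h>

/*
 * Before (problematic): the request sat in the caller's stack frame,
 * e.g. "struct validate_negotiate_info_req vneg_inbuf;".
 *
 * After: a heap buffer that is physically backed, DMA-able, and valid
 * until kfree().  'conn' and smbd_send_validate_negotiate() are
 * hypothetical stand-ins for the real cifs plumbing.
 */
static int validate_negotiate(struct smbd_connection *conn)
{
	struct validate_negotiate_info_req *pneg_inbuf;
	int rc;

	pneg_inbuf = kmalloc(sizeof(*pneg_inbuf), GFP_NOFS);
	if (!pneg_inbuf)
		return -ENOMEM;

	/* fill in the request and send it over SMB Direct */
	rc = smbd_send_validate_negotiate(conn, pneg_inbuf);

	kfree(pneg_inbuf);
	return rc;
}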

> 
> Tom.
