Date:	Thu, 18 Aug 2011 09:11:40 -0400
From:	Jeff Layton <jlayton@...ba.org>
To:	Justin Piszcz <jpiszcz@...idpixels.com>
Cc:	"J. R. Okajima" <hooanon05@...oo.co.jp>,
	Jesper Juhl <jj@...osbits.net>, linux-kernel@...r.kernel.org,
	Alan Piszcz <ap@...arrain.com>,
	Steve French <sfrench@...ba.org>, linux-cifs@...r.kernel.org
Subject:	Re: Kernel 3.0: Instant kernel crash when mounting CIFS (also
 crashes with linux-3.1-rc2)

On Thu, 18 Aug 2011 08:22:44 -0400 (EDT)
Justin Piszcz <jpiszcz@...idpixels.com> wrote:

> 
> 
> On Thu, 18 Aug 2011, Justin Piszcz wrote:
> 
> >
> >
> > On Thu, 18 Aug 2011, Justin Piszcz wrote:
> >
> >> 
> >> 
> >> On Thu, 18 Aug 2011, J. R. Okajima wrote:
> >> 
> >>> 
> >>> Justin Piszcz:
> >>>> Does anyone know if any kernel supports CIFS w/out crashing? I'd like to
> >>>> back up some CIFS shares, thanks.
> >>>> 
> >>>> 
> >>>> mount -t cifs //w2/x /mnt -o user=user,pass=pass
> >>>> 
> >>>> [  881.388836] CIFS VFS: cifs_mount failed w/return code = -22
> >>> 	:::
> >>> 
> >>> Since the mount failed, this patch should help you. Although it fixes
> >>> one bug, another problem may still exist.
> >>> 
> >>> http://marc.info/?l=linux-cifs&m=131345112022031&w=2
> >> 
> >> Hi,
> >> 
> >> The latest patch (this one) applied to linux-3.1-rc2 works; at least it
> >> mounted this time and did not instantly crash the kernel!
> >> 
> >> I also tried the hostname again (and it did not crash the kernel, but it 
> >> failed to mount).
> >> 
> >> Used the IP and it mounted successfully:
> >> //10.0.0.11/x          28T  5.0T   23T  19% /mnt
> >> //10.0.0.11/y          19T  1.2T   18T   7% /mnt2
> >> 
> >> It has not crashed yet (which is good). I'll apply this patch to my
> >> production machine, test taking backups of this data, and let you know
> >> if it crashes again, thanks!
> >> 
> >> Justin.
> >
> >
> > Hello,
> >
> > It is working but very slowly:
> >
> > Device eth6 [10.0.1.2] (1/1):
> > ================================================================================
> > Incoming:                               Outgoing:
> > Curr: 37.60 MByte/s                     Curr: 0.44 MByte/s
> > Avg: 4.98 MByte/s                       Avg: 0.09 MByte/s
> > Min: 0.00 MByte/s                       Min: 0.00 MByte/s
> > Max: 40.79 MByte/s                      Max: 0.48 MByte/s
> > Ttl: 1.45 GByte                         Ttl: 26.77 MByte
> >
> > Over 10GbE in the other direction (Linux -> Windows via Samba) I get
> > 500 MiB/s; is CIFS slow?
> >
> > I'll look into options to tweak the speed, but this is very poor when you
> > have to transfer 5-10TB.  However, it is not crashing anymore, so any speed
> > is better than that :)
> >
> > Justin.
> 
> Hi,
> 
> Mounting with:
> rw,uid=1000,gid=100,mode=0644,rsize=130048,wsize=1048576,credentials=/root/.cifs 
> Same speed:
> Device eth6 [10.0.1.2] (1/1):
> ================================================================================
> Incoming:                               Outgoing:
> Curr: 32.42 MByte/s                     Curr: 0.38 MByte/s
> Avg: 30.72 MByte/s                      Avg: 0.39 MByte/s
> Min: 0.00 MByte/s                       Min: 0.00 MByte/s
> Max: 43.64 MByte/s                      Max: 0.59 MByte/s
> Ttl: 20.15 GByte                        Ttl: 261.03 MByte
> 
> Thoughts?
> 
> Has anyone achieved more than 30-40MB/s with CIFS?
> This is a 10GbE link (and yes, jumbo frames are enabled on both sides; again,
> Samba from Linux->Windows = 500MB/s).
> 
> Justin.
> 

To be clear -- does "incoming" in this case mean reads or writes?

Up until 3.0, cifs.ko didn't parallelize writes from a single thread. In
3.0 I added a patchset to increase the allowable wsize and to allow the
kernel to issue writes in parallel.
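
For reference, a mount along these lines should pick up the larger wsize
(the share, mountpoint and credentials file below are just the ones from
your earlier mails, and the 1M wsize value is illustrative rather than a
tuned recommendation):

  # share/mountpoint/credentials taken from your earlier mails; wsize illustrative
  mount -t cifs //10.0.0.11/x /mnt -o credentials=/root/.cifs,rw,wsize=1048576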

Reads still suffer from the same problem, however. I'm working on a
patchset that should do the same thing for them, but it requires a
fairly substantial overhaul of the receive codepaths.
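
If you want to confirm that reads are what's holding you back, a quick
single-stream read test over the mount should show the ceiling (the file
name here is just a placeholder for any large file on the share):

  # "somelargefile" is a placeholder for a large file on the CIFS mount
  dd if=/mnt/somelargefile of=/dev/null bs=1M count=4096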

-- 
Jeff Layton <jlayton@...ba.org>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
