Message-ID: <470F8263.4020107@systella.fr>
Date:	Fri, 12 Oct 2007 16:19:15 +0200
From:	BERTRAND Joël <joel.bertrand@...tella.fr>
To:	linux-kernel@...r.kernel.org
CC:	nbd-general@...ts.sourceforge.net, sparclinux@...r.kernel.org
Subject: Blockdev API returns erroneous size on /dev/nbd

	Hello,

	I'm trying to build a RAID1 volume over TCP/IP. I use the network
block device (NBD) on a 2.6.23 Linux kernel on a T1000 server (sparc64).

The NBD server exports /dev/md7 (a RAID5 volume):
Root poulenc:[~] > blockdev --getsize64 /dev/md7
1499879178240
Root poulenc:[/] > nbd-server 2000 /dev/md7 1464725760 -l 192.168.0.0/24 
-C /etc/nbd-server/empty
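
(For reference: 1499879178240 bytes / 1024 = 1464725760, so the size
argument given to nbd-server above is the size of /dev/md7 expressed
in KB.)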

On the NBD client side:
Root gershwin:[~] > blockdev --getsize64 /dev/nbd0
1464725504
Root gershwin:[~] > blockdev --getsize64 /dev/md7
1499879178240

	When the NBD client connects to the server, I obtain:

Root gershwin:[/usr/scripts] > nbd-client bs=4096 192.168.0.2 2000 /dev/nbd0
Negotiation: ..size = 1430396KB
bs=4096, sz=357599
Root gershwin:[/usr/scripts] >

Thus, NBD seems to work; note that the negotiation reports a size
(consistent with what blockdev reports for /dev/nbd0). Now, I would like
to build a RAID1 array on this server with /dev/nbd0 and /dev/md7, but
mdadm returns:

Root gershwin:[/usr/scripts] > mdadm -C /dev/md8 -l1 -n2 /dev/md7 /dev/nbd0
mdadm: /dev/md7 appears to contain an ext2fs file system
     size=1464725760K  mtime=Thu Jan  1 01:00:00 1970
mdadm: /dev/md7 appears to be part of a raid array:
     level=raid0 devices=2 ctime=Fri Oct 12 14:22:27 2007
mdadm: /dev/nbd0 appears to contain an ext2fs file system
     size=1464725760K  mtime=Thu Jan  1 01:00:00 1970
mdadm: /dev/nbd0 appears to be part of a raid array:
     level=raid0 devices=2 ctime=Fri Oct 12 14:22:27 2007
mdadm: largest drive (/dev/md7) exceed size (1430272K) by more than 1%

I have seen that it is possible to use NBD to build RAID arrays. But how 
can I fix the size of the exported device? Indeed, /dev/md7 appears to be 
1024 times larger than the _same_ block device exported over NBD!
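
To narrow down where the factor of 1024 appears, a minimal (untested)
test program such as the one below could be run against /dev/nbd0 on the
client and against /dev/md7 on the server. It uses nothing NBD-specific,
only the generic block layer ioctls BLKGETSIZE64 (size in bytes) and
BLKGETSIZE (size in 512-byte sectors), so it shows whether the kernel
itself already holds the too-small size or whether only blockdev reports
it that way:

/* blksz.c: print the size the kernel reports for a block device,
 * via both BLKGETSIZE64 (bytes) and BLKGETSIZE (512-byte sectors).
 * Untested sketch; build with: gcc -Wall -o blksz blksz.c
 * Usage: ./blksz /dev/nbd0
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	unsigned long long bytes = 0;
	unsigned long sectors = 0;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block device>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (ioctl(fd, BLKGETSIZE64, &bytes) < 0)
		perror("BLKGETSIZE64");
	if (ioctl(fd, BLKGETSIZE, &sectors) < 0)
		perror("BLKGETSIZE");

	printf("%s: %llu bytes, %lu sectors of 512 bytes\n",
	       argv[1], bytes, sectors);

	close(fd);
	return 0;
}

If /dev/nbd0 already shows the small value here, the size was set up
that way during the NBD negotiation itself, not misreported by blockdev.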

Another question: I have tried to write large files to /dev/nbd0 and the 
kernel returns:

nbd0: rw=1, want=2877616, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877624, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877632, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877640, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877648, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877656, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877664, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877672, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877680, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877688, limit=2860792
attempt to access beyond end of device
nbd0: rw=1, want=2877696, limit=2860792
attempt to access beyond end of device
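
(These numbers are consistent with the size problem above: the limit of
2860792 sectors times 512 bytes is exactly 1464725504 bytes, i.e. the
size blockdev reports for /dev/nbd0, which is also 1430396 KB (the
negotiated size) and 357599 blocks of 4096 bytes. That value is in turn
the nbd-server size argument, 1464725760, read as a byte count and
rounded down to a multiple of the 4096-byte block size instead of being
taken as a KB count, hence the factor of 1024. The failing writes are
simply those that land beyond this too-small limit, so this looks like
the same issue rather than a second one, although that is only my
reading of the numbers.)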

	I suspect a bug, but I haven't found any fix or workaround...

	Regards,

	JKB
