Message-ID: <4A707578.3010901@steeleye.com>
Date:	Wed, 29 Jul 2009 12:14:48 -0400
From:	Paul Clements <paul.clements@...eleye.com>
To:	linux-raid@...r.kernel.org, Neil Brown <neilb@...e.de>,
	kernel list <linux-kernel@...r.kernel.org>
CC:	ian.campbell@...rix.com
Subject: [BUG] raid1 behind writes alter bio structure illegally

I've run into this bug on a 2.6.18 kernel, but I think the fix is still 
applicable to the latest kernels (even though the symptoms would be 
slightly different).

Perhaps someone who knows the block and/or SCSI layers well can comment 
on the legality of attaching new pages to a bio without fixing up the 
internal bio counters (details below)?

Thanks,
Paul

Environment:
-----------

Citrix XenServer 5.5 (2.6.18 Red Hat-derived kernel)

LVM over raid1 over SCSI/nbd

Description:
-----------

The problem is due to the behind-write code in raid1. It turns out the
code is doing something a little non-kosher with the bios and the pages
associated with them. This causes (at least) the SCSI layer to get upset
and fail the write requests.

Basically, when we do behind writes in raid1, we have to make a copy of
the original data that is being written, since we're going to complete
the request back up to user level before all the devices are finished
writing the data (e.g., the SCSI disk completes the write and raid1 then
completes the write back to user level, while nbd is still sending data
across the network).
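
For concreteness, the copy happens roughly like this (a simplified
sketch modeled on alloc_behind_pages() in 2.6.18 drivers/md/raid1.c;
error handling and cleanup are omitted and details are paraphrased,
so treat it as an illustration rather than the exact code):

#include <linux/bio.h>
#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Duplicate each segment of the master bio into a freshly
 * allocated page, so the original request can be completed
 * back to user level before the slow device finishes writing. */
static struct page **alloc_behind_pages(struct bio *bio)
{
	int i;
	struct bio_vec *bvec;
	struct page **pages;

	pages = kzalloc(bio->bi_vcnt * sizeof(struct page *), GFP_NOIO);
	if (!pages)
		return NULL;

	bio_for_each_segment(bvec, bio, i) {
		pages[i] = alloc_page(GFP_NOIO);
		if (!pages[i])
			return NULL;	/* the real code frees what it allocated */
		memcpy(kmap(pages[i]) + bvec->bv_offset,
		       kmap(bvec->bv_page) + bvec->bv_offset,
		       bvec->bv_len);
		kunmap(bvec->bv_page);
		kunmap(pages[i]);
	}
	return pages;
}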

The problem is actually a pretty simple one -- these copied pages
(behind_pages in the raid1 code) are allocated at different memory
addresses than the original ones (obviously). This can invalidate the
internal segment counts (nr_phys_segments) that were calculated when
the bio was originally created (or cloned). Specifically, the SCSI
layer notices the values are invalid when it tries to build its
scatter-gather list. The error:

Incorrect number of segments after building list
counted 94, received 64
req nr_sec 992, cur_nr_sec 8

appears in the kernel logs when this happens. (This exact message is no
longer present in the kernel, but SCSI still appears to build its
scatter-gather list in a similar fashion.)
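
To see why the count moves, note that nr_phys_segments is computed by
walking the bio and merging adjacent bvecs whose pages happen to be
physically contiguous. The 2.6.18 block layer's test is roughly the
following (simplified from BIOVEC_PHYS_MERGEABLE in
include/linux/bio.h; a sketch, not the exact macro):

#include <linux/bio.h>
#include <asm/io.h>

/* Two consecutive bvecs collapse into one physical segment only
 * when the first ends exactly where the second begins in physical
 * memory. That is a property of the specific pages involved, so
 * swapping in the copied behind pages changes which bvecs merge,
 * and thereby the true segment count. */
static inline int phys_mergeable(struct bio_vec *v1, struct bio_vec *v2)
{
	return bvec_to_phys(v1) + v1->bv_len == bvec_to_phys(v2);
}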

Solution:
--------

The patch adds a call to blk_recount_segments to fix up the bio
structure, accounting for the new page addresses that have been
attached to it.
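
In outline, the change looks like this (a sketch of the idea only --
see the attached diff for the actual hunk; "mbio" here stands for the
cloned bio that raid1's make_request() queues to a member device):

	/* After the behind pages have been substituted into the
	 * cloned write bio, have the block layer recompute the
	 * segment counts for the new page addresses. */
	blk_recount_segments(bdev_get_queue(mbio->bi_bdev), mbio);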

Attachment: xen-5.5-raid1-blk_recount_segments_fix.diff (text/x-diff, 1176 bytes)
