Message-ID: <47282B1D.8030501@sw.ru>
Date:	Wed, 31 Oct 2007 10:13:33 +0300
From:	Vasily Averin <vvs@...ru>
To:	agk@...hat.com, Andrew Morton <akpm@...ux-foundation.org>,
	dm-devel@...hat.com, linux-kernel@...r.kernel.org,
	devel@...nvz.org, stable@...nel.org,
	Jens Axboe <jens.axboe@...cle.com>,
	Milan Broz <mbroz@...hat.com>
Subject: Re: [dm-devel] Re: dm: bounce_pfn limit added

Alasdair G Kergon wrote:
> So currently we treat bounce_pfn as a property that does not need to be
> propagated through the stack.
> 
> But is that the right approach?
> - Is there a blk_queue_bounce() missing either from dm or elsewhere?
>   (And BTW can the bio_alloc() that lurks within lead to deadlock?)
> 
> Firstly, what's going wrong?
> - What is the dm table you are using?  (output of 'dmsetup table')
>   - Which dm targets and with how many underlying devices?
> - Which underlying driver?
> - Is this direct I/O to the block device from userspace, or via some
> filesystem or what?

On my test node I have 6 GB of memory (1 GB normal zone for i386 kernels),
i2o hardware, and LVM over i2o.

[root@...0 ~]# dmsetup table
vzvg-vz: 0 10289152 linear 80:5 384
vzvg-vzt: 0 263127040 linear 80:5 10289536
[root@...0 ~]# cat /proc/partitions
major minor  #blocks  name

  80     0  143374336 i2o/hda
  80     1     514048 i2o/hda1
  80     2    4096575 i2o/hda2
  80     3    2040255 i2o/hda3
  80     4          1 i2o/hda4
  80     5  136721151 i2o/hda5
 253     0    5144576 dm-0
 253     1  131563520 dm-1

Diotest from the LTP test suite, run with a ~1 MB buffer size against files on
the dm-over-i2o partitions, corrupts the i2o_iop0_msg_inpool slab.

The i2o hardware on this node can only handle requests with up to 38 segments.
Device mapper correctly builds such requests, and as you know it uses
bounce_pfn=BLK_BOUNCE_ANY on its own queue. When such a request is remapped to
the underlying device, device mapper clones the bio and clears the
BIO_SEG_VALID flag.
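To illustrate the mismatch (a minimal sketch only, not the real i2o or dm
initialization code; the function name is made up for the example):

	#include <linux/blkdev.h>

	/*
	 * Sketch: how the two queues end up with different bounce_pfn
	 * values on a 2.6.23-era kernel.
	 */
	static void example_setup(struct request_queue *i2o_q,
				  struct request_queue *dm_q)
	{
		/* The low-level driver cannot reach highmem pages, so its
		 * bounce limit stays at BLK_BOUNCE_HIGH (top of the normal
		 * zone on i386), and it accepts at most 38 segments. */
		blk_queue_bounce_limit(i2o_q, BLK_BOUNCE_HIGH);
		blk_queue_max_hw_segments(i2o_q, 38);

		/* dm builds bios against its own queue, which allows any
		 * page and therefore never bounces. */
		blk_queue_bounce_limit(dm_q, BLK_BOUNCE_ANY);
	}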

The underlying device then calls blk_recalc_rq_segments() to recount the number
of segments. However, blk_recalc_rq_segments() uses the bounce_pfn=BLK_BOUNCE_HIGH
taken from the underlying device's queue. As a result, the recounted number of
segments exceeds the max_hw_segments limit.
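Roughly, the recount treats every page above the queue's bounce_pfn as "high"
and starts a new physical segment for it, so with BLK_BOUNCE_HIGH on the
underlying queue it counts far more segments than dm did. A simplified sketch
of the 2.6.23-era logic (not a verbatim copy of blk_recalc_rq_segments()):

	#include <linux/bio.h>
	#include <linux/blkdev.h>

	static unsigned int sketch_recount_segments(struct request_queue *q,
						    struct request *rq)
	{
		struct bio_vec *bv, *prev = NULL;
		struct bio *bio;
		unsigned int nr_phys_segs = 0;
		int i, high;

		rq_for_each_bio(bio, rq) {
			bio_for_each_segment(bv, bio, i) {
				/* A page above the queue's bounce_pfn may be
				 * bounced later, so it cannot be merged with
				 * the previous segment. */
				high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
				if (high || !prev ||
				    !BIOVEC_PHYS_MERGEABLE(prev, bv))
					nr_phys_segs++;
				prev = bv;
			}
		}
		/* With bounce_pfn = BLK_BOUNCE_HIGH this count can exceed
		 * the 38-segment limit the request was built against. */
		return nr_phys_segs;
	}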

Unfortunately there are no checks for this, and when the i2o driver handles such
an oversized request it writes beyond the memory of the i2o_iop0_msg_inpool slab.
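As the subject line suggests, one way out is to propagate the most restrictive
bounce_pfn up through the dm table so the stacked queue stops advertising
BLK_BOUNCE_ANY. A rough sketch of that idea only (structure and function names
here are illustrative, not the exact patch from this thread):

	#include <linux/blkdev.h>

	/* Assumed for the example: a bounce_pfn field kept in dm's
	 * per-table limits while the table is being built. */
	struct example_limits {
		unsigned long bounce_pfn;
	};

	static void example_combine(struct example_limits *lim,
				    struct block_device *bdev)
	{
		struct request_queue *q = bdev_get_queue(bdev);

		/* Keep the lower (more restrictive) of the two limits. */
		if (!lim->bounce_pfn || q->bounce_pfn < lim->bounce_pfn)
			lim->bounce_pfn = q->bounce_pfn;
	}

	static void example_apply(struct request_queue *dm_q,
				  struct example_limits *lim)
	{
		/* Convert the pfn back to a byte limit so the stacked queue
		 * advertises the real restriction; segment recounts then
		 * agree with the counts used when the request was built. */
		blk_queue_bounce_limit(dm_q,
				       (u64)lim->bounce_pfn << PAGE_SHIFT);
	}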

Thank you,
	Vasily Averin
