Date:	Wed, 3 Oct 2012 05:59:34 +1000
From:	Dave Chinner <dchinner@...hat.com>
To:	Lukáš Czerner <lczerner@...hat.com>
Cc:	Jeff Moyer <jmoyer@...hat.com>, Jens Axboe <axboe@...nel.dk>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] loop: Limit the number of requests in the bio list

On Tue, Oct 02, 2012 at 10:52:05AM +0200, Lukáš Czerner wrote:
> On Mon, 1 Oct 2012, Jeff Moyer wrote:
> > Date: Mon, 01 Oct 2012 12:52:19 -0400
> > From: Jeff Moyer <jmoyer@...hat.com>
> > To: Lukas Czerner <lczerner@...hat.com>
> > Cc: Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
> >     Dave Chinner <dchinner@...hat.com>
> > Subject: Re: [PATCH] loop: Limit the number of requests in the bio list
> > 
> > Lukas Czerner <lczerner@...hat.com> writes:
> > 
> > > Currently there is no limit on the number of requests in the loop bio
> > > list. This can lead to some nasty situations when the caller spawns
> > > tons of bio requests, taking up a huge amount of memory. This is even
> > > more obvious with discard, where blkdev_issue_discard() will submit
> > > all the bios for the range and only wait for them to finish
> > > afterwards. On really big loop devices this can lead to an OOM
> > > situation, as reported by Dave Chinner.
> > >
> > > With this patch we wait in loop_make_request() if the number of bios
> > > in the loop bio list would exceed 'nr_requests'. The waiting process
> > > is woken up again as bios are taken off the list and processed.
> > 
> > I think you might want to do something similar to what is done for
> > request_queues by implementing congestion on and off thresholds.  As
> > Jens writes in this commit (predating the conversion to git):
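
(To make the two ideas above concrete, here is a rough kernel-style
sketch of a throttled loop bio list with separate congestion on/off
thresholds. It is illustration only, not the actual patch: the struct,
the field names and the locking below are invented, and a real driver
would re-check the wait condition under the lock, e.g. with
wait_event_lock_irq().)

#include <linux/bio.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/*
 * Illustrative sketch only -- not the actual patch.  Names are made up.
 */
struct loop_sketch {
	spinlock_t		lock;
	struct bio_list		bio_list;
	unsigned int		bio_count;
	unsigned int		thresh_on;	/* stop accepting at this count    */
	unsigned int		thresh_off;	/* resume accepting below this one */
	wait_queue_head_t	bio_wait;
};

/* Submission side: sleep while the backlog is at or above the "on"
 * threshold, then queue the bio. */
static void sketch_make_request(struct loop_sketch *lo, struct bio *bio)
{
	wait_event(lo->bio_wait, lo->bio_count < lo->thresh_on);

	spin_lock_irq(&lo->lock);
	bio_list_add(&lo->bio_list, bio);
	lo->bio_count++;
	spin_unlock_irq(&lo->lock);
}

/* Completion side: once the backlog drains below the lower "off"
 * threshold, wake any throttled submitters.  The gap between the two
 * thresholds keeps submitters from bouncing on a single watermark. */
static void sketch_bio_done(struct loop_sketch *lo)
{
	bool wake;

	spin_lock_irq(&lo->lock);
	lo->bio_count--;
	wake = lo->bio_count < lo->thresh_off;
	spin_unlock_irq(&lo->lock);

	if (wake)
		wake_up(&lo->bio_wait);
}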
> 
> Right, I've had the same idea. However, my first proof-of-concept
> worked quite well without it, and my simple performance testing did
> not show any regression.
> 
> I've basically just run fstrim and blkdiscard on a huge loop device,
> measuring the time to finish, and dd with bs=4k, measuring throughput.
> None of those showed any performance regression. I chose them for
> being quite simple and for supposedly issuing quite a lot of bios.
> Any better recommendations for testing this?
> 
> Also, I am still unable to reproduce the problem Dave originally
> experienced, and I was hoping that he could test whether this helps
> or not.
> 
> Dave, could you give it a try please? By creating huge (500T, 1000T,
> 1500T) loop devices on a machine with 2GB of memory I was not able to
> reproduce it. Maybe it's just that the xfs punch hole implementation
> is so damn fast :). Please let me know.

Try a file with a few hundred thousand extents in it (preallocate
them). I found this while testing large block devices on loopback
devices, not with empty files.
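
(For illustration only -- not from this thread -- one hypothetical way
to get a file with a few hundred thousand discrete preallocated extents
is a small userspace program that fallocate()s every other chunk,
leaving holes so the allocations cannot be merged. The file name, chunk
size and extent count below are arbitrary choices; this preallocates
roughly 18GB, and the resulting file can then be attached with losetup
and exercised with fstrim/blkdiscard as in the tests above.)

#define _FILE_OFFSET_BITS 64
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const off_t chunk = 64 * 1024;	/* 64k per preallocated extent */
	const long nr_extents = 300000;	/* "a few hundred thousand"    */
	int fd = open("backing.img", O_CREAT | O_RDWR, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (long i = 0; i < nr_extents; i++) {
		/* Preallocate chunk i, leaving a chunk-sized hole after it
		 * so it stays a separate extent. */
		if (fallocate(fd, 0, (off_t)i * 2 * chunk, chunk) < 0) {
			perror("fallocate");
			close(fd);
			return 1;
		}
	}

	close(fd);
	return 0;
}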

Cheers,

Dave.
-- 
Dave Chinner
dchinner@...hat.com