Date:	Fri, 4 Sep 2009 11:43:05 -0400
From:	Christoph Hellwig <hch@...radead.org>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	Christoph Hellwig <hch@...radead.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	chris.mason@...cle.com, david@...morbit.com, tytso@....edu,
	akpm@...ux-foundation.org, jack@...e.cz
Subject: Re: [PATCH 2/8] writeback: move dirty inodes from super_block to
	backing_dev_info

On Fri, Sep 04, 2009 at 08:53:57AM +0200, Jens Axboe wrote:
> > +	if (wbc->sync_mode == WB_SYNC_ALL)
> > +		bdi_wait_on_work_clear(&work);
> >  }
> 
> That doesn't work; you have to wait for on-stack work. So either we
> just punt and do nothing for WB_SYNC_NONE if the allocation fails, or
> we punt to the stack and do the wait. Since it's a cleaning action and
> the allocation failed, falling back to the stack and waiting seems
> like the most appropriate choice.

True, the wait needs to be unconditional.  Updated version below.
But now that I look at it, I wonder if we should even bother with it.
bdi_start_writeback is only used in WB_SYNC_NONE mode, from
balance_dirty_pages.  So if we're really so short on memory that we
can't allocate the bdi_work, we might as well just throttle and wait
for the flusher thread to do its work.  That would get rid of all the
special cases for the on-stack bdi_work instances.
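
[Editor's note: for readers outside the thread, the lifetime issue with
on-stack work items can be modelled in userspace.  The sketch below is
NOT the kernel patch; the names bdi_work, bdi_start_writeback and
bdi_wait_on_work_clear are borrowed from the thread, and the pthread
plumbing and simulate_alloc_failure flag are purely illustrative.  It
shows why a submitter that falls back to a stack-allocated work item
must wait for the worker to finish with it before returning.]

/*
 * Userspace analogue of the pattern under discussion: queue a work
 * item for an asynchronous worker, preferring a heap allocation; if
 * that fails, fall back to an on-stack instance, in which case the
 * submitter MUST wait for the worker to be done with it before
 * returning, or the worker would touch a dead stack frame.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct bdi_work {
	long nr_pages;
	bool on_stack;		/* worker must not free on-stack items */
	bool done;
	pthread_mutex_t lock;
	pthread_cond_t cond;
};

static void bdi_work_init(struct bdi_work *work, long nr_pages,
			  bool on_stack)
{
	work->nr_pages = nr_pages;
	work->on_stack = on_stack;
	work->done = false;
	pthread_mutex_init(&work->lock, NULL);
	pthread_cond_init(&work->cond, NULL);
}

/* Stand-in for the flusher thread consuming one work item. */
static void *flusher_thread(void *arg)
{
	struct bdi_work *work = arg;
	/*
	 * Cache on_stack before signalling completion: once the
	 * submitter is woken, an on-stack item may vanish under us.
	 */
	bool on_stack = work->on_stack;

	printf("flushing %ld pages\n", work->nr_pages);

	pthread_mutex_lock(&work->lock);
	work->done = true;
	pthread_cond_signal(&work->cond);
	pthread_mutex_unlock(&work->lock);

	if (!on_stack) {
		pthread_mutex_destroy(&work->lock);
		pthread_cond_destroy(&work->cond);
		free(work);
	}
	return NULL;
}

/* Analogue of bdi_wait_on_work_clear(): block until the worker is done. */
static void bdi_wait_on_work_clear(struct bdi_work *work)
{
	pthread_mutex_lock(&work->lock);
	while (!work->done)
		pthread_cond_wait(&work->cond, &work->lock);
	pthread_mutex_unlock(&work->lock);
}

static void bdi_start_writeback(long nr_pages, bool simulate_alloc_failure)
{
	struct bdi_work stack_work;
	struct bdi_work *work =
		simulate_alloc_failure ? NULL : malloc(sizeof(*work));
	pthread_t thr;

	if (work) {
		bdi_work_init(work, nr_pages, false);
	} else {
		/* Allocation failed: use the stack, but then we must wait. */
		work = &stack_work;
		bdi_work_init(work, nr_pages, true);
	}

	pthread_create(&thr, NULL, flusher_thread, work);
	pthread_detach(thr);

	if (work == &stack_work)
		bdi_wait_on_work_clear(work);
}

int main(void)
{
	bdi_start_writeback(1024, false);	/* heap path: fire and forget */
	bdi_start_writeback(512, true);		/* stack path: must wait */
	sleep(1);	/* crude: let the detached heap worker run */
	return 0;
}

[The alternative Christoph floats above would delete the on-stack
branch entirely: on allocation failure in WB_SYNC_NONE mode, just
return and let balance_dirty_pages throttle until the flusher thread
catches up.]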
