[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20150702023706.GK26440@mtj.duckdns.org>
Date: Wed, 1 Jul 2015 22:37:06 -0400
From: Tejun Heo <tj@...nel.org>
To: Jan Kara <jack@...e.cz>
Cc: axboe@...nel.dk, linux-kernel@...r.kernel.org, hch@...radead.org,
hannes@...xchg.org, linux-fsdevel@...r.kernel.org,
vgoyal@...hat.com, lizefan@...wei.com, cgroups@...r.kernel.org,
linux-mm@...ck.org, mhocko@...e.cz, clm@...com,
fengguang.wu@...el.com, david@...morbit.com, gthelen@...gle.com,
khlebnikov@...dex-team.ru
Subject: Re: [PATCH 41/51] writeback: make wakeup_flusher_threads() handle
multiple bdi_writeback's
Hello,
On Wed, Jul 01, 2015 at 10:15:28AM +0200, Jan Kara wrote:
> I was looking at who uses wakeup_flusher_threads(). There are two usecases:
>
> 1) sync() - we want to writeback everything
> 2) We want to relieve memory pressure by cleaning and subsequently
> reclaiming pages.
>
> Neither of these cares about number of pages too much if you write enough.
What's enough tho? Saying "yeah let's try about 1000 pages" is one
thing and "let's try about 1000 pages on each of 100 cgroups" is a
quite different operation. Given the nature of "let's try to write
some", I'd venture to say that writing somewhat less is an a lot
better behavior than possibly trying to write out possibly huge amount
given that the amount of fluctuation such behaviors may cause
system-wide and how non-obvious the reasons for such fluctuations
would be.
> So similarly as we don't split the passed nr_pages argument among bdis, I
bdi's are bound by actual hardware. wb's aren't. This is a purely
logical construct and there can be a lot of them. Again, trying to
write 1024 pages on each of 100 devices and trying to write 1024 * 100
pages to single device are quite different.
Thanks.
--
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists