Message-ID: <20121015081715.GC29069@dhcp22.suse.cz>
Date: Mon, 15 Oct 2012 10:17:15 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Alex Bligh <alex@...x.org.uk>
Cc: linux-kernel@...r.kernel.org
Subject: Re: Local DoS through write heavy I/O on CFQ & Deadline
On Fri 12-10-12 17:29:50, Alex Bligh wrote:
> Michal,
>
> --On 12 October 2012 16:58:39 +0200 Michal Hocko <mhocko@...e.cz> wrote:
>
> >Once the dirty_ratio (resp. dirty_bytes) limit is hit, the process
> >doing the writes gets throttled. If this is not the case then there
> >is a bug in the throttling code.
>
> I believe that is the problem.
>
> >>Isn't the only thing that is going to change that it ends up
> >>triggering the writeback earlier?
> >
> >Set the limit lowe?
>
> I think you mean 'lower'. If I do that, what I think will happen
> is that it will start the write-back earlier,
Yes, this is primarily controlled by dirty_background_{bytes|ratio}.
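As a side note on how these knobs interact: to my understanding, the _bytes and _ratio variants of each pair are mutually exclusive (writing a nonzero value to one zeroes the other), and the absolute _bytes value takes precedence when set. A minimal sketch of that precedence rule — `background_threshold_bytes` and the "dirtyable" figure are illustrative names, not kernel API:

```python
# Sketch of how the effective background-writeback threshold is picked:
# dirty_background_bytes, when nonzero, wins over dirty_background_ratio
# (setting one knob via /proc/sys/vm zeroes its counterpart).

def background_threshold_bytes(dirtyable_bytes: int,
                               background_ratio: int,
                               background_bytes: int) -> int:
    """Dirty-data level (in bytes) at which background writeback
    starts, mirroring the bytes-vs-ratio precedence."""
    if background_bytes:            # absolute limit wins if set
        return background_bytes
    return dirtyable_bytes * background_ratio // 100

# Hypothetical 8 GiB of dirtyable memory, default ratio of 10%:
eight_gib = 8 * 1024**3
print(background_threshold_bytes(eight_gib, 10, 0))         # ratio path
print(background_threshold_bytes(eight_gib, 10, 64 << 20))  # bytes path
```

Note the kernel computes the ratio against *dirtyable* memory (roughly free plus reclaimable pages), not total RAM, so the real threshold is somewhat lower than a naive total-RAM calculation suggests.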
> but the writeback once started will not keep up with the generation of
> data, possibly because the throttling isn't going to work.
This would be good to confirm.
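One way to confirm it would be to watch the Dirty and Writeback counters in /proc/meminfo while the write-heavy workload runs: if throttling works, Dirty should plateau near the configured limit rather than grow without bound. A small sketch (the parser is generic; the sample string stands in for a live read):

```python
# Hedged sketch: parse the /proc/meminfo counters relevant to dirty
# throttling. In a real test you would read open("/proc/meminfo") once
# a second while the writer runs and watch Dirty/Writeback.

def parse_meminfo(text: str) -> dict:
    """Map /proc/meminfo field names to their values in KiB."""
    out = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            out[name] = int(parts[0])
    return out

sample = "Dirty:      123456 kB\nWriteback:       789 kB\n"
info = parse_meminfo(sample)
print(info["Dirty"], info["Writeback"])
```

If Dirty keeps climbing well past the dirty_ratio-derived limit, that would point at a throttling bug; if it plateaus but other processes still stall, the blame shifts toward the filesystem/fsync behaviour discussed below.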
> Note that for instance using ionice to set priority or class to 'idle'
> has no effect. So, to test my hypothesis ...
This has been tested with the original dirty_ratio configuration, right?
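For completeness, what `ionice -c 3` (idle class) actually does is call ioprio_set(2) on the target process. A hedged sketch — the constants are from include/uapi/linux/ioprio.h, but the syscall number assumes x86_64, and the helper names are mine. Note the I/O class only influences the block-layer scheduler; dirty-page throttling happens earlier, in the write path, which may be why ionice appears to have no effect here:

```python
# Sketch (Linux, x86_64 assumed): putting a process into the idle I/O
# class the way `ionice -c 3` does, via the ioprio_set(2) syscall.
import ctypes

IOPRIO_CLASS_IDLE = 3        # "idle" scheduling class
IOPRIO_CLASS_SHIFT = 13      # class lives in the top bits of the value
IOPRIO_WHO_PROCESS = 1       # target is a single pid
NR_ioprio_set = 251          # syscall number on x86_64 (arch-specific!)

def ioprio_value(cls: int, data: int = 0) -> int:
    """Pack class and per-class priority data into one ioprio value."""
    return (cls << IOPRIO_CLASS_SHIFT) | data

def set_idle_ioprio(pid: int = 0) -> int:
    """Put `pid` (0 = calling process) into the idle I/O class."""
    libc = ctypes.CDLL(None, use_errno=True)
    return libc.syscall(NR_ioprio_set, IOPRIO_WHO_PROCESS, pid,
                        ioprio_value(IOPRIO_CLASS_IDLE))

print(ioprio_value(IOPRIO_CLASS_IDLE))   # 24576
```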
> >>Happy to test etc - what would you suggest, dirty_ratio=5,
> >>dirty_background_ratio=2 ?
> >
> >These are measured as a percentage. If, on the other hand, you use
> >dirty_bytes resp. dirty_background_bytes, then you get absolute
> >numbers independent of the amount of memory.
>
> ... what would you suggest I set any of these to in order to test
> (assuming the same box) so that it's 'low enough' that if it still
> hangs, it's a bug, rather than it's simply 'not low enough'. It's
> an 8G box and clearly I'm happy to set either the _ratio or _bytes
> entries.
I would use the _bytes variants as you have better control over the
amount of dirty data that can accumulate. You will need to experiment a
bit to tune this. Maybe somebody with more I/O experience can help you
further.
I think what you see is related to your filesystem as well. Other
processes are probably waiting on fsync, but the amount of dirty data is
so big that it takes a really long time to finish.
--
Michal Hocko
SUSE Labs