Date:	Tue, 24 Apr 2007 11:14:09 +0200
From:	Miklos Szeredi <miklos@...redi.hu>
To:	a.p.zijlstra@...llo.nl
CC:	miklos@...redi.hu, neilb@...e.de, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	dgc@....com, tomoki.sekiyama.qu@...achi.com, nikita@...sterfs.com,
	trond.myklebust@....uio.no, yingchao.zhou@...il.com
Subject: Re: [PATCH 10/10] mm: per device dirty threshold

> > > > This is probably a
> > > >  reasonable thing to do but it doesn't feel like the right place.  I
> > > >  think get_dirty_limits should return the raw threshold, and
> > > >  balance_dirty_pages should do both tests - the bdi-local test and the
> > > >  system-wide test.
> > > 
> > > Ok, that makes sense I guess.
> > 
> > Well, my narrow-minded world view says it's not such a good idea,
> > because it would again introduce the deadlock scenario we're trying
> > to avoid.
> 
> I was only referring to the placement of the clipping; and exactly where
> that happens does not affect the deadlock.

OK.

> > In a sense allowing a queue to go over the global limit just a little
> > bit is a good thing.  Actually the very original code does that: if
> > writeback was started for "write_chunk" number of pages, then we allow
> > "ratelimit" (8) _new_ pages to be dirtied, effectively ignoring the
> > global limit.
> 
> It might be time to get rid of that rate-limiting.
> balance_dirty_pages()'s fast path is not nearly as heavy as it used to
> be. All these fancy counter systems have removed quite a bit of
> iteration from there.

Hmm.  The rate limiting probably makes lots of sense for
dirty_exceeded==0, when ratelimit can be a nice large value.

For dirty_exceeded==1 it may make sense to disable ratelimiting.
OTOH, having a granularity of 8 pages probably doesn't matter, because
the granularity of the percpu counter is usually larger (except on UP).
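
For reference, the gate being discussed looks roughly like this (a
simplified sketch from memory of that era's mm/page-writeback.c, not
the literal source; the nr_pages_dirtied batching is elided):

  /* Sketch: per-cpu rate limiting of balance_dirty_pages() calls.
   * While under the limit the counter only trips once every
   * ratelimit_pages dirtyings; once dirty_exceeded is set it trips
   * every 8 pages, which is the granularity discussed above. */
  static DEFINE_PER_CPU(unsigned long, ratelimits);

  void balance_dirty_pages_ratelimited(struct address_space *mapping)
  {
          unsigned long ratelimit = ratelimit_pages;
          unsigned long *p;

          if (dirty_exceeded)
                  ratelimit = 8;

          preempt_disable();
          p = &__get_cpu_var(ratelimits);
          if (++*p >= ratelimit) {
                  *p = 0;
                  preempt_enable();
                  balance_dirty_pages(mapping);
                  return;
          }
          preempt_enable();
  }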

> > That's why I've been saying that the current code is so unfair: if
> > there are lots of dirty pages to be written back to a particular
> > device, then balance_dirty_pages() allows the dirty producer to make
> > even more pages dirty, but if there are _no_ dirty pages for a device,
> > and we are over the limit, then that dirty producer is allowed
> > absolutely no new dirty pages until the global counts subside.
> 
> Well, that got fixed on a per-device basis with this patch; it is still
> true for multiple tasks writing to the same device.

Yes, this is the part of this patchset I'm personally interested in ;)

> > I'm still not quite sure what purpose the above "soft" limiting
> > serves.  It seems to just give an advantage to writers that managed
> > to accumulate lots of dirty pages, and can then convert that into
> > even more dirtyings.
> 
> The queues only limit the actual in-flight writeback pages;
> balance_dirty_pages() considers all pages that might become writeback as
> well as those that are.
> 
> > Would it make sense to remove this behavior, and ensure that
> > balance_dirty_pages() doesn't return until the per-queue limits have
> > been complied with?
> 
> I don't think that will help; balance_dirty_pages drives the queues.
> That is, it converts pages from mere dirty to writeback.

Yes.  But the current logic says that if you convert "write_chunk"
dirty pages to writeback, you are allowed to dirty "ratelimit" more.

D: number of dirty pages
W: number of writeback pages
L: global limit
C: write_chunk = ratelimit_pages * 1.5
R: ratelimit

If D+W >= L, then R = 8

Let's assume that D == L and W == 0, and that all of the dirty pages
belong to a single device.  Also, for simplicity, let's assume an
infinitely long queue and a slow device.

Then, while converting the dirty pages to writeback, D / C * R new
dirty pages can be created.  So when all the existing dirty pages have
been converted:

  D = L / C * R
  W = L

  D + W = L * (1 + R / C)

So we see that we're now even further above the limit than before the
conversion.  This means that we starve writers to other devices, which
don't have as many dirty pages: until the slow device finishes these
writes, they will not get to do anything.
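
To put rough numbers on it (hypothetical values: ratelimit_pages =
1024, so C = 1536 and R = 8), a trivial userspace replay of the
arithmetic above:

  /* Userspace C; just replays the derivation with example numbers. */
  #include <stdio.h>

  int main(void)
  {
          double L = 100000.0;     /* global dirty limit, in pages */
          double C = 1024.0 * 1.5; /* write_chunk */
          double R = 8.0;          /* ratelimit while over the limit */
          double D = L / C * R;    /* pages re-dirtied during the pass */
          double W = L;            /* all original pages now writeback */

          printf("D = %.0f  W = %.0f  D+W = %.4f * L\n",
                 D, W, (D + W) / L); /* prints D+W = 1.0052 * L */
          return 0;
  }

The relative excess is small, but it stays above the limit until the
slow device drains its queue.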

Your patch helps here: if the other writers have an empty queue and no
dirty pages, they will be allowed to slowly start writing.  But they
will not gain their full share until the slow dirty-hog goes below the
global limit, which may take some time.

So I think the logical thing to do is: if the dirty-hog is over its
queue limit, don't let it dirty any more until its dirty+writeback
counts go below the limit.  That allows other devices to gain their
share of dirty pages more quickly.
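
In pseudo-kernel C, the proposed rule would be something like the
below (hypothetical helper names, not the actual patch; the per-bdi
counters are the ones this patchset introduces):

  /* Sketch: throttle the dirtier until its own device's
   * dirty + writeback drop below that device's share, instead of
   * crediting it with new dirtyings for starting writeback. */
  static void balance_dirty_pages(struct address_space *mapping)
  {
          struct backing_dev_info *bdi = mapping->backing_dev_info;

          for (;;) {
                  long nr = bdi_stat(bdi, BDI_RECLAIMABLE) +
                            bdi_stat(bdi, BDI_WRITEBACK);

                  if (nr <= bdi_dirty_limit(bdi)) /* hypothetical */
                          break;  /* back within its share */

                  writeback_bdi_inodes(bdi);      /* hypothetical */
                  congestion_wait(WRITE, HZ/10);
          }
  }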

Miklos