Message-ID: <1274976275.15516.79.camel@localhost>
Date:	Thu, 27 May 2010 19:04:35 +0300
From:	Artem Bityutskiy <dedekind1@...il.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Al Viro <viro@...IV.linux.org.uk>,
	LKML <linux-kernel@...r.kernel.org>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCHv4 17/17] writeback: lessen sync_supers wakeup count

On Fri, 2010-05-28 at 01:44 +1000, Nick Piggin wrote:
> > I think this will change the behavior of 'sync_supers()' too much. ATM,
> > it makes only one SB pass, then sleeps, then another one, then sleeps.
> > And we should probably preserve this behavior. So I'd rather make it:
> > 
> > if (supers_dirty)
> > 	bdi_arm_supers_timer();
> > set_current_state(TASK_INTERRUPTIBLE);
> > schedule();
> > 
> > This way we will keep the behavior closer to the original.
> 
> Well, your previous code had the same issue (i.e. it could loop again
> in sync_supers()). But fair point, perhaps.

I think not: it either armed the timer or went to sleep, but it does not
matter much :-)

> But we cannot do the above, because again the timer might go off
> before we set current state. We'd lose the wakeup and never wake
> up again.
> 
> Putting it inside set_current_state() should be OK, I suppose.

Oh, right!
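
Just so it is written down, my understanding of the loop we are converging
on is roughly the following (a sketch only, using the supers_dirty flag and
bdi_arm_supers_timer() from earlier in the thread, not the final patch):

	while (!kthread_should_stop()) {
		/*
		 * Mark the task sleeping before testing the flag.
		 * set_current_state() implies a full memory barrier, and a
		 * wakeup from the timer that races with the check below just
		 * puts us back to TASK_RUNNING, so schedule() returns right
		 * away instead of the wakeup being lost.
		 */
		set_current_state(TASK_INTERRUPTIBLE);
		if (supers_dirty)
			bdi_arm_supers_timer();
		schedule();

		sync_supers();
	}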

> > There is spin_lock(&sb_lock) in sync_supers(), so strictly speaking this
> > 'smp_mb()' is not needed if we move supers_dirty = 0 into 'sync_supers()'
> > and add a comment that a memory barrier is required, in case someone
> > modifies the code later?
> > 
> > Or this is not worth it?
> 
> It's a bit tricky. spin_lock only gives an acquire barrier, which
> prevents the CPU from executing instructions inside the critical
> section before acquiring the lock. It still allows earlier stores to
> be deferred, so they become visible to other CPUs only once we are
> inside the critical section. So the load of sb->s_dirty could indeed
> still happen before the store is seen.
> 
> Locks do allow you to avoid thinking about barriers, but *only* when
> all memory accesses to all shared variables are inside the locks
> (or when a section has just a single access, which by definition
> doesn't need ordering with another access).

Oh, OK. I need to read Documentation/memory-barriers.txt carefully.
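
If I understood your point correctly, the concern is about something like
this in sync_supers() (again only a sketch, with the names used in this
thread, so please correct me if I got it wrong):

	struct super_block *sb;

	supers_dirty = 0;
	/*
	 * spin_lock(&sb_lock) below is only an ACQUIRE: it keeps the
	 * s_dirty loads inside the critical section, but it does not
	 * force the store above to become visible to other CPUs before
	 * those loads are performed, hence the explicit smp_mb().
	 */
	smp_mb();
	spin_lock(&sb_lock);
	list_for_each_entry(sb, &super_blocks, s_list) {
		if (sb->s_dirty)
			/* sync this superblock */;
	}
	spin_unlock(&sb_lock);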

> > BTW, do you want me to keep you as the patch author, and add your
> > Signed-off-by and my original commit message?
> 
> I'm not concerned. You contributed more to the idea+implementation,
> so record yourself as author.

OK, and thank you a lot for helping!

-- 
Best Regards,
Artem Bityutskiy (Артём Битюцкий)
