Message-ID: <20121102180606.GA13255@redhat.com>
Date:	Fri, 2 Nov 2012 19:06:06 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Mikulas Patocka <mpatocka@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>,
	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Ananth N Mavinakayanahalli <ananth@...ibm.com>,
	Anton Arapov <anton@...hat.com>, linux-kernel@...r.kernel.org
Subject: [PATCH v2 0/1] percpu_rw_semaphore: reimplement to not block the
	readers unnecessarily

On 11/01, Linus Torvalds wrote:
>
> On Wed, Oct 31, 2012 at 12:41 PM, Oleg Nesterov <oleg@...hat.com> wrote:
> >
> > With this patch down_read/up_read does synchronize_sched() twice and
> > down_read/up_read are still possible during this time; they just use
> > the slow path.
>
> The changelog is wrong (it's the write path, not read path, that does
> the synchronize_sched).

Fixed, thanks,

> >  struct percpu_rw_semaphore {
> > -       unsigned __percpu *counters;
> > -       bool locked;
> > -       struct mutex mtx;
> > +       int __percpu            *fast_read_ctr;
>
> This change is wrong.
>
> You must not make the 'fast_read_ctr' thing be an int. Or at least you
> need to be a hell of a lot more careful about it.
>
> Why?
>
> Because the readers update the counters while possibly moving around
> CPUs, the increment and decrement of the counters may be on different
> CPUs. But that means that when you add all the counters together,
> things can overflow (only the final sum is meaningful). And THAT in
> turn means that you should not use a signed count, for the simple
> reason that signed integers don't have well-behaved overflow behavior
> in C.

Yes, Mikulas has pointed this out too, but I forgot to make it "unsigned".
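
To illustrate with made-up numbers (a minimal userspace sketch, not code
from the patch): a reader can take the lock on one CPU and release it on
another, so the individual slots drift apart and can wrap, but the
unsigned sum still comes out right:

#include <stdio.h>

/* Two "per-CPU" slots. A reader increments on CPU0 and decrements on
 * CPU1, so the slots drift without bound; with a signed type that
 * drift can eventually overflow, which is undefined behavior in C.
 * With unsigned, each slot wraps modulo 2**n and only the sum matters.
 */
int main(void)
{
	unsigned int ctr[2] = { 0, 0 };

	ctr[0] += 1;	/* down_read() runs on CPU0 */
	ctr[1] -= 1;	/* up_read() runs on CPU1, slot wraps to UINT_MAX */

	printf("%u\n", ctr[0] + ctr[1]);	/* 0: no readers left */
	return 0;
}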

> Now, I doubt you'll find an architecture or C compiler where this will
> actually ever make a difference,

Yes. And we have other examples, say, mnt->mnt_pcp->mnt_writers is "int".

> but the fact remains that you
> shouldn't use signed integers for counters like this. You should use
> unsigned, and you should rely on the well-defined modulo-2**n
> semantics.

OK, I changed this.

But please note that clear_fast_ctr() still returns "int", even though it
uses "unsigned" to calculate the result. Since we pass this value to
atomic_add(int i) and it can actually be negative, this looks a bit
better to me, even if the generated code is the same.
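
Roughly like this (a sketch; see the patch for the exact code):

/* Sum up and reset the per-CPU fast counters. The arithmetic is done
 * in "unsigned" for well-defined wraparound, but the balance can be
 * negative (e.g. a reader that entered via the slow path can leave
 * via the fast path), and the caller feeds the result to
 * atomic_add(int i), so returning "int" matches the usage.
 */
static int clear_fast_ctr(struct percpu_rw_semaphore *brw)
{
	unsigned int sum = 0;
	int cpu;

	for_each_possible_cpu(cpu) {
		sum += per_cpu(*brw->fast_read_ctr, cpu);
		per_cpu(*brw->fast_read_ctr, cpu) = 0;
	}

	return sum;
}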

> I'd also like to see a comment somewhere in the source code about the
> whole algorithm and the rules.

Added the comments before down_read and down_write.
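
In short, the reader side looks like this (a simplified sketch, not the
verbatim v2 code):

/* Fast path: no writer pending, so just bump this CPU's counter
 * (update_fast_ctr() checks writer_mutex and does __this_cpu_add()
 * under preempt_disable()). Slow path: a writer is in progress, fall
 * back to the regular rwsem plus a shared atomic counter.
 */
void percpu_down_read(struct percpu_rw_semaphore *brw)
{
	might_sleep();
	if (likely(update_fast_ctr(brw, +1)))
		return;

	down_read(&brw->rw_sem);
	atomic_inc(&brw->slow_read_ctr);
	up_read(&brw->rw_sem);
}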

> Other than that, I guess it looks ok.

Great, please see v2.

I am not sure I addressed Paul's concerns, so I guess I need his ack.

Oleg.
