Message-ID: <20100605174034.GA13506@redhat.com>
Date:	Sat, 5 Jun 2010 19:40:34 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	Henrik Rydberg <rydberg@...math.org>
Cc:	Dmitry Torokhov <dmitry.torokhov@...il.com>,
	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org,
	Jiri Kosina <jkosina@...e.cz>,
	Mika Kuoppala <mika.kuoppala@...ia.com>,
	Benjamin Tissoires <tissoire@...a.fr>,
	Rafi Rubin <rafi@...s.upenn.edu>
Subject: Re: [PATCH 1/4] input: Introduce buflock, a one-to-many circular
	buffer mechanism

Hi Henrik,

On 06/04, Henrik Rydberg wrote:
>
> But additional usage documentation ought to be
> in place, in other words. Noted.

Yes, thanks.

In particular, a small example (even in pseudo-code) in buflock.h would
help the reader quickly understand how this buflock actually works.
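Something along these lines at the top of buflock.h would probably
suffice (pseudo-code only; apart from buflock_write/buflock_read, the
names here are invented for illustration and are not from the patch):

```
/*
 * Single writer, filling a circular buffer:
 *
 *	buflock_write(&writer, buffer, item);
 *
 * Many lockless readers, each holding its own private reader state:
 *
 *	while (reader_lags_behind(&reader, &writer))
 *		item = buflock_read(&reader, &writer, buffer);
 *
 * The writer is never delayed by readers.  A reader must retry if the
 * writer overwrites the item being read, and can see an invalid item
 * if it lags a full buffer behind the writer.
 */
```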

> > Whatever we do, buflock_read() can race with the writer and read
> > the invalid item.
>
> True. However, one could argue this is a highly unlikely case given the
> (current) usage.

Agreed, but then I'd strongly suggest you document this in the header.
Potential users of this API should know its limitations.

> Or, one could remedy it by not wrapping the indexes modulo SIZE.

You mean, change the implementation? Yes.

One more question. As you rightly pointed out, this is similar to seqlocks.
Did you consider simply using them?

IOW,
	struct buflock_writer {
		seqcount_t	lock;
		unsigned int	head;
	};

In this case the implementation is obvious and correct.
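To make that concrete, here is a minimal userspace sketch of the
seqcount-based variant. It is a sketch under assumptions: C11 atomics
stand in for seqcount_t and smp_wmb()/smp_rmb(), and the bw_write()/
bw_read() names and the int payload are invented for illustration, not
taken from the patch.

```c
#include <stdatomic.h>

#define BUF_SIZE 8

struct buflock_writer {
	atomic_uint	sequence;	/* even: idle, odd: write in flight */
	unsigned int	head;		/* next slot; deliberately NOT wrapped */
	int		buf[BUF_SIZE];
};

/* Write side: the classic seqlock pattern, single writer assumed. */
static void bw_write(struct buflock_writer *bw, int item)
{
	atomic_fetch_add_explicit(&bw->sequence, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* like smp_wmb() */
	bw->buf[bw->head++ % BUF_SIZE] = item;
	atomic_thread_fence(memory_order_release);	/* data before 2nd ++ */
	atomic_fetch_add_explicit(&bw->sequence, 1, memory_order_relaxed);
}

/* Read side: restart whenever the writer ran concurrently. */
static int bw_read(struct buflock_writer *bw, unsigned int tail)
{
	unsigned int seq;
	int item;

	do {
		do {
			seq = atomic_load_explicit(&bw->sequence,
						   memory_order_acquire);
		} while (seq & 1);	/* write in flight: spin */
		item = bw->buf[tail % BUF_SIZE];
		atomic_thread_fence(memory_order_acquire); /* like smp_rmb() */
	} while (atomic_load_explicit(&bw->sequence,
				      memory_order_relaxed) != seq);
	return item;
}
```

The head index is left unwrapped, per the remedy mentioned earlier in
the thread, so readers can detect how far behind they are by comparing
their tail against head.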

Afaics, compared to the current implementation it has only one drawback:
the reader has to restart if it races with any write, while with your
code it only restarts if the writer writes to the very item we are
trying to read.

> Regarding the barriers used in the code, would it be possible to get a picture
> of exactly how bad those operations are for performance?

Oh, sorry, I don't know, and this obviously differs from arch to arch.
I never knew how these barriers actually work in hardware; I only have
foggy ideas about the "side effects" they have ;)

And I agree with Dmitry, the last smp_Xmb() in buflock_write/read looks
unneeded. Neither helper cares about the subsequent LOADs/STOREs.

write_seqcount_begin() ends with a wmb, yes. But that is because it
does care: we are going to modify data under this write lock, and the
results of those subsequent STOREs must not be visible to a reader
before it sees the result of ++sequence.

> Is it true that a
> simple spinlock might be faster on average, for instance?

Maybe. But without spinlocks the writer can never be delayed by a
reader. I guess that was your motivation.

Oleg.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
