Message-ID: <4C0A989D.5040001@bitmath.org>
Date:	Sat, 05 Jun 2010 20:34:05 +0200
From:	Henrik Rydberg <rydberg@...math.org>
To:	Oleg Nesterov <oleg@...hat.com>
CC:	Dmitry Torokhov <dmitry.torokhov@...il.com>,
	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org,
	Jiri Kosina <jkosina@...e.cz>,
	Mika Kuoppala <mika.kuoppala@...ia.com>,
	Benjamin Tissoires <tissoire@...a.fr>,
	Rafi Rubin <rafi@...s.upenn.edu>
Subject: Re: [PATCH 1/4] input: Introduce buflock, a one-to-many circular
 buffer mechanism

Hi Oleg,

thanks for having another look at this.

[...]
>>> Whatever we do, buflock_read() can race with the writer and read
>>> the invalid item.
>> True. However, one could argue this is a highly unlikely case given the
>> (current) usage.
> 
> Agreed, but then I'd strongly suggest you document this in the header.
> Potential users of this API should know its limitations.
> 
>> Or, one could remedy it by not wrapping the indexes modulo SIZE.
> 
> You mean, change the implementation? Yes.

I feel this is the only option now.
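
For concreteness, the non-wrapping variant could look something like the
sketch below (not the actual buflock code; names and details are made up,
and it assumes the usual kernel primitives -- smp_wmb/smp_rmb, ACCESS_ONCE --
plus struct input_event from <linux/input.h>). The head and each reader's
tail run freely as unsigned integers and are masked only when indexing, so
an overrun shows up as a distance larger than the buffer size instead of
being hidden by the wrap:

	#define BUF_SIZE	64		/* power of two */

	struct ringbuf {
		unsigned int head;		/* written only by the writer */
		struct input_event buf[BUF_SIZE];
	};

	/* writer side: single writer assumed */
	static inline void ring_write(struct ringbuf *r,
				      const struct input_event *ev)
	{
		r->buf[r->head & (BUF_SIZE - 1)] = *ev;
		smp_wmb();	/* make the item visible before the new head */
		r->head++;
	}

	/* reader side: each reader keeps its own free-running tail */
	static inline int ring_read(const struct ringbuf *r,
				    unsigned int *tail,
				    struct input_event *ev)
	{
		unsigned int head;

		do {
			head = ACCESS_ONCE(r->head);
			if (head == *tail)
				return 0;	/* nothing new */
			if (head - *tail >= BUF_SIZE)
				/* overrun: drop to the oldest safe item */
				*tail = head - (BUF_SIZE - 1);
			smp_rmb();	/* order the head load before the data */
			*ev = r->buf[*tail & (BUF_SIZE - 1)];
			smp_rmb();	/* order the data before the re-check */
			/* if the writer came within one lap of us while we
			   copied, the copy may be torn -- retry */
		} while (ACCESS_ONCE(r->head) - *tail >= BUF_SIZE);

		(*tail)++;
		return 1;
	}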

> One more question. As you rightly pointed out, this is similar to seqlocks.
> Did you consider the option of simply using them?
> 
> IOW,
> 	struct buflock_writer {
> 		seqcount_t	lock;
> 		unsigned int	head;
> 	};
> 
> In this case the implementation is obvious and correct.
> 
> Afaics, compared to the current implementation it has only one drawback:
> the reader has to restart if it races with any write, while with your
> code it only restarts if the writer writes to the item we are trying
> to read.

Yes, I did consider it, but it is suboptimal. :-)

We fixed the immediate problem in another (worse but simpler) way, so this
implementation is now being pursued more out of academic interest.
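
Just so we mean the same thing, the seqcount variant would presumably look
roughly like this (a sketch only, using the stock <linux/seqlock.h> helpers;
the parameters are made up):

	struct buflock_writer {
		seqcount_t	lock;
		unsigned int	head;
	};

	static inline void buflock_write(struct buflock_writer *bw,
					 struct input_event *buf,
					 unsigned int size,
					 const struct input_event *ev)
	{
		write_seqcount_begin(&bw->lock);
		buf[bw->head] = *ev;
		bw->head = (bw->head + 1) & (size - 1);
		write_seqcount_end(&bw->lock);
	}

	static inline void buflock_read(struct buflock_writer *bw,
					const struct input_event *buf,
					unsigned int index,
					struct input_event *ev)
	{
		unsigned int seq;

		do {
			seq = read_seqcount_begin(&bw->lock);
			*ev = buf[index];
		} while (read_seqcount_retry(&bw->lock, seq));
	}

which makes the drawback you mention explicit: the reader retries on any
concurrent write, not just a write to the slot it is copying.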

>> Regarding the barriers used in the code, would it be possible to get a picture
>> of exactly how bad those operations are for performance?
> 
> Oh, sorry, I don't know, and this obviously differs depending on the arch.
> I never knew how these barriers actually work in hardware, I just have
> foggy ideas about the "side effects" they have ;)
> 
> And I agree with Dmitry, the last smp_Xmb() in buflock_write/read looks
> unneeded. Neither helper cares about the subsequent LOADs/STOREs.
> 
> write_seqcount_begin() has the "final" wmb, yes. But this is because
> it does care. We are going to modify something under this write lock;
> the result of those subsequent STOREs shouldn't be visible to the reader
> before it sees the result of ++sequence.

In my view, the relation between storing the writer head and synchronizing the
reader head has a similar structure. On the other hand, it might be possible to
remove one of the writer heads altogether, which would make things simpler still.
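
For reference, the barrier placement in the stock helpers makes your point
concrete (this is write_seqcount_begin/end from <linux/seqlock.h>, quoted
from memory, so take it with a grain of salt):

	static inline void write_seqcount_begin(seqcount_t *s)
	{
		s->sequence++;
		smp_wmb();	/* bump visible before the protected stores */
	}

	static inline void write_seqcount_end(seqcount_t *s)
	{
		smp_wmb();	/* protected stores visible before final bump */
		s->sequence++;
	}

Each barrier orders the sequence update against data stores the helper knows
come after or before it; a barrier sitting after the very last store in
buflock_write/read, with nothing left to order, is the one that looks
droppable.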

>> Is it true that a
>> simple spinlock might be faster on average, for instance?
> 
> Maybe. But without spinlocks the writer can never be delayed by a
> reader. I guess this was your motivation.

Yes, one of them. The other was a lock where readers do not wait for each other.

Thanks!

Henrik

