Date:	12 Apr 2015 15:36:54 -0400
From:	"George Spelvin" <linux@...izon.com>
To:	peterx@...radead.org
Cc:	linux@...izon.com, linux-kernel@...r.kernel.org
Subject: Two other ways to do latched seqcounts

Just FYI, there are some additional ways to do this, with a lower chance
of having to retry.

They increase either space or write cost, so they're not panaceas,
but they do reduce the number of read retries to near zero.

A problem with the current version is that there is only one
guaranteed-consistent copy, so a write always forces concurrent
readers to retry.

So if reading takes a while (the "extract the info I want" step is slow)
and writes are frequent, readers might retry a lot.

There are two ways to expand the read window.

One is to keep four copies, not two.  I have a private patch to the
PPS code that does this, since the data being protected is very small:
a few timestamps.  This gives three guaranteed-consistent copies, and a
reader doesn't have to retry unless there have been three writes between
the beginning and end of the sequence.

I.e.
void latch_query(struct latch_struct *latch, struct data *data)
{
	unsigned seq;

	do {
		int idx = (seq = latch->seq) & 3;	/* Most recently published slot */

		smp_rmb();

		*data = latch->data[idx];	/* Or extract the portion of interest */

		smp_rmb();
	} while (latch->seq - seq >= 3u);	/* After 3 more writes our slot may be recycled */
}
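For reference, the matching write side fills the *oldest* slot and then
publishes it by bumping the count.  (A sketch only: the struct layout,
field names, and the barrier stand-in are my assumptions, not the actual
PPS patch.)

```c
/* Assumed layout; the real patch's struct may differ. */
struct data { long a, b; };

struct latch_struct {
	unsigned seq;
	struct data data[4];	/* slot = seq & 3 */
};

/* Stand-in for the kernel's write barrier, for illustration only. */
#define smp_wmb() __atomic_thread_fence(__ATOMIC_RELEASE)

/*
 * While latch->seq == S, slots S, S-1 and S-2 (mod 4) stay stable;
 * only slot S+1 (== S-3) is ever overwritten, which is why readers
 * only retry once the count has advanced by 3 or more.
 */
void latch_modify(struct latch_struct *latch, const struct data *new_data)
{
	unsigned seq = latch->seq;

	latch->data[(seq + 1) & 3] = *new_data;	/* Fill the oldest slot */
	smp_wmb();				/* Data before seq bump */
	latch->seq = seq + 1;			/* Publish the new copy */
}
```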


The second is to update the seqlock twice per write.

While the lsbit of the seqlock is zero, *both* copies are valid, but
the one identified by the second-lsbit is more up to date and preferred.

While the lsbit of the seqlock is one, an update is in progress.
But a reader may still use the old copy.  And even if the update
completes while I'm reading, I know my copy stays consistent until
the *next* update starts.

void latch_query(struct latch_struct *latch, struct data *data)
{
	unsigned seq;

	do {
		int idx = (seq = latch->seq) >> 1 & 1;

		smp_rmb();

		*data = latch->data[idx];	/* Or extract the portion of interest */

		smp_rmb();

		seq &= ~1u;	/* Round down to the update that published our copy.
				 * (AND, not OR: with OR, a reader with no writer
				 * active sees latch->seq - seq == -1u and spins.) */
	} while (latch->seq - seq >= 3u);
}

One note about this version is that, although the writer stores to
latch->seq twice, it doesn't increase the number of smp_wmb()s needed.  It's:

void latch_modify(struct latch_struct *latch, ...)
{
	unsigned seq = latch->seq |= 1;	/* Odd: update in progress */

	smp_wmb();

	modify(latch->data[++seq >> 1 & 1], ...);	/* Rewrite the stale copy */

	smp_wmb();

	latch->seq = seq;	/* Even again: the new copy is preferred */
}
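For completeness, the two halves can be combined into a freestanding,
single-threaded sketch.  (The struct layout and `latch2` naming are my
own, `modify` is specialized to storing one long, the kernel barriers
are replaced by compiler-level fences, and the reader's termination
test is written so that a quiescent reader exits immediately.)

```c
struct data { long val; };

struct latch2 {
	unsigned seq;
	struct data data[2];	/* copy = seq >> 1 & 1 */
};

/* Stand-ins for the kernel barriers, for illustration only. */
#define smp_rmb() __atomic_thread_fence(__ATOMIC_ACQUIRE)
#define smp_wmb() __atomic_thread_fence(__ATOMIC_RELEASE)

void latch2_modify(struct latch2 *latch, long val)
{
	unsigned seq = latch->seq |= 1;	/* Odd: update in progress */

	smp_wmb();
	latch->data[++seq >> 1 & 1].val = val;	/* Rewrite the stale copy */
	smp_wmb();
	latch->seq = seq;	/* Even: both copies valid, bit 1 picks newest */
}

long latch2_query(struct latch2 *latch)
{
	unsigned seq;
	long val;

	do {
		int idx = (seq = latch->seq) >> 1 & 1;

		smp_rmb();
		val = latch->data[idx].val;
		smp_rmb();
		seq &= ~1u;	/* The update that published our copy */
	} while (latch->seq - seq >= 3u);

	return val;
}
```

With a single thread this just alternates between the two copies, but
it shows the seq progression: each write takes the count odd and then
even, and the reader always lands on the newest completed copy.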
