Message-ID: <20220302031738-mutt-send-email-mst@kernel.org>
Date: Wed, 2 Mar 2022 03:30:06 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: Laszlo Ersek <lersek@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
QEMU Developers <qemu-devel@...gnu.org>,
linux-hyperv@...r.kernel.org,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Alexander Graf <graf@...zon.com>,
"Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
adrian@...ity.io,
Daniel P. Berrangé <berrange@...hat.com>,
Dominik Brodowski <linux@...inikbrodowski.net>,
Jann Horn <jannh@...gle.com>,
"Rafael J. Wysocki" <rafael@...nel.org>,
"Brown, Len" <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
Linux PM <linux-pm@...r.kernel.org>,
Colm MacCarthaigh <colmmacc@...zon.com>,
Theodore Ts'o <tytso@....edu>, Arnd Bergmann <arnd@...db.de>
Subject: Re: propagating vmgenid outward and upward
On Tue, Mar 01, 2022 at 07:37:06PM +0100, Jason A. Donenfeld wrote:
> Hi Michael,
>
> On Tue, Mar 1, 2022 at 6:17 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> > Hmm okay, so it's a performance optimization... some batching then? Do
> > you really need to worry about every packet? Every 64 packets not
> > enough? Packets are after all queued at NICs etc, and VM fork can
> > happen after they leave wireguard ...
>
> Unfortunately, yes, this is an "every packet" sort of thing -- if the
> race is to be avoided in a meaningful way. It's really extra bad:
> ChaCha20 and AES-CTR work by xoring a secret stream of bytes with
> plaintext to produce a ciphertext. If you use that same secret stream
> and xor it with a second plaintext and transmit that too, an attacker
> can combine the two different ciphertexts to learn things about the
> original plaintext.
>
> But, anyway, it seems like the race is here to stay given what we have
> _currently_ available with the virtual hardware. That's why I'm
> focused on trying to get something going that's the least bad with
> what we've currently got, which is racy by design. How vitally
> important is it to have something that doesn't race in the far future?
> I don't know, really. It seems plausible that that ACPI notifier
> triggers so early that nothing else really even has a chance, so the
> race concern is purely theoretical. But I haven't tried to measure
> that so I'm not sure.
>
> Jason
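
Right -- and just to spell out for onlookers why keystream reuse is
fatal, here's a toy demo; the keystream function is a dummy standing in
for ChaCha20/AES-CTR output, since the cipher doesn't matter, only that
the same stream gets used twice:

#include <stdio.h>

/* dummy keystream standing in for ChaCha20/AES-CTR output */
static void keystream(unsigned char *ks, size_t n)
{
	for (size_t i = 0; i < n; ++i)
		ks[i] = (unsigned char)(i * 167 + 13);
}

int main(void)
{
	const unsigned char p1[16] = "attack at dawn!";
	const unsigned char p2[16] = "attack at dusk!";
	unsigned char ks[16], c1[16], c2[16];

	keystream(ks, sizeof(ks));
	for (int i = 0; i < 16; ++i) {
		c1[i] = p1[i] ^ ks[i];	/* first packet */
		c2[i] = p2[i] ^ ks[i];	/* second packet, same stream: the bug */
	}
	/* attacker xors the two ciphertexts: the keystream cancels out,
	 * leaving p1 ^ p2 -- nonzero exactly where the plaintexts differ */
	for (int i = 0; i < 16; ++i)
		printf("%02x ", c1[i] ^ c2[i]);
	printf("\n");
	return 0;
}

The xor of the two ciphertexts is exactly p1 ^ p2: zero wherever the
plaintexts agree, a direct leak wherever they differ, and with enough
traffic under one stream that's typically enough to recover the
messages outright.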
I got curious about the actual cost of that per-packet 16-byte read,
and wrote a dumb benchmark:
#include <stdio.h>
#include <assert.h>
#include <limits.h>
#include <string.h>

/* 16-byte generation, the size of the 128-bit vmgenid */
struct lng {
	unsigned long long l1;
	unsigned long long l2;
};

/* word-sized generation counter, for comparison */
struct shrt {
	unsigned long s;
};

struct lng l = { 1, 2 };
struct shrt s = { 3 };

/* per-"packet" check of a volatile word against the cached copy */
static void test1(volatile struct shrt *sp)
{
	if (sp->s != s.s) {
		printf("short mismatch!\n");
		s.s = sp->s;
	}
}

/* per-"packet" check of a volatile 16-byte value against the cached copy */
static void test2(volatile struct lng *lp)
{
	if (lp->l1 != l.l1 || lp->l2 != l.l2) {
		printf("long mismatch!\n");
		l.l1 = lp->l1;
		l.l2 = lp->l2;
	}
}

int main(int argc, char **argv)
{
	volatile struct shrt sv = { 4 };
	volatile struct lng lv = { 5, 6 };

	if (argc > 1) {
		printf("test 1\n");
		for (int i = 0; i < 10000000; ++i)
			test1(&sv);
	} else {
		printf("test 2\n");
		for (int i = 0; i < 10000000; ++i)
			test2(&lv);
	}
	return 0;
}
Results (built with -O2, nothing fancy):
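(For the record that's just, with the file name being my choice:

	gcc -O2 bench.c -o a.out

followed by the perf runs below.)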
[mst@...k ~]$ perf stat -r 1000 ./a.out 1 > /dev/null

 Performance counter stats for './a.out 1' (1000 runs):

              5.12 msec task-clock:u         #    0.945 CPUs utilized       ( +-  0.07% )
                 0      context-switches:u   #    0.000 /sec
                 0      cpu-migrations:u     #    0.000 /sec
                52      page-faults:u        #   10.016 K/sec               ( +-  0.07% )
        20,190,800      cycles:u             #    3.889 GHz                 ( +-  0.01% )
        50,147,371      instructions:u       #    2.48  insn per cycle      ( +-  0.00% )
        20,032,224      branches:u           #    3.858 G/sec               ( +-  0.00% )
             1,604      branch-misses:u      #    0.01% of all branches     ( +-  0.26% )

        0.00541882 +- 0.00000847 seconds time elapsed  ( +-  0.16% )

[mst@...k ~]$ perf stat -r 1000 ./a.out > /dev/null

 Performance counter stats for './a.out' (1000 runs):

              7.75 msec task-clock:u         #    0.947 CPUs utilized       ( +-  0.12% )
                 0      context-switches:u   #    0.000 /sec
                 0      cpu-migrations:u     #    0.000 /sec
                52      page-faults:u        #    6.539 K/sec               ( +-  0.07% )
        30,205,916      cycles:u             #    3.798 GHz                 ( +-  0.01% )
        80,147,373      instructions:u       #    2.65  insn per cycle      ( +-  0.00% )
        30,032,227      branches:u           #    3.776 G/sec               ( +-  0.00% )
             1,621      branch-misses:u      #    0.01% of all branches     ( +-  0.23% )

        0.00817982 +- 0.00000965 seconds time elapsed  ( +-  0.12% )
So yes, the 16-byte compare is about 50% more expensive, which sounds
like a lot, but it's 50% of a very small number, so I don't see why
it's a show-stopper; it's not a factor of 10 that would justify
sacrificing safety by default. Maybe a kernel flag that removes the
per-packet read, replacing it with an interrupt, would do.
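
Roughly the shape I have in mind -- a toy user-space sketch, all the
names are mine and this is not real kernel code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* toy sketch only: hypothetical names, not an actual kernel interface */

struct vmgenid {
	unsigned long long hi, lo;
};

static struct vmgenid cached;            /* generation our keys belong to */
static volatile struct vmgenid hw;       /* stands in for the mapped vmgenid page */
static atomic_bool stale;                /* would be set by the ACPI notifier */
static bool check_every_packet = true;   /* the proposed opt-out flag */

/* per-packet: did the VM fork since we derived our keys? */
static bool generation_changed(void)
{
	if (check_every_packet)
		/* safe default: eat the 16-byte read on every packet */
		return hw.hi != cached.hi || hw.lo != cached.lo;
	/* opt-out: trust the notifier to have flipped 'stale' in time */
	return atomic_exchange(&stale, false);
}

int main(void)
{
	printf("changed: %d\n", generation_changed());
	return 0;
}

The default keeps the race window as small as the hardware allows; the
flag only buys back the 50% on the compare for people who accept the
notifier latency.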
In other words, premature optimization is the root of all evil.
--
MST