Message-ID: <CA+55aFx3WKAL7e3TAs-2okggW6Z+DHANdmcsr+WTDh3N-Mx+Xw@mail.gmail.com>
Date: Wed, 21 Feb 2018 12:40:48 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "Luck, Tony" <tony.luck@...el.com>
Cc: Andi Kleen <ak@...ux.intel.com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Joe Konno <joe.konno@...ux.intel.com>,
"linux-efi@...r.kernel.org" <linux-efi@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jeremy Kerr <jk@...abs.org>,
Matthew Garrett <mjg59@...gle.com>,
Peter Jones <pjones@...hat.com>,
Andy Lutomirski <luto@...nel.org>,
James Bottomley <james.bottomley@...senpartnership.com>
Subject: Re: [PATCH 1/2] fs/efivarfs: restrict inode permissions
On Wed, Feb 21, 2018 at 11:58 AM, Luck, Tony <tony.luck@...el.com> wrote:
>
> How are you envisioning this rate-limiting to be implemented? Are
> you going to fail an EFI call if the rate is too high? I'm thinking that
> we just add a delay to each call so that we can't exceed the limit.
Delaying sounds ok, I guess.
But the "obvious" implementation may be simple:
#include <linux/ratelimit.h>
#include <linux/delay.h>

static void efi_ratelimit(void)
{
        static DEFINE_RATELIMIT_STATE(ratelimit, HZ, 100);

        /* one global budget: past 100 calls/sec, everybody sleeps 10ms */
        if (!__ratelimit(&ratelimit))
                msleep(10);
}
but the above is actually completely broken.
Why? If you have multiple processes, they can each only do a hundred
per second, but globally they can do millions per second by just
having a few thousand threads. They all sleep, but the total rate
across all of them is never actually limited.
So how do you restrict it *globally*?
If you put this all inside a lock like a mutex, you can generate
basically arbitrary delays, and you're back to the DoS scenario. A
fair lock will allow thousands of waiters to line up and make the
delay be arbitrarily long: a few thousand queued waiters at 10ms each
is already tens of seconds of latency for the last one in line.
But maybe I'm missing some really obvious way. You *can* make the
msleep be a spinning wait instead, and rely on the scheduler, I guess.
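A minimal sketch of that spinning variant (the efi_ratelimit_spin()
name is made up here; this just illustrates the idea, it isn't from
any actual patch):

#include <linux/ratelimit.h>
#include <linux/sched.h>

static void efi_ratelimit_spin(void)
{
        static DEFINE_RATELIMIT_STATE(ratelimit, HZ, 100);

        /*
         * Busy-wait instead of sleeping: the waiting time is charged
         * to the caller's own CPU time, so a flood of threads ends up
         * competing with each other in the scheduler instead of all
         * sleeping in parallel for free.  Note it can spin for up to
         * a second per caller before the interval resets.
         */
        while (!__ratelimit(&ratelimit))
                cond_resched();
}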
Or maybe I'm just stupid and am overlooking the obvious case.
Again, making the ratelimiting be per-user makes all of these issues
go away. Then one user cannot delay another one.
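Purely as an illustration of what "per-user" could look like: the
table below, the bucket count and the efi_user_ratelimit() name are
all made up, and a real patch would presumably track uids exactly
instead of hashing them into a small fixed array. But the point is
that a caller only ever sleeps against his own uid's budget:

#include <linux/ratelimit.h>
#include <linux/delay.h>
#include <linux/cred.h>
#include <linux/uidgid.h>
#include <linux/user_namespace.h>

#define EFI_RL_BUCKETS  64

/* one ratelimit state per uid hash bucket */
static struct ratelimit_state efi_user_rl[EFI_RL_BUCKETS];

/* call this once at init time */
static void efi_user_ratelimit_setup(void)
{
        int i;

        for (i = 0; i < EFI_RL_BUCKETS; i++)
                ratelimit_state_init(&efi_user_rl[i], HZ, 100);
}

static void efi_user_ratelimit(void)
{
        uid_t uid = from_kuid(&init_user_ns, current_uid());

        /*
         * Only the caller's own bucket is throttled, so a user who
         * hammers the EFI variables sleeps against his own budget
         * and adds no delay at all for everybody else.
         */
        if (!__ratelimit(&efi_user_rl[uid % EFI_RL_BUCKETS]))
                msleep(10);
}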
Linus