Message-ID: <adfd804c-a3a4-4a07-babb-0a957dafac4b@nvidia.com>
Date: Tue, 7 May 2024 15:49:45 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Axel Rasmussen <axelrasmussen@...gle.com>
CC: David Hildenbrand <david@...hat.com>, Andy Lutomirski <luto@...nel.org>,
 Dave Hansen <dave.hansen@...ux.intel.com>, Peter Zijlstra <peterz@...radead.org>,
 Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
 Borislav Petkov <bp@...en8.de>, <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
 LKML <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>, Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH] x86/fault: speed up uffd-unit-test by 10x: rate-limit
"MCE: Killing" logs
On 5/7/24 11:15 AM, Axel Rasmussen wrote:
> On Tue, May 7, 2024 at 11:11 AM John Hubbard <jhubbard@...dia.com> wrote:
>>
>> On 5/7/24 11:08 AM, Axel Rasmussen wrote:
>>> On Tue, May 7, 2024 at 9:43 AM David Hildenbrand <david@...hat.com> wrote:
>> ...
>>>>> That thread seems to have stalled.
>>>>
>>>> Yes, there was no follow-up.
>>>
>>> Apologies, I had completely forgotten about this. I blame the weekend. :)
>>>
>>> No objections from me to the simple rate limiting proposed here, if
>>> useful you can take:
>>>
>>> Acked-by: Axel Rasmussen <axelrasmussen@...gle.com>
>>>
>>> But, it seems to me the earlier proposal may still be useful.
>>> Specifically, don't print at all for "synthetic" poisons from
>>> UFFDIO_POISON or similar mechanisms. This way, "real" errors aren't
>>> gobbled up by the ratelimit due to spam from "synthetic" errors. If
>>> folks agree, I can *actually* send a patch this time. :)
>>>
>>
>> That sounds good to me. (Should it also rate limit, though? I'm leaning
>> toward yes.)
>
> I believe the proposal so far was, simulated poisons aren't really
> "global" events, and are only relevant to the process itself. So don't
> send them to the global kernel log at all, and instead let the process
> do whatever it wants with them (e.g. it could print something when it
> receives a signal, perhaps with rate limiting).
OK. And seeing as how I'm not (at all) in alignment with Borislav on
where to apply rate limiting, we'd better go with your approach.
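
Just to make that concrete, here is a rough, untested userspace sketch
of the "let the process report it" idea. None of this is taken from the
actual selftests; the handler and the message are purely illustrative:

#define _GNU_SOURCE
#include <signal.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
        /*
         * BUS_MCEERR_AR is the si_code used for a synchronous access to
         * a poisoned page. write(2) is async-signal-safe, so do the
         * reporting here instead of relying on the kernel log.
         */
        if (info->si_code == BUS_MCEERR_AR) {
                static const char msg[] = "test: SIGBUS from poisoned page\n";

                write(STDERR_FILENO, msg, sizeof(msg) - 1);
        }
        _exit(1);
}

int main(void)
{
        struct sigaction sa = {
                .sa_sigaction = sigbus_handler,
                .sa_flags = SA_SIGINFO,
        };

        sigemptyset(&sa.sa_mask);
        sigaction(SIGBUS, &sa, NULL);

        /* ... set up userfaultfd, UFFDIO_POISON a range, then touch it ... */

        return 0;
}

Any rate limiting would then live in the test itself, and the kernel log
would only see real poison events.
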
thanks,
--
John Hubbard
NVIDIA