Message-ID: <92919086-0a7e-520d-0465-b9e3051e965a@i-love.sakura.ne.jp>
Date: Mon, 26 Aug 2019 19:40:27 +0900
From: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Ingo Molnar <mingo@...nel.org>, Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
syzbot <syzbot+8ab2d0f39fb79fe6ca40@...kaller.appspotmail.com>
Subject: Re: [PATCH] /dev/mem: Bail out upon SIGKILL when reading memory.

On 2019/08/26 1:54, Linus Torvalds wrote:
> On Sat, Aug 24, 2019 at 10:50 PM Tetsuo Handa
> <penguin-kernel@...ove.sakura.ne.jp> wrote:
>>
>> @@ -142,7 +144,7 @@ static ssize_t read_mem(struct file *file, char __user *buf,
>> sz = size_inside_page(p, count);
>> cond_resched();
>> err = -EINTR;
>> - if (fatal_signal_pending(current))
>> + if (signal_pending(current))
>> goto failed;
>>
>> err = -EPERM;
>
> So from a "likelihood of breaking" standpoint, I'd really like to make
> sure that the "signal_pending()" checks come at the *end* of the loop.
>
> That way, if somebody is doing a 4-byte read from MMIO, he'll never see -EINTR.
>
> I'm specifically thinking of tools like user-space 'lspci' etc, which
> I wouldn't be surprised could happen.
>
> Also, just in case things break, I do agree with Ingo that this should
> be split up into several patches.

Given that read_mem() returns an error code instead of the number of bytes
already processed, no sane user would try to read that much memory (like 2GB)
in a single call. If a userspace program wanted to read that much, there would
already have been attempts to improve performance; I assume such a program
somehow knows which regions to read and reads only the meaningful pages (which
would not add up to hundreds of MB). Thus, I don't think we want to make
/dev/{mem,kmem} interruptible. Just making them killable, in case an insane
userspace program (like a fuzzer) tries to read/write that much memory, will
be sufficient...
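
To make that concrete, here is a rough sketch (not the actual patch; the
permission checks and the copy itself are elided, and the return convention
shown is just one possible choice) of checking only for a fatal signal at
the end of each read_mem() iteration, so that a small read never sees
-EINTR:

	while (count > 0) {
		sz = size_inside_page(p, count);

		/* ... page_is_allowed() check and copy_to_user() ... */

		buf += sz;
		p += sz;
		count -= sz;
		read += sz;

		/*
		 * Check at the end of the iteration, so that a short
		 * read (e.g. 4 bytes from lspci) completes before this
		 * is ever evaluated.  React only to SIGKILL.
		 */
		if (fatal_signal_pending(current))
			return read ? read : -EINTR;
		cond_resched();
	}
	return read;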