Message-ID: <CAHk-=wjD5MMbAm0OkbKadzerwG3yGr9SvE7mrsCnhmiqRRdxMw@mail.gmail.com>
Date:   Sat, 24 Aug 2019 13:56:54 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
        Arnd Bergmann <arnd@...db.de>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        syzbot <syzbot+8ab2d0f39fb79fe6ca40@...kaller.appspotmail.com>
Subject: Re: [PATCH] /dev/mem: Bail out upon SIGKILL when reading memory.

On Sat, Aug 24, 2019 at 1:22 PM Ingo Molnar <mingo@...nel.org> wrote:
>
> That makes sense: I measured 17 seconds per 100 MB of data, which is
> 0.16 usecs per byte. The instruction used by
> copy_user_enhanced_fast_string() is REP MOVSB - which supposedly goes as
> high as cacheline size accesses - but perhaps those get broken down for
> physical memory that has no device claiming it?

All the "rep string" optimizations are _only_ done for regular memory.

When it hits any IO accesses, it will do the accesses at the specified
size (so "movsb" will do it a byte at a time).

0.16 usec per byte is faster than the traditional ISA 'inb', but not
by a huge factor.

> Interruption is arguably *not* an 'error', so preserving partial reads
> sounds like the higher quality solution, nevertheless one could argue
> that particular read *could* be returned by read_kmem() if progress was
> made.

So if we react to regular signals, not just fatal ones, we definitely
need to honor partial reads.
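The pattern under discussion can be sketched in user space like this (a hypothetical illustration, not the actual read_mem() code; fake_signal_pending and the chunk size stand in for signal_pending(current) and the kernel's copy granularity):

```c
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

/* Stand-in for signal_pending(current). */
static int fake_signal_pending;

#define EINTR_SKETCH 4

/*
 * Copy in chunks, checking for a pending signal between chunks.
 * Once any progress has been made, a signal causes a *partial*
 * byte count to be returned rather than -EINTR, so user space
 * sees the data that was already copied.
 */
static ssize_t read_mem_sketch(char *dst, const char *src, size_t count)
{
	size_t done = 0;
	const size_t chunk = 4;	/* tiny, for illustration only */

	while (done < count) {
		size_t n = count - done < chunk ? count - done : chunk;

		if (fake_signal_pending)
			return done ? (ssize_t)done : -EINTR_SKETCH;

		memcpy(dst + done, src + done, n);
		done += n;

		if (done == chunk * 2)		/* simulate a signal      */
			fake_signal_pending = 1;/* arriving mid-transfer  */
	}
	return done;
}
```

With a 16-byte request and the simulated signal arriving after two chunks, the sketch returns 8: a short read, which the caller can retry or act on after handling the signal.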

> I.e. if for example an iomem area is already mapped by a driver with some
> conflicting cache attribute, xlate_dev_mem_ptr() AFAICS will not
> ioremap_cache() it to cached? IIRC some CPUs would triple fault or
> completely misbehave on certain cache attribute conflicts.

Yeah, I guess we have the machine check possibility with mixed cached cases.

> This check in memremap() might also trigger:
>
>         if (is_ram == REGION_MIXED) {
>                 WARN_ONCE(1, "memremap attempted on mixed range %pa size: %#lx\n",
>                                 &offset, (unsigned long) size);
>                 return NULL;
>         }
>
> So I'd say xlate_dev_mem_ptr() looks messy, but is possibly a bit more
> robust in this regard?

It clearly does work. Slowly, but it works.

At least it works on x86. On some other architectures /dev/mem
definitely cannot possibly handle IO memory correctly, because you
can't even try to just read it as regular memory.

But those architectures are few and likely not relevant anyway (eg early alpha).

             Linus
