Message-ID: <CAG48ez1pnatAB095dnbrn9LbuQe4+ENwh-WEW36pM40ozhpruw@mail.gmail.com>
Date:   Fri, 10 Dec 2021 21:29:35 +0100
From:   Jann Horn <jannh@...gle.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     kernel test robot <oliver.sang@...el.com>,
        Miklos Szeredi <mszeredi@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        kernel test robot <lkp@...el.com>,
        "Huang, Ying" <ying.huang@...el.com>,
        Feng Tang <feng.tang@...el.com>,
        Zhengjun Xing <zhengjun.xing@...ux.intel.com>,
        fengwei.yin@...el.com
Subject: Re: [fget] 054aa8d439: will-it-scale.per_thread_ops -5.7% regression

On Fri, Dec 10, 2021 at 7:34 PM Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> On Thu, Dec 9, 2021 at 9:38 PM kernel test robot <oliver.sang@...el.com> wrote:
> >
> > FYI, we noticed a -5.7% regression of will-it-scale.per_thread_ops due to commit:
> > 054aa8d439b9  ("fget: check that the fd still exists after getting a ref to it")
>
> Well, some downside of the new checks was expected, that's just much
> more than I really like or would have thought.
>
> But it's exactly where you'd expect:
>
> >      27.16 ± 10%      +4.3       31.51 ±  2%  perf-profile.calltrace.cycles-pp.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >      22.91 ± 10%      +4.4       27.34 ±  2%  perf-profile.calltrace.cycles-pp.__fget_files.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64
> >      26.33 ± 10%      +4.4       30.70 ±  2%  perf-profile.children.cycles-pp.__fget_light
> >      22.92 ± 10%      +4.4       27.35 ±  2%  perf-profile.children.cycles-pp.__fget_files
> >      22.70 ± 10%      +4.4       27.11 ±  2%  perf-profile.self.cycles-pp.__fget_files
>
> although there's odd spikes in dTLB-loads etc.
>
> I checked whether it's some unexpected code generation issue, but the
> new "re-check file table after refcount update" really looks very
> cheap when I look at what gcc generates, there's nothing really
> unexpected there.
>
> What did change was:
>
>  (a) some branches go other ways, which might well affect branch
> prediction and just be unlucky. It might be that just marking the
> mismatch case "unlikely()" will help.
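
(For illustration only, not the attached patch: roughly what that
unlikely() would look like on the re-check added in 054aa8d439b9; the
surrounding context is reconstructed from memory, so treat it as a sketch.)

static struct file *__fget_files(struct files_struct *files, unsigned int fd,
                                 fmode_t mask, unsigned int refs)
{
  struct file *file;

  rcu_read_lock();
loop:
  file = files_lookup_fd_rcu(files, fd);
  if (file) {
    if (file->f_mode & mask)
      file = NULL;
    else if (!get_file_rcu_many(file, refs))
      goto loop;
    /* hint that a concurrently changed fd table entry is the rare case */
    else if (unlikely(files_lookup_fd_raw(files, fd) != file)) {
      fput_many(file, refs);
      goto loop;
    }
  }
  rcu_read_unlock();

  return file;
}
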
>
>  (b) the obvious few new instructions (re-load and check file table
> pointer, re-load and check file pointer)
>
>  (c) that __fget_files() function is now no longer a leaf function in
> a simple config case, since it calls "fput_many" in the error case.
>
> And that (c) is worth mentioning simply because it means that the
> function goes from not having any stack frame at all, to having to
> save/restore four registers. So now it has the usual push/pop
> sequences.
>
> It may also be that the test-case actually does a lot of threaded
> open/close/poll, and either actually triggers the re-lookup looping
> case (unlikely) or just sees a lot of cacheline bouncing that now got
> worse due to the re-check of the file pointer.
>
> So this regression looks real, and the issue seems to be that
> __fget_files() really is _that_ important for this do_sys_poll()
> benchmark, and even just the handful of extra instructions end up
> being meaningful.
>
> Oliver - I'm attaching the obvious "unlikely()" oneliner in case it's
> just "gcc thought the retry loop was the common case" issue and bad
> branch prediction.
>
> And it would perhaps be interesting to get an actual instruction-level
> profile of that __fget_files() thing for that benchmark, if that
> pinpoints exactly what is going on and in case that would be easy to
> get on that machine.
>
> Because it might just be truly horrendously bad luck, with the 32-byte
> stack frame meaning that the kernel stack goes one more page down
> (just handwaving from the dTLB number spike), and this all being just
> random bad luck on that particular benchmark.
>
> Of course, the thing about poll() is that for that case, we *don't*
> really need the "re-check the file descriptor" code at all, since the
> resulting fd isn't going to be installed as a new fd, and it doesn't
> matter for the socket garbage collector logic.
>
> So maybe it was a mistake to put that re-check in the generic fdget()
> code - yes, it should be cheap, but it's also some of the most hot
> code in the kernel on some loads.
>
> But if we move it elsewhere, we'd need to come up with some list of
> "these cases need it". Some are obvious: dup, dup2, unix domain file
> passing. It's the non-obvious ones I'd worry about.

The thing is, even though my proof of concept used dup() to put the
file in the fd table again, that's not strictly necessary. Instead of
using dup() for the race, you could also use recvmsg() directly with
the right timing.
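
(Not the original PoC, just a minimal receive-side sketch of SCM_RIGHTS
passing to make the point concrete: recvmsg() installs the passed struct
file into the receiver's fd table the same way dup() put it back in the
PoC. recv_one_fd() is a made-up helper name.)

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Receive one fd passed over a unix domain socket via SCM_RIGHTS. */
int recv_one_fd(int sock)
{
  char dummy;
  char cbuf[CMSG_SPACE(sizeof(int))];
  struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
  struct msghdr msg = {
    .msg_iov = &iov,
    .msg_iovlen = 1,
    .msg_control = cbuf,
    .msg_controllen = sizeof(cbuf),
  };
  struct cmsghdr *cmsg;
  int fd;

  if (recvmsg(sock, &msg, 0) < 0)
    return -1;

  cmsg = CMSG_FIRSTHDR(&msg);
  if (!cmsg || cmsg->cmsg_level != SOL_SOCKET ||
      cmsg->cmsg_type != SCM_RIGHTS)
    return -1;

  /* The kernel already installed the passed file at a new fd during
   * recvmsg(); this just reads the fd number out of the control message. */
  memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
  return fd;
}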

And the recvmsg() path is wired up to a ton of syscalls, including
read(), I believe? So you'd have to special-case read(), readv(),
recv(), recvmsg(), io_submit(), splice(), the io_uring stuff, and so
on. And I think read() is probably one of the hottest syscalls related
to file I/O?


Oh, and I just realized that your patch probably actually also fixes
an issue entirely unrelated to unix sockets. __fdget_pos() does this:

unsigned long __fdget_pos(unsigned int fd)
{
  unsigned long v = __fdget(fd);
  struct file *file = (struct file *)(v & ~3);

  if (file && (file->f_mode & FMODE_ATOMIC_POS)) {
    if (file_count(file) > 1) {
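      /* more than one reference: f_pos may be shared, so take the lock */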
      v |= FDPUT_POS_UNLOCK;
      mutex_lock(&file->f_pos_lock);
    }
  }
  return v;
}

and with the same fget race, I think you could get past that
file_count(file) check?

FMODE_ATOMIC_POS is always set for regular files and directories,
which means that this is what protects getdents()'s access to the file
offset when it calls into f_op->iterate_shared():

SYSCALL_DEFINE3(getdents, unsigned int, fd,
    struct linux_dirent __user *, dirent, unsigned int, count)
{
  struct fd f;
[...]
  f = fdget_pos(fd);
  if (!f.file)
    return -EBADF;

  error = iterate_dir(f.file, &buf.ctx);
  if (error >= 0)
    error = buf.error;
[...]
  fdput_pos(f);
  return error;
}

int iterate_dir(struct file *file, struct dir_context *ctx)
{
  struct inode *inode = file_inode(file);
  bool shared = false;
  int res = -ENOTDIR;
  if (file->f_op->iterate_shared)
    shared = true;
  else if (!file->f_op->iterate)
    goto out;
[...]
  if (shared)
    res = down_read_killable(&inode->i_rwsem);
  else
    [...]
  if (res)
    goto out;
[...]
  if (!IS_DEADDIR(inode)) {
    ctx->pos = file->f_pos;
    if (shared)
      res = file->f_op->iterate_shared(file, ctx);
    else
      res = file->f_op->iterate(file, ctx);
    file->f_pos = ctx->pos;
[...]
  }
  if (shared)
    inode_unlock_shared(inode);
  else
    [...]
out:
  return res;
}

And the ext4 implementation of ->iterate_shared(), which doesn't seem
to be taking any exclusive locks, then also reads and writes
->f_version and relies on having that in sync with ->f_pos. (At least
in the inline storage case it looks that way, I haven't looked at the
rest.) So I think that without your fix, it might also be possible to
get ext4 to read a struct ext4_dir_entry_2 from a misaligned offset? I
don't think that would lead to memory corruption, just to getting
bogus data from getdents() and/or making ext4 think that the
filesystem is corrupted, but it's not exactly great...
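
(To make "misaligned offset" concrete: a userspace toy, assuming the
classic 8-byte ext4_dir_entry_2 header layout from fs/ext4/ext4.h and a
little-endian machine; it just shows how reparsing directory-entry bytes
from an offset inside an entry yields nonsense rec_len/name_len values.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Mirrors the fixed header of struct ext4_dir_entry_2; name bytes follow. */
struct toy_dir_entry_2 {
  uint32_t inode;
  uint16_t rec_len;
  uint8_t  name_len;
  uint8_t  file_type;
};

int main(void)
{
  unsigned char block[64] = { 0 };
  struct toy_dir_entry_2 good = {
    .inode = 12, .rec_len = 16, .name_len = 3, .file_type = 1,
  };
  struct toy_dir_entry_2 bogus;

  /* One well-formed entry for "foo" at offset 0. */
  memcpy(block, &good, sizeof(good));
  memcpy(block + sizeof(good), "foo", 3);

  /* Pretend f_pos ended up inside the entry instead of on a boundary. */
  memcpy(&bogus, block + 3, sizeof(bogus));
  printf("inode=%u rec_len=%u name_len=%u\n",
         (unsigned)bogus.inode, (unsigned)bogus.rec_len,
         (unsigned)bogus.name_len);
  return 0;
}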
