Message-ID: <CADBMgpwixDpGxWxFhMup9YD7DoCc3UPz8jYwFvUPQvhJGdeEUQ@mail.gmail.com>
Date: Fri, 19 Jan 2024 19:29:40 -0800
From: Dylan Hatch <dylanbhatch@...gle.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Kees Cook <keescook@...omium.org>, 
	Frederic Weisbecker <frederic@...nel.org>, "Joel Fernandes (Google)" <joel@...lfernandes.org>, 
	Ard Biesheuvel <ardb@...nel.org>, "Matthew Wilcox (Oracle)" <willy@...radead.org>, 
	Thomas Gleixner <tglx@...utronix.de>, Sebastian Andrzej Siewior <bigeasy@...utronix.de>, 
	"Eric W. Biederman" <ebiederm@...ssion.com>, Vincent Whitchurch <vincent.whitchurch@...s.com>, 
	Dmitry Vyukov <dvyukov@...gle.com>, Luis Chamberlain <mcgrof@...nel.org>, 
	Mike Christie <michael.christie@...cle.com>, David Hildenbrand <david@...hat.com>, 
	Catalin Marinas <catalin.marinas@....com>, Stefan Roesch <shr@...kernel.io>, 
	Joey Gouly <joey.gouly@....com>, Josh Triplett <josh@...htriplett.org>, Helge Deller <deller@....de>, 
	Ondrej Mosnacek <omosnace@...hat.com>, Florent Revest <revest@...omium.org>, 
	Miguel Ojeda <ojeda@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] getrusage: move thread_group_cputime_adjusted()
 outside of lock_task_sighand()

On Fri, Jan 19, 2024 at 6:16 AM Oleg Nesterov <oleg@...hat.com> wrote:
>
> thread_group_cputime() does its own locking, so we can safely move
> thread_group_cputime_adjusted(), which does another for_each_thread loop,
> outside of the ->siglock-protected section.
>
> This is also preparation for the next patch, which changes getrusage() to
> use stats_lock instead of siglock. Currently the deadlock is not possible:
> if getrusage() enters the slow path and takes stats_lock, read_seqretry()
> in thread_group_cputime() must always return 0, so thread_group_cputime()
> will never try to take the same lock. Still, this looks safer and better
> performance-wise.
>
> Signed-off-by: Oleg Nesterov <oleg@...hat.com>
> ---
>  kernel/sys.c | 34 +++++++++++++++++++---------------
>  1 file changed, 19 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/sys.c b/kernel/sys.c
> index e219fcfa112d..70ad06ad852e 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -1785,17 +1785,19 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
>         struct task_struct *t;
>         unsigned long flags;
>         u64 tgutime, tgstime, utime, stime;
> -       unsigned long maxrss = 0;
> +       unsigned long maxrss;
> +       struct mm_struct *mm;
>         struct signal_struct *sig = p->signal;
>
> -       memset((char *)r, 0, sizeof (*r));
> +       memset(r, 0, sizeof(*r));
>         utime = stime = 0;
> +       maxrss = 0;
>
>         if (who == RUSAGE_THREAD) {
>                 task_cputime_adjusted(current, &utime, &stime);
>                 accumulate_thread_rusage(p, r);
>                 maxrss = sig->maxrss;
> -               goto out;
> +               goto out_thread;
>         }
>
>         if (!lock_task_sighand(p, &flags))
> @@ -1819,9 +1821,6 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
>                 fallthrough;
>
>         case RUSAGE_SELF:
> -               thread_group_cputime_adjusted(p, &tgutime, &tgstime);
> -               utime += tgutime;
> -               stime += tgstime;
>                 r->ru_nvcsw += sig->nvcsw;
>                 r->ru_nivcsw += sig->nivcsw;
>                 r->ru_minflt += sig->min_flt;
> @@ -1839,19 +1838,24 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
>         }
>         unlock_task_sighand(p, &flags);
>
> -out:
> -       r->ru_utime = ns_to_kernel_old_timeval(utime);
> -       r->ru_stime = ns_to_kernel_old_timeval(stime);
> +       if (who == RUSAGE_CHILDREN)
> +               goto out_children;
>
> -       if (who != RUSAGE_CHILDREN) {
> -               struct mm_struct *mm = get_task_mm(p);
> +       thread_group_cputime_adjusted(p, &tgutime, &tgstime);
> +       utime += tgutime;
> +       stime += tgstime;
>
> -               if (mm) {
> -                       setmax_mm_hiwater_rss(&maxrss, mm);
> -                       mmput(mm);
> -               }
> +out_thread:
> +       mm = get_task_mm(p);
> +       if (mm) {
> +               setmax_mm_hiwater_rss(&maxrss, mm);
> +               mmput(mm);
>         }
> +
> +out_children:
>         r->ru_maxrss = maxrss * (PAGE_SIZE / 1024); /* convert pages to KBs */
> +       r->ru_utime = ns_to_kernel_old_timeval(utime);
> +       r->ru_stime = ns_to_kernel_old_timeval(stime);
>  }
>
>  SYSCALL_DEFINE2(getrusage, int, who, struct rusage __user *, ru)
> --
> 2.25.1.362.g51ebf55
>
>
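
The deadlock argument in the changelog relies on thread_group_cputime()
reading sig->stats_lock through the seqcount-or-lock retry pattern: the
first pass is lockless, and the lock is only taken if a retry is needed.
A minimal sketch of that pattern (simplified, with the per-thread
accumulation and RCU elided, and a made-up function name; not the exact
kernel code):

static void sum_cputime_sketch(struct signal_struct *sig, u64 *ut, u64 *st)
{
	unsigned int seq, nextseq = 0;	/* 0 => try a lockless pass first */
	unsigned long flags;

	do {
		seq = nextseq;
		flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
		*ut = sig->utime;
		*st = sig->stime;
		/* for_each_thread() accumulation elided */
		nextseq = 1;	/* on retry, actually take the lock */
	} while (need_seqretry(&sig->stats_lock, seq));
	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
}

If the caller already holds stats_lock on the exclusive side, writers are
excluded and the sequence stays stable, so need_seqretry() returns 0 and
the lock is never taken again from inside thread_group_cputime(), matching
the reasoning above.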

Tested-by: Dylan Hatch <dylanbhatch@...gle.com>
