Message-ID: <20180213181352.GA60936@localhost.uwnet.wisc.edu>
Date: Tue, 13 Feb 2018 12:13:52 -0600
From: Dennis Zhou <dennisszhou@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Tejun Heo <tj@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Dmitry Vyukov <dvyukov@...gle.com>,
syzbot <syzbot+adb03f3f0bb57ce3acda@...kaller.appspotmail.com>,
Alexei Starovoitov <ast@...nel.org>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs@...glegroups.com
Subject: Re: lost connection to test machine (4)
On Tue, Feb 13, 2018 at 09:49:27AM -0800, Eric Dumazet wrote:
> On Tue, 2018-02-13 at 11:34 -0600, Dennis Zhou wrote:
> > Hi Eric,
> >
> > On Tue, Feb 13, 2018 at 05:35:26AM -0800, Eric Dumazet wrote:
> > >
> > > Also I would consider using this fix as I had warnings of cpus being
> > > stuck there for more than 50 ms :
> > >
> > >
> > > diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
> > > index 9158e5a81391ced4e268e3d5dd9879c2bc7280ce..6309b01ceb357be01e857e5f899429403836f41f 100644
> > > --- a/mm/percpu-vm.c
> > > +++ b/mm/percpu-vm.c
> > > @@ -92,6 +92,7 @@ static int pcpu_alloc_pages(struct pcpu_chunk *chunk,
> > >  			*pagep = alloc_pages_node(cpu_to_node(cpu), gfp, 0);
> > >  			if (!*pagep)
> > >  				goto err;
> > > +			cond_resched();
> > >  		}
> > >  	}
> > >  	return 0;
> > >
> > >
> >
> > This function gets called from pcpu_populate_chunk() while holding
> > pcpu_alloc_mutex, and that happens in two scenarios: first, when an
> > allocation hits an area without backing pages, and second, when the
> > workqueue item runs to replenish the pool of empty pages. So I don't
> > think this is a good idea.
> >
>
> That _is_ a good idea; we already do this in vmalloc(), and vmalloc()
> can absolutely be called while some mutex(es) are held.
>
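
(For anyone following along: the pattern being referenced looks
roughly like the sketch below. The function name is made up and the
body is simplified from mm/vmalloc.c's page-allocation loop, but the
idea is the same: cond_resched() after each page bounds how long we
run without a scheduling point, and holding a mutex is fine because
cond_resched() only requires non-atomic process context.)

#include <linux/gfp.h>
#include <linux/sched.h>

/* Sketch only: allocate nr_pages order-0 pages, yielding as we go. */
static int area_alloc_pages(struct page **pages, unsigned int nr_pages,
			    int node, gfp_t gfp_mask)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		pages[i] = alloc_pages_node(node, gfp_mask, 0);
		if (!pages[i])
			return -ENOMEM;
		/* Sleeping is allowed here; give other tasks a chance. */
		if (gfpflags_allow_blocking(gfp_mask))
			cond_resched();
	}
	return 0;
}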
>
> > My understanding is that if we're seeing warnings here, we're
> > struggling to find backing pages. I believe adding __GFP_NORETRY on
> > the workqueue path, as Tejun mentioned above, would help with the
> > warnings as well, but not if they are caused by the allocation path.
> >
>
> That is a separate concern.
>
> My patch simply avoids latency spikes when huge percpu allocations are
> happening on systems with, say, 1024 cpus.
>
>
I see. I misunderstood, thinking this was for the same concern.
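
To make the scale concrete (numbers illustrative): with 1024 possible
cpus and, say, a 4-page per-cpu area, pcpu_alloc_pages() does 4096
back-to-back page allocations with no scheduling point, which under
memory pressure can be enough to trip the 50 ms warnings you saw.

And a rough sketch of the __GFP_NORETRY idea for the workqueue path
that Tejun mentioned (the helper name, the __GFP_NOWARN addition, and
a gfp-taking pcpu_populate_chunk() are assumptions of this sketch, not
current mm/percpu.c code): the background replenish is opportunistic,
so it can fail fast and quietly, while the direct allocation path
keeps full GFP_KERNEL semantics.

/* Sketch only: best-effort replenish of empty pages; failure is ok. */
static void pcpu_replenish_empty_pages(struct pcpu_chunk *chunk,
				       int page_start, int page_end)
{
	/* Fail fast under pressure instead of entering reclaim retries. */
	const gfp_t gfp = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;

	/* If this fails, a later balance run will simply try again. */
	(void)pcpu_populate_chunk(chunk, page_start, page_end, gfp);
}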
Thanks,
Dennis