Message-ID: <CAKgT0Uf-4CY=wU079Y87xwzz_UDm8AqGBdt_6OuVtADR8AN0hA@mail.gmail.com>
Date:   Fri, 12 Mar 2021 10:44:08 -0800
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Chuck Lever <chuck.lever@...cle.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Christoph Hellwig <hch@...radead.org>,
        Matthew Wilcox <willy@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux-Net <netdev@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-NFS <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH 5/7] SUNRPC: Refresh rq_pages using a bulk page allocator

On Fri, Mar 12, 2021 at 7:43 AM Mel Gorman <mgorman@...hsingularity.net> wrote:
>
> From: Chuck Lever <chuck.lever@...cle.com>
>
> Reduce the rate at which nfsd threads hammer on the page allocator.
> This improves throughput scalability by enabling the threads to run
> more independently of each other.
>
> Signed-off-by: Chuck Lever <chuck.lever@...cle.com>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
> ---
>  net/sunrpc/svc_xprt.c | 43 +++++++++++++++++++++++++++++++------------
>  1 file changed, 31 insertions(+), 12 deletions(-)
>
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index cfa7e4776d0e..38a8d6283801 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -642,11 +642,12 @@ static void svc_check_conn_limits(struct svc_serv *serv)
>  static int svc_alloc_arg(struct svc_rqst *rqstp)
>  {
>         struct svc_serv *serv = rqstp->rq_server;
> +       unsigned long needed;
>         struct xdr_buf *arg;
> +       struct page *page;
>         int pages;
>         int i;
>
> -       /* now allocate needed pages.  If we get a failure, sleep briefly */
>         pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
>         if (pages > RPCSVC_MAXPAGES) {
>                 pr_warn_once("svc: warning: pages=%u > RPCSVC_MAXPAGES=%lu\n",
> @@ -654,19 +655,28 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
>                 /* use as many pages as possible */
>                 pages = RPCSVC_MAXPAGES;
>         }
> -       for (i = 0; i < pages ; i++)
> -               while (rqstp->rq_pages[i] == NULL) {
> -                       struct page *p = alloc_page(GFP_KERNEL);
> -                       if (!p) {
> -                               set_current_state(TASK_INTERRUPTIBLE);
> -                               if (signalled() || kthread_should_stop()) {
> -                                       set_current_state(TASK_RUNNING);
> -                                       return -EINTR;
> -                               }
> -                               schedule_timeout(msecs_to_jiffies(500));
> +

> +       for (needed = 0, i = 0; i < pages ; i++)
> +               if (!rqstp->rq_pages[i])
> +                       needed++;

I would use opening and closing braces for the for loop since,
technically, the if is a multiline statement. It will make this more
readable.
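
That is, the same counting loop from the patch, just with the braces
added:

	for (needed = 0, i = 0; i < pages; i++) {
		if (!rqstp->rq_pages[i])
			needed++;
	}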

> +       if (needed) {
> +               LIST_HEAD(list);
> +
> +retry:

Rather than open-coding a while loop with a retry label, why not just
make this "while (needed)"? Then all you have to do is break out of
the for loop and you will automatically come back here, instead of
having to jump between two different labels.
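
Roughly this shape (untested sketch, with the unchanged fill loop and
the sleep logic elided into comments):

	while (needed) {
		alloc_pages_bulk(GFP_KERNEL, needed, &list);

		/* ... fill rq_pages[] from the list, breaking out of
		 * the fill loop when the list runs dry ...
		 */

		if (!needed)
			break;

		/* ... the current empty_list sleep/interrupt check,
		 * minus the goto retry ...
		 */
	}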

> +               alloc_pages_bulk(GFP_KERNEL, needed, &list);

Rather than ignoring the return value, would it make sense to
subtract it from needed? Then you would know whether any of the
allocation requests went unfulfilled.
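
For example (assuming the return value is the number of pages placed
on the list, and that the list starts out empty):

	/* assumes alloc_pages_bulk() returns the allocated count */
	needed -= alloc_pages_bulk(GFP_KERNEL, needed, &list);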

> +               for (i = 0; i < pages; i++) {

This is probably optimizing for the exceptional case, but I don't
think you want the "i = 0" here. If you have to stop because the list
is empty, it makes sense to resume where you left off rather than
rescan the array. So you should probably initialize i to 0 before we
check needed.

> +                       if (!rqstp->rq_pages[i]) {

It might be cleaner here to just do a "continue" if rq_pages[i] is populated.
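
i.e. at the top of the loop body:

	if (rqstp->rq_pages[i])
		continue;

That saves an indentation level for everything that follows.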

> +                               page = list_first_entry_or_null(&list,
> +                                                               struct page,
> +                                                               lru);
> +                               if (unlikely(!page))
> +                                       goto empty_list;

I think I preferred the original code, which wasn't jumping away from
the loop here. With the change I suggested above, switching
if (needed) to while (needed), you could just break out of the for
loop and land back in the while loop.
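
Putting the above together, the allocation path could look something
like this (completely untested, and again assuming alloc_pages_bulk()
returns the number of pages it put on the list):

	/* untested sketch, not the patch as posted */
	i = 0;
	while (needed) {
		LIST_HEAD(list);

		needed -= alloc_pages_bulk(GFP_KERNEL, needed, &list);
		for (; i < pages; i++) {
			if (rqstp->rq_pages[i])
				continue;
			page = list_first_entry_or_null(&list,
							struct page,
							lru);
			if (!page)
				break;
			list_del(&page->lru);
			rqstp->rq_pages[i] = page;
		}
		if (!needed)
			break;

		/* partial allocation: sleep briefly, then retry */
		set_current_state(TASK_INTERRUPTIBLE);
		if (signalled() || kthread_should_stop()) {
			set_current_state(TASK_RUNNING);
			return -EINTR;
		}
		schedule_timeout(msecs_to_jiffies(500));
	}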

> +                               list_del(&page->lru);
> +                               rqstp->rq_pages[i] = page;
> +                               needed--;
>                         }
> -                       rqstp->rq_pages[i] = p;
>                 }
> +       }
>         rqstp->rq_page_end = &rqstp->rq_pages[pages];
>         rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
>
> @@ -681,6 +691,15 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
>         arg->len = (pages-1)*PAGE_SIZE;
>         arg->tail[0].iov_len = 0;
>         return 0;
> +
> +empty_list:
> +       set_current_state(TASK_INTERRUPTIBLE);
> +       if (signalled() || kthread_should_stop()) {
> +               set_current_state(TASK_RUNNING);
> +               return -EINTR;
> +       }
> +       schedule_timeout(msecs_to_jiffies(500));
> +       goto retry;
>  }
>
>  static bool
> --
> 2.26.2
>
