Message-ID: <59e803abae0b7441c1440ebd4657e573b1c02dd2.camel@kernel.org>
Date: Sat, 09 Nov 2024 14:26:21 -0500
From: Jeff Layton <jlayton@...nel.org>
To: Olga Kornievskaia <aglo@...ch.edu>
Cc: Chuck Lever <chuck.lever@...cle.com>, Neil Brown <neilb@...e.de>,
 Dai Ngo <Dai.Ngo@...cle.com>, Tom Talpey <tom@...pey.com>,
 Olga Kornievskaia <okorniev@...hat.com>, linux-nfs@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] nfsd: allow for up to 32 callback session slots

On Sat, 2024-11-09 at 13:50 -0500, Olga Kornievskaia wrote:
> On Tue, Nov 5, 2024 at 7:31 PM Jeff Layton <jlayton@...nel.org> wrote:
> > 
> > nfsd currently only uses a single slot in the callback channel, which is
> > proving to be a bottleneck in some cases. Widen the callback channel to
> > a max of 32 slots (subject to the client's target_maxreqs value).
> > 
> > Change the cb_holds_slot boolean to an integer that tracks the current
> > slot number (with -1 meaning "unassigned").  Move the callback slot
> > tracking info into the session. Add a new u32 that acts as a bitmap to
> > track which slots are in use, and a u32 to track the latest callback
> > target_slotid that the client reports. To protect the new fields, add
> > a new per-session spinlock (the se_lock). Fix nfsd41_cb_get_slot to always
> > search for the lowest slotid (using ffs()).
> > 
> > Finally, convert the session->se_cb_seq_nr field into an array of
> > counters and add the necessary handling to ensure that the seqids get
> > reset at the appropriate times.
> > 
> > Signed-off-by: Jeff Layton <jlayton@...nel.org>
> > Signed-off-by: Chuck Lever <chuck.lever@...cle.com>
> > ---
> > v3 has a bug that Olga hit in testing. This version should fix the wait
> > when the slot table is full. Olga, if you're able to test this one, it
> > would be much appreciated.
> > ---
> > Changes in v4:
> > - Fix the wait for a slot in nfsd41_cb_get_slot()
> > - Link to v3: https://lore.kernel.org/r/20241030-bcwide-v3-0-c2df49a26c45@kernel.org
> > 
> > Changes in v3:
> > - add patch to convert se_flags to single se_dead bool
> > - fix off-by-one bug in handling of NFSD_BC_SLOT_TABLE_MAX
> > - don't reject target highest slot value of 0
> > - Link to v2: https://lore.kernel.org/r/20241029-bcwide-v2-1-e9010b6ef55d@kernel.org
> > 
> > Changes in v2:
> > - take cl_lock when fetching fields from session to be encoded
> > - use fls() instead of bespoke highest_unset_index()
> > - rename variables in several functions with more descriptive names
> > - clamp limit of for loop in update_cb_slot_table()
> > - re-add missing rpc_wake_up_queued_task() call
> > - fix slotid check in decode_cb_sequence4resok()
> > - add new per-session spinlock
> > ---
> >  fs/nfsd/nfs4callback.c | 113 ++++++++++++++++++++++++++++++++++++-------------
> >  fs/nfsd/nfs4state.c    |  11 +++--
> >  fs/nfsd/state.h        |  15 ++++---
> >  fs/nfsd/trace.h        |   2 +-
> >  4 files changed, 101 insertions(+), 40 deletions(-)
> > 
> > diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
> > index e38fa834b3d91333acf1425eb14c644e5d5f2601..47a678333907eaa92db305dada503704c34c15b2 100644
> > --- a/fs/nfsd/nfs4callback.c
> > +++ b/fs/nfsd/nfs4callback.c
> > @@ -406,6 +406,19 @@ encode_cb_getattr4args(struct xdr_stream *xdr, struct nfs4_cb_compound_hdr *hdr,
> >         hdr->nops++;
> >  }
> > 
> > +static u32 highest_slotid(struct nfsd4_session *ses)
> > +{
> > +       u32 idx;
> > +
> > +       spin_lock(&ses->se_lock);
> > +       idx = fls(~ses->se_cb_slot_avail);
> > +       if (idx > 0)
> > +               --idx;
> > +       idx = max(idx, ses->se_cb_highest_slot);
> > +       spin_unlock(&ses->se_lock);
> > +       return idx;
> > +}
> > +
> >  /*
> >   * CB_SEQUENCE4args
> >   *
> > @@ -432,15 +445,35 @@ static void encode_cb_sequence4args(struct xdr_stream *xdr,
> >         encode_sessionid4(xdr, session);
> > 
> >         p = xdr_reserve_space(xdr, 4 + 4 + 4 + 4 + 4);
> > -       *p++ = cpu_to_be32(session->se_cb_seq_nr);      /* csa_sequenceid */
> > -       *p++ = xdr_zero;                        /* csa_slotid */
> > -       *p++ = xdr_zero;                        /* csa_highest_slotid */
> > +       *p++ = cpu_to_be32(session->se_cb_seq_nr[cb->cb_held_slot]);    /* csa_sequenceid */
> > +       *p++ = cpu_to_be32(cb->cb_held_slot);           /* csa_slotid */
> > +       *p++ = cpu_to_be32(highest_slotid(session)); /* csa_highest_slotid */
> >         *p++ = xdr_zero;                        /* csa_cachethis */
> >         xdr_encode_empty_array(p);              /* csa_referring_call_lists */
> > 
> >         hdr->nops++;
> >  }
> > 
> > +static void update_cb_slot_table(struct nfsd4_session *ses, u32 target)
> > +{
> > +       /* No need to do anything if nothing changed */
> > +       if (likely(target == READ_ONCE(ses->se_cb_highest_slot)))
> > +               return;
> > +
> > +       spin_lock(&ses->se_lock);
> > +       if (target > ses->se_cb_highest_slot) {
> > +               int i;
> > +
> > +               target = min(target, NFSD_BC_SLOT_TABLE_MAX);
> > +
> > +               /* Growing the slot table. Reset any new sequences to 1 */
> > +               for (i = ses->se_cb_highest_slot + 1; i <= target; ++i)
> > +                       ses->se_cb_seq_nr[i] = 1;
> > +       }
> > +       ses->se_cb_highest_slot = target;
> > +       spin_unlock(&ses->se_lock);
> > +}
> > +
> >  /*
> >   * CB_SEQUENCE4resok
> >   *
> > @@ -468,7 +501,7 @@ static int decode_cb_sequence4resok(struct xdr_stream *xdr,
> >         struct nfsd4_session *session = cb->cb_clp->cl_cb_session;
> >         int status = -ESERVERFAULT;
> >         __be32 *p;
> > -       u32 dummy;
> > +       u32 seqid, slotid, target;
> > 
> >         /*
> >          * If the server returns different values for sessionID, slotID or
> > @@ -484,21 +517,22 @@ static int decode_cb_sequence4resok(struct xdr_stream *xdr,
> >         }
> >         p += XDR_QUADLEN(NFS4_MAX_SESSIONID_LEN);
> > 
> > -       dummy = be32_to_cpup(p++);
> > -       if (dummy != session->se_cb_seq_nr) {
> > +       seqid = be32_to_cpup(p++);
> > +       if (seqid != session->se_cb_seq_nr[cb->cb_held_slot]) {
> >                 dprintk("NFS: %s Invalid sequence number\n", __func__);
> >                 goto out;
> >         }
> > 
> > -       dummy = be32_to_cpup(p++);
> > -       if (dummy != 0) {
> > +       slotid = be32_to_cpup(p++);
> > +       if (slotid != cb->cb_held_slot) {
> >                 dprintk("NFS: %s Invalid slotid\n", __func__);
> >                 goto out;
> >         }
> > 
> > -       /*
> > -        * FIXME: process highest slotid and target highest slotid
> > -        */
> > +       p++; // ignore current highest slot value
> > +
> > +       target = be32_to_cpup(p++);
> > +       update_cb_slot_table(session, target);
> >         status = 0;
> >  out:
> >         cb->cb_seq_status = status;
> > @@ -1203,6 +1237,22 @@ void nfsd4_change_callback(struct nfs4_client *clp, struct nfs4_cb_conn *conn)
> >         spin_unlock(&clp->cl_lock);
> >  }
> > 
> > +static int grab_slot(struct nfsd4_session *ses)
> > +{
> > +       int idx;
> > +
> > +       spin_lock(&ses->se_lock);
> > +       idx = ffs(ses->se_cb_slot_avail) - 1;
> > +       if (idx < 0 || idx > ses->se_cb_highest_slot) {
> > +               spin_unlock(&ses->se_lock);
> > +               return -1;
> > +       }
> > +       /* clear the bit for the slot */
> > +       ses->se_cb_slot_avail &= ~BIT(idx);
> > +       spin_unlock(&ses->se_lock);
> > +       return idx;
> > +}
> > +
> >  /*
> >   * There's currently a single callback channel slot.
> >   * If the slot is available, then mark it busy.  Otherwise, set the
> > @@ -1211,28 +1261,32 @@ void nfsd4_change_callback(struct nfs4_client *clp, struct nfs4_cb_conn *conn)
> >  static bool nfsd41_cb_get_slot(struct nfsd4_callback *cb, struct rpc_task *task)
> >  {
> >         struct nfs4_client *clp = cb->cb_clp;
> > +       struct nfsd4_session *ses = clp->cl_cb_session;
> > 
> > -       if (!cb->cb_holds_slot &&
> > -           test_and_set_bit(0, &clp->cl_cb_slot_busy) != 0) {
> > +       if (cb->cb_held_slot >= 0)
> > +               return true;
> > +       cb->cb_held_slot = grab_slot(ses);
> > +       if (cb->cb_held_slot < 0) {
> >                 rpc_sleep_on(&clp->cl_cb_waitq, task, NULL);
> >                 /* Race breaker */
> > -               if (test_and_set_bit(0, &clp->cl_cb_slot_busy) != 0) {
> > -                       dprintk("%s slot is busy\n", __func__);
> > +               cb->cb_held_slot = grab_slot(ses);
> > +               if (cb->cb_held_slot < 0)
> >                         return false;
> > -               }
> >                 rpc_wake_up_queued_task(&clp->cl_cb_waitq, task);
> >         }
> > -       cb->cb_holds_slot = true;
> >         return true;
> >  }
> > 
> >  static void nfsd41_cb_release_slot(struct nfsd4_callback *cb)
> >  {
> >         struct nfs4_client *clp = cb->cb_clp;
> > +       struct nfsd4_session *ses = clp->cl_cb_session;
> > 
> > -       if (cb->cb_holds_slot) {
> > -               cb->cb_holds_slot = false;
> > -               clear_bit(0, &clp->cl_cb_slot_busy);
> > +       if (cb->cb_held_slot >= 0) {
> > +               spin_lock(&ses->se_lock);
> > +               ses->se_cb_slot_avail |= BIT(cb->cb_held_slot);
> > +               spin_unlock(&ses->se_lock);
> > +               cb->cb_held_slot = -1;
> >                 rpc_wake_up_next(&clp->cl_cb_waitq);
> >         }
> >  }
> > @@ -1249,8 +1303,8 @@ static void nfsd41_destroy_cb(struct nfsd4_callback *cb)
> >  }
> > 
> >  /*
> > - * TODO: cb_sequence should support referring call lists, cachethis, multiple
> > - * slots, and mark callback channel down on communication errors.
> > + * TODO: cb_sequence should support referring call lists, cachethis,
> > + * and mark callback channel down on communication errors.
> >   */
> >  static void nfsd4_cb_prepare(struct rpc_task *task, void *calldata)
> >  {
> > @@ -1292,7 +1346,7 @@ static bool nfsd4_cb_sequence_done(struct rpc_task *task, struct nfsd4_callback
> >                 return true;
> >         }
> > 
> > -       if (!cb->cb_holds_slot)
> > +       if (cb->cb_held_slot < 0)
> >                 goto need_restart;
> > 
> >         /* This is the operation status code for CB_SEQUENCE */
> > @@ -1306,10 +1360,10 @@ static bool nfsd4_cb_sequence_done(struct rpc_task *task, struct nfsd4_callback
> >                  * If CB_SEQUENCE returns an error, then the state of the slot
> >                  * (sequence ID, cached reply) MUST NOT change.
> >                  */
> > -               ++session->se_cb_seq_nr;
> > +               ++session->se_cb_seq_nr[cb->cb_held_slot];
> >                 break;
> >         case -ESERVERFAULT:
> > -               ++session->se_cb_seq_nr;
> > +               ++session->se_cb_seq_nr[cb->cb_held_slot];
> >                 nfsd4_mark_cb_fault(cb->cb_clp);
> >                 ret = false;
> >                 break;
> > @@ -1335,17 +1389,16 @@ static bool nfsd4_cb_sequence_done(struct rpc_task *task, struct nfsd4_callback
> >         case -NFS4ERR_BADSLOT:
> >                 goto retry_nowait;
> >         case -NFS4ERR_SEQ_MISORDERED:
> > -               if (session->se_cb_seq_nr != 1) {
> > -                       session->se_cb_seq_nr = 1;
> > +               if (session->se_cb_seq_nr[cb->cb_held_slot] != 1) {
> > +                       session->se_cb_seq_nr[cb->cb_held_slot] = 1;
> >                         goto retry_nowait;
> >                 }
> >                 break;
> >         default:
> >                 nfsd4_mark_cb_fault(cb->cb_clp);
> >         }
> > -       nfsd41_cb_release_slot(cb);
> > -
> >         trace_nfsd_cb_free_slot(task, cb);
> > +       nfsd41_cb_release_slot(cb);
> > 
> >         if (RPC_SIGNALLED(task))
> >                 goto need_restart;
> > @@ -1565,7 +1618,7 @@ void nfsd4_init_cb(struct nfsd4_callback *cb, struct nfs4_client *clp,
> >         INIT_WORK(&cb->cb_work, nfsd4_run_cb_work);
> >         cb->cb_status = 0;
> >         cb->cb_need_restart = false;
> > -       cb->cb_holds_slot = false;
> > +       cb->cb_held_slot = -1;
> >  }
> > 
> >  /**
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index baf7994131fe1b0a4715174ba943fd2a9882aa12..75557e7cc9265517f51952563beaa4cfe8adcc3f 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -2002,6 +2002,9 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
> >         }
> > 
> >         memcpy(&new->se_fchannel, fattrs, sizeof(struct nfsd4_channel_attrs));
> > +       new->se_cb_slot_avail = ~0U;
> > +       new->se_cb_highest_slot = battrs->maxreqs - 1;
> > +       spin_lock_init(&new->se_lock);
> >         return new;
> >  out_free:
> >         while (i--)
> > @@ -2132,11 +2135,14 @@ static void init_session(struct svc_rqst *rqstp, struct nfsd4_session *new, stru
> > 
> >         INIT_LIST_HEAD(&new->se_conns);
> > 
> > -       new->se_cb_seq_nr = 1;
> > +       atomic_set(&new->se_ref, 0);
> >         new->se_dead = false;
> >         new->se_cb_prog = cses->callback_prog;
> >         new->se_cb_sec = cses->cb_sec;
> > -       atomic_set(&new->se_ref, 0);
> > +
> > +       for (idx = 0; idx < NFSD_BC_SLOT_TABLE_MAX; ++idx)
> > +               new->se_cb_seq_nr[idx] = 1;
> > +
> >         idx = hash_sessionid(&new->se_sessionid);
> >         list_add(&new->se_hash, &nn->sessionid_hashtbl[idx]);
> >         spin_lock(&clp->cl_lock);
> > @@ -3159,7 +3165,6 @@ static struct nfs4_client *create_client(struct xdr_netobj name,
> >         kref_init(&clp->cl_nfsdfs.cl_ref);
> >         nfsd4_init_cb(&clp->cl_cb_null, clp, NULL, NFSPROC4_CLNT_CB_NULL);
> >         clp->cl_time = ktime_get_boottime_seconds();
> > -       clear_bit(0, &clp->cl_cb_slot_busy);
> >         copy_verf(clp, verf);
> >         memcpy(&clp->cl_addr, sa, sizeof(struct sockaddr_storage));
> >         clp->cl_cb_session = NULL;
> > diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> > index d22e4f2c9039324a0953a9e15a3c255fb8ee1a44..848d023cb308f0b69916c4ee34b09075708f0de3 100644
> > --- a/fs/nfsd/state.h
> > +++ b/fs/nfsd/state.h
> > @@ -71,8 +71,8 @@ struct nfsd4_callback {
> >         struct work_struct cb_work;
> >         int cb_seq_status;
> >         int cb_status;
> > +       int cb_held_slot;
> >         bool cb_need_restart;
> > -       bool cb_holds_slot;
> >  };
> > 
> >  struct nfsd4_callback_ops {
> > @@ -307,6 +307,9 @@ struct nfsd4_conn {
> >         unsigned char cn_flags;
> >  };
> > 
> > +/* Highest slot index that nfsd implements in NFSv4.1+ backchannel */
> > +#define NFSD_BC_SLOT_TABLE_MAX (sizeof(u32) * 8 - 1)
> 
> Are there any values that are known not to work? I was experimenting
> with this value and set it to 2 and 4, and the kernel oopsed. I understand
> it's not a configurable value, but it would still be good to know the
> expectations...
>
> [  198.625021] Unable to handle kernel paging request at virtual
> address dfff800020000000
> [  198.625870] KASAN: probably user-memory-access in range
> [0x0000000100000000-0x0000000100000007]
> [  198.626444] Mem abort info:
> [  198.626630]   ESR = 0x0000000096000005
> [  198.626882]   EC = 0x25: DABT (current EL), IL = 32 bits
> [  198.627234]   SET = 0, FnV = 0
> [  198.627441]   EA = 0, S1PTW = 0
> [  198.627627]   FSC = 0x05: level 1 translation fault
> [  198.627859] Data abort info:
> [  198.628000]   ISV = 0, ISS = 0x00000005, ISS2 = 0x00000000
> [  198.628272]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
> [  198.628619]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
> [  198.628967] [dfff800020000000] address between user and kernel address ranges
> [  198.629438] Internal error: Oops: 0000000096000005 [#1] SMP
> [  198.629806] Modules linked in: rpcsec_gss_krb5 nfsv4 dns_resolver
> nfs netfs nfnetlink_queue nfnetlink_log nfnetlink bluetooth cfg80211
> rpcrdma rdma_cm iw_cm ib_cm ib_core nfsd auth_rpcgss nfs_acl lockd
> grace isofs uinput snd_seq_dummy snd_hrtimer vsock_loopback
> vmw_vsock_virtio_transport_common qrtr rfkill vmw_vsock_vmci_transport
> vsock sunrpc vfat fat snd_hda_codec_generic snd_hda_intel
> snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_seq uvcvideo
> videobuf2_vmalloc snd_seq_device videobuf2_memops uvc videobuf2_v4l2
> videodev snd_pcm videobuf2_common mc snd_timer snd vmw_vmci soundcore
> xfs libcrc32c vmwgfx drm_ttm_helper ttm nvme drm_kms_helper
> crct10dif_ce nvme_core ghash_ce sha2_ce sha256_arm64 sha1_ce drm
> nvme_auth sr_mod cdrom e1000e sg fuse
> [  198.633799] CPU: 5 UID: 0 PID: 6081 Comm: nfsd Kdump: loaded Not
> tainted 6.12.0-rc6+ #47
> [  198.634345] Hardware name: VMware, Inc. VMware20,1/VBSA, BIOS
> VMW201.00V.21805430.BA64.2305221830 05/22/2023
> [  198.635014] pstate: 11400005 (nzcV daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
> [  198.635492] pc : nfsd4_sequence+0x5a0/0x1f60 [nfsd]
> [  198.635798] lr : nfsd4_sequence+0x340/0x1f60 [nfsd]
> [  198.636065] sp : ffff8000884977e0
> [  198.636234] x29: ffff800088497910 x28: ffff0000b1b39280 x27: ffff0000ab508128
> [  198.636624] x26: ffff0000b1b39298 x25: ffff0000b1b39290 x24: ffff0000a65e1c64
> [  198.637049] x23: 1fffe000212e6804 x22: ffff000109734024 x21: 1ffff00011092f16
> [  198.637472] x20: ffff00010aed8000 x19: ffff000109734000 x18: 1fffe0002de20c8b
> [  198.637883] x17: 0100000000000000 x16: 1ffff0000fcef234 x15: 1fffe000212e600f
> [  198.638286] x14: ffff80007e779000 x13: ffff80007e7791a0 x12: 0000000000000000
> [  198.638697] x11: ffff0000a65e1c38 x10: ffff00010aedaca0 x9 : 1fffe000215db594
> [  198.639110] x8 : 1fffe00014cbc387 x7 : ffff0000a65e1c03 x6 : ffff0000a65e1c00
> [  198.639541] x5 : ffff0000a65e1c00 x4 : 0000000020000000 x3 : 0000000100000001
> [  198.639962] x2 : ffff000109730060 x1 : 0000000000000003 x0 : dfff800000000000
> [  198.640332] Call trace:
> [  198.640460]  nfsd4_sequence+0x5a0/0x1f60 [nfsd]
> [  198.640715]  nfsd4_proc_compound+0xb94/0x23b0 [nfsd]
> [  198.640997]  nfsd_dispatch+0x22c/0x718 [nfsd]
> [  198.641260]  svc_process_common+0x8e8/0x1968 [sunrpc]
> [  198.641566]  svc_process+0x3d4/0x7e0 [sunrpc]
> [  198.641827]  svc_handle_xprt+0x828/0xe10 [sunrpc]
> [  198.642108]  svc_recv+0x2cc/0x6a8 [sunrpc]
> [  198.642346]  nfsd+0x270/0x400 [nfsd]
> [  198.642562]  kthread+0x288/0x310
> [  198.642745]  ret_from_fork+0x10/0x20
> [  198.642937] Code: f2fbffe0 f9003be4 f94007e2 52800061 (38e06880)
> [  198.643267] SMP: stopping secondary CPUs
> 


Good catch. I think the problem here is that we don't currently cap the
initial value of se_cb_highest_slot at NFSD_BC_SLOT_TABLE_MAX. Does
this patch prevent the panic?

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 3afe56ab9e0a..839be4ba765a 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2011,7 +2011,7 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 
 	memcpy(&new->se_fchannel, fattrs, sizeof(struct nfsd4_channel_attrs));
 	new->se_cb_slot_avail = ~0U;
-	new->se_cb_highest_slot = battrs->maxreqs - 1;
+	new->se_cb_highest_slot = min(battrs->maxreqs - 1, NFSD_BC_SLOT_TABLE_MAX);
 	spin_lock_init(&new->se_lock);
 	return new;
 out_free:

