Date:   Thu, 17 Mar 2022 17:34:11 -0400
From:   Paul Moore <paul@...l-moore.com>
To:     Mickaël Salaün <mic@...ikod.net>
Cc:     James Morris <jmorris@...ei.org>,
        "Serge E . Hallyn" <serge@...lyn.com>,
        Al Viro <viro@...iv.linux.org.uk>,
        Jann Horn <jannh@...gle.com>,
        Kees Cook <keescook@...omium.org>,
        Konstantin Meskhidze <konstantin.meskhidze@...wei.com>,
        Shuah Khan <shuah@...nel.org>, linux-doc@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-security-module@...r.kernel.org,
        Mickaël Salaün <mic@...ux.microsoft.com>
Subject: Re: [PATCH v1 04/11] landlock: Fix same-layer rule unions

On Thu, Mar 17, 2022 at 6:40 AM Mickaël Salaün <mic@...ikod.net> wrote:
> On 17/03/2022 02:26, Paul Moore wrote:
> > On Mon, Feb 21, 2022 at 4:15 PM Mickaël Salaün <mic@...ikod.net> wrote:
> >>
> >> From: Mickaël Salaün <mic@...ux.microsoft.com>
> >>
> >> The original behavior was to check whether the full set of requested
> >> accesses was allowed by at least one rule of every relevant layer.
> >> This didn't take into account requests for multiple accesses and
> >> same-layer rules allowing the union of these accesses in a
> >> complementary way.  As a result, multiple accesses requested on a file
> >> hierarchy matching rules that, together, allowed these accesses, but
> >> without a single rule allowing all of them, were illegitimately
> >> denied.  This case should be rare in practice and can only be
> >> triggered by the path_rename or file_open hook implementations.
> >>
> >> For instance, if, for the same layer, a rule allows execution
> >> beneath /a/b and another rule allows read beneath /a, requesting access
> >> to read and execute at the same time for /a/b should be allowed for this
> >> layer.
> >>
> >> This was inconsistent because the union of same-layer rule accesses
> >> was already allowed when the accesses were requested one at a time.
> >>
> >> This fix changes the way allowed accesses are gathered over a path
> >> walk.  To take all these rule accesses into account, we store in a
> >> matrix, for each handled access, the layers granting that access.  To
> >> avoid heap allocation, we use an array on the stack which is 2*13
> >> bytes.  A following commit bringing the LANDLOCK_ACCESS_FS_REFER
> >> access right will increase this size to 84 bytes (2*14*3) in the case
> >> of link or rename actions.
> >>
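
For anyone trying to picture the matrix, here is a minimal userspace C
sketch of the idea, using the /a/b example above.  This is not the kernel
code: the two-access enum, the single layer and the hard-coded rule list
below are simplified assumptions.

/* Sketch: per-access layer masks, ORing same-layer rules over a path walk. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_ACCESS      2               /* only "read" and "execute" here */
#define ACCESS_READ     (1U << 0)
#define ACCESS_EXECUTE  (1U << 1)
#define LAYER1          (1U << 0)       /* single policy layer in this demo */

typedef uint16_t layer_mask_t;

/* One rule met on the walk: clear its layer bit for every access it allows. */
static void unmask_layers(uint32_t rule_access, layer_mask_t rule_layer,
                          layer_mask_t masks[NUM_ACCESS])
{
        for (unsigned int bit = 0; bit < NUM_ACCESS; bit++) {
                if (rule_access & (1U << bit))
                        masks[bit] &= ~rule_layer;
        }
}

/* Allowed once no requested access is still waiting on a layer. */
static bool is_allowed(uint32_t request, const layer_mask_t masks[NUM_ACCESS])
{
        for (unsigned int bit = 0; bit < NUM_ACCESS; bit++) {
                if ((request & (1U << bit)) && masks[bit])
                        return false;
        }
        return true;
}

int main(void)
{
        const uint32_t request = ACCESS_READ | ACCESS_EXECUTE;
        layer_mask_t masks[NUM_ACCESS] = { LAYER1, LAYER1 };   /* both pending */

        unmask_layers(ACCESS_EXECUTE, LAYER1, masks);   /* rule on /a/b */
        unmask_layers(ACCESS_READ, LAYER1, masks);      /* rule on /a */

        printf("read+execute on /a/b: %s\n",
               is_allowed(request, masks) ? "allowed" : "denied");
        return 0;
}

In the kernel the same structure covers the 13 handled filesystem access
rights with a 16-bit layer mask each, which is where the 2*13 bytes above
come from.
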
> >> Add a new layout1.layer_rule_unions test to check that accesses from
> >> different rules pertaining to the same layer are ORed in a file
> >> hierarchy.  Also test that it is not the case for rules from different
> >> layers.
> >>
> >> Signed-off-by: Mickaël Salaün <mic@...ux.microsoft.com>
> >> Link: https://lore.kernel.org/r/20220221212522.320243-5-mic@digikod.net
> >> ---
> >>   security/landlock/fs.c                     |  77 ++++++++++-----
> >>   security/landlock/ruleset.h                |   2 +
> >>   tools/testing/selftests/landlock/fs_test.c | 107 +++++++++++++++++++++
> >>   3 files changed, 160 insertions(+), 26 deletions(-)
> >>
> >> diff --git a/security/landlock/fs.c b/security/landlock/fs.c
> >> index 0bcb27f2360a..9662f9fb3cd0 100644
> >> --- a/security/landlock/fs.c
> >> +++ b/security/landlock/fs.c
> >> @@ -204,45 +204,66 @@ static inline const struct landlock_rule *find_rule(
> >>          return rule;
> >>   }
> >>
> >> -static inline layer_mask_t unmask_layers(
> >> -               const struct landlock_rule *const rule,
> >> -               const access_mask_t access_request, layer_mask_t layer_mask)
> >> +/*
> >> + * @layer_masks is read and may be updated according to the access request and
> >> + * the matching rule.
> >> + *
> >> + * Returns true if the request is allowed (i.e. relevant layer masks for the
> >> + * request are empty).
> >> + */
> >> +static inline bool unmask_layers(const struct landlock_rule *const rule,
> >> +               const access_mask_t access_request,
> >> +               layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS])
> >>   {
> >>          size_t layer_level;
> >>
> >> +       if (!access_request || !layer_masks)
> >> +               return true;
> >>          if (!rule)
> >> -               return layer_mask;
> >> +               return false;
> >>
> >>          /*
> >>           * An access is granted if, for each policy layer, at least one rule
> >> -        * encountered on the pathwalk grants the requested accesses,
> >> -        * regardless of their position in the layer stack.  We must then check
> >> +        * encountered on the pathwalk grants the requested access,
> >> +        * regardless of its position in the layer stack.  We must then check
> >>           * the remaining layers for each inode, from the first added layer to
> >> -        * the last one.
> >> +        * the last one.  When there are multiple requested accesses, for each
> >> +        * policy layer, the full set of requested accesses may not be granted
> >> +        * by only one rule, but by the union (binary OR) of multiple rules.
> >> +        * E.g. /a/b <execute> + /a <read> = /a/b <execute + read>
> >>           */
> >>          for (layer_level = 0; layer_level < rule->num_layers; layer_level++) {
> >>                  const struct landlock_layer *const layer =
> >>                          &rule->layers[layer_level];
> >>                  const layer_mask_t layer_bit = BIT_ULL(layer->level - 1);
> >> +               const unsigned long access_req = access_request;
> >> +               unsigned long access_bit;
> >> +               bool is_empty;
> >>
> >> -               /* Checks that the layer grants access to the full request. */
> >> -               if ((layer->access & access_request) == access_request) {
> >> -                       layer_mask &= ~layer_bit;
> >> -
> >> -                       if (layer_mask == 0)
> >> -                               return layer_mask;
> >> +               /*
> >> +                * Records in @layer_masks which layer grants access to each
> >> +                * requested access.
> >> +                */
> >> +               is_empty = true;
> >> +               for_each_set_bit(access_bit, &access_req,
> >> +                               ARRAY_SIZE(*layer_masks)) {
> >> +                       if (layer->access & BIT_ULL(access_bit))
> >> +                               (*layer_masks)[access_bit] &= ~layer_bit;
> >> +                       is_empty = is_empty && !(*layer_masks)[access_bit];
> >
> > From what I can see the only reason not to return immediately once
> > @is_empty is true is the need to update @layer_masks.  However, the
> > only caller that I can see (up to patch 4/11) is check_access_path()
> > which thanks to this patch no longer needs to reference @layer_masks
> > after the call to unmask_layers() returns true.  Assuming that to be
> > the case, is there a reason we can't return immediately after finding
> > @is_empty true, or am I missing something?
>
> Because @is_empty is initialized to true, and because each access
> right/bit must be checked by this loop, we cannot return earlier than
> the following if statement. Not returning in this loop also makes this
> helper safer (for potential future use) because @layer_masks will never
> be partially updated, which could lead to an inconsistent state.
> Moreover, finishing this bit-check loop keeps the code simpler and has
> a negligible performance impact.
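
For the archive, a tiny standalone illustration of that point; the values
and simplified types below are hypothetical, not the kernel code.  With
@is_empty accumulated across the whole loop, a pending entry for a later
requested bit correctly flips the result back to false, which a return
right after the first empty entry would miss.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Hypothetical state: execute already granted, read still waiting
         * on layer 1. */
        uint16_t masks[2] = { 0x0000 /* execute */, 0x0001 /* read */ };
        bool is_empty = true;

        for (unsigned int bit = 0; bit < 2; bit++) {
                /* Returning "allowed" here as soon as is_empty is true would
                 * wrongly succeed after bit 0, before the pending "read"
                 * entry has even been looked at. */
                is_empty = is_empty && !masks[bit];
        }
        printf("%s\n", is_empty ? "allowed" : "denied");  /* prints "denied" */
        return 0;
}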

My apologies, I must have spaced out a bit and read the 'is_empty =
true;' initializer as 'is_empty = false;'.

Reviewed-by: Paul Moore <paul@...l-moore.com>

-- 
paul-moore.com
