Date:   Mon, 26 Jul 2021 14:24:43 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Quentin Perret <qperret@...gle.com>
Cc:     james.morse@....com, alexandru.elisei@....com,
        suzuki.poulose@....com, catalin.marinas@....com, will@...nel.org,
        linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        linux-kernel@...r.kernel.org, ardb@...nel.org, qwandor@...gle.com,
        tabba@...gle.com, dbrazdil@...gle.com, kernel-team@...roid.com
Subject: Re: [PATCH v2 04/16] KVM: arm64: Optimize host memory aborts

On Mon, 26 Jul 2021 14:13:06 +0100,
Quentin Perret <qperret@...gle.com> wrote:
> 
> On Monday 26 Jul 2021 at 11:35:10 (+0100), Marc Zyngier wrote:

[...]

> > You could also use a kvm_mem_range for the iteration, and add a helper
> > that checks for the inclusion.
> 
> Something like this (untested)?
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 75273166d2c5..07d228163090 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -234,9 +234,15 @@ static inline int __host_stage2_idmap(u64 start, u64 end,
>                 __ret;                                                  \
>          })
> 
> +static inline bool range_included(struct kvm_mem_range *child,
> +                                 struct kvm_mem_range *parent)
> +{
> +       return parent->start <= child->start && child->end <= parent->end;
> +}
> +
>  static int host_stage2_find_range(u64 addr, struct kvm_mem_range *range)
>  {
> -       u64 granule, start, end;
> +       struct kvm_mem_range cur;
>         kvm_pte_t pte;
>         u32 level;
>         int ret;
> @@ -252,16 +258,15 @@ static int host_stage2_find_range(u64 addr, struct kvm_mem_range *range)
>                 return -EPERM;
> 
>         do {
> -               granule = kvm_granule_size(level);
> -               start = ALIGN_DOWN(addr, granule);
> -               end = start + granule;
> +               u64 granule = kvm_granule_size(level);
> +               cur.start = ALIGN_DOWN(addr, granule);
> +               cur.end = cur.start + granule;
>                 level++;
>         } while ((level < KVM_PGTABLE_MAX_LEVELS) &&
> -                       (!kvm_level_supports_block_mapping(level) ||
> -                        start < range->start || range->end < end));
> +                       !(kvm_level_supports_block_mapping(level) &&
> +                         range_included(&cur, range)));
> 
> -       range->start = start;
> -       range->end = end;
> +       *range = cur;
> 
>         return 0;
>  }
> 

Beautiful.

	M.

-- 
Without deviation from the norm, progress is not possible.
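
For readers skimming the archive, here is a minimal standalone sketch of the loop in the diff above. It is illustrative only, not the actual hyp code: KVM_PGTABLE_MAX_LEVELS, the granule sizes, the block-mapping levels and ALIGN_DOWN() below are assumed values for a 4K-page, 4-level stage-2 configuration, and the starting level is passed in by hand rather than taken from the page-table lookup done earlier in the real function. It only shows how the candidate block around addr shrinks until it both supports a block mapping and fits inside the permitted range.

/*
 * Standalone sketch of the loop discussed above. All kernel helpers are
 * stubbed with assumed values for a 4K-page, 4-level configuration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define KVM_PGTABLE_MAX_LEVELS	4

/* Power-of-two alignment only, which is all the granule sizes need. */
#define ALIGN_DOWN(x, a)	((x) & ~((uint64_t)(a) - 1))

struct kvm_mem_range {
	uint64_t start;
	uint64_t end;
};

/* Assumed 4K-page granules: level 0 -> 512GB, 1 -> 1GB, 2 -> 2MB, 3 -> 4KB. */
static uint64_t kvm_granule_size(uint32_t level)
{
	return 1ULL << (12 + 9 * (KVM_PGTABLE_MAX_LEVELS - 1 - level));
}

/* Assumed: with 4K pages, block mappings exist at every level but 0. */
static bool kvm_level_supports_block_mapping(uint32_t level)
{
	return level > 0;
}

static bool range_included(struct kvm_mem_range *child,
			   struct kvm_mem_range *parent)
{
	return parent->start <= child->start && child->end <= parent->end;
}

/*
 * Mirror of the loop in the diff: shrink the candidate block around @addr
 * until it supports a block mapping and fits inside @range, then reduce
 * @range to that block.
 */
static void find_range(uint64_t addr, uint32_t level, struct kvm_mem_range *range)
{
	struct kvm_mem_range cur;

	do {
		uint64_t granule = kvm_granule_size(level);

		cur.start = ALIGN_DOWN(addr, granule);
		cur.end = cur.start + granule;
		level++;
	} while (level < KVM_PGTABLE_MAX_LEVELS &&
		 !(kvm_level_supports_block_mapping(level) &&
		   range_included(&cur, range)));

	*range = cur;
}

int main(void)
{
	/* A 2MB-wide permitted range and an address inside it. */
	struct kvm_mem_range range = { .start = 0x40000000, .end = 0x40200000 };
	uint64_t addr = 0x40100000;

	find_range(addr, 1, &range);

	/* The 1GB candidate is rejected; the 2MB block around addr is kept. */
	printf("addr 0x%llx -> range [0x%llx, 0x%llx)\n",
	       (unsigned long long)addr,
	       (unsigned long long)range.start,
	       (unsigned long long)range.end);
	return 0;
}

Running this prints "addr 0x40100000 -> range [0x40000000, 0x40200000)": the 1GB candidate block does not fit inside the 2MB permitted range, so the loop settles on the 2MB block containing addr.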
