Message-ID: <20250618112339.ezhjt25lnztck6ye@ast-epyc5.inf.ethz.ch>
Date: Wed, 18 Jun 2025 13:24:34 +0200
From: Hao Sun <sunhao.th@...il.com>
To: Harishankar Vishwanathan <harishankar.vishwanathan@...il.com>
Cc: ast@...nel.org, m.shachnai@...gers.edu, srinivas.narayana@...gers.edu,
	santosh.nagarakatte@...gers.edu,
	Daniel Borkmann <daniel@...earbox.net>,
	John Fastabend <john.fastabend@...il.com>,
	Andrii Nakryiko <andrii@...nel.org>,
	Martin KaFai Lau <martin.lau@...ux.dev>,
	Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>,
	Yonghong Song <yonghong.song@...ux.dev>,
	KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>,
	Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
	bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] bpf, verifier: Improve precision for BPF_ADD and
 BPF_SUB

On Tue, Jun 17, 2025 at 07:17:31PM -0400, Harishankar Vishwanathan wrote:
[...]
> 
> There are three cases to consider when adding two u64 ranges [dst_umin,
> dst_umax] and [src_umin, src_umax]. Consider a value x in the range
> [dst_umin, dst_umax] and another value y in the range [src_umin,
> src_umax].
> 
> (a) No overflow: No addition x + y overflows. This occurs when even
> the largest possible sum does not overflow, i.e., dst_umax + src_umax
> <= U64_MAX.
> 
> (b) Partial overflow: Some additions x + y overflow. This occurs when
> the largest possible sum overflows (dst_umax + src_umax > U64_MAX), but
> the smallest possible sum does not overflow (dst_umin + src_umin <=
> U64_MAX).
> 
> (c) Full overflow: All additions x + y overflow. This occurs when both
> the smallest possible sum and the largest possible sum overflow, i.e.,
> both (dst_umin + src_umin) and (dst_umax + src_umax) are > U64_MAX.
> 
> The current implementation conservatively sets the output bounds to
> unbounded, i.e., [umin=0, umax=U64_MAX], whenever there is *any*
> possibility of overflow, i.e., in cases (b) and (c). Otherwise it
> computes tight bounds as [dst_umin + src_umin, dst_umax + src_umax]:
> 
> if (check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin) ||
>     check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax)) {
> 	*dst_umin = 0;
> 	*dst_umax = U64_MAX;
> }
> 
> Our synthesis-based technique discovered a more precise operator. In
> particular, in case (c), all possible additions x + y overflow and
> wrap around according to eBPF semantics, and computing the output
> range as [dst_umin + src_umin, dst_umax + src_umax] continues to
> work. Only in case (b) do we need to set the output bounds to
> unbounded, i.e., [0, U64_MAX].
> 
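
For concreteness, the improved operator described above amounts to
something like the C sketch below (my own illustration with an invented
helper name, not the actual patch):

```
/* Sketch only, not the kernel's code: u64 output bounds for BPF_ADD
 * under the improved operator. Wrapped endpoint sums remain tight
 * except in the partial-overflow case (b). */
#include <stdint.h>
#include <stdbool.h>

static void add_u64_bounds(uint64_t *dst_umin, uint64_t *dst_umax,
			   uint64_t src_umin, uint64_t src_umax)
{
	uint64_t lo, hi;
	bool min_ovf = __builtin_add_overflow(*dst_umin, src_umin, &lo);
	bool max_ovf = __builtin_add_overflow(*dst_umax, src_umax, &hi);

	if (!min_ovf && max_ovf) {
		/* Case (b): some sums wrap, others do not -> unbounded. */
		*dst_umin = 0;
		*dst_umax = UINT64_MAX;
	} else {
		/* Cases (a) and (c): no sum wraps, or every sum wraps;
		 * the (possibly wrapped) endpoint sums stay sound. */
		*dst_umin = lo;
		*dst_umax = hi;
	}
}
```

Note that with well-formed ranges the minimum sum can only wrap if the
maximum sum also wraps, so "exactly one endpoint wrapped" reduces to
checking that only the maximum did.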
 
In case anyone is interested, the above (case c) can also be proved
directly by the following SMT formula, which may ease the reasoning here:

```
; ================================================================
;  Unsigned 32- and 64-bit interval addition & subtraction
;  with wrap-around semantics and endpoint overflow / underflow.
; ================================================================
(set-logic ALL)

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; ----------  u32  (32-bit)  ----------
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(declare-const l0_32 (_ BitVec 32))
(declare-const h0_32 (_ BitVec 32))
(declare-const l1_32 (_ BitVec 32))
(declare-const h1_32 (_ BitVec 32))

; Well-formed input ranges
(assert (bvule l0_32 h0_32))
(assert (bvule l1_32 h1_32))

; ----- Addition -----
(define-fun lowSum32  () (_ BitVec 32) (bvadd l0_32 l1_32))
(define-fun highSum32 () (_ BitVec 32) (bvadd h0_32 h1_32))

; Both endpoint sums overflow (wrap) ⇒ result < first addend
(assert (bvult lowSum32  l0_32))
(assert (bvult highSum32 h0_32))

; Soundness of addition
(assert
  (forall ((x (_ BitVec 32)) (y (_ BitVec 32)))
    (=> (and (bvule l0_32 x) (bvule x h0_32)
             (bvule l1_32 y) (bvule y h1_32))
        (and (bvule lowSum32  (bvadd x y))
             (bvule (bvadd x y) highSum32)))))

; ----- Subtraction -----
(define-fun lowDiff32  () (_ BitVec 32) (bvsub l0_32 h1_32))
(define-fun highDiff32 () (_ BitVec 32) (bvsub h0_32 l1_32))

; Both endpoint differences underflow ⇒ result > minuend
(assert (bvugt lowDiff32  l0_32))
(assert (bvugt highDiff32 h0_32))

; Soundness of subtraction
(assert
  (forall ((x (_ BitVec 32)) (y (_ BitVec 32)))
    (=> (and (bvule l0_32 x) (bvule x h0_32)
             (bvule l1_32 y) (bvule y h1_32))
        (and (bvule lowDiff32  (bvsub x y))
             (bvule (bvsub x y) highDiff32)))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; ----------  u64  (64-bit)  ----------
;; Basically the same as above but for u64
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(declare-const l0_64 (_ BitVec 64))
(declare-const h0_64 (_ BitVec 64))
(declare-const l1_64 (_ BitVec 64))
(declare-const h1_64 (_ BitVec 64))

; Well-formed input ranges
(assert (bvule l0_64 h0_64))
(assert (bvule l1_64 h1_64))

; ----- Addition -----
(define-fun lowSum64  () (_ BitVec 64) (bvadd l0_64 l1_64))
(define-fun highSum64 () (_ BitVec 64) (bvadd h0_64 h1_64))

; Both endpoint sums overflow (wrap), mirroring the u32 case
(assert (bvult lowSum64  l0_64))
(assert (bvult highSum64 h0_64))

; Soundness of addition
(assert
  (forall ((x (_ BitVec 64)) (y (_ BitVec 64)))
    (=> (and (bvule l0_64 x) (bvule x h0_64)
             (bvule l1_64 y) (bvule y h1_64))
        (and (bvule lowSum64  (bvadd x y))
             (bvule (bvadd x y) highSum64)))))

; ----- Subtraction -----
(define-fun lowDiff64  () (_ BitVec 64) (bvsub l0_64 h1_64))
(define-fun highDiff64 () (_ BitVec 64) (bvsub h0_64 l1_64))

; Both endpoint differences underflow, mirroring the u32 case
(assert (bvugt lowDiff64  l0_64))
(assert (bvugt highDiff64 h0_64))

; Soundness of subtraction
(assert
  (forall ((x (_ BitVec 64)) (y (_ BitVec 64)))
    (=> (and (bvule l0_64 x) (bvule x h0_64)
             (bvule l1_64 y) (bvule y h1_64))
        (and (bvule lowDiff64  (bvsub x y))
             (bvule (bvsub x y) highDiff64)))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(check-sat)
(exit)
```

Both cvc5 and z3 can prove the above; one can try it here and expect
the solver to report sat:
https://cvc5.github.io/app/#temp_a95e25c4-88c5-4257-96c8-0bd74125b179

In addition, the unsoundness in the partial-overflow case (b) can also
be proved with the following formula, and the counterexamples it
generates may be used as test cases if needed:
https://pastebin.com/raw/qrT7rC1P
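
For a quick standalone check without a solver, here is one hand-picked
case-(b) instance (values are my own, not taken from the pastebin)
showing that the wrapped endpoint sums are not sound bounds there:

```
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	/* dst in [5, U64_MAX], src in [1, 1]: only the upper endpoint
	 * sum wraps, i.e. case (b). */
	uint64_t dst_umin = 5, dst_umax = UINT64_MAX;
	uint64_t src_umin = 1, src_umax = 1;

	uint64_t lo = dst_umin + src_umin;  /* 6: no wrap */
	uint64_t hi = dst_umax + src_umax;  /* 0: wraps   */

	/* x and y lie inside the input ranges, but x + y = 11 is not
	 * inside [lo, hi] = [6, 0] under unsigned comparison. */
	uint64_t x = 10, y = 1;
	int sound = (lo <= x + y) && (x + y <= hi);

	printf("lo=%" PRIu64 " hi=%" PRIu64 " x+y=%" PRIu64 " sound=%d\n",
	       lo, hi, x + y, sound);
	return 0;
}
```

This prints sound=0, i.e., the naive wrapped bounds exclude a reachable
sum, which is exactly why case (b) must fall back to [0, U64_MAX].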

Regards
Hao
