Message-ID: <20150514204826.GA17985@quack.suse.cz>
Date: Thu, 14 May 2015 22:48:26 +0200
From: Jan Kara <jack@...e.cz>
To: "Darrick J. Wong" <darrick.wong@...cle.com>
Cc: tytso@....edu, Jan Kara <jack@...e.cz>, linux-ext4@...r.kernel.org
Subject: Re: [PATCH v2] jbd2: fix r_count overflows leading to buffer
overflow in journal recovery
On Thu 14-05-15 12:34:24, Darrick J. Wong wrote:
> The journal revoke block recovery code does not check r_count for
> sanity, which means that an evil value of r_count could result in
> the kernel reading off the end of the revoke table and into whatever
> garbage lies beyond. This could crash the kernel, so fix that.
>
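
For illustration, a minimal user-space sketch of the failure mode
(simplified stand-in types; this is not the kernel code): the scan
loop takes its byte bound straight from the on-disk r_count, so a
crafted value larger than the block walks the reads off the end of
the buffer unless it is range-checked first.

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>  /* ntohl()/htonl(), standing in for be32_to_cpu() */

    /* Simplified stand-in for jbd2_journal_revoke_header_t (same 16 bytes). */
    struct revoke_header {
        uint32_t r_magic;
        uint32_t r_blocktype;
        uint32_t r_sequence;
        uint32_t r_count;   /* bytes used in the block, big-endian, from disk */
    };

    /* Mirrors the shape of scan_revoke_records(), with the new bound check;
     * record size fixed at 4 bytes here for brevity. */
    static int scan_revokes(const char *block, int blocksize, int csum_size)
    {
        const struct revoke_header *h = (const void *)block;
        uint32_t offset = sizeof(*h);
        uint32_t rcount = ntohl(h->r_count);

        /* The fix: reject a byte count that cannot fit in the block.
         * Without this, rcount = 0xffffffff sends the loop below far
         * past block[blocksize - 1]. */
        if (rcount > (uint32_t)(blocksize - csum_size))
            return -1;      /* -EINVAL in the kernel */

        while (offset + 4 <= rcount) {
            uint32_t blocknr;
            memcpy(&blocknr, block + offset, 4);
            (void)blocknr;  /* ... record the revoked block number ... */
            offset += 4;
        }
        return 0;
    }

    int main(void)
    {
        char block[4096] = { 0 };
        struct revoke_header *h = (void *)block;

        h->r_count = htonl(sizeof(*h) + 8);   /* two 4-byte records: sane */
        return scan_revokes(block, sizeof(block), 4);
    }
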
> However, in testing this fix, I discovered that the code to write
> out the revoke tables also was not correctly checking to see if the
> block was full -- the current offset check is fine so long as the
> revoke table space size is a multiple of the record size, but this
> is not true when either of the journal_csum_v[23] features is set.
>
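
Concretely (illustrative numbers, assuming a 4096-byte block): the
checksum tail is 4 bytes, so the usable space is 4092 bytes, which is
not a multiple of the 8-byte 64-bit record size. The old test lets
offset reach 4088 and still write one more record:

    #include <assert.h>

    int main(void)
    {
        int blocksize = 4096;   /* journal->j_blocksize */
        int csum_size = 4;      /* sizeof(struct jbd2_journal_revoke_tail) */
        int sz = 8;             /* 64-bit record size */
        int offset = 4088;      /* a reachable record offset */

        /* Old test passes (4088 < 4092), yet the record would span bytes
         * 4088..4095 and clobber the checksum tail at 4092..4095. */
        assert(!(offset >= blocksize - csum_size));

        /* Fixed test flushes the descriptor first (4088 + 8 > 4092). */
        assert(offset + sz > blocksize - csum_size);
        return 0;
    }
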
> v2: The comparison on the revoke block writer code should allow the
> revoke block to become totally full.
>
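
(Checking the v2 boundary with the same illustrative numbers: with
4-byte records and no checksum tail, offset can reach 4092, and
4092 + 4 > 4096 is false, so the last slot is still written and the
block fills completely. A >= comparison would have flushed one record
early.)
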
> Signed-off-by: Darrick J. Wong <darrick.wong@...cle.com>
Looks good. You can add:
Reviewed-by: Jan Kara <jack@...e.cz>
Honza
> ---
> fs/jbd2/recovery.c | 10 +++++++++-
> fs/jbd2/revoke.c | 18 ++++++++++--------
> 2 files changed, 19 insertions(+), 9 deletions(-)
>
> diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
> index b5128c6..a9079d0 100644
> --- a/fs/jbd2/recovery.c
> +++ b/fs/jbd2/recovery.c
> @@ -842,15 +842,23 @@ static int scan_revoke_records(journal_t *journal, struct buffer_head *bh,
> {
> jbd2_journal_revoke_header_t *header;
> int offset, max;
> + int csum_size = 0;
> + __u32 rcount;
> int record_len = 4;
>
> header = (jbd2_journal_revoke_header_t *) bh->b_data;
> offset = sizeof(jbd2_journal_revoke_header_t);
> - max = be32_to_cpu(header->r_count);
> + rcount = be32_to_cpu(header->r_count);
>
> if (!jbd2_revoke_block_csum_verify(journal, header))
> return -EINVAL;
>
> + if (jbd2_journal_has_csum_v2or3(journal))
> + csum_size = sizeof(struct jbd2_journal_revoke_tail);
> + if (rcount > journal->j_blocksize - csum_size)
> + return -EINVAL;
> + max = rcount;
> +
> if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
> record_len = 8;
>
> diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
> index c6cbaef..14214da 100644
> --- a/fs/jbd2/revoke.c
> +++ b/fs/jbd2/revoke.c
> @@ -577,7 +577,7 @@ static void write_one_revoke_record(journal_t *journal,
> {
> int csum_size = 0;
> struct buffer_head *descriptor;
> - int offset;
> + int sz, offset;
> journal_header_t *header;
>
> /* If we are already aborting, this all becomes a noop. We
> @@ -594,9 +594,14 @@ static void write_one_revoke_record(journal_t *journal,
> if (jbd2_journal_has_csum_v2or3(journal))
> csum_size = sizeof(struct jbd2_journal_revoke_tail);
>
> + if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
> + sz = 8;
> + else
> + sz = 4;
> +
> /* Make sure we have a descriptor with space left for the record */
> if (descriptor) {
> - if (offset >= journal->j_blocksize - csum_size) {
> + if (offset + sz > journal->j_blocksize - csum_size) {
> flush_descriptor(journal, descriptor, offset, write_op);
> descriptor = NULL;
> }
> @@ -619,16 +624,13 @@ static void write_one_revoke_record(journal_t *journal,
> *descriptorp = descriptor;
> }
>
> - if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) {
> + if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
> * ((__be64 *)(&descriptor->b_data[offset])) =
> cpu_to_be64(record->blocknr);
> - offset += 8;
> -
> - } else {
> + else
> * ((__be32 *)(&descriptor->b_data[offset])) =
> cpu_to_be32(record->blocknr);
> - offset += 4;
> - }
> + offset += sz;
>
> *offsetp = offset;
> }
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR