ZIL: Call brt_pending_add() replaying TX_CLONE_RANGE

zil_claim_clone_range() takes references on cloned blocks before ZIL
replay.  Later zil_free_clone_range() drops them after replay or on
dataset destroy.  The net balance is zero, which means that on actual
replay we must take additional references, which will remain in the
BRT.

Without this, blocks could be freed prematurely when either the
original file or its clone is destroyed.  I've observed the BRT being
emptied and the feature being deactivated after ZIL replay completed,
which should not have happened.  With this patch I see the expected
stats.

Reviewed-by: Kay Pedersen <mail@mkwg.de>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rob Norris <robn@despairlabs.com>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15603
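
To make the balance concrete, here is a minimal standalone C sketch.
This is not ZFS code: claim_clone_range(), replay_clone_range(), and
free_clone_range() are hypothetical stand-ins for
zil_claim_clone_range(), the replay-time brt_pending_add() call this
commit adds, and zil_free_clone_range().

    #include <assert.h>

    static int brt_refcount = 0;    /* references on one cloned BP */

    static void claim_clone_range(void)  { brt_refcount++; }
    static void replay_clone_range(void) { brt_refcount++; }
    static void free_clone_range(void)   { brt_refcount--; }

    int
    main(void)
    {
        claim_clone_range();    /* pool import: zil_claim() takes a reference */
        replay_clone_range();   /* mount + replay: take another reference */
        free_clone_range();     /* ZIL destroy drops the claim-time reference */
        assert(brt_refcount == 1);  /* the clone keeps exactly one reference */
        return (0);
    }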

--- a/include/sys/dmu.h
+++ b/include/sys/dmu.h
@@ -1086,8 +1086,7 @@ int dmu_offset_next(objset_t *os, uint64_t object, boolean_t hole,
 int dmu_read_l0_bps(objset_t *os, uint64_t object, uint64_t offset,
     uint64_t length, struct blkptr *bps, size_t *nbpsp);
 int dmu_brt_clone(objset_t *os, uint64_t object, uint64_t offset,
-    uint64_t length, dmu_tx_t *tx, const struct blkptr *bps, size_t nbps,
-    boolean_t replay);
+    uint64_t length, dmu_tx_t *tx, const struct blkptr *bps, size_t nbps);
 
 /*
  * Initial setup and final teardown.

--- a/module/zfs/brt.c
+++ b/module/zfs/brt.c
@@ -235,13 +235,13 @@
  * destination dataset is mounted and its ZIL replayed.
  * To address this situation we leverage zil_claim() mechanism where ZFS will
  * parse all the ZILs on pool import. When we come across TX_CLONE_RANGE
- * entries, we will bump reference counters for their BPs in the BRT and then
- * on mount and ZIL replay we will just attach BPs to the file without
- * bumping reference counters.
- * Note it is still possible that after zil_claim() we never mount the
- * destination, so we never replay its ZIL and we destroy it. This way we would
- * end up with leaked references in BRT. We address that too as ZFS gives us
- * a chance to clean this up on dataset destroy (see zil_free_clone_range()).
+ * entries, we will bump reference counters for their BPs in the BRT. Then
+ * on mount and ZIL replay we bump the reference counters once more, while the
+ * first references are dropped during ZIL destroy by zil_free_clone_range().
+ * It is possible that after zil_claim() we never mount the destination, so
+ * we never replay its ZIL and just destroy it. In this case the only taken
+ * references will be dropped by zil_free_clone_range(), since the cloning is
+ * not going to ever take place.
  */
 static kmem_cache_t *brt_entry_cache;
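
The second case described in the comment, where the destination is
destroyed without ever being mounted, balances to zero.  A companion
sketch under the same assumptions as the one above (hypothetical
stand-ins, not ZFS code):

    #include <assert.h>

    int
    main(void)
    {
        int brt_refcount = 0;

        brt_refcount++;    /* zil_claim_clone_range() on pool import */
        /* never mounted: no replay, so no second reference is taken */
        brt_refcount--;    /* zil_free_clone_range() on dataset destroy */
        assert(brt_refcount == 0);    /* no references leak from the BRT */
        return (0);
    }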

--- a/module/zfs/dmu.c
+++ b/module/zfs/dmu.c
@@ -2286,7 +2286,7 @@ out:
 int
 dmu_brt_clone(objset_t *os, uint64_t object, uint64_t offset, uint64_t length,
-    dmu_tx_t *tx, const blkptr_t *bps, size_t nbps, boolean_t replay)
+    dmu_tx_t *tx, const blkptr_t *bps, size_t nbps)
 {
 	spa_t *spa;
 	dmu_buf_t **dbp, *dbuf;
@@ -2360,10 +2360,8 @@ dmu_brt_clone(objset_t *os, uint64_t object, uint64_t offset, uint64_t length,
 		 * When data in embedded into BP there is no need to create
 		 * BRT entry as there is no data block. Just copy the BP as
 		 * it contains the data.
-		 * Also, when replaying ZIL we don't want to bump references
-		 * in the BRT as it was already done during ZIL claim.
 		 */
-		if (!replay && !BP_IS_HOLE(bp) && !BP_IS_EMBEDDED(bp)) {
+		if (!BP_IS_HOLE(bp) && !BP_IS_EMBEDDED(bp)) {
 			brt_pending_add(spa, bp, tx);
 		}
 	}
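
The surviving condition can be read as a predicate: with the replay
flag gone, whether a BP points at an allocated data block is the only
gate in front of brt_pending_add().  A standalone paraphrase, with
is_hole/is_embedded standing in for the real
BP_IS_HOLE()/BP_IS_EMBEDDED() macros:

    #include <stdbool.h>

    /*
     * Holes have no data block at all, and embedded BPs carry their
     * payload inside the block pointer itself, so neither needs a
     * BRT reference.
     */
    static bool
    needs_brt_entry(bool is_hole, bool is_embedded)
    {
        return (!is_hole && !is_embedded);
    }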

--- a/module/zfs/zfs_vnops.c
+++ b/module/zfs/zfs_vnops.c
@@ -1327,7 +1327,7 @@ zfs_clone_range(znode_t *inzp, uint64_t *inoffp, znode_t *outzp,
 		}
 		error = dmu_brt_clone(outos, outzp->z_id, outoff, size, tx,
-		    bps, nbps, B_FALSE);
+		    bps, nbps);
 		if (error != 0) {
 			dmu_tx_commit(tx);
 			break;
@@ -1461,7 +1461,7 @@ zfs_clone_range_replay(znode_t *zp, uint64_t off, uint64_t len, uint64_t blksz,
 	if (zp->z_blksz < blksz)
 		zfs_grow_blocksize(zp, blksz, tx);
-	dmu_brt_clone(zfsvfs->z_os, zp->z_id, off, len, tx, bps, nbps, B_TRUE);
+	dmu_brt_clone(zfsvfs->z_os, zp->z_id, off, len, tx, bps, nbps);
 	zfs_tstamp_update_setup(zp, CONTENT_MODIFIED, mtime, ctime);
