lavc/vp8dsp: rework R-V V idct_dc_add4y

DCT-related FFmpeg functions often add an unsigned 8-bit sample to a
signed 16-bit coefficient, then clip the result back to an unsigned
8-bit value. RISC-V has no signed 16-bit to unsigned 8-bit clip, so
instead our most common sequence is:
    VWADDU.WV
    set SEW to 16 bits
    VMAX.VX zero     # clip negative values to 0
    set SEW to 8 bits
    VNCLIPU.WI       # clip values over 255 to 255 and narrow
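
For reference, the scalar operation each element goes through is just an
add followed by a clip to [0, 255]. A minimal C model of the sequence
above (helper names are illustrative; FFmpeg's own helper is
av_clip_uint8):

    #include <stdint.h>

    /* What VMAX against zero plus VNCLIPU achieve per element. */
    static uint8_t clip_uint8(int v)
    {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }

    /* Common pattern: unsigned 8-bit pixel plus signed 16-bit coefficient. */
    static uint8_t add_coeff(uint8_t pixel, int16_t coeff)
    {
        return clip_uint8(pixel + coeff);   /* VWADDU.WV, then clip and narrow */
    }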

Here we use a different sequence which does not require toggling the
vector type. This assumes that the wide addend vector is biased by
-128:
    VWADDU.WV
    VNCLIP.WI    # clip values to signed 8-bit and narrow
    VXOR.VX 0x80 # flip sign bit (convert signed to unsigned)
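
A scalar sketch of why the biased sequence gives the same result (again
illustrative C, not FFmpeg code; the XOR adds 128 back in offset
binary):

    #include <stdint.h>

    /* What VNCLIP.WI does per element: clip to the signed 8-bit range. */
    static int8_t clip_int8(int v)
    {
        return v < -128 ? -128 : (v > 127 ? 127 : v);
    }

    static uint8_t add_coeff_biased(uint8_t pixel, int16_t biased_coeff)
    {
        int    wide = biased_coeff + pixel;  /* VWADDU.WV, addend holds coeff - 128 */
        int8_t s    = clip_int8(wide);       /* VNCLIP.WI                           */
        return (uint8_t)(s ^ 0x80);          /* VXOR.VX 0x80                        */
    }

For any pixel and coefficient, add_coeff_biased(pixel, coeff - 128)
matches clip_uint8(pixel + coeff) from the previous model, since
clipping to [-128, 127] and then adding 128 is the same as clipping the
unbiased sum to [0, 255].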

Also, the VMAX is effectively replaced by a VXOR at half the element
width. In this function, the -128 bias comes for free, since we add a
constant to the wide vector in the prologue anyway.
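
Concretely, for the VP8 prologue the bias folds into the rounding
constant because subtracting a constant before an arithmetic right
shift equals subtracting the shifted-down constant afterwards. A sketch
of that identity (assuming arithmetic right shift of negatives, which
the assembly relies on anyway; check_fold is just an illustrative
helper):

    #include <assert.h>
    #include <stdint.h>

    /* VP8 DC prologue: dc = (block[0] + 4) >> 3.  Folding in -128: */
    static void check_fold(int16_t dc0)
    {
        int dc = (dc0 + 4) >> 3;
        assert(((dc0 + 4 - (128 << 3)) >> 3) == dc - 128);
    }

The VP7 hunk below folds the bias into its rounding term the same way,
hence 0x20000 - (128 << 18).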

On C908, this has no observable effects. On X60, this improves
microbenchmarks by about 20%.
Branch: release/7.1
Author: Rémi Denis-Courmont, 9 months ago
Parent: 4e120fbbbd
Commit: 225de53c9d

Changed files:
    libavcodec/riscv/vp7dsp_rvv.S (2 lines changed)
    libavcodec/riscv/vp8dsp_rvv.S (14 lines changed)

libavcodec/riscv/vp7dsp_rvv.S
@@ -134,7 +134,7 @@ func ff_vp7_idct_dc_add4y_rvv, zve32x
         li t1, 23170
         vlse16.v v8, (a1), t0 # block[0..3][0]
         vwmul.vx v0, v8, t1
-        li t2, 0x20000
+        li t2, 0x20000 - (128 << 18)
         vsetvli zero, zero, e32, m1, ta, ma
         vsra.vi v0, v0, 14
         vmul.vx v0, v0, t1

libavcodec/riscv/vp8dsp_rvv.S
@@ -125,31 +125,31 @@ endfunc
 func ff_vp8_idct_dc_add4y_rvv, zve32x
         li t0, 32
         vsetivli zero, 4, e16, mf2, ta, ma
+        li t1, 4 - (128 << 3)
         vlse16.v v8, (a1), t0
-        vadd.vi v8, v8, 4
+        vadd.vx v8, v8, t1
         vsra.vi v8, v8, 3
         # fall through
 endfunc
         .variant_cc ff_vp78_idct_dc_add4y_rvv
-# v8 = [dc0, dc1, dc2, dc3]
+# v8 = [dc0 - 128, dc1 - 128, dc2 - 128, dc3 - 128]
 func ff_vp78_idct_dc_add4y_rvv, zve32x
         vsetivli zero, 16, e16, m2, ta, ma
         vid.v v4
-        li a4, 4
         vsrl.vi v4, v4, 2
+        li t1, 128
         vrgather.vv v0, v8, v4 # replicate each DC four times
         vsetvli zero, zero, e8, m1, ta, ma
+        li a4, 4
 1:
         vle8.v v8, (a0)
         addi a4, a4, -1
         vwaddu.wv v16, v0, v8
         sh zero, (a1)
-        vsetvli zero, zero, e16, m2, ta, ma
-        vmax.vx v16, v16, zero
+        vnclip.wi v8, v16, 0
         addi a1, a1, 32
-        vsetvli zero, zero, e8, m1, ta, ma
-        vnclipu.wi v8, v16, 0
+        vxor.vx v8, v8, t1
         vse8.v v8, (a0)
         add a0, a0, a2
         bnez a4, 1b
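
For context, idct_dc_add4y adds the rounded DC of four adjacent 4x4
blocks to a 16x4 strip of the destination and clears each DC
coefficient; the loop above handles one 16-pixel row per iteration. A
rough scalar equivalent (a sketch, not the actual FFmpeg C reference):

    #include <stddef.h>
    #include <stdint.h>

    static void idct_dc_add4y_sketch(uint8_t *dst, int16_t block[4][16],
                                     ptrdiff_t stride)
    {
        for (int i = 0; i < 4; i++) {          /* four 4x4 blocks side by side */
            int dc = (block[i][0] + 4) >> 3;   /* VP8 rounding; VP7 differs    */
            block[i][0] = 0;                   /* the "sh zero, (a1)" store    */
            for (int y = 0; y < 4; y++)
                for (int x = 0; x < 4; x++) {
                    int v = dst[y * stride + 4 * i + x] + dc;
                    dst[y * stride + 4 * i + x] =
                        v < 0 ? 0 : (v > 255 ? 255 : v);
                }
        }
    }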
