This commit adds support for storing DFPWM audio in a WAV container.
It uses the WAVEFORMATEXTENSIBLE structure, following these conventions:
https://gist.github.com/MCJack123/90c24b64c8e626c7f130b57e9800962c
The implementation is very simple: it just adds the GUID to the list of
WAV GUIDs, and modifies the WAV muxer to always use WAVEFORMATEXTENSIBLE
format with that GUID.
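For context, the registration amounts to one new entry in the WAV GUID
table, roughly like the sketch below (the GUID bytes are placeholders
here; the real value is the one documented in the gist linked above):

    #include "riff.h"   /* AVCodecGuid, used by the WAV (de)muxers */

    /* Sketch only: register DFPWM alongside the existing WAV GUIDs. */
    const AVCodecGuid ff_codec_wav_guids[] = {
        /* ... existing entries kept as-is ... */
        { AV_CODEC_ID_DFPWM, { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                               0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } },
        { AV_CODEC_ID_NONE }
    };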
This creates a standard container format for DFPWM besides raw data.
It will allow users to transfer DFPWM audio in a standard container,
with the sample rate and channel count stored in the file instead of
being external parameters as in the raw format.
This format is already supported in my AUKit library, which is the CC
analog to libav (albeit much smaller). Support in other applications is TBD.
Signed-off-by: Jack Bruienne <jackbruienne@gmail.com>
This patch builds on my previous DFPWM codec patch, adding a raw
audio format in order to read/write the raw files that are most commonly
used (as no other container format supports DFPWM yet).
The demuxer and muxer are mostly copied from the PCM demuxer and the raw
muxers, as DFPWM is typically stored as raw data.
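To give an idea of how little is involved, a raw read_packet callback
for such a format could look roughly like this sketch (names and block
size are illustrative, not necessarily what the patch uses):

    #include "avformat.h"   /* AVFormatContext, AVPacket, av_get_packet() */

    /* Hypothetical sketch: raw DFPWM data is simply sliced into packets. */
    static int dfpwm_read_packet_sketch(AVFormatContext *s, AVPacket *pkt)
    {
        int ret = av_get_packet(s->pb, pkt, 1024); /* arbitrary block size */
        if (ret < 0)
            return ret;
        pkt->stream_index = 0;                     /* single raw stream */
        return ret;
    }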
Please see the previous patch for more information on DFPWM.
Signed-off-by: Jack Bruienne <jackbruienne@gmail.com>
From the wiki page (https://wiki.vexatos.com/dfpwm):
> DFPWM (Dynamic Filter Pulse Width Modulation) is an audio codec
> created by Ben “GreaseMonkey” Russell in 2012, originally to be used
> as a voice codec for asiekierka's pixmess, a C remake of 64pixels.
> It is a 1-bit-per-sample codec which uses a dynamic-strength one-pole
> low-pass filter as a predictor. Due to the fact that a raw DFPWM decoding
> creates a high-pitched whine, it is often followed by some post-processing
> filters to make the stream more listenable.
It has recently gained popularity through the ComputerCraft mod for
Minecraft, which added support for audio through this codec, as well as
the Computronics expansion which preceded the official support. These
both implement the slightly adjusted 1a version of the codec, which is
the version I have chosen for this patch.
This patch adds a new codec (with encoding and decoding) for DFPWM1a.
The codec sources are pretty simple: they use the reference codec with
a basic wrapper to connect it to the FFmpeg AVCodec system.
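For readers unfamiliar with the codec, the core of the 1a decoding loop
looks roughly like this (an illustrative sketch based on my reading of
the reference code; the exact constants and the post-processing filters
are omitted):

    #include <stdint.h>

    /* Each input bit pulls a "charge" predictor towards +127 or -128
     * through a one-pole low-pass filter whose strength adapts while the
     * bit value repeats. */
    static void dfpwm1a_decode_sketch(const uint8_t *in, int8_t *out, int nbytes,
                                      int *charge, int *strength, int *prev_bit)
    {
        for (int i = 0; i < nbytes; i++) {
            uint8_t byte = in[i];
            for (int b = 0; b < 8; b++) {
                int bit    = byte & 1;               /* LSB-first bitstream */
                int target = bit ? 127 : -128;
                byte >>= 1;

                /* move the charge towards the target by strength/256 */
                int next = *charge + ((*strength * (target - *charge) + 0x80) >> 8);
                if (next == *charge && next != target)
                    next += bit ? 1 : -1;
                *charge = next;

                /* strength rises while the bit repeats, falls when it flips */
                int starget = (bit == *prev_bit) ? 255 : 0;
                if (*strength != starget)
                    *strength += (starget > *strength) ? 1 : -1;

                *prev_bit = bit;
                *out++ = (int8_t)*charge;   /* real decoders low-pass this */
            }
        }
    }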
To clarify, the codec does not have a specific sample rate - it is
provided by the container (or user), typically 48000 Hz, though 32768 Hz
has also been seen in the wild. The codec does not specify channel info
either, and it is almost always used with a single mono channel.
However, since it appears that libavcodec expects both sample rate and
channel count to be handled by either the codec or container, I have
made the decision to allow multiple channels interleaved, which as far
as I know has never been used, but it works fine here nevertheless. The
accompanying raw format has a channels option to set this. (I expect
most users of this will not use multiple channels, but it remains an
option just in case.)
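As an illustration, decoding a raw file with explicit parameters could
look something like this (exact option names depend on the raw demuxer's
options and may differ):
ffmpeg -f dfpwm -sample_rate 48000 -channels 1 -i input.dfpwm output.wav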
This patch will be highly useful to ComputerCraft developers who are
working with audio, as DFPWM is the standard audio format there, yet
there are few user-friendly encoders and even fewer decoders. It
will streamline the process for importing and listening to audio,
replacing the need to write code or use tools that require very
specific input formats.
You may use the CraftOS-PC program (https://www.craftos-pc.cc) to test
out DFPWM playback. To use it, run the program and type this command:
"attach left speaker" Then run "speaker play <file.dfpwm>" for each file.
The app runs in a sandbox, so files have to be transferred in first;
the easiest way to do this is to simply drag the file on the window.
(Or copy files to the folder at https://www.craftos-pc.cc/docs/saves.)
Sample DFPWM files can be generated with an online tool at
https://music.madefor.cc. This is currently the best way to encode DFPWM
files. Simply drag an audio file onto the page, and it will encode it,
giving a download link on the page.
I've made sure to update all of the docs as per section 7 of the
developer documentation, and I've tested it as per section 8. Test
files encoded to DFPWM play correctly
in ComputerCraft, and other files that work in CC are correctly decoded.
I have also verified that corrupt files do not crash the decoder - this
should not be an issue in theory, as the output size is a fixed multiple
of the input size.
Signed-off-by: Jack Bruienne <jackbruienne@gmail.com>
This patch adds optional support for Arm Pointer Authentication Codes.
PAC support is turned on or off at compile time using additional
compiler flags. Unless any of these is enabled explicitly, no additional
code will be emitted at all.
Signed-off-by: André Kempe <andre.kempe@arm.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Fixes: signed integer overflow: 10 * 808464428 cannot be represented in type 'int'
Fixes: assertion failure
Fixes: ticket9651
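For reference, 10 * 808464428 = 8084644280, which exceeds INT_MAX
(2147483647); the usual guard for this kind of decimal accumulation looks
roughly like the sketch below (illustrative only, not the exact change):

    #include <limits.h>

    /* Refuse to take another decimal digit if value * 10 + digit would
     * overflow int. */
    static int append_digit_checked(int value, int digit, int *out)
    {
        if (value > (INT_MAX - digit) / 10)
            return -1;                    /* would overflow */
        *out = value * 10 + digit;
        return 0;
    }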
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: negation of -2147483648 cannot be represented in type 'int'; cast to an unsigned type to negate this value to itself
Fixes: Ticket8486
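As the sanitizer message suggests, the usual remedy is to negate in a
wider (or unsigned) type, roughly like this (illustrative only, not the
exact change):

    #include <stdint.h>

    /* -INT_MIN does not fit in int, so widen before negating. */
    static int64_t negate_safely(int v)
    {
        return -(int64_t)v;
    }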
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes a hang at the end of input with this command:
ffmpeg -f lavfi -i testsrc2=d=50,format=yuv444p -lavfi \
"extractplanes=y+u+v[y][u][v];[y]tpad=start=0[y];[u]tpad=start=0[u];[v]negate[v];[y][u][v]vstack=3" -f null -
While swscale can be reconfigured with sws_setColorspaceDetails,
the in/out ranges also need to be set before calling
sws_init_context, otherwise the initialization might choose
fastpaths that don't take the ranges into account.
Therefore, also look at in->color_range when deciding whether the
scaler needs to be reconfigured.
Add a new member variable to keep track of this, in order to
differentiate between whether the scale filter parameter
"in_range" has been set (which should override whatever the input
frame has set) or whether it has been configured based on the
latest frame (which should trigger reconfiguring the scaler if
the input frame ranges change).
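A hedged sketch of the resulting check (struct and field names here are
hypothetical, not the actual vf_scale.c code):

    #include <stdbool.h>

    struct range_state {
        int  configured_in_range;  /* range the scaler was set up with       */
        bool in_range_forced;      /* true when the "in_range" option is set */
    };

    static bool scaler_needs_reinit(const struct range_state *st, int frame_range)
    {
        /* A user-forced "in_range" always wins. Otherwise follow the input
         * frames: the range must be known before sws_init_context() picks
         * its fast paths, so a change forces a reinit. */
        if (st->in_range_forced)
            return false;
        return frame_range != st->configured_in_range;
    }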
Fixes: Ticket #9576
Signed-off-by: Martin Storsjö <martin@martin.st>
This fixes building for arm after 10c2ef1ca4.
The argument to av_clip_uintp2 must be an assembly-time immediate
constant.
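For illustration (hypothetical call site; the point is only that the bit
count must be a literal, since the arm inline-asm version emits it as an
immediate operand of a usat instruction):

    #include "libavutil/common.h"   /* av_clip_uintp2() */

    static int clip_to_10_bits(int v)
    {
        return av_clip_uintp2(v, 10);  /* OK: literal immediate */
        /* av_clip_uintp2(v, depth) with a runtime "depth" would not
         * assemble with the arm inline-asm implementation. */
    }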
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Reviewed-by and commit message details-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The earlier code ignored it for all stream types except video and
subtitles, probably because audio was presumed to consist only of
keyframes. Yet this assumption does not hold for e.g. TrueHD.
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This was tested with media recorded from an iPhone XR and an iPhone 13.
Here is what a typical stream looks like in coding order:
┌────────┬─────┬─────┬──────────┐
│ sample │ PTS │ DTS │ keyframe │
├────────┼─────┼─────┼──────────┤
┊        ┊     ┊     ┊          ┊
│     53 │ 560 │ 510 │ No       │
│     54 │ 540 │ 520 │ No       │
│     55 │ 530 │ 530 │ No       │
│     56 │ 550 │ 540 │ No       │
│     57 │ 600 │ 550 │ Yes      │
│   * 58 │ 580 │ 560 │ No       │
│   * 59 │ 570 │ 570 │ No       │
│   * 60 │ 590 │ 580 │ No       │
│     61 │ 640 │ 590 │ No       │
│     62 │ 620 │ 600 │ No       │
┊        ┊     ┊     ┊          ┊
In composition/display order:
┌────────┬─────┬─────┬──────────┐
│ sample │ PTS │ DTS │ keyframe │
├────────┼─────┼─────┼──────────┤
┊        ┊     ┊     ┊          ┊
│     55 │ 530 │ 530 │ No       │
│     54 │ 540 │ 520 │ No       │
│     56 │ 550 │ 540 │ No       │
│     53 │ 560 │ 510 │ No       │
│   * 59 │ 570 │ 570 │ No       │
│   * 58 │ 580 │ 560 │ No       │
│   * 60 │ 590 │ 580 │ No       │
│     57 │ 600 │ 550 │ Yes      │
│     63 │ 610 │ 610 │ No       │
│     62 │ 620 │ 600 │ No       │
┊        ┊     ┊     ┊          ┊
Samples/frames 58, 59 and 60 are B-frames which actually depend on the
key frame (57). Here the key frame is not an IDR but a "CRA" (Clean
Random Access) frame.
Initially, I thought I could rely on the sdtp box (independent and
disposable samples), but unfortunately:
sdtp[54] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[55] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[56] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[57] is_leading:0 sample_depends_on:2 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[58] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[59] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[60] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[61] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[62] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
The information that might have been useful here would have been
is_leading, but all the samples are set to 0 so this was unusable.
Instead, we need to rely on sgpd/sbgp tables. In my case the video track
contained 3 sgpd tables with the following grouping types: tscl, sync
and tsas. In the sync table we have the following 2 entries (only):
sgpd.sync[1]: sync nal_unit_type:0x14
sgpd.sync[2]: sync nal_unit_type:0x15
(The index starts at 1 because 0 carries the "undefined" semantic; we'll
see that later in the reference table.)
The NAL unit types presented here correspond to:
libavcodec/hevc.h: HEVC_NAL_IDR_N_LP = 20,
libavcodec/hevc.h: HEVC_NAL_CRA_NUT = 21,
In parallel, the sbgp sync table contains the following:
┌────┬───────┬─────┐
│ id │ count │ gdi │
├────┼───────┼─────┤
│  0 │     1 │   1 │
│  1 │    56 │   0 │
│  2 │     1 │   2 │
│  3 │    59 │   0 │
│  4 │     1 │   2 │
│  5 │    59 │   0 │
│  6 │     1 │   2 │
│  7 │    59 │   0 │
│  8 │     1 │   2 │
│  9 │    59 │   0 │
│ 10 │     1 │   2 │
│ 11 │    11 │   0 │
└────┴───────┴─────┘
The gdi column (group description index) directly refers to the index in
the sgpd.sync table. This means the first frame is an IDR, then we have
batches of undefined frames interleaved with CRA frames. No IDR ever
appears again (tried on a 30+ second sample).
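To illustrate how these runs translate into per-sample information, a
possible expansion looks like this (all names are hypothetical, not the
mov.c code; a real implementation would also check the referenced sgpd
entry's nal_unit_type to keep only CRA/BLA samples):

    #include <stdint.h>

    struct sbgp_run { uint32_t sample_count; uint32_t group_description_index; };

    /* Expand the (count, index) runs into one group index per sample;
     * index 0 means the sample belongs to no group. */
    static void expand_sync_groups(const struct sbgp_run *runs, int nb_runs,
                                   uint32_t *per_sample_gdi, int nb_samples)
    {
        int sample = 0;
        for (int i = 0; i < nb_runs; i++)
            for (uint32_t c = 0; c < runs[i].sample_count && sample < nb_samples; c++)
                per_sample_gdi[sample++] = runs[i].group_description_index;
    }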
With that information, we can build a heuristic using the presentation
order.
A few things needed to be introduced in this commit:
1. min_sample_duration is extracted from the stts: we need the minimal
step between samples in order to step the PTS backward to a valid point
2. In order to avoid systematically looping over the ctts table during a
seek, we build an expanded list of sample offsets which will be used
to translate from DTS to PTS
3. An open_key_samples index to keep track of all the non-IDR key
frames; for now it only supports HEVC CRA frames. We should probably
add BLA frames as well, but I don't have any samples, so I preferred to
leave that for later
It is entirely possible I missed something obvious in my approach, but I
couldn't come up with a better solution. Also, as mentioned in the diff,
we could optimize is_open_key_sample(), but the linear scan overhead
should be fine for now since it only happens on seek events.
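A sketch of what such a lookup could look like (the actual mov.c code may
differ in names and details):

    /* Linear search over the open_key_samples index; a binary search
     * would also work, but seeks are rare enough not to matter. */
    static int is_open_key_sample_sketch(const int *open_key_samples,
                                         int nb_open_key_samples, int sample)
    {
        for (int i = 0; i < nb_open_key_samples; i++)
            if (open_key_samples[i] == sample)
                return 1;
        return 0;
    }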
Fixing this issue prevents sending broken packets to the decoder. With
the FFmpeg HEVC decoder the frames are skipped; with VideoToolbox the
frames glitch.
sgpd means Sample Group Description Box.
For now, only the sync grouping type is parsed, but the function can
easily be adjusted to support other flavours.
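For reference, reading the sync flavour boils down to something like the
following sketch (based on ISO/IEC 14496-12; names, version handling and
error checking are simplified and not the actual mov.c code):

    #include <stdint.h>
    #include "avio.h"               /* AVIOContext, avio_r8(), avio_rb32() */
    #include "libavutil/macros.h"   /* MKTAG */

    static int read_sgpd_sync_sketch(AVIOContext *pb)
    {
        int version = avio_r8(pb);
        avio_rb24(pb);                                /* flags               */
        if (avio_rl32(pb) != MKTAG('s','y','n','c'))  /* grouping_type       */
            return 0;                                 /* other types ignored */
        if (version == 1)
            avio_rb32(pb);                            /* default_length      */
        uint32_t entry_count = avio_rb32(pb);
        for (uint32_t i = 0; i < entry_count; i++) {
            /* each sync entry is one byte: 2 reserved bits + 6-bit NAL type */
            uint8_t nal_unit_type = avio_r8(pb) & 0x3f;
            (void)nal_unit_type;  /* a real parser would store these */
        }
        return 0;
    }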
The sbgp (Sample to Group Box) sync_group table built in the previous commit
contains references to this table through the group_description_index
field.