Callers always use a frame and cast it to AVPicture; change
ff_msrle_decode() to accept an AVFrame directly instead.
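A rough sketch of the resulting prototype change (the depth and
GetByteContext parameters are assumed from the existing interface,
not spelled out by this message):

    /* before: callers allocated an AVFrame and cast it */
    int ff_msrle_decode(AVCodecContext *avctx, AVPicture *pic,
                        int depth, GetByteContext *gb);

    /* after: the AVFrame is passed directly, no cast needed */
    int ff_msrle_decode(AVCodecContext *avctx, AVFrame *pic,
                        int depth, GetByteContext *gb);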
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This commit introduces the first step of adding support for the
Daala next-generation video codec to FFmpeg. Although still in
development, the codec is showing good progress, and work on it is
being exchanged through IETF drafts. The companies behind Daala are
also participating in the Alliance for Open Media, so it is likely
that elements from Daala will be used in whatever these
collaborations produce, or perhaps this codec itself could be the
result.
VP8E_UPD_ENTROPY, VP8E_UPD_REFERENCE and VP8E_USE_REFERENCE were
removed from libvpx, and the remaining values were never used here.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Zern <jzern@google.com>
It was replaced by avpriv_ac3_parse_header2.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
The parser only reads the DCA core sample rate, which is limited to
a maximum of 48000 Hz, while the X96 and HD extensions can increase
the sample rate up to 192000 Hz.
This change prevents the parser and decoder from fighting over the
sample rate, potentially confusing user applications. It also fixes
the sample rate displayed for >48000 Hz files with ffmpeg/ffprobe
when using libdcadec.
Fixes ticket #4397.
treat this the same as an over-sized superframe packet to break out of
the parser loop and allow the decoder to fail.
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: James Zern <jzern@google.com>
Commit 3a0a2f33a6 claims large performance
advantages for AV_QSORT over libc's qsort. I suspect the reason is
that libc's qsort (at least in non-LTO builds, like the typical
FFmpeg configuration) cannot inline the comparison callback:
https://stackoverflow.com/questions/5290695/is-there-any-way-a-c-c-compiler-can-inline-a-c-callback-function.
AV_QSORT has two things going for it:
1. The guaranteed inlining of qsort itself. This yields a negligible
boost that may be ignored.
2. The more serious possibility of getting the comparison function
inlined - this is likely responsible for the large boosts
reported.
There is a comment explaining that this is a place that could use some
performance improvement. Thus AV_QSORT is used to achieve that.
Benchmarks deemed unnecessary due to existing claims about AV_QSORT.
Tested with FATE.
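For illustration, a minimal sketch of the usage pattern (the array
and comparison function here are hypothetical; AV_QSORT itself is
from libavutil/qsort.h):

    #include "libavutil/macros.h"
    #include "libavutil/qsort.h"

    /* hypothetical comparison callback; unlike a function pointer
     * handed to libc's qsort(), the macro expansion lets the
     * compiler inline this into the sort loop */
    static int cmp_int(const void *a, const void *b)
    {
        return FFDIFFSIGN(*(const int *)a, *(const int *)b);
    }

    static void sort_values(int *values, int n)
    {
        AV_QSORT(values, n, int, cmp_int);
    }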
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
When the encoder is fed fewer frames than its delay, the picture list looks like { NULL, NULL, ..., frame, frame, frame }. When flushing the encoder (input frame == NULL), we need to ensure the picture list is shifted enough so that we do not return an empty packet, which would signal that the encoder has finished while it has not encoded any frame.
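A minimal standalone sketch of the shifting idea (the names and
list handling are illustrative, not the actual mpegvideo code):

    #include <stddef.h>
    #include <string.h>

    /* With B-frame delay, the reorder list can look like
     * { NULL, NULL, ..., frame, frame, frame }. On flush, shift the
     * leading NULLs out so a real frame (if any) reaches the front
     * before deciding whether to emit a packet. */
    static void shift_past_leading_nulls(void *list[], size_t n)
    {
        size_t shifts = 0;

        while (shifts < n && list[0] == NULL) {
            memmove(&list[0], &list[1], (n - 1) * sizeof(list[0]));
            list[n - 1] = NULL;
            shifts++;
        }
    }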
Before the patch, the command:
'./ffmpeg_g -loglevel debug -f lavfi -i "testsrc=d=0.01" -bf 2 -vcodec mpeg2video out.mxf' prints:
Output stream #0:0 (video): 1 frames encoded; 0 packets muxed (0 bytes);
After:
Output stream #0:0 (video): 1 frames encoded; 1 packets muxed (8058 bytes);
Relates to ticket #4817.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
There were some errors in the calculation, as well as an entire
extra loop just to find the gain coefficient. Merge the
two loops.
Thanks to @ubitux for the suggestions and testing.
Changes:
- strongly prefer dual filters to a single filter
- be less strict about using 2 filters w.r.t. energy
- scrap the use of threshold and spread, as they were useless
- use odd-shaped windows to set the filter direction
- use 4 bits instead of 3 bits for short windows
- simplify and reduce the main loop to a single level
- add stricter regulations for short windows
All of this now makes the TNS implementation operate
as well as it can, and it definitely shows. The frequency
thresholds are now even better defined by looking at
the spectrals, and the overall sound has been improved at
the price of just a few bits that are well worth it.
Too much effort and work has been spent on such a simple function.
It simply refuses to work as the specifications say: the
transformation is NOT lossless and creates some crackling and
distortion.
Therefore, disable it by default and add a couple of warnings to
scare people away from touching it or wasting their time the
way I did.
The decoder does this, so I guess we had better do it as well.
Looking at spectrals, though, there's barely any difference between
the autoregressive and the moving average filters.
It didn't work out because of the exceptions that needed to be made
for the "-1" cases, and it was overall more confusing than just
manually checking and setting options for each profile.
Long Term Prediction allows for prediction of spectral coefficients
from the previously decoded time-domain samples. This feature
works well with harmonic content lasting 2 or more frames, like
speech (human or non-human), piano music, or any constant tones at
very low bitrates.
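A rough sketch of the idea, with illustrative names (lag and coef
stand in for the codec's actual syntax elements and quantized
values, which this message does not spell out):

    /* Minimal illustration: predict frame_len samples as a delayed,
     * scaled copy of previously reconstructed time-domain output.
     * history_end points one past the newest reconstructed sample;
     * assumes lag >= frame_len so only past samples are read. */
    static void ltp_predict(float *pred, const float *history_end,
                            int lag, float coef, int frame_len)
    {
        for (int i = 0; i < frame_len; i++)
            pred[i] = coef * history_end[i - lag];
        /* the predictor is then transformed to the spectral domain
         * and subtracted from the coefficients in bands where the
         * prediction actually helps */
    }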
It should be noted that the current coder is highly efficient, and
the rate control system is unable to encode files at extremely
low bitrates (less than 14kbps seems to be impossible), so this
extension isn't capable of optimum operation. A dramatic difference
is observable with some types of audio and speech, but for the most
part the audible differences are subtle. The spectrum looks better,
however, so the encoder is able to harvest the additional bits that
this feature provides, should the user choose to enable it. So
it's best to enable this feature only when encoding at the absolute
lowest bitrate that the encoder is capable of.
Apparently it was set to be enabled by default, but after the
profile commits it was reverted to off by default without my
noticing.
It works well, so (re)enable it.