It is done later in ff_mpv_frame_start() (and nothing uses
current_picture_ptr between the point where it was set and ff_mpv_frame_start()).
(The vsynth*-h263-obmc ref files change because
the call to ff_find_unused_picture() now happens after the older
pictures have been unreferenced in ff_mpv_frame_start(),
so that their slots in the picture array can be reused
immediately; the obmc code is somehow buggy and changes its output
depending on the earlier contents of the motion_val buffer.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These tables are only used by encoders and only for the current picture;
ergo they need not be put into the picture at all, but can instead live in
the encoder's context. They also don't need to be refcounted,
because there is only one owner.
In contrast to this, the earlier code refcounts them, which
incurs unnecessary overhead. These references are not unreferenced
in ff_mpeg_unref_picture() (they are kept in order to have something
like a buffer pool), so several buffers are kept alive at the same
time although only one is needed, thereby wasting memory.
The code also propagates references to other pictures not part of
the pictures array (namely the copies of the current/next/last picture
in the MpegEncContext, which get references of their own). These
references are not unreferenced in ff_mpeg_unref_picture() (the
buffers are probably kept in order to have something like a pool),
yet if the current picture is a B-frame, it gets unreferenced
at the end of ff_mpv_encode_picture() and its slot in the picture
array will therefore be reused the next time; but the copy of the
current picture still holds its references, and therefore
these buffers will have to be duplicated in order to make them writable
in the next call to ff_mpv_encode_picture(). This is of course
unnecessary.
Finally, ff_find_unused_picture() is supposed to just return
any unused picture and the code is supposed to work with it;
yet for the vsynth*-mpeg4-adap tests the result depends upon
the content of these buffers; given that this patchset
changes the content of these buffers (the initial content is now
the state of these buffers after encoding the last frame;
before this patch the buffers used came from the last picture
that occupied the same slot in the picture array) their ref-files
needed to be changed. This points to a bug somewhere (if one removes
the initialization, one gets uninitialized reads in
adaptive_quantization in ratecontrol.c).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
For 4:2:2 frames we should not use a hard-coded 8 to calculate the MB size
for the UV planes. The chroma shift should be taken into consideration to be
compatible with the different sampling formats.
The error is reported by the FATE tests when av_cpu_max_align() returns 64
on platforms supporting AVX512. This is a hidden error that was only
exposed by commit 17a59a634c.
mpeg2enc has a mechanism to reuse frames. When it computes the SSE (sum of
squared errors) of the current MB, the reconstructed MB is written into the
space of the previous MB, so that memory can be saved. However, if the
alignment is 64, the frame is shared somewhere else, so the frame cannot be
reused and a new frame is created to store the reconstructed data. Because
the MB height is wrong when computing the SSE of a 4:2:2 frame, starting from
the second row of the macroblock, changed data is read when the frame is
reused (we need to read 16 rows rather than 8 if the frame is 4:2:2), and
unchanged data is read when the frame is not reused (a new frame is created,
so the original frame is not modified).
That is why commit 17a59a634c exposes this issue: it added
av_cpu_max_align(), which returns 64 on platforms supporting AVX512; this
leads to creating a new frame in mpeg2enc, and that leads to the different
outputs.
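A minimal sketch of the fix's idea, assuming the MpegEncContext field names
used by the encoder (s->chroma_y_shift being the vertical chroma shift of the
pixel format):

    /* Derive the chroma MB height from the chroma shift instead of a
     * hard-coded 8: 4:2:0 gives 16 >> 1 = 8 rows, 4:2:2 gives 16 >> 0 = 16. */
    const int mb_block_height = 16 >> s->chroma_y_shift;
    /* The SSE over the UV planes then has to cover mb_block_height rows,
     * otherwise only the first 8 of 16 chroma rows are compared for 4:2:2. */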
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
log2() remains; this can either be replaced by an integer implementation or
the table can be hardcoded if needed.
Tested-by: Anton Khirnov <anton@khirnov.net>
Tested-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Accidentally resurrected in fc49f22c3b
and 7711f19eda,
forgotten in 6ebc71847e and
1a6a088f7c or never needed
(filter-aemphasis).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These conversions appear to be exhibiting the same rounding error as the
rgbf32 formats were.
I separated the rounding value from the 16 and 128 offsets; I think it makes
it a little clearer.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This is utilized by various media ingests to figure out the bit
rate of the content being pushed towards them, so write it for
video, audio and subtitle tracks whenever at least one nonzero value
is available. It is only mentioned for timed metadata sample
descriptions in QTFF, so limit writing it to ISOBMFF (MODE_MP4) mode.
Updates the FATE tests whose results change due to the
20 extra bytes being written per track.
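For reference, the 20 bytes per track correspond to the BitRateBox ('btrt')
defined in ISO/IEC 14496-12; a sketch of its layout:

    /* BitRateBox ('btrt'): 8 bytes of box header + 12 bytes of payload. */
    struct BitRateBox {
        uint32_t size;          /* box size, here 20 */
        uint32_t type;          /* 'btrt' */
        uint32_t bufferSizeDB;  /* size of the decoding buffer, in bytes */
        uint32_t maxBitrate;    /* maximum bit rate, in bits per second  */
        uint32_t avgBitrate;    /* average bit rate, in bits per second  */
    };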
SMPTE 12M timecode can only count frames up to 39, because the tens-of-frames
value is stored in 2 bits. In order to resolve this, 50/60 fps SMPTE timecode
uses the field bit (which is the same bit as the phase correction bit) to
signal the least significant bit of the frame number in 50/60 fps timecode.
See SMPTE ST 12-1:2014 section 12.1.
Therefore we slightly change the format of the return value of
av_timecode_get_smpte_from_framenum and of AV_FRAME_DATA_S12M_TIMECODE and
start using the previously unused Phase Correction bit as the Field bit (as
the SMPTE standard suggests).
We add 50/60 fps support to av_timecode_get_smpte_from_framenum by calling
the recently added av_timecode_get_smpte function in it, which already
handles this properly.
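A simplified sketch of the approach (the real av_timecode_get_smpte also
packs hours, minutes, seconds and the drop-frame flag; fps, ff and tc are
placeholder names here):

    /* For 50/60 fps, halve the frame number and carry its least significant
     * bit in the field bit (the formerly unused phase correction bit). */
    if (fps == 50 || fps == 60) {
        tc |= (ff & 1) << 7;   /* field bit */
        ff >>= 1;              /* now fits the two-digit BCD frame field */
    }
    tc |= (ff / 10) << 28;     /* tens of frames  */
    tc |= (ff % 10) << 24;     /* units of frames */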
This change affects the decklink indev and the DV and MXF muxers. MXF has no
FATE test for 50/60 fps content; DV does, hence the changes there.
MediaInfo (a recent version) confirms that half-frame timecode must be
inserted into DV. MXFInspect confirms valid timecode insertion into the
System Item of MXF files. For MXF, also see EBU R122.
Note that for DV the field flag is not used, because in the HDV specs (SMPTE
370M) it is still defined as the biphase mark polarity correction flag. So it
should not matter that the DV muxer overrides the field bit.
Signed-off-by: Marton Balint <cus@passwd.hu>
The implementation of the tag tree did not
set the correct reset value for the encoder.
This led to inefficient tag trees being encoded.
This patch fixes the implementation of the
ff_tag_tree_zero() function.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This patch allows setting a compression ratio and
creating multiple layers. The user has to input a compression
ratio for each layer.
The per-layer compression ratio can be set as follows:
-layer_rates "r1,r2,...rn"
to create 'n' layers.
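For example (an illustrative invocation), -layer_rates "80,30,10" would
create three layers with per-layer compression ratios of 80, 30 and 10.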
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Since there is no information about the source format, "unspecified"
is the correct value to write here.
All tests using the MPEG-2 encoder are updated, as this changes the
header on all outputs.
As it gives excellent encoding gains at an insignificant speed increase
and passes FATE without problems, it should now be safe to enable by
default.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Allows getting a more realistic total bitrate (and estimated file size)
in avi_write_header. Previously a static default value of 200k was
assumed.
Adds an internal helper function for bitrate guessing.
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxer) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer allowed to
read. This leads to erratic behaviour depending on whether decoding or
encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
there being no clear rules for who can modify what and the different
processes being typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
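To illustrate the intended usage after this change (a minimal sketch with
error handling omitted; fmt_ctx and stream_index are assumed to come from the
caller), decoding now uses a caller-owned codec context filled from the
stream's AVCodecParameters:

    AVStream *st = fmt_ctx->streams[stream_index];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);

    /* Only the stream parameters are copied from the demuxer ... */
    avcodec_parameters_to_context(dec_ctx, st->codecpar);
    /* ... codec options and codec state live solely in the caller's context. */
    avcodec_open2(dec_ctx, dec, NULL);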
From
https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229%28v=vs.85%29.aspx:
"If biCompression equals BI_RGB and the bitmap uses 8 bpp or less, the
bitmap has a color table immediately following the BITMAPINFOHEADER
structure. The color table consists of an array of RGBQUAD values. The
size of the array is given by the biClrUsed member. If biClrUsed is
zero, the array contains the maximum number of colors for the given
bitdepth; that is, 2^biBitCount colors."
Nothing about "monochrome" here. Unfortunately, pal8 to monow conversion
seems a bit flaky, but that's another story.
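A small sketch of the rule quoted above (variable names mirror the
BITMAPINFOHEADER members):

    /* For BI_RGB bitmaps at 8 bpp or less, an RGBQUAD palette follows the
     * header; if biClrUsed is zero, it defaults to 2^biBitCount entries. */
    int pal_entries = 0;
    if (biCompression == BI_RGB && biBitCount <= 8)
        pal_entries = biClrUsed ? biClrUsed : 1 << biBitCount; /* e.g. 2 for 1 bpp */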
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>