After the AVFrame has been wrapped into a buffer,
it is owned by the buffer and must not be freed manually
any more. Yet this happens on subsequent errors.
This bug was introduced in 6ca43a9675.
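For illustration, the ownership rule at issue (a sketch; free_cb and the
surrounding calls are hypothetical):

    buf = av_buffer_create((uint8_t *)frame, sizeof(*frame), free_cb, NULL, 0);
    if (!buf) {
        av_frame_free(&frame);   /* not yet wrapped: freeing here is correct */
        return AVERROR(ENOMEM);
    }
    if ((ret = use_buffer(buf)) < 0) {
        av_buffer_unref(&buf);   /* frees the frame via free_cb */
        /* calling av_frame_free(&frame) as well would be a double free */
        return ret;
    }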
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
av_image_copy() accepts const uint8_t* const * as source;
lots of users have uint8_t* const * and therefore either
cast (the majority) or copy the array of pointers.
This commit changes this by adding a static inline wrapper
for av_image_copy() that casts between the two types
so that we do not need to add casts everywhere else.
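A minimal sketch of the idea (the wrapper name here is illustrative; the
real one lives in imgutils.h):

    static inline
    void image_copy_unconst_src(uint8_t *dst_data[4], const int dst_linesizes[4],
                                uint8_t * const src_data[4], const int src_linesizes[4],
                                enum AVPixelFormat pix_fmt, int width, int height)
    {
        /* Centralize the cast instead of repeating it at every call site. */
        av_image_copy(dst_data, dst_linesizes,
                      (const uint8_t * const *)src_data, src_linesizes,
                      pix_fmt, width, height);
    }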
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Broken in 0c6e5f321b.
Also add it to decklink_enc.cpp in order not to rely
on an implicit inclusion via libavfilter/ccfifo.h.
Reviewed-by: あんこ <mailcrystaldiskinfo@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
AVCodec is only ever used as an incomplete type (i.e. via a pointer
to an AVCodec) in avformat.h, and codec.h is not really part of the core
of avformat.h or libavformat; almost none of our internal users
make use of it (and none make use of hwcontext.h, which codec.h
implicitly includes). So switch to using struct AVCodec, but continue
to include codec.h for external users for compatibility.
Also, do the same for AVFrame and frame.h, which is implicitly included
by codec.h (via lavu/hwcontext.h).
Also, remove an unnecessary inclusion of <time.h>.
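For illustration only (not the actual avformat.h contents), a pointer to
an incomplete type is all that is needed:

    struct AVCodec;                 /* forward declaration, no codec.h required */
    struct AVFrame;

    struct DemoContext {
        const struct AVCodec *preferred_codec; /* pointers to incomplete types are fine */
        struct AVFrame       *cached_frame;
    };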
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Including winsock2.h or windows.h without WIN32_LEAN_AND_MEAN causes
bzlib.h to parse as nonsense, due to an instance of "#define small char"
in rpcndr.h.
See:
https://stackoverflow.com/a/27794577
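The usual shape of the workaround, roughly:

    /* Trim the Windows headers so rpcndr.h (and its "#define small char")
     * never gets pulled in before bzlib.h. */
    #define WIN32_LEAN_AND_MEAN
    #include <winsock2.h>
    #include <bzlib.h>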
Signed-off-by: L. E. Segovia <amy@amyspark.me>
Signed-off-by: Martin Storsjö <martin@martin.st>
The glcontext member was added under CONFIG_SDL2 only, which breaks the
build when SDL2 is disabled:
libavdevice/opengl_enc.c: In function ‘opengl_draw’:
libavdevice/opengl_enc.c:1204:15: error: ‘OpenGLContext’ has no member
named ‘glcontext’
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Since “2d924b3a63 fftools/ffmpeg: move each muxer to a separate thread”,
opengl_write_packet() is called from a different thread than
opengl_write_header() and would do nothing for lack of a selected context.
BMDTimeValue is defined as LONGLONG on Windows, but int64_t on Linux/Mac.
Fixes format string warnings:
libavdevice/decklink_enc.cpp: In function ‘void construct_cc(AVFormatContext*, decklink_ctx*, AVPacket*, klvanc_line_set_s*)’:
libavdevice/decklink_enc.cpp:424:48: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 4 has type ‘BMDTimeValue {aka long int}’ [-Wformat=]
ctx->bmd_tb_num, ctx->bmd_tb_den);
~~~~~~~~~~~~~~~ ^
libavdevice/decklink_enc.cpp:424:48: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 5 has type ‘BMDTimeValue {aka long int}’ [-Wformat=]
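One portable way to keep such log lines warning-free is a cast plus
PRId64 (a sketch, not necessarily the exact change applied):

    #include <inttypes.h>

    av_log(avctx, AV_LOG_DEBUG, "time base %" PRId64 "/%" PRId64 "\n",
           (int64_t)ctx->bmd_tb_num, (int64_t)ctx->bmd_tb_den);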
Signed-off-by: Marton Balint <cus@passwd.hu>
Support decoding and embedding VANC packets delivered via SMPTE 2038
into the SDI output. We leverage an intermediate queue because
data packets are announced separately from video but we need to embed
the data into the video frame when it is output.
Note that this patch has some additional abstraction for data
streams in general as opposed to just SMPTE 2038 packets. This is
because subsequent patches will introduce support for other
data codecs.
Thanks to Marton Balint for review/feedback.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing queue initialization function would always set its
maximum queue size to ctx->queue_size. But because we are introducing
more queues we may want the sizes to differ between them.
Move the specification of the queue size into an argument, which can
be passed from the caller.
This patch makes no functional change to the behavior. It is being
made to accommodate Marton Balint's request to split out the queue
size for the new VANC queue being introduced in a later patch.
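Sketch of the resulting signature (struct and field names are
illustrative, not the exact decklink code):

    static void decklink_queue_init(AVFormatContext *avctx, struct DecklinkQueue *q,
                                    int64_t queue_size)   /* was: ctx->queue_size */
    {
        /* ... existing initialization ... */
        q->max_q_size = queue_size;   /* hypothetical field name */
    }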
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing DecklinkQueue implementation was using the PacketList
structure but wasn't using the standard avpriv_packet_list_get and
avpriv_packet_list_put functions. Convert to using them so we
eliminate the duplicate logic, per Marton Balint's suggestion.
Updated to reflect feedback from Marton Balint provided on 05/11/23.
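For reference, the pattern with the lavc helpers looks roughly like this
(a sketch, not the decklink code itself):

    #include "libavcodec/packet_internal.h"

    typedef struct DemoQueue {
        PacketList list;          /* head/tail managed by the avpriv helpers */
        /* mutex, size accounting, ... */
    } DemoQueue;

    static int demo_queue_put(DemoQueue *q, AVPacket *pkt)
    {
        /* moves the packet's contents into the list on success */
        return avpriv_packet_list_put(&q->list, pkt, NULL, 0);
    }

    static int demo_queue_get(DemoQueue *q, AVPacket *pkt)
    {
        /* 0 on success, AVERROR(EAGAIN) when the list is empty */
        return avpriv_packet_list_get(&q->list, pkt);
    }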
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This is not public API, so it has no need for alloc() and free()
functions. The struct can reside on the stack.
Signed-off-by: James Almer <jamrial@gmail.com>
Move the AVPacketQueue functionality that is currently only used
for the decklink decode module into decklink_common, so it can
be shared by the decklink encoder (i.e. for VANC insertion when
we receive data packets separate from video).
The threadsafe queue used within the decklink module was named
"AVPacketQueue" which implies that it is part of the public API,
which it is not.
Rename the functions and the name of the queue struct to make
clear it is used exclusively by decklink, per Marton Balint's
suggestion.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Unlike other cases where the closed captions are embedded in the
video stream as MPEG-2 userdata or H.264 SEI data, with MOV files
the captions are often found on a separate "e608" subtitle track.
Add support for playout of such files, leveraging the new ccfifo
mechanism to ensure that they are embedded into VANC at the correct
rate (since e608 packets often contain batches of multiple 608 pairs).
Note this patch includes a new file named libavdevice/ccfifo.c, which
allows the ccfifo functionality in libavfilter to be reused even if
doing shared builds. This is the same approach used for log2_tab.c.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Extend the decklink output to include support for compressed AC-3,
encapsulated using the SMPTE ST 337:2015 standard.
This functionality can be exercised by using the "copy" codec when
the input audio stream is AC-3. For example:
./ffmpeg -i ~/foo.ts -codec:a copy -f decklink 'UltraStudio Mini Monitor'
Note that the default behavior continues to be PCM output,
which means without specifying the copy codec a stream containing
AC-3 will be decoded and downmixed to stereo audio before output.
Thanks to Marton Balint for providing feedback.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Implement support for including AFD in decklink output when putting
out 10-bit VANC data.
Updated to reflect feedback in 2018 from Marton Balint <cus@passwd.hu>,
Carl Eugen Hoyos <ceffmpeg@gmail.com> and Aaron Levinson
<alevinsn_dev@levland.net>. Also includes fixes to set the AR field
based on the SAR, as well as now sending the AFD info in both fields
for interlaced formats.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
These fields are supposed to store information about the packet the
frame was decoded from, specifically the byte offset it was stored at
and its size.
However,
- the fields are highly ad-hoc - there is no strong reason why
specifically those (and not any other) packet properties should have a
dedicated field in AVFrame; unlike e.g. the timestamps, there is no
fundamental link between coded packet offset/size and decoded frames
- they only make sense for frames produced by decoding demuxed packets,
and even then it is not always the case that the encoded data was
stored in the file as a contiguous sequence of bytes (in order for pos
to be well-defined)
- pkt_pos was added without much explanation, apparently to allow
passthrough of this information through lavfi in order to handle byte
seeking in ffplay. That is now implemented using arbitrary user data
passthrough in AVFrame.opaque_ref.
- several filters use pkt_pos as a variable available to user-supplied
expressions, but there seems to be no established motivation for using it.
- pkt_size was added for use in ffprobe, but that too is now handled
without using this field. Additionally, the values of this field
produced by libavcodec are flawed, as described in the previous
ffprobe conversion commit.
In summary - these fields are ill-defined and insufficiently motivated,
so deprecate them.
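For context, the opaque passthrough that covers the old pkt_pos use case
amounts to roughly the following (simplified sketch using pkt->opaque and
AV_CODEC_FLAG_COPY_OPAQUE):

    dec_ctx->flags |= AV_CODEC_FLAG_COPY_OPAQUE;   /* at decoder setup */

    pkt->opaque = (void *)(intptr_t)pkt->pos;      /* stash the byte offset */
    avcodec_send_packet(dec_ctx, pkt);

    avcodec_receive_frame(dec_ctx, frame);
    int64_t pos = (intptr_t)frame->opaque;         /* reappears on the frame */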
Due to refactoring, the ctx/cctx variables are never actually used
in ff_decklink_write_packet(), so just remove them.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
ff_decklink_write_packet() was always caching the last pts
received, to be used when calling StopScheduledPlayback(). However
because audio and video are on different timebases and the call to
StopScheduledPlayback() expects the video timebase, we'll end up
sending a value in the wrong timebase to the stop routine if the last packet
received contained audio.
Move the setting of last_pts to just be for the video stream.
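In other words, the bookkeeping becomes roughly (field names illustrative):

    if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        ctx->last_pts = FFMAX(ctx->last_pts, pkt->pts);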
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing code assumed that the first frame received by the decklink
output would always be PTS zero. However if running in other timing
modes than the default of CBR, items such as frame dropping at the
beginning may result in starting at a non-zero PTS.
For example, in our setup because we discard probing data and run
with "-vsync 2" the first video frame scheduled to the decklink
output will have a PTS around 170. Scheduling frames too far into
the future will either fail or cause a backlog of frames scheduled
far enough into the future that the entire pipeline will stall.
Issue can be reproduced with the following command-line:
./ffmpeg -copyts -i foo.ts -f decklink -vcodec v210 -ac 2 'DeckLink Duo (4)'
Keep track of the PTS of the first frame received, so that when
we start playback we can provide that value to the decklink
driver.
Thanks to Marton Balint for review and suggestion to use
AV_NOPTS_VALUE rather than zero for the initial value.
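Conceptually (a sketch; field names are illustrative):

    if (ctx->first_pts == AV_NOPTS_VALUE)
        ctx->first_pts = pkt->pts;          /* remember the first video PTS */

    /* later, when starting playback, use it instead of assuming zero */
    ctx->dlo->StartScheduledPlayback(ctx->first_pts * ctx->bmd_tb_num,
                                     ctx->bmd_tb_den, 1.0);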
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This commit does for AVOutputFormat what commit
20f9727018 did for AVCodec:
It adds a new type FFOutputFormat, moves all the internals
of AVOutputFormat to it and adds a now reduced AVOutputFormat
as first member.
This does not affect/improve the extensibility of either public
or private fields for muxers (it is still a mess due to lavd).
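Schematically, the new layout is:

    typedef struct FFOutputFormat {
        /* The public AVOutputFormat comes first, so the muxer-internal
         * struct can be used wherever an AVOutputFormat is expected. */
        AVOutputFormat p;
        /* ...formerly public internals (write_header, write_packet, ...)... */
    } FFOutputFormat;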
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The general demuxing API uses parsers and decoders. Therefore
FFStream contains pointers to AVCodecContexts and
AVCodecParserContext and lavf/internal.h includes lavc/avcodec.h.
Yet actually only a few files really use these; and it is best
when this number stays small. Therefore this commit uses opaque
structs in lavf/internal.h for these contexts and stops including
avcodec.h.
This also avoids including lavc/codec_desc.h implicitly. All other
headers remain implicitly included (mostly through codec.h).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This avoids an extra copy of potentially quite big video frames.
Instead of copying the entire frame's data into a rawvideo packet, it
packs the frame into a wrapped avframe packet and passes it through
as-is.
Unfortunately, wrapped avframes are set up to be video frames, so the
audio frames continue to be copied.
Additionally, this enables passing through video frames that previously
were impossible to process, like hardware frames or other special
formats that couldn't be packed into a rawvideo packet.
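The packing boils down to handing the frame pointer to the packet with a
custom free callback, roughly (sketch):

    static void free_frame_wrapper(void *opaque, uint8_t *data)
    {
        AVFrame *frame = (AVFrame *)data;
        av_frame_free(&frame);
    }

    /* ... */
    pkt->buf = av_buffer_create((uint8_t *)frame, sizeof(*frame),
                                free_frame_wrapper, NULL, AV_BUFFER_FLAG_READONLY);
    if (!pkt->buf)
        return AVERROR(ENOMEM);
    pkt->data   = (uint8_t *)frame;
    pkt->size   = sizeof(*frame);
    pkt->flags |= AV_PKT_FLAG_TRUSTED;   /* wrapped avframes need a trusted packet */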
According to the API docs, avdevice_list_devices(), avdevice_list_input_sources()
and avdevice_list_output_sinks() should return the number of autodetected
devices on success. This is redundant with AVDeviceInfoList->nb_devices so it
was not noticed earlier that none of the underlying device list functions work
like that.
Let's fix it in the generic code to bring it in line with the API docs.
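The generic fix is essentially (sketch of the relevant tail of the function):

    /* after the device's get_device_list() callback succeeded */
    if (ret >= 0)
        ret = (*device_list)->nb_devices;
    return ret;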
Fixes ticket #9820.
Signed-off-by: Marton Balint <cus@passwd.hu>
The packets given to muxers need not be writable,
so it is best to access them via const uint8_t*.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Don't assume each sample is one byte in size. Doing so results in wrong and
occasionally non-monotonically-increasing timestamps.
Fix nearby cosmetic typo.
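The duration/timestamp math then has to use the real frame size, along
these lines (illustrative names, not the exact patch):

    int frame_size     = nb_channels * av_get_bytes_per_sample(sample_fmt);
    int64_t nb_samples = pkt->size / frame_size;
    pkt->duration      = av_rescale_q(nb_samples,
                                      (AVRational){ 1, sample_rate },
                                      st->time_base);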
Signed-off-by: Marton Balint <cus@passwd.hu>
Reduces the fragment size from the PulseAudio default of 2 sec to 50 ms.
This also has an effect on the size of the returned frames, which will be
around 50 ms as well, making timestamps more accurate.
This should fix the regression in ticket #9776.
Pulseaudio timestamps for monitor sources are still pretty inaccurate for me,
but I don't see how else we should query latencies from the library.
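With the PulseAudio API this amounts to passing a pa_buffer_attr when
connecting the record stream, roughly:

    /* ~50 ms fragments; 'ss' is the negotiated pa_sample_spec (sketch) */
    pa_buffer_attr attr = {
        .maxlength = (uint32_t)-1,
        .fragsize  = pa_usec_to_bytes(50 * 1000, &ss),
    };
    pa_stream_connect_record(stream, device, &attr, PA_STREAM_ADJUST_LATENCY);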
Signed-off-by: Marton Balint <cus@passwd.hu>