This can be achieved by using AV_OPT_TYPE_FLAGS instead of
AV_OPT_TYPE_STRING. It also avoids strcmp() when accessing
the option.
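A minimal sketch of the pattern, with a hypothetical context struct and option names (not the committed ones):

    #include <limits.h>
    #include <stddef.h>
    #include "libavutil/opt.h"

    typedef struct DemoContext {
        const AVClass *class;
        int mode;            /* bitmask; with AV_OPT_TYPE_STRING this was a char * */
    } DemoContext;

    #define OFFSET(x) offsetof(DemoContext, x)
    #define E AV_OPT_FLAG_ENCODING_PARAM
    static const AVOption demo_options[] = {
        { "mode", "output mode", OFFSET(mode), AV_OPT_TYPE_FLAGS,
          { .i64 = 0 }, 0, INT_MAX, E, .unit = "mode" },
        { "fast", "fast path",   0, AV_OPT_TYPE_CONST,
          { .i64 = 1 }, INT_MIN, INT_MAX, E, .unit = "mode" },
        { NULL }
    };

    /* Accessing the option becomes a bit test instead of a strcmp(): */
    static inline int fast_requested(const DemoContext *s) { return s->mode & 1; }
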
Reviewed-by: Stefano Sabatini <stefasab@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It implements BSD-specific support for very old analog capture cards,
which are highly unlikely to be useful today. After being added in 2005,
there were never any commits to it beyond compilation fixes and generic
maintenance. There have also been zero trac tickets for this device, and
the only related web search result I found concludes that it does not
work.
The code also does some unacceptable things, like messing with signal
handlers and storing its state in global variables.
Unnecessary since acf63d5350adeae551d412db699f8ca03f7e76b9;
also avoids relocations.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Converting from an integer to HWND (which is a pointer) requires
an explicit cast, otherwise Clang errors out like this:
src/libavdevice/gdigrab.c:280:14: error: incompatible integer to pointer conversion assigning to 'HWND' (aka 'struct HWND__ *') from 'long' [-Wint-conversion]
280 | hwnd = strtol(name, &p, 0);
| ^ ~~~~~~~~~~~~~~~~~~~
(With GCC and MSVC, this was a mere warning, but with recent Clang,
this is an error.)
After adding a cast, all compilers also warn something like this:
src/libavdevice/gdigrab.c:280:16: warning: cast to 'HWND' (aka 'struct HWND__ *') from smaller integer type 'long' [-Wint-to-pointer-cast]
280 | hwnd = (HWND) strtol(name, &p, 0);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
On Windows, long types are 32 bit, so to get a usable pointer, we
need to use long long. And interpret it as unsigned long long
while at it - i.e. using strtoull.
Finally, right above it, the code triggered the following warning:
src/libavdevice/gdigrab.c:278:15: warning: mixing declarations and code is incompatible with standards before C99 [-Wdeclaration-after-statement]
278 | char *p;
| ^
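A minimal sketch combining the three fixes described above (the identifier names and the HWND stand-in are illustrative, not the committed code):

    #include <stdint.h>
    #include <stdlib.h>

    typedef void *HWND;                     /* stand-in for the Windows typedef */

    static HWND parse_hwnd(const char *name)
    {
        char *p;                            /* declared before any statement */
        HWND hwnd = (HWND)(uintptr_t)strtoull(name, &p, 0);
        return hwnd;
    }
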
Signed-off-by: Martin Storsjö <martin@martin.st>
x11grab can capture windows by their ID, but gdigrab can only capture
windows by their names, internally calling FindWindowW to look up the
window handle.
This patch simply allows the user to specify a window handle directly.
Signed-off-by: Lena <lena@nihil.gay>
<OS>_VERSION_MAX_ALLOWED indicates what version is available in
the SDK, while <OS>_VERSION_MIN_REQUIRED is the version we can
assume is available, i.e. similar to what is set with e.g.
-miphoneos-version-min on the command line.
This fixes build errors like these:
src/libavdevice/avfoundation.m:788:37: error: 'AVCaptureDeviceTypeContinuityCamera' is only available on macOS 14.0 or newer [-Werror,-Wunguarded-availability-new]
[deviceTypes addObject: AVCaptureDeviceTypeContinuityCamera];
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/AVFoundation.framework/Headers/AVCaptureDevice.h:551:38: note: 'AVCaptureDeviceTypeContinuityCamera' has been marked as being introduced in macOS 14.0 here, but the deployment target is macOS 13.0.0
AVF_EXPORT AVCaptureDeviceType const AVCaptureDeviceTypeContinuityCamera API_AVAILABLE(macos(14.0), ios(17.0), macCatalyst(17.0), tvos(17.0)) API_UNAVAILABLE(visionos) API_UNAVAILABLE(watchos);
^
Alternatively, we could use these more modern APIs, if enclosed
in suitable @available() checks.
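For reference, a hedged sketch of the compile-time side of such a guard (Apple-only header; the version value is illustrative):

    #include <Availability.h>

    #if defined(__MAC_OS_X_VERSION_MIN_REQUIRED) && __MAC_OS_X_VERSION_MIN_REQUIRED >= 140000
    /* Deployment target is at least macOS 14.0: symbols introduced there,
     * such as AVCaptureDeviceTypeContinuityCamera, can be used unconditionally. */
    #else
    /* Otherwise omit them, or guard their use with @available(macOS 14.0, *). */
    #endif
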
Building for the macOS platform, the compiler warns: 'kAudioObjectPropertyElementMaster' is deprecated in macOS 12.0
Signed-off-by: xufuji456 <839789740@qq.com>
Building for the iOS platform, the compiler warns: "'devicesWithMediaType:' is deprecated: first deprecated in iOS 10.0 - Use AVCaptureDeviceDiscoverySession instead"
Signed-off-by: xufuji456 <839789740@qq.com>
Deprecate AVStream.side_data and its helpers in favor of the AVStream's
codecpar.coded_side_data.
This will considerably simplify the propagation of global side data to decoders
and from encoders. Instead of having to do it inside packets, it will be
available during init().
Global and frame specific side data will therefore be distinct.
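A minimal sketch of the replacement pattern on the demuxing side (the helper name is hypothetical):

    #include "libavcodec/packet.h"
    #include "libavformat/avformat.h"

    static const AVPacketSideData *stream_display_matrix(const AVStream *st)
    {
        /* global side data now lives in the codec parameters */
        return av_packet_side_data_get(st->codecpar->coded_side_data,
                                       st->codecpar->nb_coded_side_data,
                                       AV_PKT_DATA_DISPLAYMATRIX);
    }
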
Signed-off-by: James Almer <jamrial@gmail.com>
It is of no value to the user, because every muxer can always
be flushed with a NULL packet. As its documentation shows
("If not set, the muxer will not receive a NULL packet in
the write_packet function") it is actually an internal flag
that has been publicly exposed because there was no internal
flags field for output formats for a long time. But now there is
and so use it by replacing the public flag with a private one.
Reviewed-by: James Almer <jamrial@gmail.com>
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Reviewed-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
After the AVFrame has been wrapped into a buffer,
it is owned by the buffer and must not be freed manually
any more. Yet this happens on subsequent errors.
This bug was introduced in 6ca43a9675.
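A hedged sketch of the ownership rule involved (not the decklink code itself):

    #include "libavutil/buffer.h"
    #include "libavutil/frame.h"

    static void free_frame_cb(void *opaque, uint8_t *data)
    {
        AVFrame *frame = (AVFrame *)data;
        av_frame_free(&frame);
    }

    static AVBufferRef *wrap_frame(AVFrame *frame)
    {
        AVBufferRef *buf = av_buffer_create((uint8_t *)frame, sizeof(*frame),
                                            free_frame_cb, NULL, 0);
        if (!buf)
            av_frame_free(&frame);  /* wrapping failed: the frame is still ours */
        /* On success the buffer owns the frame; later errors must unref buf
         * instead of freeing the frame again. */
        return buf;
    }
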
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
av_image_copy() accepts const uint8_t* const * as source;
lots of users have uint8_t* const * and therefore either
cast (the majority) or copy the array of pointers.
This commit changes this by adding a static inline wrapper
for av_image_copy() that casts between the two types
so that we do not need to add casts everywhere else.
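A hedged sketch of such a wrapper (the name here is hypothetical, not necessarily the one the commit adds):

    #include "libavutil/imgutils.h"

    static inline void image_copy_wrap(uint8_t * const dst_data[4], const int dst_linesizes[4],
                                       uint8_t * const src_data[4], const int src_linesizes[4],
                                       enum AVPixelFormat pix_fmt, int width, int height)
    {
        /* the cast lives in one place, so callers need none of their own */
        av_image_copy(dst_data, dst_linesizes,
                      (const uint8_t * const *)src_data, src_linesizes,
                      pix_fmt, width, height);
    }
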
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Broken in 0c6e5f321b.
Also add it to decklink_enc.cpp in order not to rely
on an implicit inclusion via libavfilter/ccfifo.h.
Reviewed-by: あんこ <mailcrystaldiskinfo@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
AVCodec is only ever used as an incomplete type (i.e. via a pointer
to an AVCodec) in avformat.h and it is not really part of the core
of avformat.h or libavformat; almost none of our internal users
make use of it (and none make use of hwcontext.h, which is implicitly
included). So switch to use struct AVCodec, but continue to include
codec.h for external users for compatibility.
Also, do the same for AVFrame and frame.h, which is implicitly included
by codec.h (via lavu/hwcontext.h).
Also, remove an unnecessary inclusion of <time.h>.
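A minimal sketch of the header pattern (the struct and function names are illustrative):

    struct AVCodec;                               /* forward declaration, no codec.h */

    typedef struct DemoMuxContext {
        const struct AVCodec *video_codec;        /* pointer to incomplete type is fine */
    } DemoMuxContext;

    const struct AVCodec *demo_guess_codec(void); /* prototypes need no definition either */
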
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Including winsock2.h or windows.h without WIN32_LEAN_AND_MEAN causes
bzlib.h to parse as nonsense, due to an instance of #define small char
in rpcndr.h.
See:
https://stackoverflow.com/a/27794577
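A minimal sketch of the resulting include order, assuming bzlib support is enabled:

    #define WIN32_LEAN_AND_MEAN   /* keeps rpcndr.h (and "#define small char") out */
    #include <winsock2.h>
    #include <bzlib.h>            /* its prototypes use a parameter named "small" */
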
Signed-off-by: L. E. Segovia <amy@amyspark.me>
Signed-off-by: Martin Storsjö <martin@martin.st>
The glcontext member was added under CONFIG_SDL2 only, leading to a
build error without SDL2:
libavdevice/opengl_enc.c: In function ‘opengl_draw’:
libavdevice/opengl_enc.c:1204:15: error: ‘OpenGLContext’ has no member named ‘glcontext’
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Since “2d924b3a63 fftools/ffmpeg: move each muxer to a separate thread”,
opengl_write_packet() is called from a different thread than
opengl_write_header() and would do nothing for lack of a selected context.
BMDTimeValue is defined as LONGLONG on Windows, but int64_t on Linux/Mac.
Fixes format string warnings:
libavdevice/decklink_enc.cpp: In function ‘void construct_cc(AVFormatContext*, decklink_ctx*, AVPacket*, klvanc_line_set_s*)’:
libavdevice/decklink_enc.cpp:424:48: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 4 has type ‘BMDTimeValue {aka long int}’ [-Wformat=]
ctx->bmd_tb_num, ctx->bmd_tb_den);
~~~~~~~~~~~~~~~ ^
libavdevice/decklink_enc.cpp:424:48: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 5 has type ‘BMDTimeValue {aka long int}’ [-Wformat=]
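One portable way to silence such warnings, as a hedged sketch rather than the exact committed change, is to cast to int64_t and print with PRId64; the BMDTimeValue typedef below exists only to keep the example self-contained:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t BMDTimeValue;   /* placeholder for the SDK typedef */

    static void log_timebase(BMDTimeValue num, BMDTimeValue den)
    {
        printf("timebase %" PRId64 "/%" PRId64 "\n", (int64_t)num, (int64_t)den);
    }
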
Signed-off-by: Marton Balint <cus@passwd.hu>
Support decoding and embedding VANC packets delivered via SMPTE 2038
into the SDI output. We leverage an intermediate queue because
data packets are announced separately from video but we need to embed
the data into the video frame when it is output.
Note that this patch has some additional abstraction for data
streams in general as opposed to just SMPTE 2038 packets. This is
because subsequent patches will introduce support for other
data codecs.
Thanks to Marton Balint for review/feedback.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing queue initialization function would always set its
maximum queue size to ctx->queue_size. But because we are introducing
more queues, we may want the sizes to differ between them.
Move the specification of the queue size into an argument, which can
be passed from the caller.
This patch makes no functional change to the behavior. It is being
made to accommodate Marton Balint's request to split out the queue
size for the new VANC queue being introduced in a later patch.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing DecklinkQueue implementation was using the PacketList
structure but wasn't using the standard avpriv_packet_list_get and
avpriv_packet_list_put functions. Convert to using them so we
eliminate the duplicate logic, per Marton Balint's suggestion.
Updated to reflect feedback from Marton Balint provided on 05/11/23.
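A minimal sketch of the shared helpers taking over from the hand-rolled list code (locking and full error handling omitted):

    #include "libavcodec/packet.h"
    #include "libavcodec/packet_internal.h"

    static int queue_put(PacketList *list, AVPacket *pkt)
    {
        /* takes ownership of pkt's contents on success */
        return avpriv_packet_list_put(list, pkt, NULL, 0);
    }

    static int queue_get(PacketList *list, AVPacket *pkt)
    {
        /* returns AVERROR(EAGAIN) when the list is empty */
        return avpriv_packet_list_get(list, pkt);
    }
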
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This is not public API, so it has no need for alloc() and free()
functions. The struct can reside on the stack.
Signed-off-by: James Almer <jamrial@gmail.com>
Move the AVPacketQueue functionality that is currently only used
for the decklink decode module into decklink_common, so it can
be shared by the decklink encoder (i.e. for VANC insertion when
we receive data packets separate from video).
The threadsafe queue used within the decklink module was named
"AVPacketQueue" which implies that it is part of the public API,
which it is not.
Rename the functions and the name of the queue struct to make
clear it is used exclusively by decklink, per Marton Balint's
suggestion.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Unlike other cases where the closed captions are embedded in the
video stream as MPEG-2 userdata or H.264 SEI data, with MOV files
the captions are often found on a separate "e608" subtitle track.
Add support for playout of such files, leveraging the new ccfifo
mechanism to ensure that they are embedded into VANC at the correct
rate (since e608 packets often contain batches of multiple 608 pairs).
Note this patch includes a new file named libavdevice/ccfifo.c, which
allows the ccfifo functionality in libavfilter to be reused even if
doing shared builds. This is the same approach used for log2_tab.c.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Extend the decklink output to include support for compressed AC-3,
encapsulated using the SMPTE ST 337:2015 standard.
This functionality can be exercised by using the "copy" codec when
the input audio stream is AC-3. For example:
./ffmpeg -i ~/foo.ts -codec:a copy -f decklink 'UltraStudio Mini Monitor'
Note that the default behavior continues to be to do PCM output,
which means without specifying the copy codec a stream containing
AC-3 will be decoded and downmixed to stereo audio before output.
Thanks to Marton Balint for providing feedback.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Implement support for including AFD in decklink output when putting
out 10-bit VANC data.
Updated to reflect feedback in 2018 from Marton Balint <cus@passwd.hu>,
Carl Eugen Hoyos <ceffmpeg@gmail.com> and Aaron Levinson
<alevinsn_dev@levland.net>. Also includes fixes to set the AR field
based on the SAR, as well as now sending the AFD info in both fields
for interlaced formats.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
These fields are supposed to store information about the packet the
frame was decoded from, specifically the byte offset it was stored at
and its size.
However,
- the fields are highly ad-hoc - there is no strong reason why
specifically those (and not any other) packet properties should have a
dedicated field in AVFrame; unlike e.g. the timestamps, there is no
fundamental link between coded packet offset/size and decoded frames
- they only make sense for frames produced by decoding demuxed packets,
and even then it is not always the case that the encoded data was
stored in the file as a contiguous sequence of bytes (in order for pos
to be well-defined)
- pkt_pos was added without much explanation, apparently to allow
passthrough of this information through lavfi in order to handle byte
seeking in ffplay. That is now implemented using arbitrary user data
passthrough in AVFrame.opaque_ref.
- several filters use pkt_pos as a variable available to user-supplied
  expressions, but there seems to be no established motivation for using it.
- pkt_size was added for use in ffprobe, but that too is now handled
without using this field. Additionally, the values of this field
produced by libavcodec are flawed, as described in the previous
ffprobe conversion commit.
In summary - these fields are ill-defined and insufficiently motivated,
so deprecate them.
Due to refactoring, the ctx/cctx variables are never actually used
in ff_decklink_write_packet(), so just remove them.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
ff_decklink_write_packet() was always caching the last pts
received, to be used when calling StopScheduledPlayback(). However,
because audio and video are on different timebases and the call to
StopScheduledPlayback() expects the video timebase, we'll end up
sending a weird value to the stop routine if the last packet
received contained audio.
Move the setting of last_pts to just be for the video stream.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing code assumed that the first frame received by the decklink
output would always be PTS zero. However if running in other timing
modes than the default of CBR, items such as frame dropping at the
beginning may result in starting at a non-zero PTS.
For example, in our setup because we discard probing data and run
with "-vsync 2" the first video frame scheduled to the decklink
output will have a PTS around 170. Scheduling frames too far into
the future will either fail or cause a backlog of frames scheduled
far enough into the future that the entire pipeline will stall.
Issue can be reproduced with the following command-line:
./ffmpeg -copyts -i foo.ts -f decklink -vcodec v210 -ac 2 'DeckLink Duo (4)'
Keep track of the PTS of the first frame received, so that when we
start scheduled playback we can provide that value to the decklink
driver.
Thanks to Marton Balint for review and suggestion to use
AV_NOPTS_VALUE rather than zero for the initial value.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>