|
|
\input texinfo @c -*- texinfo -*- |
|
|
|
|
|
@settitle FFmpeg Documentation |
|
|
@titlepage |
|
|
@sp 7 |
|
|
@center @titlefont{FFmpeg Documentation} |
|
|
@sp 3 |
|
|
@end titlepage |
|
|
|
|
|
|
|
|
@chapter Introduction |
|
|
|
|
|
FFmpeg is a very fast video and audio converter. It can also grab from |
|
|
a live audio/video source. |
|
|
|
|
|
The command line interface is designed to be intuitive, in the sense
that ffmpeg tries to figure out all the parameters itself whenever
possible. You usually only have to give the target bitrate you want.
|
|
|
|
|
FFmpeg can also convert from any sample rate to any other, and resize |
|
|
video on the fly with a high quality polyphase filter. |
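
As a quick illustration (the file names here are hypothetical), the
following command converts an AVI file to MPEG while resizing the
video and resampling the audio; only the target bitrate really needs
to be chosen:

@example
ffmpeg -i input.avi -s 320x240 -ar 22050 -b 500 output.mpg
@end example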
|
|
|
|
|
@chapter Quick Start |
|
|
|
|
|
@section Video and Audio grabbing |
|
|
|
|
|
FFmpeg can use a video4linux compatible video source and any Open Sound |
|
|
System audio source: |
|
|
@example |
|
|
ffmpeg /tmp/out.mpg |
|
|
@end example |
|
|
|
|
|
Note that you must activate the right video source and channel before
launching ffmpeg. You can use any TV viewer, such as xawtv
(@url{http://bytesex.org/xawtv/}) by Gerd Knorr, which I find very
good. You must also set the audio recording levels correctly with a
standard mixer.
|
|
|
|
|
@section Video and Audio file format conversion |
|
|
|
|
|
* ffmpeg can use any supported file format and protocol as input: |
|
|
|
|
|
Examples: |
|
|
|
|
|
* You can input from YUV files: |
|
|
|
|
|
@example |
|
|
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg |
|
|
@end example |
|
|
|
|
|
It will use the files: |
|
|
@example |
|
|
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V, |
|
|
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc... |
|
|
@end example |
|
|
|
|
|
The Y files use twice the resolution of the U and V files. They are
raw files, without a header. They can be generated by all decent video
decoders. You must specify the size of the image with the '-s' option
if ffmpeg cannot guess it, as shown below.
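
For instance, assuming CIF-sized (352x288) frames, a hypothetical
command would be:

@example
ffmpeg -s 352x288 -i /tmp/test%d.Y /tmp/out.mpg
@end example

Note that '-s' is given before '-i' so that it applies to the input
files.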
|
|
|
|
|
* You can input from a RAW YUV420P file: |
|
|
|
|
|
@example |
|
|
ffmpeg -i /tmp/test.yuv /tmp/out.avi |
|
|
@end example |
|
|
|
|
|
A raw YUV420P file contains raw planar YUV data: for each frame, the
Y plane comes first, followed by the U and V planes, which have half
the vertical and half the horizontal resolution.
|
|
|
|
|
* You can output to a RAW YUV420P file: |
|
|
|
|
|
@example |
|
|
ffmpeg -i mydivx.avi hugefile.yuv
|
|
@end example |
|
|
|
|
|
* You can set several input files and output files: |
|
|
|
|
|
@example |
|
|
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg |
|
|
@end example |
|
|
|
|
|
Convert the audio file a.wav and the raw YUV video file a.yuv to the
MPEG file a.mpg.
|
|
|
|
|
* You can also do audio and video conversions at the same time: |
|
|
|
|
|
@example |
|
|
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2 |
|
|
@end example |
|
|
|
|
|
Convert the sample rate of a.wav to 22050 Hz and encode it to MPEG audio. |
|
|
|
|
|
* You can encode to several formats at the same time and define a |
|
|
mapping from input stream to output streams: |
|
|
|
|
|
@example |
|
|
ffmpeg -i /tmp/a.wav -ab 64 /tmp/a.mp2 -ab 128 /tmp/b.mp2 -map 0:0 -map 0:0 |
|
|
@end example |
|
|
|
|
|
Convert a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
file:index' specifies which input stream is used for each output
stream, in the order in which the output streams are defined.
|
|
|
|
|
* You can transcode decrypted VOBs |
|
|
|
|
|
@example |
|
|
ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 snatch.avi |
|
|
@end example |
|
|
|
|
|
This is a typical DVD ripping example; the input is a VOB file, the
output an AVI file with MPEG-4 video and MP3 audio. Note that in this
command we use B frames, so the MPEG-4 stream is DivX5 compatible, and
the GOP size is 300, which means one INTRA frame every 10 seconds for
29.97 fps input video. Also, the audio stream is MP3 encoded, so you
need LAME support, which is enabled using @code{--enable-mp3lame} when
configuring. The mapping is particularly useful for DVD transcoding to
get the desired audio language (see the example below).
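
For instance, the following sketch selects the video stream and a
second audio track; the stream numbers (0:0 and 0:2) are hypothetical
and depend on the actual layout of the VOB:

@example
ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 -map 0:0 -map 0:2 snatch.avi
@end example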
|
|
|
|
|
NOTE: to see the supported input formats, use @code{ffmpeg -formats}. |
|
|
|
|
|
@chapter Invocation |
|
|
|
|
|
@section Syntax |
|
|
|
|
|
The generic syntax is: |
|
|
|
|
|
@example |
|
|
ffmpeg [[options][-i input_file]]... {[options] output_file}... |
|
|
@end example |
|
|
If no input file is given, audio/video grabbing is done. |
|
|
|
|
|
As a general rule, options are applied to the next specified
file. For example, the '-b 64' option sets the video bitrate of the
next file given on the command line. The format option ('-f') may be
needed for raw input files.
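
For instance, in the following sketch (hypothetical file names), the
first '-b' applies to small.mpg and the second one to big.mpg:

@example
ffmpeg -i input.avi -b 300 small.mpg -b 1200 big.mpg
@end example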
|
|
|
|
|
By default, ffmpeg tries to convert as losslessly as possible: it
uses the same audio and video parameters for the outputs as the ones
specified for the inputs.
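
For example, the following hypothetical conversion keeps the input
frame size, frame rate and audio sampling rate for the output, since
nothing else is specified:

@example
ffmpeg -i input.avi output.mpg
@end example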
|
|
|
|
|
@section Main options |
|
|
|
|
|
@table @samp |
|
|
@item -L |
|
|
show license |
|
|
@item -h |
|
|
show help |
|
|
@item -formats |
|
|
show available formats, codecs, protocols, ... |
|
|
@item -f fmt |
|
|
force format |
|
|
@item -i filename |
|
|
input file name |
|
|
|
|
|
@item -y |
|
|
overwrite output files |
|
|
|
|
|
@item -t duration |
|
|
set the recording time in seconds. @code{hh:mm:ss[.xxx]} syntax is also |
|
|
supported. |
|
|
|
|
|
@item -title string |
|
|
set the title |
|
|
|
|
|
@item -author string |
|
|
set the author |
|
|
|
|
|
@item -copyright string |
|
|
set the copyright |
|
|
|
|
|
@item -comment string |
|
|
set the comment |
|
|
|
|
|
@item -b bitrate |
|
|
set video bitrate (in kbit/s) |
|
|
@end table |
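
A short sketch combining several of these options (the file names and
the title are hypothetical): overwrite the output if it already
exists, limit the recording time to one minute and set a title:

@example
ffmpeg -y -i input.avi -t 00:01:00 -b 300 -title "Test clip" output.mpg
@end example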
|
|
|
|
|
@section Video Options |
|
|
|
|
|
@table @samp |
|
|
@item -s size |
|
|
set frame size [160x128] |
|
|
@item -r fps |
|
|
set frame rate [25] |
|
|
@item -b bitrate |
|
|
set the video bitrate in kbit/s [200] |
|
|
@item -vn |
|
|
disable video recording [no] |
|
|
@item -bt tolerance |
|
|
set video bitrate tolerance (in kbit/s) |
|
|
@item -sameq |
|
|
use same video quality as source (implies VBR) |
|
|
|
|
|
@item -pass n |
|
|
select the pass number (1 or 2). It is useful for two-pass encoding: the statistics of the video are recorded in the first pass, and the video is generated at the exact requested bitrate in the second pass (see the example after this table).
|
|
|
|
|
@item -passlogfile file |
|
|
select two pass log file name |
|
|
|
|
|
@end table |
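
A sketch of two pass encoding (the file names, including the 'mypass'
log file name, are hypothetical): run the same command twice, changing
only the pass number:

@example
ffmpeg -i input.avi -b 800 -pass 1 -passlogfile mypass output.avi
ffmpeg -i input.avi -b 800 -pass 2 -passlogfile mypass output.avi
@end example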
|
|
|
|
|
@section Audio Options |
|
|
|
|
|
@table @samp |
|
|
|
|
@item -ar freq |
|
|
set the audio sampling freq [44100] |
|
|
@item -ab bitrate |
|
|
set the audio bitrate in kbit/s [64] |
|
|
@item -ac channels |
|
|
set the number of audio channels [1] |
|
|
@item -an |
|
|
disable audio recording [no] |
|
|
@end table |
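
For example (hypothetical file names), the following resamples the
audio to 22050 Hz, downmixes it to one channel and encodes it at
32 kbit/s:

@example
ffmpeg -i input.wav -ar 22050 -ac 1 -ab 32 output.mp2
@end example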
|
|
|
|
|
@section Advanced options |
|
|
|
|
|
@table @samp |
|
|
@item -map file:stream |
|
|
set input stream mapping |
|
|
@item -g gop_size |
|
|
set the group of picture size |
|
|
@item -intra |
|
|
use only intra frames |
|
|
@item -qscale q |
|
|
use fixed video quantiser scale (VBR) |
|
|
@item -qmin q |
|
|
min video quantiser scale (VBR) |
|
|
@item -qmax q |
|
|
max video quantiser scale (VBR) |
|
|
@item -qdiff q |
|
|
max difference between the quantiser scales (VBR)
|
|
@item -qblur blur |
|
|
video quantiser scale blur (VBR) |
|
|
@item -qcomp compression |
|
|
video quantiser scale compression (VBR) |
|
|
@item -vd device |
|
|
set video device |
|
|
@item -vcodec codec |
|
|
force video codec |
|
|
@item -me method |
|
|
set motion estimation method |
|
|
@item -bf frames |
|
|
use 'frames' B frames (only MPEG-4) |
|
|
@item -hq |
|
|
activate high quality settings |
|
|
@item -4mv |
|
|
use four motion vectors per macroblock (only MPEG-4)
|
|
@item -ad device |
|
|
set audio device |
|
|
@item -acodec codec |
|
|
force audio codec |
|
|
@item -deinterlace |
|
|
deinterlace pictures |
|
|
@item -benchmark |
|
|
add timings for benchmarking |
|
|
@item -hex |
|
|
dump each input packet |
|
|
@item -psnr |
|
|
calculate PSNR of compressed frames |
|
|
@item -vstats |
|
|
dump video coding statistics to file |
|
|
@end table |
|
|
|
|
|
@section Protocols |
|
|
|
|
|
The filename can be @file{-} to read from the standard input or to write |
|
|
to the standard output. |
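
For example (hypothetical file names), the output format has to be
forced with '-f' when writing to the standard output, since it cannot
be guessed from the file name:

@example
ffmpeg -i input.avi -f avi - > output.avi
@end example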
|
|
|
|
|
ffmpeg also handles many protocols specified with a URL syntax.
|
|
|
|
|
Use 'ffmpeg -formats' to see a list of the supported protocols.
|
|
|
|
|
The @code{http:} protocol is currently used only to communicate with
ffserver (see the ffserver documentation). When ffmpeg becomes a
video player, it will also be used for streaming :-)
|
|
|
|
|
@chapter Tips |
|
|
|
|
|
@itemize |
|
|
@item For very low bitrate streaming applications, use a low frame rate
and a small GOP size. This is especially true for RealVideo, where
the Linux player does not seem to be very fast, so it can miss
frames. An example is:
|
|
|
|
|
@example |
|
|
ffmpeg -g 3 -r 3 -t 10 -b 50 -s qcif -f rv10 /tmp/b.rm |
|
|
@end example |
|
|
|
|
|
@item The parameter 'q', which is displayed while encoding, is the current
quantizer. A value of 1 indicates that a very good quality could
be achieved; a value of 31 indicates the worst quality. If q=31 appears
too often, it means that the encoder cannot compress enough to meet
your bitrate. You must either increase the bitrate, decrease the
frame rate or decrease the frame size.
|
|
|
|
|
@item If your computer is not fast enough, you can speed up the
compression at the expense of the compression ratio. You can use
'-me zero' to speed up motion estimation, and '-intra' to disable
motion estimation completely (you get only I frames, which means the
result is about as good as JPEG compression).
|
|
|
|
|
@item To get very low audio bitrates, reduce the sampling frequency
(down to 22050 Hz for MPEG audio, 22050 or 11025 Hz for AC3).
|
|
|
|
|
@item To get a constant quality (but a variable bitrate), use the option
'-qscale n', where 'n' is between 1 (excellent quality) and 31 (worst
quality); see the example after this list.
|
|
|
|
|
@item When converting video files, you can use the '-sameq' option,
which makes the encoder use the same quality factor as the decoder.
This allows the encoding to be almost lossless.
|
|
|
|
|
@end itemize |
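
A minimal sketch of constant quality encoding (hypothetical file
names), using a quantizer of 3 for good quality at whatever bitrate
results:

@example
ffmpeg -i input.avi -qscale 3 output.mpg
@end example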
|
|
|
|
|
@chapter Supported File Formats and Codecs |
|
|
|
|
|
You can use the @code{-formats} option to have an exhaustive list. |
|
|
|
|
|
@section File Formats |
|
|
|
|
|
FFmpeg supports the following file formats through the @code{libavformat} |
|
|
library: |
|
|
|
|
|
@multitable @columnfractions .4 .1 .1 .7
|
|
@item Supported File Format @tab Encoding @tab Decoding @tab Comments |
|
|
@item MPEG audio @tab X @tab X |
|
|
@item MPEG1 systems @tab X @tab X |
|
|
@tab muxed audio and video |
|
|
@item MPEG2 PS @tab X @tab X |
|
|
@tab also known as @code{VOB} file |
|
|
@item MPEG2 TS @tab @tab X |
|
|
@tab also known as DVB Transport Stream |
|
|
@item ASF@tab X @tab X |
|
|
@item AVI@tab X @tab X |
|
|
@item WAV@tab X @tab X |
|
|
@item Macromedia Flash@tab X @tab X |
|
|
@tab Only embedded audio is decoded |
|
|
@item Real Audio and Video @tab X @tab X |
|
|
@item Raw AC3 @tab X @tab X |
|
|
@item Raw MJPEG @tab X @tab X |
|
|
@item Raw MPEG video @tab X @tab X |
|
|
@item Raw PCM8/16 bits, mulaw/Alaw@tab X @tab X |
|
|
@item SUN AU format @tab X @tab X |
|
|
@item Quicktime @tab @tab X |
|
|
@item MPEG4 @tab @tab X |
|
|
@tab MPEG4 is a variant of Quicktime |
|
|
@item Raw MPEG4 video @tab X @tab X |
|
|
@item DV @tab @tab X |
|
|
@tab Only the video track is decoded. |
|
|
@end multitable |
|
|
|
|
|
@code{X} means that the encoding (resp. decoding) is supported. |
|
|
|
|
|
@section Image Formats |
|
|
|
|
|
FFmpeg can read and write images for each frame of a video sequence. The |
|
|
following image formats are supported: |
|
|
|
|
|
@multitable @columnfractions .4 .1 .1 .7
|
|
@item Supported Image Format @tab Encoding @tab Decoding @tab Comments |
|
|
@item PGM, PPM @tab X @tab X |
|
|
@item PGMYUV @tab X @tab X @tab PGM with U and V components in 420 |
|
|
@item JPEG @tab X @tab X @tab Progressive JPEG is not supported |
|
|
@item .Y.U.V @tab X @tab X @tab One raw file per component |
|
|
@item Animated GIF @tab X @tab @tab Only uncompressed GIFs are generated |
|
|
@end multitable |
|
|
|
|
|
@code{X} means that the encoding (resp. decoding) is supported. |
|
|
|
|
|
@section Video Codecs |
|
|
|
|
|
@multitable @columnfractions .4 .1 .1 .7 |
|
|
@item Supported Codec @tab Encoding @tab Decoding @tab Comments |
|
|
@item MPEG1 video @tab X @tab X |
|
|
@item MPEG2 video @tab @tab X |
|
|
@item MPEG4 @tab X @tab X @tab Also known as DIVX4/5 |
|
|
@item MSMPEG4 V1 @tab X @tab X |
|
|
@item MSMPEG4 V2 @tab X @tab X |
|
|
@item MSMPEG4 V3 @tab X @tab X @tab Also known as DIVX3 |
|
|
@item WMV7 @tab X @tab X |
|
|
@item H263(+) @tab X @tab X @tab Also known as Real Video 1.0 |
|
|
@item MJPEG @tab X @tab X |
|
|
@item DV @tab @tab X |
|
|
@item Huff YUV @tab X @tab X |
|
|
@end multitable |
|
|
|
|
|
@code{X} means that the encoding (resp. decoding) is supported. |
|
|
|
|
|
Check at @url{http://www.mplayerhq.hu/~michael/codec-features.html} to |
|
|
get a precise comparison of FFmpeg MPEG4 codec compared to the other |
|
|
solutions. |
|
|
|
|
|
@section Audio Codecs |
|
|
|
|
|
@multitable @columnfractions .4 .1 .1 .7
|
|
@item Supported Codec @tab Encoding @tab Decoding @tab Comments |
|
|
@item MPEG audio layer 2 @tab IX @tab IX |
|
|
@item MPEG audio layer 1/3 @tab IX @tab IX |
|
|
@tab MP3 encoding is supported through the external library LAME |
|
|
@item AC3 @tab IX @tab X |
|
|
@tab liba52 is used internally for decoding. |
|
|
@item Vorbis @tab X @tab X |
|
|
@tab supported through the external library libvorbis. |
|
|
@item WMA V1/V2 @tab @tab X |
|
|
|
|
|
@end multitable |
|
|
|
|
|
@code{X} means that the encoding (resp. decoding) is supported. |
|
|
|
|
|
@code{I} means that an integer-only version is also available (this
ensures the highest performance on systems without hardware floating
point support).
|
|
|
|
|
@chapter Platform Specific information |
|
|
|
|
|
@section Linux |
|
|
|
|
|
ffmpeg should be compiled with at least GCC 2.95.3. GCC 3.2 is now
the preferred compiler for ffmpeg. All future optimizations will
depend on features only found in GCC 3.2.
|
|
|
|
|
@section BSD |
|
|
|
|
|
@section Windows |
|
|
|
|
|
@section MacOS X |
|
|
|
|
|
@section BeOS |
|
|
|
|
|
The configure script should guess the configuration itself. |
|
|
Networking support is currently not finished. |
|
|
errno issues fixed by Andrew Bachmann. |
|
|
|
|
|
Old stuff: |
|
|
|
|
|
François Revol - revol at free dot fr - April 2002
|
|
|
|
|
The configure script should guess the configuration itself; however,
I still have not tested building on the net_server version of BeOS.
|
|
|
|
|
ffserver is broken (needs poll() implementation). |
|
|
|
|
|
There are still issues with errno codes, which are negative on BeOS
and which ffmpeg negates when returning them. This ends up turning
errors into valid results and then crashes.
(To be fixed)
|
|
|
|
|
@chapter Developers Guide |
|
|
|
|
|
@section API |
|
|
@itemize |
|
|
@item libavcodec is the library containing the codecs (both encoding and |
|
|
decoding). See @file{libavcodec/apiexample.c} to see how to use it. |
|
|
|
|
|
@item libavformat is the library containing the file format handling (mux and
demux code for several formats). (No example yet, the API is likely to
evolve.)
|
|
@end itemize |
|
|
|
|
|
@section Integrating libavcodec or libavformat in your program |
|
|
|
|
|
You can integrate all the source code of the libraries and link them
statically to avoid any version problems. All you need to do is provide
a 'config.mak' and a 'config.h' in the parent directory. See the defines
generated by ./configure to understand what is needed.
|
|
|
|
|
You can use libavcodec or libavformat in your commercial program, but |
|
|
@emph{any patch you make must be published}. The best way to proceed is |
|
|
to send your patches to the ffmpeg mailing list. |
|
|
|
|
|
@section Coding Rules |
|
|
|
|
|
ffmpeg is programmed in the ANSI C language. GCC extensions are
tolerated. The indent size is 4. The TAB character should not be used.
|
|
|
|
|
The presentation is the one specified by 'indent -i4 -kr'. |
|
|
|
|
|
The main priorities in ffmpeg are simplicity and small code size
(= fewer bugs).
|
|
|
|
|
Comments: for functions visible from other modules, use the JavaDoc
format (see examples in @file{libav/utils.c}) so that documentation
can be generated automatically.
|
|
|
|
|
@section Submitting patches |
|
|
|
|
|
When you submit your patch, try to send a unified diff (diff '-u' |
|
|
option). I cannot read other diffs :-) |
|
|
|
|
|
Run the regression tests before submitting a patch so that you can |
|
|
verify that there are no big problems. |
|
|
|
|
|
Patches should be posted as base64-encoded attachments (or any other
encoding which ensures that the patch won't be mangled during
transmission) to the ffmpeg-devel mailing list, see
@url{http://lists.sourceforge.net/lists/listinfo/ffmpeg-devel}
|
|
|
|
|
@section Regression tests |
|
|
|
|
|
Before submitting a patch (or committing with CVS), you should at least |
|
|
test that you did not break anything. |
|
|
|
|
|
The regression tests build a synthetic video stream and a synthetic
audio stream. These are then encoded and decoded with all codecs and
formats. The CRC (or MD5) of each generated file is recorded in a
result file, which is then compared against the reference results
with 'diff'.
|
|
|
|
|
The regression test then goes on to test the ffserver code with a |
|
|
limited set of streams. It is important that this step runs correctly |
|
|
as well. |
|
|
|
|
|
Run 'make test' to test all the codecs. |
|
|
|
|
|
Run 'make libavtest' to test all the formats.
|
|
|
|
|
[Of course, some patches may change the regression test results. In
this case, the regression test reference results shall be updated
accordingly.]
|
|
|
|
|
@bye
|
|
|
|