new texinfo documentation - HTML version also included

Originally committed as revision 1085 to svn://svn.ffmpeg.org/ffmpeg/trunk
This commit is contained in:
Fabrice Bellard 2002-10-27 22:00:34 +00:00
parent 1c0a593ac8
commit 9181577ccb
7 changed files with 1283 additions and 354 deletions

4
doc/Makefile Normal file

@ -0,0 +1,4 @@
all: ffmpeg-doc.html
%.html: %.texi
texi2html -monolithic -number $<


@ -1,71 +0,0 @@
1) API
------
* libavcodec is the library containing the codecs (both encoding and
decoding). See libavcodec/apiexample.c for an example of how to use it.
* libav is the library containing the file format handling (mux and
demux code for several formats). (No example yet; the API is likely
to evolve.)
2) Integrating libavcodec or libav in your GPL'ed program
---------------------------------------------------------
You can integrate all the source code of the libraries to link them
statically to avoid any version problem. All you need is to provide a
'config.mak' and a 'config.h' in the parent directory. See the defines
generated by ./configure to understand what is needed.
3) Coding Rules
---------------
ffmpeg is programmed in ANSI C language. GCC extensions are
tolerated. Indent size is 4. The TAB character should not be used.
The presentation is the one specified by 'indent -i4 -kr'.
The main priority in ffmpeg is simplicity and small code size (= fewer
bugs).
Comments: for functions visible from other modules, use the JavaDoc
format (see examples in libav/utils.c) so that a documentation can be
generated automatically.
4) Submitting patches
---------------------
When you submit your patch, try to send a unified diff (diff '-u'
option). I cannot read other diffs :-)
Run the regression tests before submitting a patch so that you can
verify that there are no big problems.
Unless your patch is really big and adds an important feature, by
submitting it to me you implicitly accept to put it under my
copyright. I prefer to do this to avoid potential problems if the
licensing of ffmpeg changes.
Patches should be posted as base64 encoded attachments (or any other
encoding which ensures that the patch won't be trashed during
transmission) to the ffmpeg-devel mailing list, see
http://lists.sourceforge.net/lists/listinfo/ffmpeg-devel
5) Regression tests
-------------------
Before submitting a patch (or committing to CVS), you should at least
test that you did not break anything.
The regression tests build a synthetic video stream and a synthetic
audio stream. These are then encoded and decoded with all codecs and
formats. The CRC (or MD5) of each generated file is recorded in a
result file, and a 'diff' is run against the reference results and
the result file.
Run 'make test' to test all the codecs.
Run 'make libavtest' to test all the formats.
[Of course, some patches may change the regression test results. In
this case, the regression test reference results shall be modified
accordingly.]


@ -1,49 +0,0 @@
Technical notes:
---------------
Video:
-----
- The intra/predicted macroblock decision uses the algorithm suggested
by the MPEG-1 specification.
- Only Huffman based H263 is supported, mainly because of patent
issues.
- MPEG4 is supported, as an extension of the H263 encoder. MPEG4 DC
prediction is used, but not AC prediction. Specific VLCs are used for
intra pictures. The output format is compatible with Open DIVX
version 47.
- MJPEG is supported, but in the current version the Huffman tables
are not optimized. It could be interesting to add this feature for
the Flash format.
- To increase speed, only motion vectors (0,0) are tested for real
time compression. NEW: motion compensation is now done with several
methods: none, full, log, and phods. The code is MMX/SSE optimized.
- In high quality mode, full search is used for motion
vectors. Currently, only fcode = 1 is used for both H263/MPEG1. Half
pel vectors are used.
Audio:
-----
- The MPEG audio layer 2 compatible encoder was rewritten from
scratch. It is one of the simplest encoders you can imagine (800
lines of C code!). It is also one of the fastest because of its
simplicity. There are still some overflow problems. A minimal
psychoacoustic model could be added. Currently, stereo is
supported, but not joint stereo.
- The AC3 audio encoder was rewritten from scratch. It is fairly
naive, but the results are quite interesting at 64 kbit/s. It
includes extensions for the low sampling rates used in some Internet
formats. Differential and coupled stereo are not handled. Stereo
channels are simply handled as two mono channels.
- The MPEG audio layer 3 decoder was rewritten from scratch. It uses
only integers and can use 16-bit precision for the synthesis filter
at the expense of a slight precision loss. A slower bit-exact mode
is also available for compliance testing.


@ -4,23 +4,20 @@ ffmpeg TODO list:
(in approximate decreasing priority order)
Short term fixes:
- mpeg audio fix
- ffserver fix
- fix stream selection (aka map) syntax. Start stream numbers at 1 in
listing. Find a syntax for stream ids (such as TS pids).
- AV sync fix
- put ffserver patches
- reconstruct mpeg header frame rate in telecine case so that we do
not need to infer the real rate if it is not possible.
- remove unused DCT code.
- AV sync fix
- RTP/RTSP streaming support in ffserver and in libav
- minimal support of video in ffplay
Planned in next releases:
- remove unused DCT code.
- fix stream selection (aka map) syntax. Start stream numbers at 1 in
listing. Find a syntax for stream ids (such as TS pids).
- add DV codec/format support
- fix bugs when stream begins with a P/B frame
- fix ffserver (partially done)
- add raw h263 decoding support, see vivo streams (partially done)
- add qscale out.
- fix -sameq in grabbing
- add vivo format support (may need long term prediction support)

803
doc/ffmpeg-doc.html Normal file

@ -0,0 +1,803 @@
<HTML>
<HEAD>
<!-- Created by texi2html 1.56k from ffmpeg-doc.texi on 27 October 2002 -->
<TITLE>FFmpeg Documentation</TITLE>
</HEAD>
<BODY>
<H1>FFmpeg Documentation</H1>
<P>
<P><HR><P>
<H1>Table of Contents</H1>
<UL>
<LI><A NAME="TOC1" HREF="ffmpeg-doc.html#SEC1">1. Introduction</A>
<LI><A NAME="TOC2" HREF="ffmpeg-doc.html#SEC2">2. Quick Start</A>
<UL>
<LI><A NAME="TOC3" HREF="ffmpeg-doc.html#SEC3">2.1 Video and Audio grabbing</A>
<LI><A NAME="TOC4" HREF="ffmpeg-doc.html#SEC4">2.2 Video and Audio file format conversion</A>
</UL>
<LI><A NAME="TOC5" HREF="ffmpeg-doc.html#SEC5">3. Invocation</A>
<UL>
<LI><A NAME="TOC6" HREF="ffmpeg-doc.html#SEC6">3.1 Syntax</A>
<LI><A NAME="TOC7" HREF="ffmpeg-doc.html#SEC7">3.2 Main options</A>
<LI><A NAME="TOC8" HREF="ffmpeg-doc.html#SEC8">3.3 Video Options</A>
<LI><A NAME="TOC9" HREF="ffmpeg-doc.html#SEC9">3.4 Audio Options</A>
<LI><A NAME="TOC10" HREF="ffmpeg-doc.html#SEC10">3.5 Advanced options</A>
<LI><A NAME="TOC11" HREF="ffmpeg-doc.html#SEC11">3.6 Protocols</A>
</UL>
<LI><A NAME="TOC12" HREF="ffmpeg-doc.html#SEC12">4. Tips</A>
<LI><A NAME="TOC13" HREF="ffmpeg-doc.html#SEC13">5. Supported File Formats and Codecs</A>
<UL>
<LI><A NAME="TOC14" HREF="ffmpeg-doc.html#SEC14">5.1 File Formats</A>
<LI><A NAME="TOC15" HREF="ffmpeg-doc.html#SEC15">5.2 Video Codecs</A>
<LI><A NAME="TOC16" HREF="ffmpeg-doc.html#SEC16">5.3 Audio Codecs</A>
</UL>
<LI><A NAME="TOC17" HREF="ffmpeg-doc.html#SEC17">6. Developers Guide</A>
<UL>
<LI><A NAME="TOC18" HREF="ffmpeg-doc.html#SEC18">6.1 API</A>
<LI><A NAME="TOC19" HREF="ffmpeg-doc.html#SEC19">6.2 Integrating libavcodec or libavformat in your program</A>
<LI><A NAME="TOC20" HREF="ffmpeg-doc.html#SEC20">6.3 Coding Rules</A>
<LI><A NAME="TOC21" HREF="ffmpeg-doc.html#SEC21">6.4 Submitting patches</A>
<LI><A NAME="TOC22" HREF="ffmpeg-doc.html#SEC22">6.5 Regression tests</A>
</UL>
</UL>
<P><HR><P>
<P>
<H1><A NAME="SEC1" HREF="ffmpeg-doc.html#TOC1">1. Introduction</A></H1>
<P>
FFmpeg is a very fast video and audio converter. It can also grab from
a live audio/video source.
The command line interface is designed to be intuitive, in the sense
that ffmpeg tries to figure out all the parameters when
possible. You usually only have to give the target bitrate you want.
<P>
FFmpeg can also convert from any sample rate to any other, and resize
video on the fly with a high quality polyphase filter.
<H1><A NAME="SEC2" HREF="ffmpeg-doc.html#TOC2">2. Quick Start</A></H1>
<H2><A NAME="SEC3" HREF="ffmpeg-doc.html#TOC3">2.1 Video and Audio grabbing</A></H2>
<P>
FFmpeg can use a video4linux compatible video source and any Open Sound
System audio source:
<PRE>
ffmpeg /tmp/out.mpg
</PRE>
<P>
Note that you must activate the right video source and channel
before launching ffmpeg. You can use any TV viewer, such as xawtv by
Gerd Knorr, which I find very good. You must also correctly set the
audio recording levels with a standard mixer.
<H2><A NAME="SEC4" HREF="ffmpeg-doc.html#TOC4">2.2 Video and Audio file format conversion</A></H2>
<P>
* ffmpeg can use any supported file format and protocol as input:
<P>
Examples:
<P>
* You can input from YUV files:
<PRE>
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
</PRE>
<P>
It will use the files:
<PRE>
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
</PRE>
<P>
The Y files use twice the resolution of the U and V files. They are
raw files, without a header. They can be generated by all decent video
decoders. You must specify the size of the image with the '-s' option
if ffmpeg cannot guess it.
<P>
* You can input from a RAW YUV420P file:
<PRE>
ffmpeg -i /tmp/test.yuv /tmp/out.avi
</PRE>
<P>
A raw YUV420P file contains raw planar YUV data: for each frame, the
Y plane comes first, followed by the U and V planes, which have half the
vertical and half the horizontal resolution.
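As a sketch of the plane layout just described (the 352x288 frame size is only an assumed example, not something ffmpeg requires), the byte size of each plane in one frame can be computed:

```shell
# Plane sizes for one YUV420P frame: the Y plane is full size,
# U and V are half size in each dimension, so a whole frame
# occupies w*h*3/2 bytes.
awk 'BEGIN {
  w = 352; h = 288
  y = w * h          # luma plane bytes
  u = (w/2) * (h/2)  # chroma plane bytes (same for V)
  printf "Y=%d U=%d V=%d frame=%d\n", y, u, u, y + 2*u
}'
```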
<P>
* You can output to a RAW YUV420P file:
<PRE>
ffmpeg -i mydivx.avi hugefile.yuv
</PRE>
<P>
* You can set several input files and output files:
<PRE>
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
</PRE>
<P>
Convert the audio file a.wav and the raw YUV video file a.yuv
to the MPEG file a.mpg.
<P>
* You can also do audio and video conversions at the same time:
<PRE>
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
</PRE>
<P>
Convert the sample rate of a.wav to 22050 Hz and encode it to MPEG audio.
<P>
* You can encode to several formats at the same time and define a
mapping from input stream to output streams:
<PRE>
ffmpeg -i /tmp/a.wav -ab 64 /tmp/a.mp2 -ab 128 /tmp/b.mp2 -map 0:0 -map 0:0
</PRE>
<P>
Convert a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
file:index' specifies which input stream is used for each output
stream, in the order of the definition of the output streams.
<P>
* You can transcode decrypted VOBs
<PRE>
ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 snatch.avi
</PRE>
<P>
This is a typical DVD ripping example: input from a VOB file, output
to an AVI file with MPEG-4 video and MP3 audio. Note that in this
command we use B frames, so the MPEG-4 stream is DivX5 compatible. The GOP
size is 300, which means an INTRA frame every 10 seconds for 29.97 fps
input video. Also, the audio stream is MP3 encoded, so you need LAME
support, which is enabled using <CODE>--enable-mp3lame</CODE> when
configuring. The mapping is particularly useful for DVD transcoding
to get the desired audio language.
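The GOP arithmetic mentioned above is easy to verify: a GOP size of 300 frames at 29.97 fps gives one INTRA frame roughly every 10 seconds. A minimal check:

```shell
# Seconds between INTRA frames = gop_size / frame_rate.
awk 'BEGIN { printf "%.2f\n", 300 / 29.97 }'
```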
<P>
NOTE: to see the supported input formats, use <CODE>ffmpeg -formats</CODE>.
<H1><A NAME="SEC5" HREF="ffmpeg-doc.html#TOC5">3. Invocation</A></H1>
<H2><A NAME="SEC6" HREF="ffmpeg-doc.html#TOC6">3.1 Syntax</A></H2>
<P>
The generic syntax is:
<PRE>
ffmpeg [[options][-i input_file]]... {[options] output_file}...
</PRE>
<P>
If no input file is given, audio/video grabbing is done.
<P>
As a general rule, options are applied to the next specified
file. For example, if you give the '-b 64' option, it sets the video
bitrate of the next file. A format option may be needed for raw input
files.
<P>
By default, ffmpeg tries to convert as losslessly as possible: it
uses the same audio and video parameters for the outputs as the ones
specified for the inputs.
<H2><A NAME="SEC7" HREF="ffmpeg-doc.html#TOC7">3.2 Main options</A></H2>
<DL COMPACT>
<DT><SAMP>`-L'</SAMP>
<DD>
show license
<DT><SAMP>`-h'</SAMP>
<DD>
show help
<DT><SAMP>`-formats'</SAMP>
<DD>
show available formats, codecs, protocols, ...
<DT><SAMP>`-f fmt'</SAMP>
<DD>
force format
<DT><SAMP>`-i filename'</SAMP>
<DD>
input file name
<DT><SAMP>`-y'</SAMP>
<DD>
overwrite output files
<DT><SAMP>`-t duration'</SAMP>
<DD>
set the recording time in seconds. <CODE>hh:mm:ss[.xxx]</CODE> syntax is also
supported.
<DT><SAMP>`-title string'</SAMP>
<DD>
set the title
<DT><SAMP>`-author string'</SAMP>
<DD>
set the author
<DT><SAMP>`-copyright string'</SAMP>
<DD>
set the copyright
<DT><SAMP>`-comment string'</SAMP>
<DD>
set the comment
<DT><SAMP>`-b bitrate'</SAMP>
<DD>
set video bitrate (in kbit/s)
</DL>
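The <CODE>hh:mm:ss[.xxx]</CODE> form accepted by '-t' maps to plain seconds in the obvious way; this small sketch (with an arbitrary example timestamp) shows the conversion:

```shell
# Convert hh:mm:ss.xxx to seconds, the unit '-t' works in.
echo "00:01:30.5" | awk -F: '{ printf "%.1f\n", $1*3600 + $2*60 + $3 }'
```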
<H2><A NAME="SEC8" HREF="ffmpeg-doc.html#TOC8">3.3 Video Options</A></H2>
<DL COMPACT>
<DT><SAMP>`-s size'</SAMP>
<DD>
set frame size [160x128]
<DT><SAMP>`-r fps'</SAMP>
<DD>
set frame rate [25]
<DT><SAMP>`-b bitrate'</SAMP>
<DD>
set the video bitrate in kbit/s [200]
<DT><SAMP>`-vn'</SAMP>
<DD>
disable video recording [no]
<DT><SAMP>`-bt tolerance'</SAMP>
<DD>
set video bitrate tolerance (in kbit/s)
<DT><SAMP>`-sameq'</SAMP>
<DD>
use same video quality as source (implies VBR)
<DT><SAMP>`-pass n'</SAMP>
<DD>
select the pass number (1 or 2). It is useful for two-pass encoding. The statistics of the video are recorded in the first pass, and the video at the exact requested bit rate is generated in the second pass.
<DT><SAMP>`-passlogfile file'</SAMP>
<DD>
select two pass log file name
</DL>
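The two-pass workflow behind '-pass' and '-passlogfile' can be sketched as a pair of invocations. The file names and bitrate below are hypothetical, and the commands are only printed (not executed) so the sketch stays self-contained:

```shell
# Pass 1 records video statistics into the log file; pass 2 reads them
# back and encodes at the exact requested bitrate.
for PASS in 1 2; do
  echo "ffmpeg -i input.avi -b 800 -pass $PASS -passlogfile 2pass out.avi"
done
```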
<H2><A NAME="SEC9" HREF="ffmpeg-doc.html#TOC9">3.4 Audio Options</A></H2>
<DL COMPACT>
<DT><SAMP>`-ar freq'</SAMP>
<DD>
set the audio sampling freq [44100]
<DT><SAMP>`-ab bitrate'</SAMP>
<DD>
set the audio bitrate in kbit/s [64]
<DT><SAMP>`-ac channels'</SAMP>
<DD>
set the number of audio channels [1]
<DT><SAMP>`-an'</SAMP>
<DD>
disable audio recording [no]
</DL>
<H2><A NAME="SEC10" HREF="ffmpeg-doc.html#TOC10">3.5 Advanced options</A></H2>
<DL COMPACT>
<DT><SAMP>`-map file:stream'</SAMP>
<DD>
set input stream mapping
<DT><SAMP>`-g gop_size'</SAMP>
<DD>
set the group of picture size
<DT><SAMP>`-intra'</SAMP>
<DD>
use only intra frames
<DT><SAMP>`-qscale q'</SAMP>
<DD>
use fixed video quantiser scale (VBR)
<DT><SAMP>`-qmin q'</SAMP>
<DD>
min video quantiser scale (VBR)
<DT><SAMP>`-qmax q'</SAMP>
<DD>
max video quantiser scale (VBR)
<DT><SAMP>`-qdiff q'</SAMP>
<DD>
max difference between the quantiser scale (VBR)
<DT><SAMP>`-qblur blur'</SAMP>
<DD>
video quantiser scale blur (VBR)
<DT><SAMP>`-qcomp compression'</SAMP>
<DD>
video quantiser scale compression (VBR)
<DT><SAMP>`-vd device'</SAMP>
<DD>
set video device
<DT><SAMP>`-vcodec codec'</SAMP>
<DD>
force video codec
<DT><SAMP>`-me method'</SAMP>
<DD>
set motion estimation method
<DT><SAMP>`-bf frames'</SAMP>
<DD>
use 'frames' B frames (only MPEG-4)
<DT><SAMP>`-hq'</SAMP>
<DD>
activate high quality settings
<DT><SAMP>`-4mv'</SAMP>
<DD>
use four motion vectors per macroblock (only MPEG-4)
<DT><SAMP>`-ad device'</SAMP>
<DD>
set audio device
<DT><SAMP>`-acodec codec'</SAMP>
<DD>
force audio codec
<DT><SAMP>`-deinterlace'</SAMP>
<DD>
deinterlace pictures
<DT><SAMP>`-benchmark'</SAMP>
<DD>
add timings for benchmarking
<DT><SAMP>`-hex'</SAMP>
<DD>
dump each input packet
<DT><SAMP>`-psnr'</SAMP>
<DD>
calculate PSNR of compressed frames
<DT><SAMP>`-vstats'</SAMP>
<DD>
dump video coding statistics to file
</DL>
<H2><A NAME="SEC11" HREF="ffmpeg-doc.html#TOC11">3.6 Protocols</A></H2>
<P>
The output file can be "-" to output to a pipe. This is only possible
with mpeg1 and h263 formats.
<P>
ffmpeg also handles many protocols specified with the URL syntax.
<P>
Use 'ffmpeg -formats' to get a list of the supported protocols.
<P>
The protocol <CODE>http:</CODE> is currently used only to communicate with
ffserver (see the ffserver documentation). When ffmpeg becomes a
video player, it will also be used for streaming :-)
<H1><A NAME="SEC12" HREF="ffmpeg-doc.html#TOC12">4. Tips</A></H1>
<UL>
<LI>For very low bitrate streaming applications, use a low frame rate
and a small GOP size. This is especially true for RealVideo, where
the Linux player does not seem to be very fast, so it can miss
frames. An example is:
<PRE>
ffmpeg -g 3 -r 3 -t 10 -b 50 -s qcif -f rv10 /tmp/b.rm
</PRE>
<LI>The parameter 'q' which is displayed while encoding is the current
quantizer. The value 1 indicates very good quality and the value 31
the worst quality. If q=31 appears too often, it means that the
encoder cannot compress enough to meet your bit rate. You must either
increase the bit rate, decrease the frame rate or decrease the frame
size.
<LI>If your computer is not fast enough, you can speed up the
compression at the expense of the compression ratio. You can use
'-me zero' to speed up motion estimation, and '-intra' to disable
motion estimation completely (you will have only I frames, which means it
is about as good as JPEG compression).
<LI>For very low audio bitrates, reduce the sampling frequency
(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC3).
<LI>For constant quality (but a variable bitrate), use the option
'-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
quality).
<LI>When converting video files, you can use the '-sameq' option, which
uses the same quality factor in the encoder as in the decoder. It
allows almost lossless encoding.
</UL>
<H1><A NAME="SEC13" HREF="ffmpeg-doc.html#TOC13">5. Supported File Formats and Codecs</A></H1>
<P>
You can use the <CODE>-formats</CODE> option to get an exhaustive list.
<H2><A NAME="SEC14" HREF="ffmpeg-doc.html#TOC14">5.1 File Formats</A></H2>
<P>
FFmpeg supports the following file formats through the <CODE>libavformat</CODE>
library.
<TABLE BORDER>
<TR><TD>Supported File Format </TD><TD> Encoding </TD><TD> Decoding </TD><TD> Comments</TD>
</TR>
<TR><TD>MPEG audio </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>MPEG1 systems </TD><TD> X </TD><TD> X</TD>
<TD> muxed audio and video</TD>
</TR>
<TR><TD>MPEG2 PS </TD><TD> X </TD><TD> X</TD>
<TD> also known as <CODE>VOB</CODE> file</TD>
</TR>
<TR><TD>MPEG2 TS </TD><TD> </TD><TD> X</TD>
<TD> also known as DVB Transport Stream</TD>
</TR>
<TR><TD>ASF</TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>AVI</TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>WAV</TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>Macromedia Flash</TD><TD> X </TD><TD> X</TD>
<TD> Only embedded audio is decoded</TD>
</TR>
<TR><TD>Real Audio and Video </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>PGM, YUV, PPM, JPEG images </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>Animated GIF </TD><TD> X </TD><TD></TD>
<TD> Only uncompressed GIFs are generated</TD>
</TR>
<TR><TD>Raw AC3 </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>Raw MJPEG </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>Raw MPEG video </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>Raw PCM8/16 bits, mulaw/Alaw</TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>SUN AU format </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>Quicktime </TD><TD> </TD><TD> X</TD>
</TR>
<TR><TD>MPEG4 </TD><TD> </TD><TD> X</TD>
<TD> MPEG4 is a variant of Quicktime</TD>
</TR>
<TR><TD>Raw MPEG4 video </TD><TD> </TD><TD> X</TD>
<TD> Only small files are supported.</TD>
</TR>
<TR><TD>DV </TD><TD> </TD><TD> X</TD>
<TD> Only the video track is decoded.</TD>
</TR></TABLE>
<P>
<CODE>X</CODE> means that the encoding (resp. decoding) is supported.
<H2><A NAME="SEC15" HREF="ffmpeg-doc.html#TOC15">5.2 Video Codecs</A></H2>
<TABLE BORDER>
<TR><TD>Supported Codec </TD><TD> Encoding </TD><TD> Decoding </TD><TD> Comments</TD>
</TR>
<TR><TD>MPEG1 video </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>MPEG2 video </TD><TD> </TD><TD> X</TD>
</TR>
<TR><TD>MPEG4 </TD><TD> X </TD><TD> X </TD><TD> Also known as DIVX4/5</TD>
</TR>
<TR><TD>MSMPEG4 V1 </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>MSMPEG4 V2 </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>MSMPEG4 V3 </TD><TD> X </TD><TD> X </TD><TD> Also known as DIVX3</TD>
</TR>
<TR><TD>WMV7 </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>H263(+) </TD><TD> X </TD><TD> X </TD><TD> Also known as Real Video 1.0</TD>
</TR>
<TR><TD>MJPEG </TD><TD> X </TD><TD> X</TD>
</TR>
<TR><TD>DV </TD><TD> </TD><TD> X</TD>
</TR></TABLE>
<P>
<CODE>X</CODE> means that the encoding (resp. decoding) is supported.
<H2><A NAME="SEC16" HREF="ffmpeg-doc.html#TOC16">5.3 Audio Codecs</A></H2>
<TABLE BORDER>
<TR><TD>Supported Codec </TD><TD> Encoding </TD><TD> Decoding </TD><TD> Comments</TD>
</TR>
<TR><TD>MPEG audio layer 2 </TD><TD> IX </TD><TD> IX</TD>
</TR>
<TR><TD>MPEG audio layer 1/3 </TD><TD> IX </TD><TD> IX</TD>
<TD> MP3 encoding is supported through the external library LAME</TD>
</TR>
<TR><TD>AC3 </TD><TD> IX </TD><TD> X</TD>
<TD> liba52 is used internally for decoding.</TD>
</TR>
<TR><TD>Vorbis </TD><TD> X </TD><TD></TD>
<TD> encoding is supported through the external library libvorbis.</TD>
</TR></TABLE>
<P>
<CODE>X</CODE> means that the encoding (resp. decoding) is supported.
<P>
<CODE>I</CODE> means that an integer-only version is also available (this ensures the highest
performance on systems without hardware floating point support).
<H1><A NAME="SEC17" HREF="ffmpeg-doc.html#TOC17">6. Developers Guide</A></H1>
<H2><A NAME="SEC18" HREF="ffmpeg-doc.html#TOC18">6.1 API</A></H2>
<UL>
<LI>libavcodec is the library containing the codecs (both encoding and
decoding). See <TT>`libavcodec/apiexample.c'</TT> for an example of how to use it.
<LI>libavformat is the library containing the file format handling (mux and
demux code for several formats). (No example yet; the API is likely to
evolve.)
</UL>
<H2><A NAME="SEC19" HREF="ffmpeg-doc.html#TOC19">6.2 Integrating libavcodec or libavformat in your program</A></H2>
<P>
You can integrate all the source code of the libraries to link them
statically to avoid any version problem. All you need is to provide a
'config.mak' and a 'config.h' in the parent directory. See the defines
generated by ./configure to understand what is needed.
<P>
You can use libavcodec or libavformat in your commercial program, but
<EM>any patch you make must be published</EM>. The best way to proceed is
to send your patches to the ffmpeg mailing list.
<H2><A NAME="SEC20" HREF="ffmpeg-doc.html#TOC20">6.3 Coding Rules</A></H2>
<P>
ffmpeg is programmed in ANSI C language. GCC extensions are
tolerated. Indent size is 4. The TAB character should not be used.
<P>
The presentation is the one specified by 'indent -i4 -kr'.
<P>
The main priority in ffmpeg is simplicity and small code size (= fewer
bugs).
<P>
Comments: for functions visible from other modules, use the JavaDoc
format (see examples in <TT>`libav/utils.c'</TT>) so that a documentation
can be generated automatically.
<H2><A NAME="SEC21" HREF="ffmpeg-doc.html#TOC21">6.4 Submitting patches</A></H2>
<P>
When you submit your patch, try to send a unified diff (diff '-u'
option). I cannot read other diffs :-)
<P>
Run the regression tests before submitting a patch so that you can
verify that there are no big problems.
<P>
Unless your patch is really big and adds an important feature, by
submitting it to me you implicitly accept to put it under my
copyright. I prefer to do this to avoid potential problems if the
licensing of ffmpeg changes.
<P>
Patches should be posted as base64 encoded attachments (or any other
encoding which ensures that the patch won't be trashed during
transmission) to the ffmpeg-devel mailing list, see
<A HREF="http://lists.sourceforge.net/lists/listinfo/ffmpeg-devel">http://lists.sourceforge.net/lists/listinfo/ffmpeg-devel</A>
<H2><A NAME="SEC22" HREF="ffmpeg-doc.html#TOC22">6.5 Regression tests</A></H2>
<P>
Before submitting a patch (or committing to CVS), you should at least
test that you did not break anything.
<P>
The regression tests build a synthetic video stream and a synthetic
audio stream. These are then encoded and decoded with all codecs and
formats. The CRC (or MD5) of each generated file is recorded in a
result file, and a 'diff' is run against the reference results and
the result file.
<P>
Run 'make test' to test all the codecs.
<P>
Run 'make libavtest' to test all the formats.
<P>
[Of course, some patches may change the regression test results. In
this case, the regression test reference results shall be modified
accordingly.]
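The comparison step described above, recorded CRCs diffed against the reference results, can be sketched with plain shell tools; the CRC values and file names here are made up for illustration:

```shell
# Record one CRC per generated file, then diff against the stored
# reference; any mismatch makes diff exit non-zero.
printf '0x1a2b3c4d output1.mpg\n0x5e6f7a8b output2.mp2\n' > regression.ref
printf '0x1a2b3c4d output1.mpg\n0x5e6f7a8b output2.mp2\n' > regression.out
if diff regression.ref regression.out > /dev/null; then
  echo "regression OK"
else
  echo "regression FAILED"
fi
rm -f regression.ref regression.out
```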
<P><HR><P>
This document was generated on 27 October 2002 using
<A HREF="http://wwwinfo.cern.ch/dis/texi2html/">texi2html</A>&nbsp;1.56k.
</BODY>
</HTML>

471
doc/ffmpeg-doc.texi Normal file

@ -0,0 +1,471 @@
\input texinfo @c -*- texinfo -*-
@settitle FFmpeg Documentation
@titlepage
@sp 7
@center @titlefont{FFmpeg Documentation}
@sp 3
@end titlepage
@chapter Introduction
FFmpeg is a very fast video and audio converter. It can also grab from
a live audio/video source.
The command line interface is designed to be intuitive, in the sense
that ffmpeg tries to figure out all the parameters when
possible. You usually only have to give the target bitrate you want.
FFmpeg can also convert from any sample rate to any other, and resize
video on the fly with a high quality polyphase filter.
@chapter Quick Start
@section Video and Audio grabbing
FFmpeg can use a video4linux compatible video source and any Open Sound
System audio source:
@example
ffmpeg /tmp/out.mpg
@end example
Note that you must activate the right video source and channel
before launching ffmpeg. You can use any TV viewer, such as xawtv by
Gerd Knorr, which I find very good. You must also correctly set the
audio recording levels with a standard mixer.
@section Video and Audio file format conversion
* ffmpeg can use any supported file format and protocol as input:
Examples:
* You can input from YUV files:
@example
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
@end example
It will use the files:
@example
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
@end example
The Y files use twice the resolution of the U and V files. They are
raw files, without a header. They can be generated by all decent video
decoders. You must specify the size of the image with the '-s' option
if ffmpeg cannot guess it.
* You can input from a RAW YUV420P file:
@example
ffmpeg -i /tmp/test.yuv /tmp/out.avi
@end example
A raw YUV420P file contains raw planar YUV data: for each frame, the
Y plane comes first, followed by the U and V planes, which have half the
vertical and half the horizontal resolution.
* You can output to a RAW YUV420P file:
@example
ffmpeg -i mydivx.avi hugefile.yuv
@end example
* You can set several input files and output files:
@example
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
@end example
Convert the audio file a.wav and the raw YUV video file a.yuv
to the MPEG file a.mpg.
* You can also do audio and video conversions at the same time:
@example
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
@end example
Convert the sample rate of a.wav to 22050 Hz and encode it to MPEG audio.
* You can encode to several formats at the same time and define a
mapping from input stream to output streams:
@example
ffmpeg -i /tmp/a.wav -ab 64 /tmp/a.mp2 -ab 128 /tmp/b.mp2 -map 0:0 -map 0:0
@end example
Convert a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
file:index' specifies which input stream is used for each output
stream, in the order of the definition of the output streams.
* You can transcode decrypted VOBs
@example
ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 snatch.avi
@end example
This is a typical DVD ripping example: input from a VOB file, output
to an AVI file with MPEG-4 video and MP3 audio. Note that in this
command we use B frames, so the MPEG-4 stream is DivX5 compatible. The GOP
size is 300, which means an INTRA frame every 10 seconds for 29.97 fps
input video. Also, the audio stream is MP3 encoded, so you need LAME
support, which is enabled using @code{--enable-mp3lame} when
configuring. The mapping is particularly useful for DVD transcoding
to get the desired audio language.
NOTE: to see the supported input formats, use @code{ffmpeg -formats}.
@chapter Invocation
@section Syntax
The generic syntax is:
@example
ffmpeg [[options][-i input_file]]... {[options] output_file}...
@end example
If no input file is given, audio/video grabbing is done.
As a general rule, options are applied to the next specified
file. For example, if you give the '-b 64' option, it sets the video
bitrate of the next file. A format option may be needed for raw input
files.
By default, ffmpeg tries to convert as losslessly as possible: it
uses the same audio and video parameters for the outputs as the ones
specified for the inputs.
@section Main options
@table @samp
@item -L
show license
@item -h
show help
@item -formats
show available formats, codecs, protocols, ...
@item -f fmt
force format
@item -i filename
input file name
@item -y
overwrite output files
@item -t duration
set the recording time in seconds. @code{hh:mm:ss[.xxx]} syntax is also
supported.
@item -title string
set the title
@item -author string
set the author
@item -copyright string
set the copyright
@item -comment string
set the comment
@item -b bitrate
set video bitrate (in kbit/s)
@end table
@section Video Options
@table @samp
@item -s size
set frame size [160x128]
@item -r fps
set frame rate [25]
@item -b bitrate
set the video bitrate in kbit/s [200]
@item -vn
disable video recording [no]
@item -bt tolerance
set video bitrate tolerance (in kbit/s)
@item -sameq
use same video quality as source (implies VBR)
@item -pass n
select the pass number (1 or 2). It is useful for two-pass encoding. The statistics of the video are recorded in the first pass, and the video at the exact requested bit rate is generated in the second pass.
@item -passlogfile file
select two pass log file name
@end table
@section Audio Options
@table @samp
@item -ar freq
set the audio sampling freq [44100]
@item -ab bitrate
set the audio bitrate in kbit/s [64]
@item -ac channels
set the number of audio channels [1]
@item -an
disable audio recording [no]
@end table
@section Advanced options
@table @samp
@item -map file:stream
set input stream mapping
@item -g gop_size
set the group of picture size
@item -intra
use only intra frames
@item -qscale q
use fixed video quantiser scale (VBR)
@item -qmin q
min video quantiser scale (VBR)
@item -qmax q
max video quantiser scale (VBR)
@item -qdiff q
max difference between the quantiser scale (VBR)
@item -qblur blur
video quantiser scale blur (VBR)
@item -qcomp compression
video quantiser scale compression (VBR)
@item -vd device
set video device
@item -vcodec codec
force video codec
@item -me method
set motion estimation method
@item -bf frames
use 'frames' B frames (only MPEG-4)
@item -hq
activate high quality settings
@item -4mv
use four motion vectors per macroblock (only MPEG-4)
@item -ad device
set audio device
@item -acodec codec
force audio codec
@item -deinterlace
deinterlace pictures
@item -benchmark
add timings for benchmarking
@item -hex
dump each input packet
@item -psnr
calculate PSNR of compressed frames
@item -vstats
dump video coding statistics to file
@end table
@section Protocols
The output file can be "-" to output to a pipe. This is only possible
with mpeg1 and h263 formats.
ffmpeg also handles many protocols specified with the URL syntax.
Use 'ffmpeg -formats' to get a list of the supported protocols.
The protocol @code{http:} is currently used only to communicate with
ffserver (see the ffserver documentation). When ffmpeg becomes a
video player, it will also be used for streaming :-)
@chapter Tips
@itemize
@item For very low bitrate streaming applications, use a low frame rate
and a small GOP size. This is especially true for RealVideo, where
the Linux player does not seem to be very fast, so it can miss
frames. An example is:
@example
ffmpeg -g 3 -r 3 -t 10 -b 50 -s qcif -f rv10 /tmp/b.rm
@end example
@item The parameter 'q' which is displayed while encoding is the current
quantizer. The value 1 indicates that a very good quality could
be achieved. The value 31 indicates the worst quality. If q=31 appears
too often, it means that the encoder cannot compress enough to meet
your bit rate. You must either increase the bit rate, decrease the
frame rate or decrease the frame size.
@item If your computer is not fast enough, you can speed up the
compression at the expense of the compression ratio. You can use
'-me zero' to speed up motion estimation, and '-intra' to disable
motion estimation completely (you then have only I frames, which means
the result is about as good as JPEG compression).
@item To get very low audio bitrates, reduce the sampling frequency
(down to 22050 Hz for mpeg audio, 22050 or 11025 Hz for ac3).
@item To get a constant quality (but a variable bitrate), use the option
'-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
quality).
@item When converting video files, you can use the '-sameq' option, which
makes the encoder use the same quality factor as the decoder. This
allows almost lossless encoding.
@end itemize
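The constant quality tip can be sketched as follows (file names are
placeholders, and a qscale of 3 is an assumed near-excellent setting):
@example
ffmpeg -i /tmp/a.avi -qscale 3 /tmp/out.mpg
@end example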
@chapter Supported File Formats and Codecs
You can use the @code{-formats} option to have an exhaustive list.
@section File Formats
FFmpeg supports the following file formats through the @code{libavformat}
library.
@multitable @columnfractions .4 .1 .1 .7
@item Supported File Format @tab Encoding @tab Decoding @tab Comments
@item MPEG audio @tab X @tab X
@item MPEG1 systems @tab X @tab X
@tab muxed audio and video
@item MPEG2 PS @tab X @tab X
@tab also known as @code{VOB} file
@item MPEG2 TS @tab @tab X
@tab also known as DVB Transport Stream
@item ASF @tab X @tab X
@item AVI @tab X @tab X
@item WAV @tab X @tab X
@item Macromedia Flash @tab X @tab X
@tab Only embedded audio is decoded
@item Real Audio and Video @tab X @tab X
@item PGM, YUV, PPM, JPEG images @tab X @tab X
@item Animated GIF @tab X @tab
@tab Only uncompressed GIFs are generated
@item Raw AC3 @tab X @tab X
@item Raw MJPEG @tab X @tab X
@item Raw MPEG video @tab X @tab X
@item Raw PCM8/16 bits, mulaw/Alaw @tab X @tab X
@item SUN AU format @tab X @tab X
@item Quicktime @tab @tab X
@item MPEG4 @tab @tab X
@tab MPEG4 is a variant of Quicktime
@item Raw MPEG4 video @tab @tab X
@tab Only small files are supported.
@item DV @tab @tab X
@tab Only the video track is decoded.
@end multitable
@code{X} means that the encoding (resp. decoding) is supported.
@section Video Codecs
@multitable @columnfractions .4 .1 .1 .7
@item Supported Codec @tab Encoding @tab Decoding @tab Comments
@item MPEG1 video @tab X @tab X
@item MPEG2 video @tab @tab X
@item MPEG4 @tab X @tab X @tab Also known as DIVX4/5
@item MSMPEG4 V1 @tab X @tab X
@item MSMPEG4 V2 @tab X @tab X
@item MSMPEG4 V3 @tab X @tab X @tab Also known as DIVX3
@item WMV7 @tab X @tab X
@item H263(+) @tab X @tab X @tab Also known as Real Video 1.0
@item MJPEG @tab X @tab X
@item DV @tab @tab X
@end multitable
@code{X} means that the encoding (resp. decoding) is supported.
@section Audio Codecs
@multitable @columnfractions .4 .1 .1 .7
@item Supported Codec @tab Encoding @tab Decoding @tab Comments
@item MPEG audio layer 2 @tab IX @tab IX
@item MPEG audio layer 1/3 @tab IX @tab IX
@tab MP3 encoding is supported through the external library LAME
@item AC3 @tab IX @tab X
@tab liba52 is used internally for decoding.
@item Vorbis @tab X @tab
@tab encoding is supported through the external library libvorbis.
@end multitable
@code{X} means that the encoding (resp. decoding) is supported.
@code{I} means that an integer-only version is also available (this ensures
high performance on systems without hardware floating point support).
@chapter Developers Guide
@section API
@itemize
@item libavcodec is the library containing the codecs (both encoding and
decoding). See @file{libavcodec/apiexample.c} to see how to use it.
@item libavformat is the library containing the file format handling (mux
and demux code for several formats). (No example yet; the API is likely
to evolve.)
@end itemize
@section Integrating libavcodec or libavformat in your program
You can integrate all the source code of the libraries to link them
statically to avoid any version problem. All you need is to provide a
'config.mak' and a 'config.h' in the parent directory. See the defines
generated by ./configure to understand what is needed.
You can use libavcodec or libavformat in your commercial program, but
@emph{any patch you make must be published}. The best way to proceed is
to send your patches to the ffmpeg mailing list.
@section Coding Rules
ffmpeg is programmed in the ANSI C language. GCC extensions are
tolerated. The indent size is 4 and the TAB character should not be used.
The presentation is the one specified by 'indent -i4 -kr'.
The main priority in ffmpeg is simplicity and small code size (= fewer
bugs).
Comments: for functions visible from other modules, use the JavaDoc
format (see examples in @file{libav/utils.c}) so that documentation
can be generated automatically.
@section Submitting patches
When you submit your patch, try to send a unified diff (diff '-u'
option). I cannot read other diffs :-)
Run the regression tests before submitting a patch so that you can
verify that there are no big problems.
Unless your patch is really big and adds an important feature, by
submitting it to me you implicitly accept to put it under my
copyright. I prefer to do this to avoid potential problems if the
licensing of ffmpeg changes.
Patches should be posted as base64-encoded attachments (or any other
encoding which ensures that the patch won't be trashed during
transmission) to the ffmpeg-devel mailing list, see
@url{http://lists.sourceforge.net/lists/listinfo/ffmpeg-devel}
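A minimal sketch of producing such a unified diff against a pristine
tree (directory and file names are placeholders):
@example
diff -u ffmpeg.orig/libavcodec/utils.c ffmpeg/libavcodec/utils.c > my.patch
@end example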
@section Regression tests
Before submitting a patch (or committing to CVS), you should at least
test that you did not break anything.
The regression tests build a synthetic video stream and a synthetic
audio stream. These are then encoded and decoded with all codecs and
formats. The CRC (or MD5) of each generated file is recorded in a
result file, and a 'diff' is run against the reference results.
Run 'make test' to test all the codecs.
Run 'make libavtest' to test all the formats.
[Of course, some patches may change the regression tests results. In
this case, the regression tests reference results shall be modified
accordingly].
@bye
