CLI video editing - ffmpeg and co

This page is a log of my knowledge, research and methods for editing video via the command line, aka non-visual or sans-GUI video editing.

2012

StackOverflow:
mencoder uses ffmpeg's libraries. MEncoder is actually a universal interface to a number of different codecs and codec libraries, ffmpeg's libavcodec among them.

ffmpeg ordering of options

The order of options is important, for in and out files!
ffmpeg [global options] [[infile options] [-i infile]]... {[outfile options] outfile}...

Make stills from a video, every 30 seconds

mplayer.exe -vf scale=1920:1080 -nosound -demuxer mov -vo png -ss 00:02:12 -sstep 30 ..\MOVI0039.MOV ..\MO010039.MOV ..\MOVI0040.MOV
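For reference, a roughly equivalent ffmpeg approach (a sketch; fps=1/30 gives one frame every 30 seconds, and the input/output names are my assumptions):
# one still every 30 seconds, starting at 00:02:12, scaled to 1920x1080
ffmpeg -ss 00:02:12 -i MOVI0039.MOV -vf "fps=1/30,scale=1920:1080" -an still_%05d.png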

Make a video from stills

mencoder.exe mf://*.png -mf fps=25:type=png -ovc lavc -nosound -o temp.mp4

MP4 container, x264 AVC codec

mencoder.exe \
mf://*.jpg -mf type=jpg:fps=30 \
-nosound \
-of lavf -lavfopts format=mp4 -ovc x264 \
-x264encopts bitrate=2000:global_header \
-o output.mp4
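A rough ffmpeg equivalent of the two mencoder commands above (a sketch; -pattern_type glob needs a build with glob support, and the bitrate is carried over from the mencoder example):
# H.264 in an MP4 container from a folder of JPGs at 30 fps
ffmpeg -framerate 30 -pattern_type glob -i '*.jpg' -c:v libx264 -b:v 2000k -pix_fmt yuv420p output.mp4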

Make a timelapse of an existing video

time "ffmpeg.exe" -i "E:\DCIM\100MEDIA\MOVI0006.MOV" -vf select='not(mod(n\,50))',setpts='N/(29.97*TB)' -an -y temp.mp4

This works nicely for first-person timelapse footage:

-vf select='not(mod(n\,15))',setpts='N/(29.97*TB)'
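An alternative I haven't benchmarked against the above: ffmpeg's framestep filter does the same frame dropping a little more readably (the 15 and 29.97 values are carried over from the example above):
-vf framestep=15,setpts='N/(29.97*TB)'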

Combine / concatenate videos

mencoder.exe -ovc copy -nosound -o somewhere FILE [FILE]

Output stills that transition with fade

This utilises ImageMagick
convert *.JPG -delay 20 -morph 10 morph\image%05d.jpg
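The morphed frames can then be turned into a video with the stills-to-video approach above, e.g. (a sketch; the frame rate and paths are assumptions):
ffmpeg -framerate 25 -i morph/image%05d.jpg -c:v libx264 -pix_fmt yuv420p morph.mp4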

Output blank video

Interesting history of ffmpeg and avconv: a StackOverflow answer, and a blog post from Clément Bœsch:
avconv -y -s 1280x720 -f rawvideo -pix_fmt rgb24 -r 25 -i /dev/zero -vcodec libx264 -preset medium -tune stillimage -crf 24 -acodec copy -shortest -t 360 -threads 8 black-blank.mp4
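With a current ffmpeg build, the lavfi color source avoids reading raw video from /dev/zero (a sketch; the size, rate and duration are carried over from the avconv command, the rest are assumptions):
ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -t 360 -c:v libx264 -preset medium -tune stillimage -crf 24 black-blank.mp4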

2023

HEVC profiles

A useful table on Wikipedia covers the HEVC profiles, levels and tiers.
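To see which profile and level a given file actually uses, ffprobe can report them (a sketch; note the level is typically reported as an integer, e.g. 153 for HEVC level 5.1):
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,level -of default=noprint_wrappers=1 input.mp4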

Extract audio from container

# for a container with single audio stream
ffmpeg -i input-video.avi -vn -acodec copy output-audio.aac
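The copied stream keeps its original codec, so it helps to check what that codec is first and pick a matching output extension (a sketch):
# print the codec of the first audio stream, e.g. aac, ac3, mp3
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 input-video.avi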

Encode time-lapse - source: existing video

The following command was designed for a GoPro HERO10 source video, and the options were an attempt to make the resulting time-lapse video stream as close as possible to the source stream, so that the time-lapse stream could be concatenated with the source stream losslessly (remuxed), without having to transcode/re-encode the source video stream(s).
ffmpeg version 4.3.5-0+deb11u1
x265 [info]: HEVC encoder version 3.4
x265 [info]: build info [Linux][GCC 9.3.0][64 bit] 8bit+10bit+12bit

ffmpeg -an -vf setpts='30/194*PTS' -i source.mp4 -c:v libx265 -x265-params level=5.1:crf=23:no-high-tier=1:bframes=0 -profile:v main -color_range pc -colorspace bt709 -color_primaries bt709 -color_trc bt709 -r 30000/1001 -pix_fmt yuvj420p -tag:v hvc1 timelapse.mp4
💡 What about -vsync 2? Is it useful, or better left at -1 for auto?
For a simpler approach when the output video stream doesn't need to match the source, e.g. when the time-lapse is planned as a standalone edit and/or will be re-encoded anyway:
ffmpeg -an -vf setpts='30/194*PTS' -i source.mp4 -c:v libx265 -x265-params crf=23 -r 30 timelapse.mp4
In both commands setpts='30/194*PTS' must be customised: 30 is the desired length of the time-lapse and 194 is the source video length, both in seconds. ffmpeg will then drop the required frames to create the time-lapse effect. crf=23 can be adjusted to change the video quality: lower numbers mean higher quality and vice versa.

Adding a silent audio track during time-lapse encoding

ffmpeg -f lavfi -i anullsrc -i source.mp4 -vf setpts='30/194*PTS' -c:v libx265 -x265-params crf=23 -r 30 -acodec aac -map 1:v -map 0:a -shortest timelapse-w-silent-audio.mp4

Add a silent audio track to video that has no audio

This approach copies the video stream (lossless).
ffmpeg -i timelapse.mp4 -f lavfi -i anullsrc -vcodec copy -acodec aac -shortest timelapse-w-silent-audio.mp4

Lossless concat of videos that have audio streams

#1 copy the sources to mpegts intermediate container

This approach utilises the ffmpeg concat protocol.
On versions of ffmpeg <4:
ffmpeg -i timelapse-w-silent-audio.mp4 -c copy -bsf:v hevc_mp4toannexb -bsf:a aac_adtstoasc -f mpegts intermediate1.ts

ffmpeg -i source.mp4 -c copy -bsf:v hevc_mp4toannexb -bsf:a aac_adtstoasc -f mpegts intermediate2.ts
On versions of ffmpeg >=4:
ffmpeg -i timelapse-w-silent-audio.mp4 -c copy intermediate1.ts

ffmpeg -i source.mp4 -c copy intermediate2.ts

#2 concatenate

ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -vcodec copy -acodec aac concat.mkv
Note that the video concatenation is lossless, and the audio streams are re-encoded using the aac codec. The audio re-encoding solves audio instability issues arising from the concatenation.
Note that the previous command outputs an mkv file. I've had issues concatenating directly into mp4 containers. The mkv container can be easily switched to mp4 after the encoding is verified as valid and stable.
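Switching the verified mkv to an mp4 container is a straight remux, e.g. (a sketch; -movflags +faststart is my own optional addition):
ffmpeg -i concat.mkv -c copy -movflags +faststart concat.mp4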
For best results:
The videos being concatenated should have very similar video and encoding properties - if key properties mismatch, the concatenation will fail or produce an unstable, low-compatibility video stream.
TODO add a list of key properties that should match (the ffprobe sketch below compares a likely set).
It's easier to concatenate videos that either all have no audio, or all have audio. Mixing this up requires careful consideration and mapping, which increases complexity.
It is possible to concatenate (copy) audio streams, but it's also relatively cost-free to re-encode audio streams and avoid stability or compatibility issues. Only certain audio stream formats can be concatenated (link here).
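A hedged sketch for comparing a likely set of key properties before concatenating (the property list is my own guess, not an authoritative list):
for f in intermediate1.ts intermediate2.ts; do
  echo "== $f"
  ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name,profile,level,width,height,pix_fmt,r_frame_rate,time_base,color_range \
    -of default=noprint_wrappers=1 "$f"
done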

What didn’t work - ffmpeg concat demuxer

This approach uses the ffmpeg concat demuxer and it's promising (lossless, no intermediate files), but it didn't work for my source videos. The ffmpeg concatenation process itself doesn't throw errors, but the video stops playing as expected (in VLC) after the first concatenation point.
💡 It would be worth trying this again in the future to see if there have been ffmpeg improvements or bug fixes.
ffmpeg -f concat -safe 0 -i <(for f in timelapse-w-silent-audio.mp4 source.mp4; do echo "file '$PWD/$f'"; done) -vcodec copy -acodec aac concat.mkv

TODO: can I use process substitution to perform the interim steps?

It might fail due to race conditions but might be worth trying e.g.
ffmpeg -i "concat:<(cmds to create time-lapse intermediate stream)|<(cmds to create next intermediate stream)" -vcodec copy -acodec aac concat.mkv

Adding text fade in/out effects

Generator: []. See the skiing footage steps below for a relatively complex example.
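A minimal sketch of the technique: a single drawtext with an alpha expression that fades in over 1 second, holds, and fades out (the text, timings, and reliance on a default font via fontconfig are my assumptions):
ffmpeg -i input.mp4 -vf "drawtext=text='Hello':fontsize=24:fontcolor=white:x=50:y=50:alpha='if(lt(t,2),0,if(lt(t,3),t-2,if(lt(t,8),1,if(lt(t,9),1-(t-8),0))))'" -c:a copy output.mp4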

Skiing footage - latest steps

# PTS transformation / timelapse
# Observation: -r 2997/100 (fps: 29.97) modifies ffprobe: codec_time_base - the -r value is inverted e.g. 100/2997
# -time_base is not explicitly needed
time ffmpeg -an -i GX010235-clip-tow-lift-no-audio2-progressive.mp4 -vf setpts='30/194*PTS' -c:v libx265 -x265-params no-info=1:level=5.1:crf=23:no-high-tier=1:bframes=0 -profile:v main -color_range pc -colorspace bt709 -color_primaries bt709 -color_trc bt709 -pix_fmt yuvj420p -tag:v hvc1 -r 2997/100 timelapse-no-audio.mp4

# add silent audio track and set fps and time base to match source stream
ffmpeg -i timelapse-no-audio.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -vcodec copy -r 2997/100 -time_base 1/30000 -acodec aac -shortest timelapse-w-silent-audio.mp4

# transport stream format
ffmpeg -i timelapse-w-silent-audio.mp4 -c copy intermediate1.ts
ffmpeg -i GX010235-clip-progressive.mp4 -c copy intermediate2.ts

# concat video streams and add sound track, merging and fading audio streams
ffmpeg -fflags +discardcorrupt -i "concat:intermediate1.ts|intermediate2.ts" -i 09._Ronald_Jenkees_-_Outer_Space.mp3 -filter_complex '[0:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,afade=out:st=41:d=1[a1];[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,volume=0.5,afade=in:st=0:d=10,afade=out:st=85:d=6[a2]; [a1][a2]amerge=inputs=2[a]' -map 0:v -map '[a]' -shortest -acodec aac -vcodec copy -field_order progressive -video_track_timescale 30k -tag:v hvc1 -ac 2 merge.mp4

# downscale and transcode to x264 1080p and add text overlays
ffmpeg -i merge.mp4 -filter_complex '[0:v]scale=1080:-2,setsar=1:1,fade=in:st=0:d=5,fade=out:st=86:d=5,'"drawtext=text='Our first ski sesh of the year at the local mountain':box=1: boxcolor=black@0.5: boxborderw=5:fontsize=24:fontcolor=white:alpha='if(lt(t,4),0,if(lt(t,5),(t-4)/1,if(lt(t,15),1,if(lt(t,16),(1-(t-15))/1,0))))':x=50:y=50-th,drawtext=text='Visibility and light conditions were... not great\\, not terrible\!':box=1: boxcolor=black@0.5: boxborderw=5:fontsize=24:fontcolor=white:alpha='if(lt(t,18),0,if(lt(t,19),(t-18)/1,if(lt(t,29),1,if(lt(t,30),(1-(t-29))/1,0))))':x=50:y=50-th,drawtext=text='First time Finley has the GoPro on while skiing \:\)':box=1: boxcolor=black@0.5: boxborderw=5:fontsize=24:fontcolor=white:alpha='if(lt(t,31),0,if(lt(t,32),(t-31)/1,if(lt(t,38),1,if(lt(t,39),(1-(t-38))/1,0))))':x=50:y=50-th,drawtext=text=He was lovin\\\\\\' it dispite the conditions\!:box=1: boxcolor=black@0.5: boxborderw=5:fontsize=24:fontcolor=white:alpha='if(lt(t,41),0,if(lt(t,42),(t-41)/1,if(lt(t,48),1,if(lt(t,49),(1-(t-48))/1,0))))':x=50:y=50-th,drawtext=text=Next time we\\\\\\'ll use a hard mount to get better footage\!:box=1: boxcolor=black@0.5: boxborderw=5:fontsize=24:fontcolor=white:alpha='if(lt(t,51),0,if(lt(t,52),(t-51)/1,if(lt(t,58),1,if(lt(t,59),(1-(t-58))/1,0))))':x=50:y=50-th,drawtext=text=Music\\\\: Outer Space by Ronald Jenkees © 2009:box=1: boxcolor=black@0.5: boxborderw=5:fontsize=14:fontcolor=white:alpha='if(lt(t,0),0,if(lt(t,1),(t-0)/1,if(lt(t,7),1,if(lt(t,8),(1-(t-7))/1,0))))':x=50:y=H-th-25" -c:v libx264 -crf 18 -preset slow -c:a copy merge.x264.1080p.mp4

Check a video file for errors

This can be useful in many cases, especially if you want to check if files have issues prior to concatenation.
If the file is already in mpegts intermediate format you can run:
ffmpeg -v error -i video.ts -f null -
If you’d like a progress bar - add pv:
ffmpeg -v error -i <(pv video.ts) -f null -
If the files are not already in a transport stream - you can do this on the fly as follows:
ffmpeg -v error -i <(ffmpeg -i source.mp4 -c copy -f mpegts -) -f null -
If you’d like a progress bar - add pv and redirect stderr of the inner ffmpeg:
ffmpeg -v error -i <(ffmpeg -i <(pv source.mp4) -c copy -f mpegts - 2>/dev/null) -f null -
💡 Keep in mind that without the pv progress bar the checking process can take a while to run, especially on files larger than a few hundred MiB.

I pressed CTRL+C to abort the process - now my terminal is weird!

If you CTRL+C commands that include process substitution <( command ), it can cause issues with the terminal, similar to when binary output fubars your terminal. You can resolve that as follows:
# note - the terminal might not react to your keystrokes, but complete the sequence:
CTRL+C
CTRL+C
# then paste the next line e.g. SHIFT+INS
reset; stty sane; tput rs1; clear; echo -e "\033c"
ENTER

Reference

Filter setpts and asetpts

Change the PTS (presentation timestamp) of the input frames.
setpts works on video frames, asetpts on audio frames.
Some common constants include:
setpts constants
N - the count of the input frame for video, or the number of consumed samples (not including the current frame) for audio, starting from 0
PTS - the presentation timestamp of the input
TB - the timebase of the input timestamps
Rates & Times Glossary

PTS - Presentation Time Stamp (AVPacket::pts)
Presentation timestamp in AVStream->time_base units; the time at which the decompressed packet will be presented to the user.

DTS - Decompression Time Stamp (AVPacket::dts)
Decompression timestamp in AVStream->time_base units; the time at which the packet is decompressed.

fps - frames per second (AVStream.avg_frame_rate)
Average frame rate = total frames / total seconds. A variable frame rate video may have an fps of e.g. 57.16.

tbr - time base, real (?) (AVStream.r_frame_rate)
Real base framerate of the stream. This is the lowest framerate with which all timestamps can be represented accurately (it is the least common multiple of all framerates in the stream). Note, this value is just a guess! ? tbr is the framerate that the demuxer should use ?

tbn - time base number (?) (AVStream.time_base)
The stream/container timebase. This is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented. In practice it appears as a timescale (ticks per second), e.g. 90000 or 15360, and is used to calculate the actual time from a PTS: if the timescale is 90000 and the PTS for a frame is 45000, that frame is displayed at 0.5 seconds. (Note: this abbreviation and variable name is a misnomer, because it is actually a timescale, not a timebase. The timebase should be the reciprocal, such as 1/90000, 1/15360, etc.)

tbc - time base, codec (?) (AVCodecContext.time_base)
The codec timebase (timescale). Same as tbn, but for the codec. This has been deprecated and removed. KM: Not sure about the deprecation. See comment.
Related ffmpeg options

-time_base
From codec options: set the desired time base hint for the output stream. -time_base is a rational number and sets the codec time base. It is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented. For fixed-fps content, the timebase should be 1 / frame_rate and timestamp increments should be identically 1.

-video_track_timescale
AFAIK this controls TBN. Muxers → MOV/MPEG-4/ISOBMFF → Options: set the timescale used for video tracks. Range is 0 to INT_MAX. If set to 0, the timescale is automatically set based on the native stream time base. Default is 0.

-r
Override the input framerate / convert to the given output framerate. For output, set the target frame rate (container implementation specific, CFR / VFR). Video encoding: duplicate or drop frames right before encoding them to achieve a constant output frame rate fps. Video streamcopy: indicate to the muxer that fps is the stream frame rate; no data is dropped or duplicated in this case, but this may produce invalid files if fps does not match the actual stream frame rate as determined by packet timestamps. For input, ignore any timestamps stored in the file and instead generate timestamps assuming constant frame rate fps.

-enc_time_base
Set the desired time base for the encoder. Defaults to 0, which assigns a default value according to the media type. The default encoder time base is the inverse of the output framerate, but may be set otherwise via -enc_time_base.

-vsync / -fps_mode
Set the video sync method / framerate mode. -vsync is deprecated; use -fps_mode. -vsync is applied to all output video streams, but can be overridden for a stream by setting -fps_mode.
💡 mp4 containers default to cfr (constant frame rate); mkv defaults to vfr (variable frame rate). ffmpeg -vsync defaults to auto: choosing between cfr and vfr depending on muxer capabilities.
💡 For an output format like MP4, which defaults to constant frame rate (CFR), -r will generate a CFR stream. For variable frame rate formats like Matroska, the -r value acts as a ceiling: a lower frame rate input stream will pass through, and a higher frame rate stream will have frames dropped in order to match the target rate.

ffprobe

# single file format and stream info
ffprobe -hide_banner -show_format -show_streams file.mkv

# single file, json output, jq parsing, csv output
ffprobe -loglevel quiet -hide_banner -print_format json -select_streams v:0 -show_entries stream=height:format=filename file.mkv | jq -r '[.format.filename,.streams[].height] | @csv'

# for use in a loop, json output, jq parsing, tsv output
ffprobe -loglevel quiet -hide_banner -print_format json -select_streams v:0 -show_entries stream=height:format=filename "$1" | jq -r '[.format.filename,.streams[].height] | @tsv'

# find mp4 files, print height
find . -type f -a -name '*.mp4' -a -print -a -exec ffprobe -loglevel quiet -hide_banner -print_format flat -select_streams v:0 -show_entries stream=height {} \;

Subtitles

Append srt to mkv container that does not have subs

ffmpeg -i src.mkv -i forced.srt -c copy -metadata:s:s:0 language=eng -metadata:s:s:0 title="Forced" -disposition:s:0 default dst.mkv

Append srt to mkv container that already has subs

# -map 0 preserves the mapping of Input #0
# -map 1 appends/preserves the mapping of Input #1
ffmpeg -i src.mkv -i forced.srt -map 0 -map 1 -c copy -metadata:s:s:1 language=eng -metadata:s:s:1 title="Forced" -disposition:s:1 default dst.mkv

2024 - Timelapses

I know of at least two methods of creating a timelapse. Both methods require a video filter to specify the PTS (presentation timestamp), which means the input video has to be re-encoded - a lossy process. 2-pass encoding can help to minimise the loss.
ffmpeg has a good guide on their wiki. It includes a raw bitstream method, which is lossless, and it also introduces video filters (lossy re-encoding) that can smooth out the duration change in the output video. It is worth being familiar with.
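For completeness, a hedged sketch of a two-pass encode of a timelapse (the filenames, the 8M bitrate and the x264 choice are my assumptions; two-pass targets a bitrate rather than a CRF):
# pass 1 - analyse only, no output file
ffmpeg -y -an -i input.mp4 -vf setpts='30/194*PTS' -c:v libx264 -b:v 8M -pass 1 -f null /dev/null
# pass 2 - actual encode using the pass 1 statistics
ffmpeg -an -i input.mp4 -vf setpts='30/194*PTS' -c:v libx264 -b:v 8M -pass 2 timelapse-2pass.mp4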

Method 1

💡 Note that this method will drop frames to achieve the desired timelapse unless the specified output frame rate is higher than the input frame rate - for example, going from a 4 FPS input to a 4x speed-up would need a 16 FPS output.
💡 With this method you can create a timelapse:
... of a specific duration
... of a specific percent of the original input video duration
... using a slow down multiplier x times slower
... using a speed up multiplier x times faster
Prerequisites
Obtain the fps and duration of the input video stream:
# Print duration and frame rates for the first input video stream
ffprobe -v error -select_streams v:0 -show_entries stream=duration,avg_frame_rate,r_frame_rate -print_format flat input.mp4

# For automation one could extract specific values with jq
ffprobe -v error -select_streams v:0 -show_entries stream=duration,avg_frame_rate,r_frame_rate -print_format json input.mp4 | jq '.streams[].duration'
To create a timelapse with a specific duration
The formula looks like this:
output_timelapse_duration (seconds) ÷ input_video_duration (seconds) × PTS
The input video duration can then be used as follows:
# Output a 30 second timelapse from a 194 second input video
# In this case, the output video duration is reduced to 15.46% of the input video
-vf setpts='30/194*PTS'
Full example
# x264 timelapse - reduce the duration to ~15.5% of the input video, output fps: 15
ffmpeg -an -i input.mp4 -vf setpts='30/194*PTS' -c:v libx264 -crf 18 -r 15 output.mp4
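Building on the automation note under Prerequisites, a hedged sketch that probes the duration and feeds it straight into setpts (the 30 second target, CRF and output fps are assumptions):
# probe the input duration in seconds, then encode a 30 second timelapse
dur=$(ffprobe -v error -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 input.mp4)
ffmpeg -an -i input.mp4 -vf "setpts=30/${dur}*PTS" -c:v libx264 -crf 18 -r 15 output.mp4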
To speed up a video and reduce the duration to X % of the input duration
If the timelapse doesn’t need to be a specific duration, one can specify a fraction (percentage) and the output video duration will be reduced to x % of the input video duration.
# Reduce output video duration/frames to 7.5% of the input video
-vf setpts='0.075*PTS'

# If you wanted to double the speed of the input video
-vf setpts='0.5*PTS'
To slow down a video by a factor of X times slower
# The inverse logic works to slow down the input video
# In this case, the number on the left of the equation is a multiplier for the video duration
# For example, to double the input video duration
-vf setpts='2*PTS'
To speed up a video by a factor of X times faster
The formula is as follows, where multiplier is the number of times faster the output video should be (this simplifies to PTS/multiplier):
( input_video_duration ÷ multiplier ÷ input_video_duration ) × PTS
# example 2 times faster
-vf setpts='(input_video_duration/2/input_video_duration)*PTS'

# example 4 times faster
-vf setpts='(input_video_duration/4/input_video_duration)*PTS'