Monday, August 26, 2019

redCV and FFmpeg: Using pipes

As indicated in the FFmpeg documentation, FFmpeg reads from an arbitrary number of input files (which can be regular files, pipes, network streams, grabbing devices, etc.), specified by the -i option, and writes to an arbitrary number of output files, which are specified by a plain output URL.
A very interesting property of FFmpeg is that we can use pipes inside the command. A pipe is a mechanism for interprocess communication; data written to the pipe by one process can be read by another process. The data is handled in a first-in, first-out (FIFO) order. The pipe has no name; it is created for one use, and both ends must be inherited from the single process which created the pipe.
You can find on the Internet some very interesting examples that use pipes for accessing audio and video data with FFmpeg.

Pipes with Red language

Currently, Red does not support the pipe mechanism, but the problem can be solved with the Red/System DSL, which provides low-level system programming capabilities. Basically, the pipe mechanism is defined in the standard libc, and the Red/System DSL knows how to communicate with libc. We just have to add a few functions (/lib/ffmpeg.reds):
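The original listing is not reproduced here, but since p-open, p-close, p-read, p-write, and p-flush are thin wrappers over the libc functions popen, pclose, fread, fwrite, and fflush, the file plausibly looks like this minimal sketch (argument names are assumptions):

#import [
    LIBC-file cdecl [
        p-open: "popen" [
            command     [c-string!]
            mode        [c-string!]
            return:     [integer!]      ;-- stream handle (FILE *)
        ]
        p-close: "pclose" [
            stream      [integer!]
            return:     [integer!]
        ]
        p-read: "fread" [
            buffer      [byte-ptr!]
            size        [integer!]      ;-- size of one entry in bytes
            count       [integer!]      ;-- number of entries
            stream      [integer!]
            return:     [integer!]
        ]
        p-write: "fwrite" [
            buffer      [byte-ptr!]
            size        [integer!]
            count       [integer!]
            stream      [integer!]
            return:     [integer!]
        ]
        p-flush: "fflush" [
            stream      [integer!]
            return:     [integer!]
        ]
    ]
]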
In fact, only p-open and p-close are new. The other functions are already defined by Red in red/system/runtime/libc.reds, but the idea is to leave that file unchanged. This is why p-read, p-write, and p-flush are re-declared in ffmpeg.reds. This also makes the code clearer.
The p-open function is closely related to the system function: it executes the shell command as a subprocess. However, instead of waiting for the command to complete, it creates a pipe to the subprocess and returns a stream that corresponds to that pipe. If you specify an "r" mode argument, you can read data from the stream. If you specify a "w" mode argument, you can write data to the stream.

Writing an audio file with Red and FFmpeg

The idea is to launch FFmpeg via a pipe; FFmpeg then converts the pure raw samples to the required format and writes them to the output file (see /pipe/sound.red).
This code is simple. First of all, we have to load the Red/System code to use new functions.
#system [ #include %../lib/ffmpeg.reds ]
Then, the generateSound function generates 1 second of sine-wave audio data. Generated values are simply stored in a Red vector! array of 16-bit integer values. All the job is then done by the makePipe routine, which takes 2 parameters: command, a string with all the required FFmpeg commands, and buf, the array containing the generated sound values.
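The original function is not reproduced here; a minimal sketch matching the description (the function shape, frequency, and amplitude values are assumptions) could be:

generateSound: function [freq [integer!] rate [integer!]][
    buf: make vector! compose [integer! 16 (rate)]      ;-- 1 second of 16-bit samples
    repeat i rate [
        buf/:i: to-integer 32767.0 * sine 360.0 * freq * i / rate   ;-- sine expects degrees
    ]
    buf
]
sound: generateSound 440 44100      ;-- 1 s of a 440 Hz tone sampled at 44.1 kHz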

As usual with Red/System routines, the command string is transformed into the c-string! type in order to facilitate the interaction with the C library. ptr is a byte pointer which gives the starting address of the array of values, and n is the size of the buffer. Then, we call the p-open function. Here, we have to write sound values, and thus we use the "w" mode:
pipeout: p-open cmd "w"
Then we just have to write the array into the stream, passing as arguments the pointer to the array of values, the size of each entry in the array (2 bytes for a 16-bit signed integer), the number of entries, and the stream:
p-write ptr 2 n pipeout
Once the job is done, we close the subprocess:
p-close pipeout
The main program is trivial; only the FFmpeg options passed to the p-open function need some explanation.
-y is used to overwrite the output file if it already exists.
-f s16le tells FFmpeg that the format of the audio data is raw: signed integer, 16-bit, and little-endian. You can use s16be for big-endian, according to your OS.
-ar 44100 means that the sampling frequency of the audio data is 44.1 kHz.
-ac 1 sets the number of channels in the signal (1 = mono).
-i - tells FFmpeg to read its input from stdin, that is, from the pipe; beep.wav is the output filename FFmpeg will use.
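Assembled in Red, the complete command string passed to p-open would then look like this (assuming ffmpeg is on the PATH; otherwise use the full path to the binary):

cmd: "ffmpeg -y -f s16le -ar 44100 -ac 1 -i - beep.wav"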
Finally, the Red code calls ffplay to play the sound and display the result. Of course, since we use Red/System, the code must be compiled.

Modifying a video file with Red and FFmpeg

The same technique can be used for video, as illustrated in /pipe/video1.red. In this sample, we just want to invert the image colors using pipes.

The only difference from the previous example is that we are using 2 subprocesses: one for reading the source data, and the other for writing the modified data.
For reading data:
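The original snippet is not reproduced here; based on the functions introduced above, it plausibly looks like this (command and variable names are assumptions):

pipein: p-open cmdIn "r"        ;-- "r" mode: read raw frames from FFmpeg
p-read ptr 1 n pipein           ;-- n bytes per frame (width * height * 3 for rgb24)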


For writing data:
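Symmetrically, a plausible sketch for the writing side:

pipeout: p-open cmdOut "w"      ;-- "w" mode: push data to a second FFmpeg process
p-write ptr 1 n pipeout         ;-- write the color-inverted frame
p-close pipeout                 ;-- close the subprocess when done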

Then, the main program is really simple. Once the video is processed, we can also process the sound channel to add sound to the output file. Lastly, we display the result.

Here is the result: the source image, and the color-inverted transform.

Some tips

It is very important to know the size of the original movie before making transformations. This is why you'll find here (/videos/mediainfo.red) a tool which can help you retrieve this information. I am also very fond of the Red vector! datatype for this kind of programming, since we can choose exactly the size of the data we need for the pipe process. Thanks to the Red Team :)

From movie to Red image

Here (/pipe/video2.red), the idea is to get the data from FFmpeg to make a Red image! that can be displayed by a Red face. If the video has a size of 720x480 pixels, then the first 720x480x3 = 1 036 800 bytes output by FFmpeg will give the RGB values of the pixels of the first frame, line by line, top to bottom. The next 720x480x3 bytes after that will represent the second frame, and so on.
Before using a routine, we need a command line for FFmpeg:
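The original command is not reproduced here; a plausible form, assuming raw rgb24 output (movieFile is an illustrative string path), is:

cmd: rejoin [
    "ffmpeg -i " movieFile              ;-- source movie
    " -f image2pipe"                    ;-- send frames through the pipe
    " -vcodec rawvideo -pix_fmt rgb24"  ;-- raw 24-bit RGB frames
    " -"                                ;-- write to stdout
]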

The image2pipe format and the - at the end signal to FFmpeg that it is being used with a pipe by another program. Then, the routine getImages transforms the FFmpeg data into a Red image!

pixD: image/acquire-buffer rimage :handle creates a pointer to access the image data. Then we read all the FFmpeg data as RGB integer values and we update the image (the Red image buffer is in 32-bit ARGB order):
pixD/value: (255 << 24) OR (r << 16) OR (g << 8) OR b
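In context, the per-frame conversion loop might look like this minimal Red/System sketch (pointer names are assumptions; pbuf walks the raw rgb24 bytes read from the pipe, pixD walks the 32-bit image buffer):

i: 0
while [i < (w * h)][
    r: as-integer pbuf/1            ;-- red byte
    g: as-integer pbuf/2            ;-- green byte
    b: as-integer pbuf/3            ;-- blue byte
    pixD/value: (255 << 24) OR (r << 16) OR (g << 8) OR b
    pixD: pixD + 1                  ;-- next 32-bit pixel
    pbuf: pbuf + 3                  ;-- next rgb24 triplet
    i: i + 1
]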
When the whole image is processed, we release the memory for the next frame with image/release-buffer rimage handle yes, before calling 2 simple Red functions to control the delay between images and to display the result. If the movie contains an audio channel, the movie player plays the audio if required.

With this technique, images are not stored on disk, but just processed on-the-fly in memory, giving very fast access to video movies with Red.

Attention: this code sometimes crashes and must be improved! In this case, kill all ffplay processes and launch the program again. The origin of the problem is probably related to the use of #call.

All sources can be found here: https://github.com/ldci/ffmpeg/pipe

redCV and FFmpeg: Video reading

As previously explained, Red and FFmpeg collaborate rather well for video and audio recording. In this new post, we'll focus on video reading.

A simple approach: use the ffplay tool

A simple but efficient way is to call the ffplay tool, which is included in the FFmpeg framework. Just open a terminal session, and give the name of the movie to read.
ffplay Wildlife.wmv


While the movie is running, different keys can be used for controlling the output:
f: toggle fullscreen
m: toggle audio mute
p, space: pause/resume the movie
/, *: decrease and increase volume respectively
During the reading of the movie, ffplay prints a lot of information about the video in the terminal (streams, codecs, resolution, and a running status line).


Of course, this command can be integrated in Red code as a parameter of the call function, and a GUI program can be developed to avoid command-line use (see the /video/ffplay.red code). It is important to use the /shell refinement with call, since ffplay uses the terminal to display different information:

 call/shell "ffplay Wildlife.wmv"


This simple approach is very comfortable for reading and listening to all supported video files, and this is the way I prefer for watching movies without using VLC or QuickTime Player.

A second approach: Extract frames from a movie

In many applications, I don't need to process the audio channel, but I have to focus on images in order to apply redCV image processing to time-lapse videos.
To turn a video into a number of images, run the command below. The command generates files named image0001.png, image0002.png, and so on.
ffmpeg -i filename image%04d.png
This command is included in the video/movies.red program and, associated with Red code, creates an efficient video reader with a clean GUI interface.

The code is very simple, but contains some interesting functions.
First of all, to create elegant navigation buttons, we profit from the fact that Red supports Unicode strings :)
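The original snippet is a screenshot; a minimal sketch of the idea (the glyphs and layout are illustrative) could be:

view [
    button "⏮"     ;-- first image
    button "◀"     ;-- previous image
    button "▶"     ;-- next image
    button "⏭"     ;-- last image
]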


The second important function concerns the generation of the command line for FFmpeg; its options are listed below, followed by a sketch of the assembled command.


FFmpeg options are very simple:
"/usr/local/bin/ffmpeg" ;location of ffmepg binary
" -y" ;automatically replace image files
"-f image2" ;The image file muxer writes video frames to image files 
" -s 720x480" ;output size
" -q:v 1" ;use fixed quality scale (1 to 31, 1 is highest)
" -r " frames per sec ;fps (mandatory for .vmw files)
" " dataDir ;temp destination directory"img-%05d.jpg" 
;automatic file name numbering
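Assembled with rejoin, the generated command might look like this sketch (the movieFile input, which the list above omits, and the word names are assumptions):

cmd: rejoin [
    "/usr/local/bin/ffmpeg"         ;-- location of the ffmpeg binary
    " -y -i " movieFile             ;-- source movie (assumed word name)
    " -f image2 -s 720x480 -q:v 1"
    " -r " fps                      ;-- frames per second
    " " dataDir "img-%05d.jpg"      ;-- destination files
]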
The rest of the code is pure Red and very easy to understand. First of all, you have to select a movie file. When done, FFmpeg is called to create, in a temporary directory, all the jpg images corresponding to the number of frames contained in the movie. You can also play with the FPS to create more or fewer images. Navigation buttons, the slider, and the text-list face can be used to directly access any image. When you press the Play/Stop button for a complete reading, the text-list face is disabled.

redCV and FFmpeg: Video and audio capture

FFmpeg is a fabulous command-line framework for multimedia processing. Among a variety of features, FFmpeg can capture video and audio from your computer's camera and microphone. Since the Red language supports video capture, it was really easy to connect both programs and build a nice Red video recorder. Red is used to display camera images on screen, and FFmpeg to record the movie.

With this Red code, you can select video and audio inputs, and change the quality and size of the recorded video. You can also control the frequency of recorded frames (FPS).

Supported video files for recording are mpg, mp4, mkv, avi, wmv, and mov.

Before we start, you must have the Red language (http://www.red-lang.org) and FFmpeg (https://www.ffmpeg.org/) installed on your computer, and you must know the path of the FFmpeg binary, such as /usr/local/bin/ffmpeg on Mac or Linux.

Like Red, FFmpeg is cross-platform and can be used with various operating systems. The Red getPlateform function is called to detect the running OS and then to use the correct FFmpeg input device.
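The original function is a screenshot; a minimal sketch, assuming the standard FFmpeg grabbing devices per platform, could be:

getPlateform: function [][
    switch system/platform [
        macOS   ["avfoundation"]    ;-- macOS grabbing device
        Windows ["dshow"]           ;-- DirectShow on Windows
        Linux   ["v4l2"]            ;-- Video4Linux2 on Linux
    ]
]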


Then, the second operation is to generate the command line that will be passed as a parameter to the FFmpeg binary. This is done by the Red function generateCommands.


In generateCommands, a few FFmpeg options and Red words are used (a sketch of the function follows the list):

-f inputDevice: to use the OS device for grabbing video (on macOS, we use the avfoundation device).
-framerate frameRate: the FPS (1..30) for recording.
-video_size videoSize: required video size (a pair WxH, on my MacBook Pro: "1280x720" or "640x480").
-i vDevice:aDevice: the video and audio devices used for recording. By default, vDevice = 0 corresponds to the first camera found on the computer (e.g. the Apple FaceTime camera on macOS), and aDevice = 0 to the computer microphone.
-target target: this is a combination of 2 values determining the quality of the video (e.g. film-dvd).
Lastly, the fileName is provided to store the video.
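A minimal sketch of generateCommands, assuming the words listed above (ffmpegPath is illustrative):

generateCommands: function [][
    rejoin [
        ffmpegPath                      ;-- e.g. "/usr/local/bin/ffmpeg"
        " -f " inputDevice              ;-- e.g. "avfoundation" on macOS
        " -framerate " frameRate        ;-- FPS (1..30)
        " -video_size " videoSize       ;-- e.g. "1280x720"
        " -i " vDevice ":" aDevice      ;-- video:audio devices (e.g. "0:0")
        " -target " target              ;-- e.g. "film-dvd"
        " " fileName                    ;-- output movie file
    ]
]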

When the FFmpeg command line is generated, we just need the Red call function to start or stop the movie grabbing.
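A sketch of the idea, assuming cmd holds the generated command line (the stop mechanism is an assumption; the original code may differ):

startRecording: does [call/shell cmd]               ;-- launch FFmpeg capture in a subprocess
stopRecording: does [call/shell "killall ffmpeg"]   ;-- stop by terminating FFmpeg (macOS/Linux)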



In less than 150 lines of code, we get a very efficient movie grabber which records both video and audio channels.  

You'll find the code here: https://github.com/ldci/ffmpeg/video/camera.red. Enjoy :)