samedi 26 juillet 2025

Using FLIR cameras for research

The IR cameras from FLIR (https://www.flir.fr) are little marvels of technology that can acquire quality IR images. What I like about FLIR is that the data format remains the same regardless of the camera used. For my work in neonatal medicine, I use either the C3 model (basic) or the 650SC model (much more expensive and more powerful).

FLIR cameras generate four types of image. The first is the IR image, whose resolution varies from 320x240 to 640x480 pixels depending on the camera model. The second is an RGB image, up to six times larger than the IR image. The third, which can be the same size as the IR image or smaller (80x60 pixels), contains the temperatures in degrees Celsius. Finally, the last image is the color palette of the IR image. So you can imagine all the calculations that have to be made to obtain comparable images. You'll find various toolkits in Python, MATLAB, R... that allow you to process these different images. Unfortunately, these libraries are not universal and often depend on other libraries that are not easy to install.

That's why, as part of the Virginia project (https://uniter2p2.fr/projets/), I designed an easy-to-use FLIR image processing module for the Red and Rebol 3 languages.

THE FLIR MODULE

This module has been tested with various FLIR cameras. Its main function is to decode the metadata contained in a radiometric file and extract the visible image (RGB), the infrared image (IR), the color palette associated with the IR image and the temperatures associated with each pixel.

This module calls on two external programs that are installed by default on macOS and Linux.

ExifTool (https://exiftool.org), written and maintained by Phil Harvey, is a fabulous program written in Perl that lets you read and write the metadata of a large number of file formats. ExifTool supports FLIR files. It runs on macOS, Linux and Windows.


ImageMagick (https://imagemagick.org/index.php) is an open-source software package comprising a library and a set of command-line utilities for creating, converting, modifying and displaying images in a wide range of formats. The FLIR module essentially uses the magick utility for the macOS and Linux versions. For Windows, use a portable version that supports 16-bit images (https://imagemagick.org/archive/binaries/ImageMagick-7.1.0-60-portable-Q16-x64.zip) and the magick command.
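To give a concrete idea of what the module delegates to these two programs, here is an illustrative Rebol 3 snippet that shells out to them. The file name is hypothetical and the actual command lines built by the FLIR module may differ; the -b (binary) extraction of the EmbeddedImage and RawThermalImage tags is standard ExifTool usage for FLIR files, and the raw thermal data comes out as PNG or TIFF depending on the camera.

file: "FLIR0123.jpg"                                                                ;--hypothetical radiometric file
call/wait/shell rejoin ["exiftool -b -EmbeddedImage "   file " > irtmp/rgb.jpg"]    ;--visible RGB image
call/wait/shell rejoin ["exiftool -b -RawThermalImage " file " > irtmp/rawimg.png"] ;--16-bit raw thermal data
call/wait/shell "magick irtmp/rawimg.png -auto-level irtmp/raw8.png"                ;--example ImageMagick call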

The module provides the following functions (a usage sketch follows the list):

rcvGetFlirMetaData: This function takes the name of the FLIR file as a parameter (a character string). It returns all the metadata in the patient's irtmp/exif.red file, in a format that can be processed directly by Red or Rebol 3.

rcvGetVisibleImage: This function extracts the RGB image from the FLIR file and saves it in the irtmp/rgb.jpg file.

rcvGetFlirPalette: Extracts the color palette contained in the FLIR file and samples it for a linear range of values [0..255]. The extracted image is saved as irtmp/palette.png.

rcvMakeRedPalette: Exports the color palette as a block for fast processing with Red or Rebol 3.

rcvGetFlirRawData: Extracts raw temperature data (in 16-bit format) into the irtmp/rawimg.png file.

rcvGetPlanckValues: Retrieves all constants required for accurate temperature calculations.

rcvGetImageTemperatures: This function uses the two previous functions to calculate the temperature of each image pixel as an integer value. It creates the image irtmp/celsius.pgm. This is a 16-bit image with a maximum value of 65535: a simple text file containing the image size and the 16-bit value of each pixel.

rcvGetTemperatureAsBlock: The temperatures contained in the irtmp/celsius.pgm image are returned as real values (e.g. 37.2) in the block passed as a parameter to the function. This is a dynamic calculation.
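Here is a minimal, hypothetical usage sketch in Rebol 3 showing how these functions fit together. The function names and output files come from the list above, but the module loading step and the exact argument lists (beyond those explicitly stated) are my assumptions; check the repository for the real calling conventions.

#!/usr/local/bin/r3
Rebol []
do %flir.red                          ;--assumption: load the FLIR module this way
file: "FLIR0123.jpg"                  ;--hypothetical radiometric file
meta: rcvGetFlirMetaData file         ;--metadata -> irtmp/exif.red
rcvGetVisibleImage file               ;--RGB image -> irtmp/rgb.jpg
rcvGetFlirPalette file                ;--sampled palette -> irtmp/palette.png
pal: rcvMakeRedPalette                ;--palette as a Red/Rebol block
rcvGetFlirRawData file                ;--16-bit raw data -> irtmp/rawimg.png
planck: rcvGetPlanckValues            ;--Planck constants for temperature calculation
rcvGetImageTemperatures               ;--16-bit temperatures -> irtmp/celsius.pgm
temps: copy []
rcvGetTemperatureAsBlock temps        ;--temperatures in Celsius as real values
print ["Number of pixels:" length? temps]
print ["Hottest pixel   :" last sort copy temps "°C"]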

WHY IS IMAGE ALIGNMENT FUNDAMENTAL?

The neural networks we use to identify babies' bodies have not been trained on thermal images, which are difficult to process, but they work very well with RGB images. Once the baby's body is correctly identified in the RGB image, we can use the resulting body mask to retrieve the temperatures in the thermal image. Obviously, we can't use the RGB image directly; we need the RGB image aligned with the thermal image.

In previous versions of Virginia, I wrote a rather complicated algorithm for aligning the thermal and RGB images. Studying the code, I found that it was possible to make it simpler. Three values help us: Real2IR, OffsetX and OffsetY, which come from the rcvGetFlirMetaData function. Real2IR gives the scale ratio between the RGB image and the thermal image. OffsetX and OffsetY are the X and Y offsets to apply to find the origin of the ROI in the RGB image. If these values are equal to 0, alignment is not required.
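As an illustration only, here is a rough Red/Rebol sketch of how these three values might be combined. The numeric values are hypothetical and the exact geometry depends on the camera, so this is not the formula used by the module, just the idea of scaling with Real2IR and shifting with the offsets.

;--hypothetical values, normally read from the metadata returned by rcvGetFlirMetaData
real2IR: 1.22                          ;--scale ratio between the RGB and thermal images
offsetX: -57                           ;--X offset of the ROI origin
offsetY: -39                           ;--Y offset of the ROI origin
irSize:  320x240                       ;--thermal image size (camera dependent)
rgbSize: 1440x1080                     ;--visible image size
;--size of the ROI in the RGB image that corresponds to the thermal field of view
roiW: to-integer irSize/x * real2IR
roiH: to-integer irSize/y * real2IR
;--origin of the ROI: RGB image centre shifted by the offsets
roiX: to-integer ((rgbSize/x - roiW) / 2) + offsetX
roiY: to-integer ((rgbSize/y - roiH) / 2) + offsetY
print ["ROI origin:" roiX roiY "ROI size:" roiW roiH]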

The result is perfect!



The code for Rebol 3 is here:

https://github.com/ldci/FLIR

The code for Red is here:

https://github.com/ldci/redCV/blob/master/samples/image_thermal/Flir/align.red


jeudi 3 juillet 2025

Gnuplot

I really like Gnuplot (http://www.gnuplot.info), a command-line utility for creating sophisticated graphics. It's in line with Red and Rebol's philosophy: Keep It Simple (KIS). Here's an example:

#!/usr/local/bin/gnuplot -persist
set hidden3d
set isosamples 50,50
set ticslevel 0
set pm3d
set palette defined (0 "black", 0.25 "blue", 0.5 "green", 0.75 "yellow", 1 "red")
splot sin(sqrt(x**2+y**2))/sqrt(x**2+y**2)

And the result:


Just a few lines of code. Great!
 


samedi 28 juin 2025

Statistics on image

With Red or Rebol 3, the vector! type is ideal for fast numerical calculations.

Recently, Oldes has introduced new properties for vectors in R3 that allow you to obtain the descriptive statistics of a vector in one basic step. Great work!

An example

#!/usr/local/bin/r3
REBOL [ 
]
vect: #(float64! [1.62 1.72 1.64 1.7 1.78 1.64 1.65 1.64 1.66 1.74]) ;--a vector of 64-bit floats
print query vect object! ;--descriptive statistics returned as an object

The result:

signed: #(true)
type: decimal!
size: 64
length: 10
minimum: 1.62
maximum: 1.78
range: 0.16
sum: 16.79
mean: 1.679
median: 1.655
variance: 0.02529
population-deviation: 0.0502891638427206
sample-deviation: 0.0530094331227943


But this can also be applied to images!
An example 

#!/usr/local/bin/r3
REBOL [ 
]
cv: import 'opencv
with cv [
    filename: %../images/lena.png       ;--use your own image
    mat: imread/with filename 2         ;--read as grayscale image with one channel
    imshow/name mat filename            ;--display the image with file name as title
    moveWindow filename 200x10          ;--move window
    vect: get-property mat MAT_VECTOR   ;--get matrix values as a vector
    print query vect object!
    print "A key to quit"
    waitKey 0
]

The result:

signed: #(false)
type: integer!
size: 8
length: 65536
minimum: 2
maximum: 225
range: 223
sum: 4377641
mean: 66.7975006103516
median: 64.0
variance: 126148517.630557
population-deviation: 43.87338169177
sample-deviation: 43.873716422939


Efficient :)



 

vendredi 13 juin 2025

K-means algorithm

The K-means algorithm is a well-known unsupervised clustering algorithm that can be used for data analysis, image segmentation, semi-supervised learning... K-means clustering is an exclusive method: a data point can belong to only one cluster.

K-means is an iterative centroid-based clustering algorithm that partitions a dataset into similar groups based on the distance between their centroids. The centroid (or cluster center) is either the mean or the median of all points.

Given a set of points and an integer k, the algorithm aims to divide the points into k groups, called clusters, that are homogeneous.

In this sample we generate a set of random points in an image.


For processing data, we create a Red/Rebol object such as 

;--an object for storing values (points and clusters)
point: object [
    x: 0.0       ;--x position
    y: 0.0       ;--y position
    group: 0     ;--cluster number (label)
]
The first step is to randomly define k centroids and associate them with k labels. Then, for each point, we calculate the Euclidean distance (over x and y) to each centroid and associate the point with the closest centroid and its corresponding label. This labels our data.

Secondly, we recalculate the centroids, each being the center of gravity of its labeled cluster of points. We repeat these steps until a convergence criterion is reached: the centroids no longer move.
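Here is a minimal Red/Rebol sketch of these two steps, reusing the point object above. It is only an illustration of the idea; the documented code linked below is the complete version.

;--squared Euclidean distance between two points (objects with x and y fields)
distance2: func [a b /local dx dy][
    dx: a/x - b/x
    dy: a/y - b/y
    (dx * dx) + (dy * dy)
]
;--step 1: give each point the label (group) of its nearest centroid
assignPoints: func [points [block!] centroids [block!] /local best dist d i][
    foreach pt points [
        best: 1
        dist: distance2 pt centroids/1
        i: 2
        while [i <= length? centroids][
            d: distance2 pt pick centroids i
            if d < dist [dist: d best: i]
            i: i + 1
        ]
        pt/group: best
    ]
]
;--step 2: move each centroid to the center of gravity of its labeled points
updateCentroids: func [points [block!] centroids [block!] /local k n sx sy c][
    repeat k length? centroids [
        n: 0 sx: 0.0 sy: 0.0
        foreach pt points [
            if pt/group = k [n: n + 1 sx: sx + pt/x sy: sy + pt/y]
        ]
        if n > 0 [
            c: pick centroids k
            c/x: sx / n
            c/y: sy / n
        ]
    ]
]

These two functions are then called in a loop until the centroids stop moving.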




You will find the documented code for Red and Rebol 3 here:

 https://github.com/ldci/R3_OpenCV_Samples/tree/main/image_kmeans


samedi 7 juin 2025

Compress and Uncompress Images

A few years ago, I presented a way of compressing images with the Red zlib proposed by Bruno Anselme (https://redlcv.blogspot.com/2018/01/image-compression-with-red.html). Since then, Red and Oldes's Rebol 3 have implemented different compression methods that are faster and simpler to use. 

Both languages feature a compress function. Input data can be string or binary values, which is useful for RGB images. Returned values are binary. Both languages use lossless compression methods. 

Red and R3 share the following methods: 
deflate: A lossless data compression format that combines the LZ77 algorithm with Huffman coding.
zlib: Implements the deflate compression algorithm and can create files in gzip format. This library is widely used, due to its small size, efficiency and flexibility.
gzip: Based on the deflate algorithm.

R3 adds a few more algorithms: 
br: Brotli compression, a fast alternative to gzip proposed by Google.
crush: A lossless compression package developed by NASA.
lzma: The Lempel-Ziv-Markov chain algorithm, a lossless data compression algorithm.

As these methods are variations on deflate compression, the compression ratio doesn't vary much from one method to another. The difference is in the speed of compression.
 
Of course, both languages have a decompress function. Input data is binary, and the method used must be the same as that chosen for compression.   

Here's a minimalist example of code for Red and R3.  

method: 'zlib                                 ;--a word
img: load %../pictures/in.png                 ;--use your own image
bin: img/rgb                                  ;--image as RGB binary
print ["Method    :" form method]
print ["Image size:" img/size]
print ["Before compression:" nU: length? bin]
t: dt [cImg: compress bin method]             ;--R3/Red compress
print ["After  compression:" nC: length? cImg]
ratio: round/to 1.0 - (1.0 * nC / nU) * 100 0.01  ;--compression ratio (1.0 * forces float division)
print ["Compression :" form ratio "%"]
print ["Compress    :" third t * 1000 "ms"]   ;--in msec
t: dt [uImg: decompress cImg method]          ;--R3/Red decompress
print ["Decompress  :" third t * 1000 "ms"]   ;--in msec
print ["After decompression:" length? uImg]

The result:

Method    : zlib
Image size: 1920x1280
Before compression: 7372800
After  compression: 4011092
Compression : 45.6 %
Compress    : 46.298 ms
Decompress  : 26.706 ms
After decompression: 7372800


Fast and efficient!

samedi 19 avril 2025

Braille Translator with Rebol and Red

I've always been impressed to see how blind children and adults are able to read Braille. It requires unparalleled tactile sensitivity and cognitive skills. In the early days, the braille cell consisted of 6 dots in a 2x3 matrix, representing 64 characters.  Later, this matrix became 2x4 with 8 dots, enabling 256 characters to be represented. 

[dots order 
1 4 
2 5 
3 6 
7 8
]

All these dot characters are now accessible in Unicode, with values ranging from 10240 to 10495 (integer values). I've written a little ANSI->Braille->ANSI translator. The code is written for Rebol 3.19.0, but can easily be adapted to Red 0.6.6; there are some differences in the map! datatype.

The idea is simple. We build 2 dictionaries, one for ANSI->Braille coding and the second for Braille->ANSI coding. Maps are high performance dictionaries that associate keys with values and are very fast.

Classically, the first 32 ANSI codes do not represent characters, but control codes used for communication with a terminal or printer. On the other hand, these 32 codes are used in Braille to facilitate document layout.

This is the code:

#!/usr/local/bin/r3
Rebol [
]
;--generate ANSI and Braille codes
generateCodes: does [
    i: 0                                  ;--we use all chars
    codesA: #[]                           ;--a map object ANSI->Braille
    codesB: #[]                           ;--a map object Braille->ANSI
    while [i <= 255] [
        idx: i + 10240                    ;--for Braille code value
        key: form to-char i               ;--map key is ANSI value
        value: form to-char idx           ;--map value is Braille code
        append codesA reduce [key value]  ;--update map as string values
        append codesB reduce [value key]  ;--idem but in reverse key/value order
        ++ i
    ]
]

processString: func [
    "Processes ANSI string or Braille string"
    string [string!]
    /ansi /braille
][
    str: copy ""
    ;--for ansi use select/case, characters are case-sensitive
    if ansi [foreach c string [append str select/case codesA form c]]
    if braille [foreach c string [append str select codesB form c]]
    str
]

generateCodes
print-horizontal-line
print a: "Hello Fantastic Red and Rebol Worlds!"  
print-horizontal-line
print b: processString/ansi a
print-horizontal-line
print c: processString/braille b
print-horizontal-line

And the result: 

-------------------------------------------------------------------------------
Hello Fantastic Red and Rebol Worlds!
-------------------------------------------------------------------------------
⡈⡥⡬⡬⡯⠠⡆⡡⡮⡴⡡⡳⡴⡩⡣⠠⡒⡥⡤⠠⡡⡮⡤⠠⡒⡥⡢⡯⡬⠠⡗⡯⡲⡬⡤⡳⠡
-------------------------------------------------------------------------------
Hello Fantastic Red and Rebol Worlds!
-------------------------------------------------------------------------------


Thanks to the help of Oldes, we have a faster version that doesn't use the map! datatype.

encode-braille: function [
    "Process ANSI string and returns Braille string"
    text [string!]
][  
    out: copy ""
    foreach char text [
        if char <= 255 [char: char + 10240]
        append out char
    ]
    out
]
decode-braille: function [
    "Process string while decoding Braille's characters"
    text [string!]
][
    out: copy ""
    foreach char text [
        if all [char >= 10240 char <= 10495] [char: char - 10240]
        append out char
    ]
    out
]
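
A quick round-trip check with these two functions might look like this (same test string as above):

a: "Hello Fantastic Red and Rebol Worlds!"
print b: encode-braille a              ;--ANSI -> Braille
print decode-braille b                 ;--Braille -> back to the original string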





mercredi 16 avril 2025

What tools are available for image processing with Red and Rebol?

 For Rebol 2 we have: https://github.com/ldci/OpenCV3-rebol

This version is old (2015) but still operational. There are around 600 basic OpenCV functions available with Rebol 2.

For Rebol 3, there's the fabulous module created by Oldes: 

https://github.com/Oldes/Rebol-OpenCV

Although incomplete, this module is fantastic as it allows you to use the latest versions of OpenCV on different x86 or ARM64 platforms. 

You'll find a lot of samples here: https://github.com/ldci/R3_OpenCV_Samples

For Red, we have https://github.com/ldci/OpenCV3-red, which is still active. Although written more than 10 years ago, the code is compatible with the latest versions of Red (0.6.6).

And of course for Red, we have RedCV: https://github.com/ldci/redCV. Most of the code is written in Red/System and offers over 600 basic functions or routines for image processing with Red. 

With the exception of the Oldes code, I'm the only one maintaining all this, and I'm not sure that many people other than me use this code. In any case, it has enabled me to write some very nice professional applications used at R2P2 (https://uniter2p2.fr).

mardi 15 avril 2025

Motiongrams

A few years ago, I discovered the work of Alexander Refsum Jensenius (https://www.uio.no/ritmo/english/people/management/alexanje/) and really appreciated his work on motiongrams. As I recall, the code was written with Processing (https://processing.org). 

As you know, at R2P2 we make extensive use of video motion analysis to create algorithms for screening babies for motor disorders, using sophisticated neural networks.

But sometimes a simple actimetric analysis is all that's needed, and that's where motiongrams come into their own, because they're so easy to use. 

A few days ago, I resumed the analysis of films of premature babies that we had collected in various Parisian university hospitals (thanks to them). The videos were acquired with a GoPro camera at 120 frames per second.

The code is very simple and can be used with Red and redCV or Rebol 3 and OpenCV.

The first step is to define a ROI in the first image. This prevents the movement of the caregivers from adding noise to the image. 



Once this has been done, we proceed to analyze the video. The simple idea is to have two images at T and T+1. Then, a simple difference between the two images lets us know if there has been any movement.



As a precaution, I add a binary filter to remove the background noise present in the image. Then simply average the binary image to obtain a direct assessment of the rate of movement.
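Here is a minimal core-language sketch of this idea, with no redCV or OpenCV calls. It assumes the two frames at T and T+1 are already available as Red/Rebol image! values of the same size, and simply counts the RGB bytes whose absolute difference exceeds a noise threshold.

;--fraction of RGB bytes that changed between two frames (a rough motion rate)
motionRate: func [img1 [image!] img2 [image!] threshold [integer!] /local b1 b2 n moving i d][
    b1: img1/rgb                           ;--RGB bytes of the frame at T
    b2: img2/rgb                           ;--RGB bytes of the frame at T+1
    n: length? b1
    moving: 0
    i: 1
    while [i <= n][
        d: absolute b1/:i - b2/:i          ;--difference between the two frames
        if d > threshold [moving: moving + 1]  ;--binary filter: ignore background noise
        i: i + 1
    ]
    1.0 * moving / n                       ;--average of the binary image
]
;--example: print motionRate frameT frameT1 30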







samedi 15 mars 2025

YOLO and Redbol

YOLO (You Only Look Once) is a wonderful tool for object detection and image segmentation (https://docs.ultralytics.com/).
Of course, this works very well with Python. You'll find very clear documentation here (https://pyimagesearch.com/2025/01/13/getting-started-with-yolo11/). 
But YOLO also offers a CLI mode that can be used with Rebol 3 or Red.
With Rebol 3 we use the opencv module made by Oldes. The result, with joint detection, arrives in around 1 second!


And for Red: just Red language functions.

#! /usr/local/bin/red-view
Red [
    Needs: View
    Author: "ldci"
]

appDir: system/options/path
change-dir appDir
imageFile: ""
iSize: 320x240
margins: 5x5
isFile?: false
tasks: ["segment" "pose" "detect"] ;--("obb" "classify" not yet)
modes: ["predict" "track" ];--("benchmark"  "train"  "export" "val" not yet)
models: ["yolo11n-seg.pt" "yolo11n-pose.pt" "yolo11n.pt" "yolov9c-seg.pt" "yolov8n-seg.pt"]
source: ""
task: tasks/1
mode: modes/1
model: rejoin ["models/" models/1]
loadImage: does [
    isFile?: false
    tmpFile: request-file
    unless none? tmpFile [
        canvas1/image: load to-red-file tmpFile
        canvas2/image: none
        s: split-path tmpFile
        imageFile: s/2
        source: rejoin ["images/" imageFile]
        sb/text: source
        isFile?: true
    ]
]
runYOLO: does [
    if isFile? [
        canvas2/image: none
        clear retStr/text
        results: %results.txt
        if exists? results [delete results]
        prog: rejoin ["yolo " task " " mode " model=" model " " "source=" source]
        sb/text: prog
        do-events/no-wait
        tt: dt [ret: call/wait/shell/output prog results]
        if ret = 0 [
            retStr/text: f: read results
            f: find f "runs"                ;--get directory
            s: split f ""                   ;--get complete directory
            destination: rejoin [s/1 "/" imageFile]
            canvas2/image: load to-red-file destination
        ]
        sb/text: rejoin ["Result: " destination "  in " round/to (tt/3) 0.01 " sec"]
    ]
]
mainWin: layout [
    title "Red and YOLO"
    origin margins space margins
    base 40x22 snow "Model" 
    dp1: drop-down 120 data models select 1
    on-change [model: rejoin ["models/" pick face/data face/selected]]
    base 40x22 snow "Tasks" 
    dp2: drop-down 80 data tasks select 1
    on-change [task: pick face/data face/selected]
    base 40x22 snow "Mode"  
    dp3: drop-down 80 data modes select 1 
    on-change [mode: pick face/data face/selected]
    button "Load Image" [loadImage]
    button "Run YOLO" [runYOLO]
    pad 280x0 button 50 "Quit" [quit]
    return
    canvas1: base iSize
    canvas2: base iSize
    retStr: area iSize wrap
    return
    sb: field 645
]
view mainWin


Nice job YOLO.





dimanche 9 mars 2025

Septimus: another real-world application with Red

My medical colleagues in the R2P2 unit are not all experienced developers. They've got other things to do, like saving lives. And they want immediate answers to their clinical questions. And they need easy-to-use tools.

That's why I like to use Red (or Rebol3) to develop tailor-made applications for my colleagues. No complexity (like Python), just Redbol simplicity!

That's what Septimus was designed for. We want to be able to follow the evolution of bacterial infections in our young patients according to the treatment applied. The big idea was to use an infrared camera to detect points not visible to the naked eye. I've adapted some of redCV's functions to make an independent module (see the flir.red code). We currently use Septimus to follow patients with acute tibial osteomyelitis.

Septimus is very simple. Once the IR image has been loaded, you can use a rectangle (of variable size and colour) to select the relevant body part in the IR image. Then, with a single button, you get the hottest point in that area.
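As an illustration of the idea (not the actual Septimus code), here is a small Red/Rebol sketch that scans a rectangular area of the temperature block produced by the FLIR module and returns the hottest point. The ROI is given as [x1 y1 x2 y2] in pixel coordinates and w is the IR image width.

hottestPoint: func [temps [block!] w [integer!] roi [block!] /local best bx by t xx yy][
    best: -273.0                          ;--start below any possible temperature
    yy: roi/2
    while [yy <= roi/4][
        xx: roi/1
        while [xx <= roi/3][
            t: pick temps ((yy - 1) * w) + xx
            if t > best [best: t bx: xx by: yy]
            xx: xx + 1
        ]
        yy: yy + 1
    ]
    reduce [bx by best]                   ;--x, y and temperature of the hottest pixel
]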


Septimus is also a salute to the Blake and Mortimer comic strip, of which I've long been a fan. I think I own every album in the series.


mercredi 1 janvier 2025

Savitzky-Golay Filter

In 1964, A. Savitzky and M.J.E. Golay published an article in Analytical Chemistry describing a simple and effective smoothing technique: “Smoothing and Differentiation of Data by Simplified Least Squares Procedures”.

Their method makes it possible to smooth or differentiate a time series with equidistant abscissa values by a simple convolution with a series of coefficients corresponding to the degree of the chosen polynomial interpolation and to the desired operation: simple smoothing or differentiation up to the 5th order.

The convolution is performed by n multiplications, followed by the sum of the products, and completed by dividing by the corresponding norm. The coefficients and norms are provided in the article. Savitzky and Golay's article is accompanied by 11 tables of coefficients suitable for smoothing or for determining the first 5 derivatives; convolutions are given for different degrees of polynomials and over windows of 5 to 25 points. The tables published by Savitzky and Golay contain several typographical errors; they were corrected by J. Steiner, Y. Termonia and J. Deltour in 1972.
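To make the idea concrete, here is a small Rebol 3 sketch of this convolution using the classic 5-point quadratic/cubic smoothing coefficients (-3 12 17 12 -3) and their norm 35 from the 1964 tables; the two extreme points at each end are left untouched. The complete Red and R3 implementations are linked below.

coeffs: [-3 12 17 12 -3]                  ;--5-point smoothing coefficients (Savitzky & Golay, 1964)
norm: 35                                  ;--corresponding norm
sgSmooth: func [data [block!] /local out n i s j][
    out: copy data
    n: length? data
    i: 3
    while [i <= (n - 2)][                 ;--extreme points are ignored
        s: 0
        repeat j 5 [s: s + ((pick coeffs j) * (pick data i + j - 3))]
        poke out i (s / norm)             ;--divide by the norm
        i: i + 1
    ]
    out
]
probe sgSmooth [1.0 2.0 1.5 3.0 2.5 4.0 3.5 5.0]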

I really like this filter, as it preserves signal dynamics while effectively filtering out background noise. We've used this technique a lot in recent years at R2P2 (https://uniter2p2.fr) to process videos (of babies) that were shaking. The shaking prevented our neural networks from correctly identifying the baby's body joints. With this type of filter, everything is back to normal: the video images no longer shake and the detection algorithms became perfect (see Taleb, A., Rambaud, P., Diop, S., Fauches, R., Tomasik, J., Jouen, F., Bergounioux, J. "Spinal Muscular Amyotrophy detection using computer vision and artificial intelligence." JAMA Pediatrics, published online March 4, 2024).

The main advantage of this process is that it's rather easy to program, allowing direct access to derivative values. On the other hand, abscissa values must be equidistant, and extreme points are ignored.  

You can find the filter code for Red: 

https://github.com/ldci/redCV/blob/master/samples/signal_processing/sgFilter.red

And for Rebol 3 here:

https://github.com/ldci/R3_tests/blob/main/signalProcessing/sgFilter.r3



Here's the SGFiltering function in Rebol 3. 


With Red, it's similar, but I used a Red/System routine to speed up the calculations. I'm not sure the rcvSGFiltering routine is any faster than code written in pure Red, as Red has come a long way since I wrote it.
You can find a 100% pure Red version, without routines or Red/System, here:
https://github.com/ldci/Red_KIS/blob/main/Signal_Processing/sgFilter2.red

These examples work with time series, but can easily be adapted to image processing. After all, an image is just a long vector of RGB values!

References:
A. Savitzky, M.J.E. Golay, Anal. Chem., 36, 1627 (1964).
J. Steiner, Y. Termonia, J. Deltour, Anal. Chem., 44, 1909 (1972).