- MOUSEMOVE
- LBUTTONDOWN
- RBUTTONDOWN
- MBUTTONDOWN
- LBUTTONUP
- RBUTTONUP
- MBUTTONUP
- LBUTTONDBLCLK
- RBUTTONDBLCLK
- MBUTTONDBLCLK
- MOUSEWHEEL
- MOUSEHWHEEL
Friday, December 19, 2025
A simple GUI for Rebol3
Friday, December 12, 2025
Virginia Project: A first paper from R2P2 Lab
https://www.frontiersin.org/journals/pediatrics/articles/10.3389/fped.2025.1636667/full
First, the Virginia software, written in the Red language, isolated neonatal anatomy through PyTorch-based PointRend segmentation combined with morphological filtering.
Second, radiometric decoding via ExifTool and ImageMagick extracted pixel-level temperature values mapped to anatomical regions of interest (chest, extremities). Finally, quantitative thermal metrics were derived, including median body surface temperature and spatial thermal variability (interquartile range).
A key advantage of this automated pipeline is its low operator dependence; once the image is acquired, the entire segmentation and feature extraction process is software-driven, minimizing human interpretation bias.
Thursday, October 23, 2025
BlurHash with r3
What is BlurHash? (from https://uploadcare.com/blog/blurhash-images/ and https://github.com/woltapp/blurhash )
BlurHash is a lightweight way to represent a blurred version of an image that was invented by the Wolt team. It is represented by a short piece of text that when decoded can produce a low quality version of an image which can be shown to the user while the actual image is being loaded.
The idea behind BlurHash is to design a placeholder that is close to the original image but has a smaller size. This makes it possible to send the placeholder to the client side of your application, which reduces the time that the user spends waiting for the page to load.
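To get a feel for the format, here is a small sketch in Python (not the Rebol extension) that decodes the size header of a BlurHash string, following the algorithm description published in the woltapp/blurhash repository: the first base83 character packs the number of DCT components used for the placeholder.

```python
# Sketch based on the spec at github.com/woltapp/blurhash, not on the
# Rebol extension's code: the first base83 character of a BlurHash
# string encodes the x and y component counts.

ALPHABET = ("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz#$%*+,-.:;=?@[]^_{|}~")

def decode83(s: str) -> int:
    """Decode a base83 string to an integer."""
    value = 0
    for ch in s:
        value = value * 83 + ALPHABET.index(ch)
    return value

def component_counts(blurhash: str) -> tuple:
    """Return (x_components, y_components) from the first character."""
    size_flag = decode83(blurhash[0])
    return size_flag % 9 + 1, size_flag // 9 + 1

# The sample hash from the woltapp/blurhash README uses 4x3 components:
print(component_counts("LEHV6nWB2yk8pyo0adR*.7kCMdnj"))  # (4, 3)
```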
Oldes wrote an extension for Rebol 3 that makes it very easy to use BlurHash.
See https://github.com/Siskin-framework/Rebol-BlurHash
Rebol [
    title: "Rebol/BlurHash test"
]
blurhash: import 'blurhash
cv: import 'opencv
image: load %../pictures/test1.tiff    ;--use your own image
print ["Encoding image of size" as-yellow image/size]
hash: blurhash/encode image
print ["String: " hash]
print ["Decoding hash into image"]
blurred: resize blurhash/decode hash 32x32 image/size
with cv [
    print "Source Image"
    imshow/name image "Source"
    waitkey 0
    print "Blurred Image"
    imshow/name blurred "Blurred"
    waitkey 0
    print "Any key to close"
]
Thursday, October 2, 2025
SEDFormer
Exciting news! Our paper "SEDformer: Path Signatures and Transformers to Predict Newborns Movement Symmetry" has been accepted at the International Joint Workshop of Artificial Intelligence for Healthcare (HC@AIxIA) and HYbrid Models for Coupling Deductive and Inductive ReAsoning (HYDRA), ECAI 2025.
This research introduces SEDformer, a FEDformer variant that integrates path signatures to enhance newborn movement symmetry prediction - a crucial step toward automated early screening tools for motor development disorders.
Thank you to the entire team for this interdisciplinary collaboration between applied mathematics, AI, and pediatric medicine.
Saturday, September 20, 2025
Nonlinear acoustic phenomena tune the adults’ facial thermal response to baby cries with the cry amplitude envelope
A new article on the use of thermography: https://royalsocietypublishing.org/doi/10.1098/rsif.2025.0150.
I am very proud of this innovative article. The first thermal image processing codes used Red and Rebol-3.
Tuesday, September 2, 2025
Saturday, July 26, 2025
Using FLIR cameras for research
The IR cameras from FLIR (https://www.flir.fr) are little marvels of technology that can acquire quality IR images. What I like about FLIR is that the data format remains the same regardless of the camera used. For my work in neonatal medicine, I use either the C3 model (basic) or the 650SC model (much more expensive and more powerful).
FLIR generates four types of image. The first is the IR image, whose resolution varies from 320x240 to 640x480 pixels, depending on the camera model. The second is an RGB image, up to six times larger than the IR image. The third image, which can be the same size as the IR image or smaller (80x60 pixels), contains the temperatures in degrees Celsius. Finally, the last image is the color palette of the IR image. So you can imagine all the calculations that have to be made to obtain comparable images. You'll find various toolkits in Python, MATLAB, R... that allow you to process these different images. Unfortunately, these libraries are not universal and often depend on other libraries that are not easy to install.
That's why, as part of the Virginia project (https://uniter2p2.fr/projets/), I designed an easy-to-use FLIR image processing module for the Red and Rebol 3 languages.
THE FLIR MODULE
This module has been tested with various FLIR cameras. Its main function is to decode the metadata contained in a radiometric file and extract the visible image (RGB), the infrared image (IR), the color palette associated with the IR image and the temperatures associated with each pixel.
This module calls on two external programs that are installed by default on macOS and Linux.
ExifTool (https://exiftool.org), written and maintained by Phil Harvey, is a fabulous program written in Perl that lets you read and write the metadata of a large number of file formats. ExifTool supports FLIR files and runs on macOS, Linux and Windows.
ImageMagick (https://imagemagick.org/index.php) is an open-source software package comprising a library and a set of command-line utilities for creating, converting, modifying and displaying images in a wide range of formats. The FLIR module essentially uses the magick utility for the macOS and Linux versions. For Windows, use a portable version that supports 16-bit images (https://imagemagick.org/archive/binaries/ImageMagick-7.1.0-60-portable-Q16-x64.zip) and the magick command.
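Under the hood, a module like this drives both tools from the shell. As an illustration only (this is a hedged Python sketch, not ldci's actual implementation), the calls could look like the following; the tag names -FLIR:all, -EmbeddedImage and -RawThermalImage come from ExifTool's documented FLIR tag group.

```python
# Sketch of the external calls a FLIR module can make; not the actual
# Red/Rebol module. Tag names are from ExifTool's FLIR group docs.
import json
import subprocess

def metadata_cmd(flir_file: str) -> list:
    # All FLIR tags, dumped as JSON for easy parsing
    return ["exiftool", "-j", "-FLIR:all", flir_file]

def visible_image_cmd(flir_file: str) -> list:
    # -b dumps the binary value of a tag: here the embedded RGB photo
    return ["exiftool", "-b", "-EmbeddedImage", flir_file]

def raw_data_cmd(flir_file: str) -> list:
    # 16-bit raw sensor counts, later converted to temperatures
    return ["exiftool", "-b", "-RawThermalImage", flir_file]

def read_flir_metadata(flir_file: str) -> dict:
    """Run ExifTool and return the FLIR tags as a dictionary."""
    out = subprocess.run(metadata_cmd(flir_file), capture_output=True,
                         check=True, text=True).stdout
    return json.loads(out)[0]
```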
The module provides:
rcvGetFlirMetaData: This function takes the name of the FLIR file as a parameter (as a character string). It returns all the metadata, stored in the patient's irtmp/exif.red file, in a format that can be processed directly by Red or Rebol 3.
rcvGetVisibleImage: This function extracts the RGB image from the FLIR file and saves it in the irtmp/rgb.jpg file.
rcvGetFlirPalette: Extracts the color palette contained in the FLIR file and samples it for a linear range of values [0..255]. The extracted image is saved as irtmp/palette.png.
rcvMakeRedPalette: Exports the color palette as a block for fast processing with Red or Rebol 3.
rcvGetFlirRawData: Extracts raw temperature data (in 16-bit format) into the irtmp/rawimg.png file.
rcvGetPlanckValues: Retrieves all constants required for accurate temperature calculations.
rcvGetImageTemperatures: This function uses the previous two functions to calculate the temperature of each image pixel as an integer value. It creates the image irtmp/celsius.pgm, a 16-bit image with a maximum value of 65535: a simple text file containing the image size and the 16-bit value of each pixel.
rcvGetTemperatureAsBlock: The temperatures contained in the irtmp/celsius.pgm image are returned as real values (e.g. 37.2) in the block passed as a parameter to the function. This is a dynamic calculation.
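The raw-to-Celsius conversion is not detailed above, but the Planck constants stored in FLIR files are commonly used as follows. This is a sketch of the formula circulated by the ExifTool community, not the module's code, and it ignores emissivity and reflected-temperature corrections; the constant values below are only plausible examples, not from a real camera.

```python
import math

def raw_to_celsius(raw, R1, R2, B, F, O):
    """Convert a 16-bit raw sensor count to degrees Celsius using the
    Planck constants (PlanckR1, PlanckR2, PlanckB, PlanckF, PlanckO)
    that ExifTool reports for FLIR files. Emissivity and reflected
    temperature corrections are deliberately left out of this sketch."""
    kelvin = B / math.log(R1 / (R2 * (raw + O)) + F)
    return kelvin - 273.15

# Example constants in the order R1, R2, B, F, O (illustrative only):
planck = (21106.77, 0.012545258, 1501.0, 1.0, -7340.0)
t = raw_to_celsius(16384, *planck)   # a body-temperature-scale value
```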
WHY IS IMAGE ALIGNMENT FUNDAMENTAL?
The neural networks we use to identify babies' bodies have not been trained on thermal images, which are difficult to process, but they work very well with RGB images. Once the baby's body is correctly identified in the RGB image, we can use the resulting body mask to retrieve the temperatures in the thermal image. Obviously, we can't use the RGB image directly: we need the RGB image aligned with the thermal image.
In previous versions of Virginia, I wrote a rather complicated algorithm for aligning RGB and IR images. Studying the code, I found that it was possible to make it simpler. Three values help us: Real2IR, OffsetX and OffsetY, which come from the rcvGetFlirMetaData function. Real2IR gives us the size ratio between the RGB image and the thermal image. OffsetX and OffsetY are the X and Y offsets to apply to find the origin of the ROI in the RGB image. If these values are equal to 0, alignment is not required.
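One plausible reading of those three values, sketched in Python: Real2IR scales IR pixel coordinates up to RGB coordinates, and the offsets shift the centred region. This is an assumption to be checked against your own metadata, not the Red/Rebol implementation.

```python
def rgb_crop_box(rgb_size, ir_size, real2ir, offset_x, offset_y):
    """Return (left, top, right, bottom): the region of the RGB image
    that covers the same field of view as the IR image.
    Assumption: Real2IR is the RGB/IR scale factor and OffsetX/OffsetY
    shift the centred region; verify against your own FLIR files."""
    rgb_w, rgb_h = rgb_size
    ir_w, ir_h = ir_size
    crop_w = round(ir_w * real2ir)
    crop_h = round(ir_h * real2ir)
    left = (rgb_w - crop_w) // 2 + offset_x
    top = (rgb_h - crop_h) // 2 + offset_y
    return left, top, left + crop_w, top + crop_h

# With a ratio of 1.0 and zero offsets, no alignment is required:
box = rgb_crop_box((640, 480), (640, 480), 1.0, 0, 0)  # (0, 0, 640, 480)
```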
The result is perfect!
The code for Rebol 3 is here:
The code for Red is here:
https://github.com/ldci/redCV/blob/master/samples/image_thermal/Flir/align.red
Thursday, July 3, 2025
Gnuplot
I really like Gnuplot (http://www.gnuplot.info), a command-line utility for creating sophisticated graphics. It's in line with Red and Rebol's philosophy: Keep It Simple (KIS). Here's an example:
#!/usr/local/bin/gnuplot -persist
set hidden3d
set isosamples 50,50
set ticslevel 0
set pm3d
set palette defined (0 "black", 0.25 "blue", 0.5 "green", 0.75 "yellow", 1 "red")
splot sin(sqrt(x**2+y**2))/sqrt(x**2+y**2)
And the result:
Saturday, June 28, 2025
Statistics on image
With Red or Rebol 3, the vector! type is ideal for fast numerical calculations.
Recently, Oldes has introduced new properties for vectors in R3 that allow you to obtain the descriptive statistics of a vector in one basic step. Great work!
An example
REBOL [
]
vect: #(float64! [1.62 1.72 1.64 1.7 1.78 1.64 1.65 1.64 1.66 1.74])
print query vect object!
signed: #(true)
type: decimal!
size: 64
length: 10
minimum: 1.62
maximum: 1.78
range: 0.16
sum: 16.79
mean: 1.679
median: 1.655
variance: 0.02529
population-deviation: 0.0502891638427206
sample-deviation: 0.0530094331227943
The same query applied to an 8-bit integer! vector (here 65536 values, for example the pixels of a 256x256 grayscale image) gives:
signed: #(false)
type: integer!
size: 8
length: 65536
minimum: 2
maximum: 225
range: 223
sum: 4377641
mean: 66.7975006103516
median: 64.0
variance: 126148517.630557
population-deviation: 43.87338169177
sample-deviation: 43.873716422939
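For readers who want to check these figures, the first example is easy to reproduce with Python's statistics module (a sketch for verification, unrelated to the R3 implementation). Note that the variance reported above (0.02529) appears to be the raw sum of squared deviations: dividing it by n gives the square of the population deviation, and dividing by n-1 the square of the sample deviation.

```python
import statistics as st

# Same values as the float64! vector in the R3 example above
vect = [1.62, 1.72, 1.64, 1.7, 1.78, 1.64, 1.65, 1.64, 1.66, 1.74]

mean = st.mean(vect)                        # 1.679
median = st.median(vect)                    # 1.655
ssd = sum((x - mean) ** 2 for x in vect)    # ~0.02529, the "variance" above
pop_dev = st.pstdev(vect)                   # ~0.0502891638427206
samp_dev = st.stdev(vect)                   # ~0.0530094331227943
```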
Friday, June 13, 2025
K-means algorithm
The K-means algorithm is a well-known unsupervised algorithm for clustering that can be used for data analysis, image segmentation, semi-supervised learning... The k-means clustering algorithm is an exclusive method: a data point can exist in only one cluster.
K-means is an iterative centroid-based clustering algorithm that partitions a dataset into similar groups based on the distance between their centroids. The centroid (or cluster center) is either the mean or the median of all points.
Given a set of points and an integer k, the algorithm aims to divide the points into k groups, called clusters, that are homogeneous.
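The two alternating steps (assign each point to its nearest centroid, then move each centroid to the mean of its cluster) can be sketched in a few lines of pure Python. This is a minimal illustration of the algorithm described above, not the redCV/OpenCV implementation, and the points and initial centroids are made up for the example.

```python
# Minimal k-means sketch: repeat assignment and update steps.
def kmeans(points, centroids, iterations=20):
    """points: list of (x, y); centroids: initial cluster centres.
    Returns (labels, centroids) after a fixed number of iterations."""
    for _ in range(iterations):
        # 1. Assignment: each point joins its nearest centroid
        labels = [min(range(len(centroids)),
                      key=lambda k: (p[0] - centroids[k][0]) ** 2
                                  + (p[1] - centroids[k][1]) ** 2)
                  for p in points]
        # 2. Update: each centroid moves to the mean of its cluster
        for k in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centroids[k] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return labels, centroids

# Two obvious groups around (0, 0) and (10, 10):
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
labels, centres = kmeans(pts, [(0.5, 0.5), (9.0, 9.0)])
print(labels)   # [0, 0, 0, 1, 1, 1]
```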
In this sample, we generate a set of random points in an image.
;--an object for storing values (points and clusters)
point: object [
    x: 0.0        ;--x position
    y: 0.0        ;--y position
    group: 0      ;--cluster number (label)
]
You will find the documented code for Red and Rebol 3 here:
https://github.com/ldci/R3_OpenCV_Samples/tree/main/image_kmeans
Saturday, June 7, 2025
Compress and Uncompress Images
A few years ago, I presented a way of compressing images with the Red zlib proposed by Bruno Anselme (https://redlcv.blogspot.com/2018/01/image-compression-with-red.html). Since then, Red and Oldes's Rebol 3 have implemented different compression methods that are faster and simpler to use.
Both languages feature a compress function. Input data can be string or binary values, which is useful for RGB images. Returned values are binary. Both languages use lossless compression methods.
method: 'zlib                        ;--a word
img: load %../pictures/in.png        ;--use your own image
bin: img/rgb                         ;--image as RGB binary
print ["Method :" form method]
print ["Image size:" img/size]
print ["Before compression:" nU: length? bin]
t: dt [cImg: compress bin method]    ;--R3/Red compress
print ["After compression:" nC: length? cImg]
ratio: round/to 1.0 - (nC / nU) * 100 0.01    ;--compression ratio
print ["Compression :" form ratio "%"]
print ["Compress :" third t * 1000 "ms"]      ;--in msec
t: dt [uImg: decompress cImg method] ;--R3/Red decompress
print ["Decompress :" third t * 1000 "ms"]    ;--in msec
print ["After decompression:" length? uImg]
Method : zlib
Image size: 1920x1280
Before compression: 7372800
After compression: 4011092
Compression : 45.6 %
Compress : 46.298 ms
Decompress : 26.706 ms
After decompression: 7372800
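The same measurement is easy to make in any language with a zlib binding. A rough Python analogue of the script above, using synthetic repetitive data instead of an image (so the ratio will differ from the one shown):

```python
import zlib

# Synthetic, fairly repetitive data standing in for the RGB binary
data = bytes(range(256)) * 1000          # 256000 bytes
packed = zlib.compress(data)
ratio = (1.0 - len(packed) / len(data)) * 100

unpacked = zlib.decompress(packed)
assert unpacked == data                  # lossless: round-trip is exact
print(f"Compression: {ratio:.1f} %")
```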
Saturday, April 19, 2025
Braille Translator with Rebol and Red
I've always been impressed to see how blind children and adults are able to read Braille. It requires unparalleled tactile sensitivity and cognitive skills. In the early days, the braille cell consisted of 6 dots in a 2x3 matrix, representing 64 characters. Later, this matrix became 2x4 with 8 dots, enabling 256 characters to be represented.
1 4
2 5
3 6
7 8
All these dot characters are now accessible in Unicode, with values ranging from 10240 to 10495 (integer values). I've written a little ANSI->Braille->ANSI translator. The code is written for Rebol 3.19.0, but can easily be adapted to Red 0.6.6; there are some differences with the map! datatype.
The idea is simple. We build two dictionaries, one for ANSI->Braille coding and the second for Braille->ANSI coding. Maps are high-performance dictionaries that associate keys with values.
Classically, the first 32 ANSI codes do not represent printable characters, but control codes used for communication with a terminal or printer. On the other hand, these 32 codes are used in Braille to facilitate document layout.
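The sample output suggests a direct offset into the Unicode braille block (U+2800..U+28FF): each 8-bit character code maps to the pattern at the same offset. A minimal sketch of that idea in Python (an interpretation of the output, not the author's Rebol code):

```python
BRAILLE_BASE = 0x2800  # first of the 256 Unicode braille patterns (10240)

def to_braille(text: str) -> str:
    """Map each 8-bit character code to the braille pattern at the
    same offset in the U+2800..U+28FF block."""
    return "".join(chr(BRAILLE_BASE + ord(c)) for c in text)

def from_braille(cells: str) -> str:
    """Reverse mapping: braille patterns back to 8-bit characters."""
    return "".join(chr(ord(c) - BRAILLE_BASE) for c in cells)

cells = to_braille("Hello")
print(cells)                           # ⡈⡥⡬⡬⡯
assert from_braille(cells) == "Hello"  # round-trip is exact
```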
And the result:
-------------------------------------------------------------------------------
Hello Fantastic Red and Rebol Worlds!
-------------------------------------------------------------------------------
⡈⡥⡬⡬⡯⠠⡆⡡⡮⡴⡡⡳⡴⡩⡣⠠⡒⡥⡤⠠⡡⡮⡤⠠⡒⡥⡢⡯⡬⠠⡗⡯⡲⡬⡤⡳⠡
-------------------------------------------------------------------------------
Hello Fantastic Red and Rebol Worlds!
-------------------------------------------------------------------------------
Wednesday, April 16, 2025
What tools are available for image processing with Red and Rebol?
For Rebol 2 we have: https://github.com/ldci/OpenCV3-rebol.
This version is old (2015) but still operational. There are around 600 basic OpenCV functions available with Rebol 2.
For Rebol 3, there's the fabulous module created by Oldes:
https://github.com/Oldes/Rebol-OpenCV
Although incomplete, this module is fantastic, as it allows you to use the latest versions of OpenCV on different x86 or ARM64 platforms.
You'll find a lot of samples here: https://github.com/ldci/R3_OpenCV_Samples
For Red, we have https://github.com/ldci/OpenCV3-red, which is still active. Although written more than 10 years ago, the code is compatible with the latest versions of Red (0.6.6).
And of course for Red, we have RedCV: https://github.com/ldci/redCV. Most of the code is written in Red/System and offers over 600 basic functions or routines for image processing with Red.
With the exception of the Oldes code, I'm the only one to maintain all this, and I'm not sure that many people other than me use these codes. In any case, it has enabled me to write some very nice professional applications used at R2P2 (https://uniter2p2.fr).
Tuesday, April 15, 2025
Motiongrams
A few years ago, I discovered the work of Alexander Refsum Jensenius (https://www.uio.no/ritmo/english/people/management/alexanje/) and really appreciated his work on motiongrams. As I recall, the code was written with Processing (https://processing.org).
As you know, at R2P2 we make extensive use of video motion analysis to create algorithms for screening babies for motor disorders, using sophisticated neural networks.
But sometimes a simple actimetric analysis is all that's needed, and that's where motiongrams come into their own, because they're so easy to use.
A few days ago, I resumed the analysis of films of premature babies that we had collected in various Parisian university hospitals (thanks to them). The videos were acquired with a GoPro camera at 120 frames per second.
The code is very simple and can be used with Red and redCV or Rebol 3 and OpenCV.
The first step is to define a ROI in the first image. This prevents the movement of the caregivers from adding noise to the image.
Once this has been done, we proceed to analyze the video. The simple idea is to have two images at T and T+1. Then, a simple difference between the two images lets us know if there has been any movement.
As a precaution, I add a binary filter to remove the background noise present in the image. Then simply average the binary image to obtain a direct assessment of the rate of movement.
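The three steps above (frame difference, binary filter, average) can be sketched in pure Python; this is an illustration of the idea, not the redCV/OpenCV code, with grayscale frames represented as 2-D lists of 0..255 values.

```python
# Frame differencing as described above: a pixel counts as "moving"
# when its absolute difference between T and T+1 exceeds a threshold
# (the binary filter that removes background noise).
def motion_rate(frame_a, frame_b, threshold=25):
    """Share of pixels whose difference exceeds the threshold:
    a direct assessment of the rate of movement."""
    moving = total = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if abs(pa - pb) > threshold:
                moving += 1
    return moving / total

still = [[10] * 4 for _ in range(4)]   # a tiny 4x4 "frame"
moved = [row[:] for row in still]
moved[0][0] = moved[0][1] = 200        # two pixels changed out of 16
print(motion_rate(still, moved))       # 0.125
```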