Friday, December 19, 2025

A simple GUI for Rebol3

Rebol3, developed by Oldes (https://github.com/Oldes/Rebol3), is improving day by day. Unfortunately, Rebol3 does not yet have a VID (except on Windows, as a proof of concept), which means that attractive interfaces cannot yet be created.
Fortunately, Oldes added nice module management to Rebol3, which allows this problem to be overcome.
In this code, we use two modules: Blend2D and OpenCV.
The idea is as follows: we use the Blend2D module to create the interface for our application and the OpenCV module to display the result.
With the Blend2D module, I created five buttons (simple Blend2D boxes). Each button is associated with a Rebol3 function that executes when the button is clicked. Button 1 calls the loadImage function, which lets you select an image and display it. Buttons 2 and 3 call the convertTo function, which displays the image in grayscale or in the HSV color space. Button 4 displays the original image, and button 5 exits the application.
Now, the question is how to associate mouse events with the different buttons. Blend2D does not have a mouse manager, which is where the OpenCV module comes in.

Oldes has introduced a mouse event handler into the OpenCV module for these events:
  • MOUSEMOVE
  • LBUTTONDOWN
  • RBUTTONDOWN
  • MBUTTONDOWN
  • LBUTTONUP
  • RBUTTONUP
  • MBUTTONUP
  • LBUTTONDBLCLK
  • RBUTTONDBLCLK
  • MBUTTONDBLCLK
  • MOUSEWHEEL
  • MOUSEHWHEEL
With the cv/setMouseCallback function, we can retrieve the event type, the x and y position of the mouse, and the event flags. We also obtain the mouse position as a pair! value.

;--OpenCV mouse callback in context
ctx: context [
    on-mouse-click: func [
        type  [integer!]
        x     [integer!]
        y     [integer!]
        flags [integer!]
    ][
        pos: mcb/pos                        ;--mouse position as a pair!
        if type == cv/EVENT_LBUTTONDOWN [
            if pos/y < 20 [
                case [
                    all [pos > 0x0   pos < 80x20]  [loadImage]                            ;--button 1
                    all [pos > 80x0  pos < 160x20] [if isLoaded? [convertTo/GS fileName]] ;--button 2
                    all [pos > 160x0 pos < 240x20] [if isLoaded? [convertTo/HSV fileName]];--button 3
                    all [pos > 240x0 pos < 320x20] [showSource]                           ;--button 4
                    all [pos > 320x0 pos < 400x20] [quit]                                 ;--button 5
                ]
            ]
        ]
    ]
]

In the on-mouse-click function, we first check that the mouse is within the button area. Then we check which button the mouse is on. If the click lands on a button, we execute its associated function. In this example, button 3 is clicked and the convertTo function is executed.
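For readers who don't use Rebol, the same hit test can be sketched in Python. The 80x20 button geometry comes from the example above; the function name and the exact boundary handling are my own.

```python
# Hypothetical re-implementation of the hit test: five 80x20 buttons
# laid out side by side along the top edge of the window.
BUTTON_WIDTH = 80
BUTTON_HEIGHT = 20
BUTTON_COUNT = 5

def button_at(x, y):
    """Return the 1-based index of the button under (x, y), or None."""
    if not 0 < y < BUTTON_HEIGHT:               # outside the button strip
        return None
    if not 0 < x < BUTTON_WIDTH * BUTTON_COUNT: # past the last button
        return None
    return x // BUTTON_WIDTH + 1                # which 80-pixel slot?
```

For instance, button_at(100, 10) returns 2, which would trigger the grayscale conversion.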
Added new faces for the Rebol3 GUI:
  • CheckBox
  • RadioBox
  • Horizontal Scroller
  • Toggle
  • Progress

Friday, December 12, 2025

Virginia Project: A first paper from R2P2 Lab

https://www.frontiersin.org/journals/pediatrics/articles/10.3389/fped.2025.1636667/full

First, the Virginia software, written in the Red language, isolated neonatal anatomy through PyTorch-based PointRend segmentation combined with morphological filtering.

Second, radiometric decoding via ExifTool and ImageMagick extracted pixel-level temperature values mapped to anatomical regions of interest (chest, extremities). Finally, quantitative thermal metrics were derived, including median body surface temperature and spatial thermal variability (interquartile range).

A key advantage of this automated pipeline is its low operator dependence; once the image is acquired, the entire segmentation and feature extraction process is software-driven, minimizing human interpretation bias.





Thursday, October 23, 2025

BlurHash with r3

 

What is BlurHash? (from https://uploadcare.com/blog/blurhash-images/ and https://github.com/woltapp/blurhash )

BlurHash is a lightweight way to represent a blurred version of an image, invented by the Wolt team. It is a short piece of text that, when decoded, produces a low-quality version of an image which can be shown to the user while the actual image is loading.

The idea behind BlurHash is to design a placeholder that is close to the original image but has a smaller size. This makes it possible to send the placeholder to the client side of your application, which reduces the time that the user spends waiting for the page to load.

Oldes wrote an extension for Rebol 3 that makes it very easy to use BlurHash.

See https://github.com/Siskin-framework/Rebol-BlurHash


#!/usr/local/bin/r3
Rebol [
    title: "Rebol/BlurHash test"
]
blurhash: import 'blurhash
cv:       import 'opencv

image: load %../pictures/test1.tiff     ;--use your own image
print ["Encoding image of size" as-yellow image/size]
hash: blurhash/encode image
print ["String: " hash]
print ["Decoding hash into image"]
blurred: resize blurhash/decode hash 32x32 image/size
;--use the OpenCV extension for visualisation
with cv [
    print "Source Image"
    imshow/name image "Source"
    waitkey 0
    print "Blurred Image"
    imshow/name blurred "Blurred"
    waitkey 0
    print "Any key to close"
]

Thanks to Oldes!😀




Thursday, October 2, 2025

SEDFormer

Exciting news! Our paper "SEDformer: Path Signatures and Transformers to Predict Newborns Movement Symmetry" has been accepted at the International Joint Workshop of Artificial Intelligence for Healthcare (HC@AIxIA) and HYbrid Models for Coupling Deductive and Inductive ReAsoning (HYDRA), ECAI 2025.


This research introduces SEDformer, a FEDformer variant that integrates path signatures to enhance newborn movement symmetry prediction - a crucial step toward automated early screening tools for motor development disorders.

Thank you to the entire team for this interdisciplinary collaboration between applied mathematics, AI, and pediatric medicine.

Rambaud, P., Rimmel, A., Trabelsi, I., Zini, J., Wodecki, A., Motte Signoret, E., Jouen, F., Tomasik, J., Bergounioux, J. (2025). SEDformer: Path Signatures and Transformers to Predict Newborns Movement Symmetry. International Joint Workshop of Artificial Intelligence for Healthcare (HC@AIxIA) and HYbrid Models for Coupling Deductive and Inductive ReAsoning (HYDRA), ECAI 2025.

Saturday, July 26, 2025

Using FLIR cameras for research

The IR cameras from FLIR (https://www.flir.fr) are little marvels of technology that can acquire quality IR images. What I like about FLIR is that the data format remains the same regardless of the camera used. For my work in neonatal medicine, I use either the C3 model (basic) or the 650SC model (much more expensive and more powerful).

FLIR cameras generate four types of image. The first is the IR image, whose resolution varies from 320x240 to 640x480 pixels, depending on the camera model. The second is an RGB image, up to six times larger than the IR image. The third, which can be the same size as the IR image or smaller (e.g. 80x60 pixels), contains temperatures in degrees Celsius. Finally, the last image is the color palette of the IR image. So you can imagine all the calculations required to obtain comparable images. You'll find various toolkits in Python, MATLAB, R... that allow you to process these different images. Unfortunately, these libraries are not universal and often depend on other libraries that are not easy to install.

That's why, as part of the Virginia project (https://uniter2p2.fr/projets/), I designed an easy-to-use FLIR image processing module for the Red and Rebol 3 languages.

THE FLIR MODULE

This module has been tested with various FLIR cameras. Its main function is to decode the metadata contained in a radiometric file and extract the visible image (RGB), the infrared image (IR), the color palette associated with the IR image and the temperatures associated with each pixel.

This module calls on two external programs that are installed by default on macOS and Linux.

ExifTool (https://exiftool.org), written and maintained by Phil Harvey, is a fabulous program written in Perl that lets you read and write the metadata of a large number of file formats. ExifTool supports FLIR files and runs on macOS, Linux and Windows.


ImageMagick (https://imagemagick.org/index.php) is an open-source software package comprising a library and a set of command-line utilities for creating, converting, modifying and displaying images in a wide range of formats. The FLIR module essentially uses the magick utility for the macOS and Linux versions. For Windows, use a portable version that supports 16-bit images (https://imagemagick.org/archive/binaries/ImageMagick-7.1.0-60-portable-Q16-x64.zip) and the magick command.

The module provides the following functions:

rcvGetFlirMetaData: This function takes the name of the FLIR file as a parameter (a character string). It returns all the metadata in the irtmp/exif.red file, in a format that can be directly processed by Red or Rebol 3.

rcvGetVisibleImage: This function extracts the RGB image from the FLIR file and saves it in the irtmp/rgb.jpg file.

rcvGetFlirPalette: Extracts the color palette contained in the FLIR file and samples it for a linear range of values [0..255]. The extracted image is saved as irtmp/palette.png.

rcvMakeRedPalette: Exports the color palette as a block for fast processing with Red or Rebol 3.

rcvGetFlirRawData: Extracts raw temperature data (in 16-bit format) into the irtmp/rawimg.png file.

rcvGetPlanckValues: Retrieves all constants required for accurate temperature calculations.

rcvGetImageTemperatures: This function uses the previous two functions to calculate the temperature of each image pixel as an integer value. It creates the irtmp/celsius.pgm image. This is a 16-bit image with a maximum value of 65535: a simple text file containing the image size and the 16-bit value of each pixel.

rcvGetTemperatureAsBlock: The temperatures contained in the irtmp/celsius.pgm image are returned as real values (e.g. 37.2) in the block passed as a parameter to the function. This is a dynamic calculation.

WHY IMAGE ALIGNMENT IS FUNDAMENTAL

The neural networks we use to identify babies' bodies have not been trained on thermal images, which are difficult to process, but they work very well with RGB images. Once the baby's body is correctly identified in the RGB image, we can use the resulting body mask to retrieve the temperatures from the thermal image. Obviously, we can't use the RGB image directly, but rather the RGB image aligned with the thermal image.

In previous versions of Virginia, I wrote a rather complicated algorithm for aligning the RGB and thermal images. Studying the code, I found that it could be made simpler. Three values from the rcvGetFlirMetaData function help us: Real2IR, offsetX and offsetY. Real2IR gives the size ratio between the RGB image and the thermal image. OffsetX and offsetY are the X and Y offsets to apply to find the origin of the ROI in the RGB image. If these values are equal to 0, no alignment is required.
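As a sketch (in Python, and not the module's actual code), here is one way these three values can drive the alignment. The centering convention is an assumption on my part; check it against your camera's metadata.

```python
# Hypothetical alignment helper. Assumption: the IR field of view maps to a
# crop of the RGB image that is centered, then shifted by (offsetX, offsetY).
def align_rgb_to_ir(rgb_size, ir_size, real2ir, offset_x, offset_y):
    """Return (x, y, w, h): the RGB crop matching the IR field of view."""
    rgb_w, rgb_h = rgb_size
    ir_w, ir_h = ir_size
    w = round(ir_w * real2ir)           # RGB pixels covered by the IR image
    h = round(ir_h * real2ir)
    x = (rgb_w - w) // 2 + offset_x     # centered crop, shifted by offsetX
    y = (rgb_h - h) // 2 + offset_y     # centered crop, shifted by offsetY
    return x, y, w, h
```

For example, a 640x480 RGB image, a 320x240 IR image and Real2IR = 1.5 give a 480x360 crop starting at (80, 60) when both offsets are 0.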

The result is perfect!



The code for Rebol 3 is here:

https://github.com/ldci/FLIR

The code for Red is here:

https://github.com/ldci/redCV/blob/master/samples/image_thermal/Flir/align.red


Thursday, July 3, 2025

Gnuplot

I really like Gnuplot (http://www.gnuplot.info), a command-line utility for creating sophisticated graphics. It's in line with Red and Rebol's philosophy: Keep It Simple (KIS). Here's an example:

#!/usr/local/bin/gnuplot -persist
set hidden3d
set isosamples 50,50
set ticslevel 0
set pm3d
set palette defined (0 "black", 0.25 "blue", 0.5 "green", 0.75 "yellow", 1 "red")
splot sin(sqrt(x**2+y**2))/sqrt(x**2+y**2)

And the result:


Just a few lines of code. Great!
 


Saturday, June 28, 2025

Statistics on image

With Red or Rebol 3, the vector! type is ideal for fast numerical calculations.

Recently, Oldes introduced new properties for vectors in R3 that let you obtain the descriptive statistics of a vector in one basic step. Great work!

An example

#!/usr/local/bin/r3
REBOL [ 
]
vect: #(float64! [1.62 1.72 1.64 1.7 1.78 1.64 1.65 1.64 1.66 1.74])
print query vect object!

The result:

signed: #(true)
type: decimal!
size: 64
length: 10
minimum: 1.62
maximum: 1.78
range: 0.16
sum: 16.79
mean: 1.679
median: 1.655
variance: 0.02529
population-deviation: 0.0502891638427206
sample-deviation: 0.0530094331227943
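For the curious, the printed values can be reproduced in Python. Judging from the numbers, the variance field reported by R3 is the sum of squared deviations from the mean; the two deviations then follow by dividing by n or n - 1.

```python
import math
import statistics

# Reproducing the R3 vector statistics above (my reading of the output:
# `variance` is the sum of squared deviations, not the divided variance).
data = [1.62, 1.72, 1.64, 1.7, 1.78, 1.64, 1.65, 1.64, 1.66, 1.74]

n = len(data)
mean = sum(data) / n
median = statistics.median(data)
variance = sum((v - mean) ** 2 for v in data)       # sum of squared deviations
population_deviation = math.sqrt(variance / n)      # divide by n
sample_deviation = math.sqrt(variance / (n - 1))    # divide by n - 1
```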


But this can also be applied to images!
An example 

#!/usr/local/bin/r3
REBOL [
]
cv: import 'opencv
with cv [
    filename: %../images/lena.png       ;--use your own image
    mat: imread/with filename 2         ;--read as a grayscale image with one channel
    imshow/name mat filename            ;--display the image with the file name as title
    moveWindow filename 200x10          ;--move the window
    vect: get-property mat MAT_VECTOR   ;--get matrix values as a vector
    print query vect object!
    print "A key to quit"
    waitKey 0
]

The result:

signed: #(false)
type: integer!
size: 8
length: 65536
minimum: 2
maximum: 225
range: 223
sum: 4377641
mean: 66.7975006103516
median: 64.0
variance: 126148517.630557
population-deviation: 43.87338169177
sample-deviation: 43.873716422939


Efficient :)



 

Friday, June 13, 2025

K-means algorithm

The K-means algorithm is a well-known unsupervised clustering algorithm that can be used for data analysis, image segmentation, semi-supervised learning, and more. K-means clustering is an exclusive method: a data point can belong to only one cluster.

K-means is an iterative centroid-based clustering algorithm that partitions a dataset into similar groups based on the distance between their centroids. The centroid (or cluster center) is either the mean or the median of all points.

Given a set of points and an integer k, the algorithm aims to divide the points into k groups, called clusters, that are homogeneous.

In this sample we generate a set of random points in an image.


For processing the data, we create a Red/Rebol object such as:

;--an object for storing values (points and clusters)
point: object [
    x: 0.0      ;--x position
    y: 0.0      ;--y position
    group: 0    ;--cluster number (label)
]
The first step is to randomly define k centroids and associate them with k labels. Then, for each point, we calculate the Euclidean distance to each centroid and associate the point with the closest centroid and its corresponding label. This labels our data.

Secondly, we recalculate the centroids, which become the center of gravity of each labeled cluster of points. We repeat these steps until a convergence criterion is reached: the centroids no longer move.
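The steps above can be sketched in a few lines of Python (a minimal illustration, not the Red/Rebol code from the repository):

```python
import math
import random

# Minimal k-means following the steps described above: pick k random
# centroids, label each point with its nearest centroid, recompute each
# centroid as the mean of its cluster, and stop when centroids stop moving.
def kmeans(points, k, seed=42):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)               # step 1: k random centroids
    while True:
        clusters = [[] for _ in range(k)]
        for p in points:                            # step 2: label every point
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        new_centroids = [                           # step 3: recompute centers
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centroids == centroids:              # step 4: convergence test
            return centroids, clusters
        centroids = new_centroids
```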




You will find the documented code for Red and Rebol 3 here:

 https://github.com/ldci/R3_OpenCV_Samples/tree/main/image_kmeans


Saturday, June 7, 2025

Compress and Uncompress Images

A few years ago, I presented a way of compressing images with the Red zlib proposed by Bruno Anselme (https://redlcv.blogspot.com/2018/01/image-compression-with-red.html). Since then, Red and Oldes's Rebol 3 have implemented different compression methods that are faster and simpler to use. 

Both languages feature a compress function. Input data can be string or binary values, which is useful for RGB images. Returned values are binary. Both languages use lossless compression methods. 

Red and R3 share the following methods: 
deflate: A lossless data compression format that combines the LZ77 algorithm with Huffman coding.
zlib: Implements the deflate compression algorithm and can create files in gzip format. This library is widely used, due to its small size, efficiency and flexibility.
gzip: based on the deflate algorithm.

R3 adds a few more algorithms: 
br: Brotli compression, a fast alternative to gzip proposed by Google.
crush: A lossless compression package developed by NASA.
lzma: The Lempel-Ziv-Markov chain algorithm, a lossless data compression algorithm.

As these methods are variations on deflate compression, the compression ratio doesn't vary much from one method to another. The difference is in the speed of compression.
 
Of course, both languages have a decompress function. Input data is binary, and the method used must be the same as that chosen for compression.   

Here's a minimalist example of code for Red and R3.  

method: 'zlib                               ;--a word
img: load %../pictures/in.png               ;--use your own image
bin: img/rgb                                ;--image as RGB binary
print ["Method    :" form method]
print ["Image size:" img/size]
print ["Before compression:" nU: length? bin]
t: dt [cImg: compress bin method]           ;--R3/Red compress
print ["After  compression:" nC: length? cImg]
ratio: round/to 1.0 - (nC / nU) * 100 0.01  ;--compression ratio
print ["Compression :" form ratio "%"]
print ["Compress    :" third t * 1000 "ms"] ;--in msec
t: dt [uImg: decompress cImg method]        ;--R3/Red decompress
print ["Decompress  :" third t * 1000 "ms"] ;--in msec
print ["After decompression:" length? uImg]

The result:

Method    : zlib
Image size: 1920x1280
Before compression: 7372800
After  compression: 4011092
Compression : 45.6 %
Compress    : 46.298 ms
Decompress  : 26.706 ms
After decompression: 7372800


Fast and efficient!
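For comparison, a similar measurement can be sketched in Python using only the standard library (zlib, gzip and lzma are available there; Brotli and crush are not). The test buffer is synthetic, so the ratios will differ from the image example above.

```python
import gzip
import time
import zlib

# Rough Python analogue of the measurement above: compress a buffer,
# report the ratio and timing, and check the lossless round trip.
data = bytes(range(256)) * 10_000               # ~2.5 MB of mildly regular data

for name, comp, decomp in [
    ("zlib", zlib.compress, zlib.decompress),
    ("gzip", gzip.compress, gzip.decompress),
]:
    t0 = time.perf_counter()
    packed = comp(data)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    ratio = (1.0 - len(packed) / len(data)) * 100   # compression ratio in %
    assert decomp(packed) == data               # lossless round trip
    print(f"{name}: {ratio:.2f} % in {elapsed_ms:.1f} ms")
```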

Saturday, April 19, 2025

Braille Translator with Rebol and Red

I've always been impressed to see how blind children and adults are able to read Braille. It requires unparalleled tactile sensitivity and cognitive skills. In the early days, the braille cell consisted of 6 dots in a 2x3 matrix, representing 64 characters.  Later, this matrix became 2x4 with 8 dots, enabling 256 characters to be represented. 

[dots order 
1 4 
2 5 
3 6 
7 8
]

All these dot characters are now accessible in Unicode, with values ranging from 10240 to 10495 (integer values). I've written a little ANSI->Braille->ANSI translator. The code is written for Rebol 3.19.0, but can easily be adapted to Red 0.6.6; there are some differences in the map! datatype.

The idea is simple. We build two dictionaries: one for ANSI->Braille coding and a second for Braille->ANSI coding. Maps are high-performance dictionaries that associate keys with values and are very fast.

Classically, the first 32 ANSI codes do not represent characters but escape codes used for communication with a terminal or printer. In Braille, on the other hand, these 32 codes are used to facilitate document layout.

This is the code:

#!/usr/local/bin/r3
Rebol [
]
;--generate ANSI and Braille codes
generateCodes: does [
    i: 0                                ;--we use all chars
    codesA: #[]                         ;--a map object, ANSI->Braille
    codesB: #[]                         ;--a map object, Braille->ANSI
    while [i <= 255] [
        idx: i + 10240                  ;--Braille code value
        key: form to-char i             ;--map key is the ANSI value
        value: form to-char idx         ;--map value is the Braille code
        append codesA reduce [key value];--update map as string values
        append codesB reduce [value key];--idem, but key and value reversed
        ++ i
    ]
]

processString: func [
    "Processes ANSI string or Braille string"
    string [string!]
    /ansi /braille
][
    str: copy ""
    ;--for ansi use select/case: characters are case-sensitive
    if ansi    [foreach c string [append str select/case codesA form c]]
    if braille [foreach c string [append str select codesB form c]]
    str
]

generateCodes
print-horizontal-line
print a: "Hello Fantastic Red and Rebol Worlds!"
print-horizontal-line
print b: processString/ansi a
print-horizontal-line
print c: processString/braille b
print-horizontal-line

And the result: 


-------------------------------------------------------------------------------
Hello Fantastic Red and Rebol Worlds!
-------------------------------------------------------------------------------
⡈⡥⡬⡬⡯⠠⡆⡡⡮⡴⡡⡳⡴⡩⡣⠠⡒⡥⡤⠠⡡⡮⡤⠠⡒⡥⡢⡯⡬⠠⡗⡯⡲⡬⡤⡳⠡
-------------------------------------------------------------------------------
Hello Fantastic Red and Rebol Worlds!
-------------------------------------------------------------------------------


Thanks to the help of Oldes, we have a faster version that doesn't use the map! datatype.

encode-braille: function [
    "Process ANSI string and returns Braille string"
    text [string!]
][  
    out: copy ""
    foreach char text [
        if char <= 255 [char: char + 10240]
        append out char
    ]
    out
]
decode-braille: function [
    "Process string while decoding Braille's characters"
    text [string!]
][
    out: copy ""
    foreach char text [
        if all [char >= 10240 char <= 10495] [char: char - 10240]
        append out char
    ]
    out
]
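The same offset trick translates directly to Python, since the Braille patterns occupy Unicode code points U+2800 to U+28FF (10240 to 10495):

```python
# Encoding is just an offset on each code point; characters outside the
# handled ranges pass through unchanged, as in the Rebol version above.
def encode_braille(text):
    return "".join(chr(ord(c) + 10240) if ord(c) <= 255 else c for c in text)

def decode_braille(text):
    return "".join(
        chr(ord(c) - 10240) if 10240 <= ord(c) <= 10495 else c for c in text
    )
```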





Wednesday, April 16, 2025

What tools are available for image processing with Red and Rebol?

 For Rebol 2 we have: https://github.com/ldci/OpenCV3-rebol

This version is old (2015) but still operational. There are around 600 basic OpenCV functions available with Rebol 2.

For Rebol 3, there's the fabulous module created by Oldes: 

https://github.com/Oldes/Rebol-OpenCV

Although incomplete, this module is fantastic, as it allows you to use the latest versions of OpenCV on different x86 or ARM64 platforms.

You'll find a lot of samples here: https://github.com/ldci/R3_OpenCV_Samples

For Red, we have https://github.com/ldci/OpenCV3-red, which is still active. Although written more than 10 years ago, the code is compatible with the latest versions of Red (0.6.6).

And of course for Red, we have RedCV: https://github.com/ldci/redCV. Most of the code is written in Red/System and offers over 600 basic functions or routines for image processing with Red. 

With the exception of the Oldes code, I'm the only one to maintain all this, and I'm not sure that many people other than me use these codes. In any case, it has enabled me to write some very nice professional applications used at R2P2 (https://uniter2p2.fr).

Tuesday, April 15, 2025

Motiongrams

A few years ago, I discovered the work of Alexander Refsum Jensenius (https://www.uio.no/ritmo/english/people/management/alexanje/) and really appreciated his work on motiongrams. As I recall, the code was written with Processing (https://processing.org).

As you know, at R2P2 we make extensive use of video motion analysis to create algorithms for screening babies for motor disorders, using sophisticated neural networks.

But sometimes a simple actimetric analysis is all that's needed, and that's where motiongrams come into their own, because they're so easy to use. 

A few days ago, I resumed the analysis of films of premature babies that we had collected in various Parisian university hospitals (thanks to them). The videos were acquired with a GoPro camera at 120 frames per second.

The code is very simple and can be used with Red and redCV or Rebol 3 and OpenCV.

The first step is to define a ROI in the first image. This prevents the movement of the caregivers from adding noise to the image. 



Once this has been done, we proceed to analyze the video. The simple idea is to take two images, at T and T+1. A simple difference between the two images tells us whether there has been any movement.



As a precaution, I add a binary filter to remove the background noise present in the image. Then we simply average the binary image to obtain a direct estimate of the rate of movement.
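The whole measure can be sketched in pure Python (a toy illustration on 2D lists of gray levels, not the redCV/OpenCV code; the threshold value is arbitrary):

```python
# Motion measure described above: difference two frames, binarize with a
# noise threshold, and average the binary image to get a rate in [0, 1].
def movement_rate(frame_a, frame_b, threshold=25):
    """Fraction of pixels whose gray level changed by more than `threshold`."""
    height, width = len(frame_a), len(frame_a[0])
    moved = 0
    for y in range(height):
        for x in range(width):
            if abs(frame_a[y][x] - frame_b[y][x]) > threshold:
                moved += 1                  # this pixel changed significantly
    return moved / (width * height)
```

With real video, the same computation runs per frame pair over the selected ROI, producing one actimetric value per time step.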