dimanche 15 décembre 2024

Signal processing with Red and Rebol

Much of the data we collect in hospital consists of time series, some of which show unexpected variation over time. For example, in our work on the perception of babies' cries by adults, we observed that most of the signals showed a linear temperature drift over the course of the experiment, probably linked to the electronics of our camera. For these reasons, I've developed a few simple algorithms in Red and Rebol 3 that solve some of these problems. I mainly use the vector! datatype, which is very efficient for numerical calculations in Red and Rebol 3.

The first step is to remove the DC component from the signal: simply subtract the mean value of the signal from each sample. Red and Rebol 3 have a native function (average) that computes the mean of a vector.

detrendSignal: func [
    "Remove the DC component from the signal"
    v [vector!]
][
    ;--basic detrending: x - mean
    _v: copy v
    _average: average _v    ;--average is a native function in Red and Rebol 3
    repeat i _v/length [_v/:i: _v/:i - _average]
    _v
]

Now let's move on to signal normalization. Normalization means bringing signals to the same range, or to a predefined range. A typical example of a predefined range is the statistical approach to normalization: transforming the signal so that its mean is 0 and its standard deviation is 1. This is very useful when you want to compare signals with different amplitudes. Simply calculate the standard deviation of the distribution before normalizing the signal.

stddev: func [
    "Sample standard deviation"
    v [vector!]
][
    _average: average v     ;--compute the mean once, not on every iteration
    sigma: 0.0
    foreach value v [sigma: sigma + (power (value - _average) 2)]
    sqrt sigma / ((v/length) - 1)
]

normalizeSignal: func [
    "Z-score normalization"
    v [vector!]
][
    ;--z-score algorithm: (x - mean) / standard deviation
    _v: copy v
    _average: average _v    ;--average is a native function in Red and Rebol 3
    _std: stddev _v         ;--get standard deviation
    repeat i _v/length [_v/:i: (_v/:i - _average) / _std]
    _v
]

Another way of normalizing data is to use the minimum and maximum values of each data series. With this algorithm, the values of each series fall in the range [0.0 .. 1.0].

minMaxNormalization: func [
    "Min-max normalization"
    v [vector!]
][
    ;--min-max algorithm: (x - xmin) / (xmax - xmin)
    _v: copy v
    xmin: _v/minimum
    xmax: _v/maximum
    repeat i _v/length [_v/:i: (_v/:i - xmin) / (xmax - xmin)]
    _v
]

But these techniques aren't always effective, because they are sensitive to anomalies (outliers) contained in the signal. For this reason, I often use an algorithm based on the median of the distribution. This approach is more robust and minimizes the effect of outliers. Of course, we need to calculate the median and the interquartile range of our signal.

median: func [
    "Return the sample median"
    sample [vector!]
][
    data: sort to block! copy sample
    n: length? data
    case [
        odd?  n [pick data n + 1 / 2]
        even? n [((pick data n / 2) + (pick data n / 2 + 1)) / 2]
    ]
]

interquartileRange: func [
    "Return the sample interquartile range (IQR)"
    sample [vector!]
][
    data: sort to-block copy sample
    n: length? data
    q1Pos: max 1 to integer! round/ceiling n * 0.25     ;--position of the first quartile (simple convention)
    q3Pos: max 1 to integer! round/ceiling n * 0.75     ;--position of the third quartile
    (pick data q3Pos) - (pick data q1Pos)               ;--IQR as a difference of values
]


medianFilter: func [
    "Median filter"
    v [vector!]
][
    ;--robust scaling: (x - med) / IQR
    _v: copy v
    med: median _v
    IQR: interquartileRange _v
    repeat i _v/length [_v/:i: (_v/:i - med) / IQR]
    _v
]
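
Here is a minimal usage sketch in Red (the test values are just an illustration; the vector creation syntax differs slightly in Rebol 3):

;--minimal usage sketch (illustrative values only)
signal: make vector! [12.1 12.3 11.9 12.0 18.5 12.2 12.1]   ;--a float vector with one outlier
probe detrendSignal signal           ;--signal minus its mean
probe normalizeSignal signal         ;--mean 0, SD 1
probe minMaxNormalization signal     ;--values in [0.0 .. 1.0]
probe medianFilter signal            ;--robust scaling, less sensitive to the outlier (18.5)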

A sample result:





lundi 2 décembre 2024

The Virginia Project

The Virginia project (https://uniter2p2.fr/en/projects/) focuses on studying the thermoregulation of newborns from thermal images. 

The primary goal of this project is to detect any deterioration in the infant’s health as early as possible using their thermal profile. Over 1,000 images of newborns were captured after birth at four different time points, corresponding to Apgar assessments at 1, 3, 5, and 10 minutes after birth. 

The ultimate objective is to analyze the thermal evolution of these infants at these four key moments.

Infrared images were acquired with a FLIR T650sc camera. The T650sc camera is equipped with an uncooled Vanadium Oxide (VOx) microbolometer detector that produces thermal images of 640 x 480 pixels, with an accuracy of +/- 1 °C.

The Virginia software was developed entirely within the R2P2 laboratory (by ldci) using the Red programming language (https://www.red-lang.org) and the redCV library for image processing (https://github.com/ldci/redCV). The Virginia software includes add-on modules for decoding the images.


THE FLIR MODULE


This module has been tested with different FLIR cameras. Its main function is to decode the metadata contained in any radiometric file and to extract the visible image (RGB), the infrared image (IR), the color palette associated with the IR image, as well as the temperature (in °C) associated with each pixel.


This module uses two external programs:


ExifTool (https://exiftool.org), written and maintained by Phil Harvey, is a fabulous program written in Perl that allows you to read and write the metadata of many computer files. ExifTool supports FLIR files. It works on macOS, Linux and Windows.

 

ImageMagick (https://imagemagick.org/index.php) is free software, including a library and a set of command-line utilities, for creating, converting, modifying, and displaying images in a very large number of formats. The FLIR module mainly uses the convert utility on macOS and Linux, and the magick utility on Windows.

Once the metadata are extracted, we call a Python library: PixelLib


THE PIXELLIB LIBRARY

 

This superb library, written and maintained by Ayoola Olafenwa, is used for the semantic segmentation that identifies the newborn in the image. We use the latest version of PixelLib (https://github.com/ayoolaolafenwa/PixelLib), which supports PyTorch and is more efficient for segmentation. The PyTorch version of PixelLib uses the PointRend object segmentation architecture by Alexander Kirillov et al. (2019) instead of Mask R-CNN. PointRend is an excellent neural network for object segmentation: it generates accurate segmentation masks and runs at a speed that meets the growing demand for real-time computer vision applications.

First, we only look for the class person, without looking for other objects in the RGB image. Then we get the detected mask as a matrix of true/false values. It is then very simple to reconstruct the binary image of the mask by replacing the true values with white. With a simple logical AND between the FLIR image and the segmentation mask image, we obtain a new image that keeps only the thermal image of the baby. Only pixel values higher than 0.0.0 (black) are considered; here, for example, the values of the baby's crotch are not included in the various calculations.
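
To make the idea concrete, here is a minimal plain-Red sketch of the masking step (this is not the Virginia code; the function name and the white/black mask convention are my own assumptions):

;--hypothetical sketch: keep only the pixels where the mask is white
;--mask: white (255.255.255) inside the body, black (0.0.0) elsewhere
applyMask: func [src [image!] mask [image!] /local dst i][
    dst: copy src
    repeat i src/size/x * src/size/y [
        poke dst i (pick src i) and (pick mask i)   ;--per-channel bitwise AND
    ]
    dst
]

Under this assumption, applyMask flirImage maskImage returns a new image where everything outside the body mask is black.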



After this first body segmentation step, we use a double algorithm. The next step is to detect the contours of the body. This operation detects the contours of the mask as a polygon of vertices connected by a B-spline curve. The contour detection algorithm combines several techniques. First, two morphological operators, dilation and erosion, are applied successively to smooth the contours of the mask computed by the semantic segmentation. Then we use the Freeman chain code (FCC) technique. This technique encodes, with a limited number of bits (8 directions), the local direction of a contour element in the image. It builds a chain of codes from an initial pixel, considering that a contour element links two adjacent pixels.
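
As an illustration, one common convention for the 8 Freeman directions, written as Red pair! offsets (code 0 pointing east, y growing downwards as in image coordinates), could look like this:

;--one common Freeman chain code convention (illustrative, not the Virginia code)
;--codes: 0=E 1=NE 2=N 3=NW 4=W 5=SW 6=S 7=SE, with y increasing downwards
freemanOffsets: [1x0 1x-1 0x-1 -1x-1 -1x0 -1x1 0x1 1x1]
;--moving along code c from a pixel: nextPixel: currentPixel + pick freemanOffsets c + 1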

When the result of the contour detection is adequate, we can proceed to the calculation of the body temperatures. We use a ray-tracing (point-in-polygon) algorithm that checks whether each pixel of the image belongs to the polygon representing the baby's body. This operation extracts, from the 2-D temperature matrix, only the body temperatures, as a vector which is then used for the different calculations.
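
For reference, a classical even-odd (ray casting) point-in-polygon test can be sketched in a few lines of Red; this is my own naive version, not the Virginia implementation:

;--minimal even-odd point-in-polygon sketch (naive, illustrative only)
insidePolygon?: func [
    "True if pt lies inside the polygon given as a block of pair! vertices"
    pt [pair!] poly [block!]
    /local inside n i j p1 p2 xCross
][
    inside: no
    n: length? poly
    j: n
    repeat i n [
        p1: poly/:i
        p2: poly/:j
        if (p1/y > pt/y) <> (p2/y > pt/y) [
            ;--x coordinate where the edge crosses the horizontal ray through pt
            xCross: p1/x + ((1.0 * (p2/x - p1/x)) * (pt/y - p1/y) / (p2/y - p1/y))
            if pt/x < xCross [inside: not inside]
        ]
        j: i
    ]
    inside
]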


The code is not open-source, as we are in the process of registering patents on certain technological innovations. As soon as this is possible, I will give free access to all the sources. The idea here is just to show that you can do great things with Red.



Freeman H. On the encoding of arbitrary geometric configurations. IRE Transactions on Electronic Computers. 1961;10:260-268.



Kirillov A, He K, Girshick R, Rother C, Dollár P. Panoptic Segmentation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019:9404-9413.


samedi 30 novembre 2024

Using json files with Red and Rebol

JSON is an open-standard file format and data-interchange format that uses human-readable text to store and transmit data objects made of name-value pairs and arrays. JSON and Red or Rebol are very similar in the way they represent data. I sincerely believe that the development of JSON benefited from all the work done by Carl Sassenrath when he developed Rebol 2.

Between 2020 and 2023, I developed a major program at the Raymond Poincaré hospital (https://uniter2p2.fr/) with Red, which used a thermal camera to measure the body temperature of newborn babies. This was a bit tricky, as we had to extract the baby's body coordinates from the thermal image in order to measure body temperature. To do this, I used semantic segmentation algorithms such as those proposed by Ayoola Olafenwa with her PixelLib library (https://pixellib.readthedocs.io/en/latest/).

In this program, I also included an export of the babies' images in .jpg format, as well as an export of the baby's body coordinates in .json format. The idea was to be able to use this data with annotation tools such as labelMe (https://github.com/wkentaro/labelme).

A few days ago, I had to return to this data to prepare a publication. Bad surprise: PixelLib and labelMe no longer work with recent versions of macOS and Apple's new Silicon processors.

Fortunately, with Red (or Rebol3) I was able to solve the problem with a few lines of code. 

Red [
    Needs: 'View
]
;--we use func: all words are global
isFile?: none
loadImage: does [
    canvas/image: none
    clear f/text
    clear info/text
    isFile?: no
    tmpf: request-file/filter ["jpeg Files" "*.jpg"]
    unless none? tmpf [
        jpgFile:  tmpf
        jsonFile: copy jpgFile
        replace jsonFile ".jpg" ".json"
        canvas/image: load tmpf
        f/text: form tmpf
        isFile?: yes
    ]
]
getCoordinates: func [
    f [file!]
][
    f: read f                       ;--Red reads the json file as a string
    replace f ",." ",0."            ;--in case of missing 0 values
    js: load-json f                 ;--json string -> redbol map object
    keys: keys-of js                ;--a block of keys
    version: select js keys/1       ;--labelMe version
    flags: select js keys/2         ;--none
    shapes: select js keys/3        ;--coordinates are here as a block of length 1
    imagePath: select js keys/4     ;--jpeg file
    imageData: select js keys/5     ;--none
    imageHeight: select js keys/6   ;--imageHeight
    imageWidth: select js keys/7    ;--imageWidth
    bPoints: copy []                ;--block for coordinates
    ;--Thanks to Oldes for s/points
    foreach s shapes [
        infos: rejoin ["Label: " s/label " ID: " s/group_id " Shape Type: " s/shape_type]
        foreach p s/points [
            if all [p/1 > 0.0 p/2 > 0.0] [append bPoints to pair! p]
        ]
    ]
    bPoints                         ;--returned coordinates
]
showCoordinates: func [
    f [file!] b [block!]
][
    code: compose [
        fill-pen 255.0.0.120        ;--draw command
        pen 0.0.0.100               ;--draw command
        line-width 1                ;--draw command
        polygon                     ;--draw command
    ]
    img: load f
    bb: make image! reduce [3x3 black]
    foreach p b [
        change at img p bb          ;--draw coordinates in image
        append code p               ;--append polygon vertices
    ]
]
view win: layout [
    title "Neonate Labelling"
    button "Load a Neonate File (.jpg)" [loadImage]
    button "Draw Extracted Body" [
        if isFile? [
            showCoordinates jpgFile getCoordinates jsonFile
            info/text: infos
            canvas/image: draw img code
        ]
    ]
    info: field 250
    pad 15x0
    button "Quit" [Quit]
    return
    canvas: base 640x480 white
    return
    f: field 640
    do [f/enabled?: info/enabled?: no]
]

And the result: 







mercredi 23 octobre 2024

Dynamic Time Warping

Dynamic Time Warping (DTW) is a fabulous tool that I use in various applications with students in the R2P2 Lab (https://uniter2p2.fr/): ballistocardiography, pressure measurements, gait analysis...

Quoting wikipedia:

"In time series analysis, dynamic time warping (DTW) is an algorithm for measuring similarity between two temporal sequences which may vary in time or speed. For instance, similarities in walking patterns could be detected using DTW, even if one person was walking faster than the other, or if there were accelerations and decelerations during the course of an observation."


In redCV we use a basic DTW. The objective is to find a mapping between all points of the x and y series. In the first step, we compute the distance between every pair of points in the two signals. Then, in order to create a mapping between the two signals, we need to build a path. The path starts at (0,0) and must reach (M,N), where M and N are the lengths of the two signals. To do this, we build a matrix similar to the distance matrix; it contains the minimum cumulative distance needed to reach each point when starting from (0,0). The DTW value is the cumulative value at (M,N).
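
Here is a minimal, naive O(M x N) sketch of this cost-matrix idea in Red (my own toy version, not the redCV rcvDTW* implementation):

;--naive DTW sketch: local distances + cumulative cost matrix (illustrative only)
dtw: func [
    x [block! vector!] y [block! vector!]
    /local m n cost row i j d best
][
    m: length? x
    n: length? y
    ;--(m + 1) x (n + 1) cumulative cost matrix filled with a very large value
    cost: copy []
    loop m + 1 [
        row: copy []
        loop n + 1 [append row 1.0e99]
        append/only cost row
    ]
    cost/1/1: 0.0                           ;--starting point (0,0)
    repeat i m [
        repeat j n [
            d: absolute (to float! x/:i) - (to float! y/:j)   ;--local distance
            best: min min cost/:i/(j + 1) cost/(i + 1)/:j cost/:i/:j
            poke cost/(i + 1) j + 1 d + best
        ]
    ]
    cost/(m + 1)/(n + 1)                    ;--DTW value at (M,N)
]

For example, dtw [1 2 3 4] [1 2 2 3 4] returns 0.0, since the two series can be perfectly aligned by warping.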


Results are pretty good for 1-D series.


Now, the question is: can we use DTW in image processing? And the answer is yes. An image can be considered as a long 1-D vector, and then we can compare two images, as x and y series, to find similarities between them.

Now imagine we must compare characters to measure the similarity between shapes, such as handwriting productions: DTW gives us a direct measurement of the distance between characters.

If the characters are identical, DTW equals 0, as illustrated here:





Now consider the b and d characters, which are close but differ in orientation. In this case, the DTW value increases.



dtwFreeman.red and dtwPolar.red illustrate this technique for comparing shapes in images. dtwFreeman.red is a fast version that uses only the Freeman chain code to identify the external pixels of the shapes to be compared. dtwPolar.red is more complete, since the code combines the Freeman chain code with a polar coordinate transform to create the X and Y DTW series. Both programs use the rcvDTW functions: rcvDTWDistances, rcvDTWCosts and rcvDTWGetDTW.

These techniques were successfully used in a scientific project comparing cursive vs. script writing learning in French and Canadian children. Data were recorded with a Wacom tablet and then processed with Red and redCV. Each child's production was compared to a template and the DTW was calculated, reducing the complexity to a single value and thus allowing statistics on the data.


You can do great things with Red!




 








mardi 15 octobre 2024

Giving voices to your apps

I'm starting to think about developing multimodal interfaces (vision and sound) for Red. With macOS it's quite easy because we can use the ‘say’ system command. For Windows and Linux, I'd be delighted if other developers came up with a similar solution.

Here (https://github.com/ldci/Voices) are a few examples of how to use this command with Red and Rebol 3.

This is an example of Red code:

#!/usr/local/bin/red-view
Red [
    Author: "ldci"
    needs: view
]
;--for macOS 32-bit (Mojave)
;--new version
voices:    []
languages: []
sentences: []
flag:      1
filename:  %voices.txt
getVoices:  does [call/shell/output "say -v '?'" filename]
loadVoices: does [
    vfile: read/lines filename
    foreach v vfile [
        tmp: split v "#"
        append sentences tmp/2
        trim/lines tmp/1
        append voices first split tmp/1 space
        append languages second split tmp/1 space
    ]
    a/text: sentences/1
    f/text: languages/1
]
generate: does [
    prog: rejoin ["say -v " voices/:flag " " a/text]
    call/shell/wait prog
]
mainWin: layout [
    title "Voices"
    dp1: drop-down data voices
    select 1
    on-change [
        flag: face/selected
        f/text: languages/(face/selected)
        a/text: sentences/(face/selected)
    ]
    f: field center
    button "Talk" [generate]
    pad 100x0
    button "Quit" [quit]
    return
    a: area 450x50
    do [unless exists? filename [getVoices] loadVoices]
]
view mainWin


And the GUI result


A very interesting approach is now proposed by Oldes for macOS and Windows: https://github.com/Oldes/Rebol-Speak/releases/tag/0.0.1


Minimalistic but efficient:

#!/usr/local/bin/r3
Rebol []
speak: import speak                  ;--import module
with speak [
    list-voices                      ;--list all voices
    say/as "Hello Red world!" 15     ;--English voice
    say/as "Bonjour Red!" 166        ;--French voice
]

And a Windows equivalent of macOS say, developed by Philippe Groarke: https://github.com/p-groarke/wsay?tab=readme-ov-file


And also Jocko's tests for Windows  and macOS: https://github.com/jocko-jc/red-tts

The code was initially developed for Rebol and is now updated for Red.







lundi 14 octobre 2024

Using Unicode Characters with Red

 Unicode characters can be used with Red for making nice GUI apps. 

A sample code is here: https://github.com/ldci/Unicode/blob/main/Red/unicode5.red

But the results are slightly different between the macOS and Windows versions of Red.

With macOS, Unicode characters are correctly displayed in a text-list (I mean with coloured characters). This is not the case for the Windows version, where the text-list is black and white.

This is probably related to a difference in the Red/View backend used on each OS.

But another difference is also observed when you use the to-image function, such as:

button 200 "Copy to the clipboard" [
    img: to-image cc
    write-clipboard img
    view [title "Image Copy" image 400x300 img button "Close" [unview]]
]

macOS gives the expected result, with the correct size and position.
Windows does not: the image is not copied to the clipboard with the correct size and position.



Why these differences? Not tested with Linux version of Red.

Thanks to qtxie for the fix :). Now macOS and Windows versions are similar.


The Red team is very responsive!










mercredi 9 octobre 2024

Using Z-score with Red or Rebol 3

 Detecting anomalies (outliers) is a classical problem in statistics and computer science.

https://medium.com/@akashsri306/detecting-anomalies-with-z-scores-a-practical-approach-2f9a0f27458d 

Z-score can help to solve this kind of problem. The z-score is calculated as z = (x - mean) / SD, where x is an individual value in the distribution, mean is the average of all the values in the distribution and SD is the standard deviation of the data.

When applied to a Gaussian distribution of data, the z-score generates a new distribution with mean = 0.0 and SD = 1.0. This is important when you need to compare data with different scales.

Then we can use a threshold to identify outliers. A threshold value is a cutoff point that helps determine what is considered an anomaly or outlier within the distribution of values. Many scientists use the z-score to exclude values they consider to be outliers: values more than 2 SD away from the mean (in either direction) are not retained.

But we can also use the z-score to extract significant values from a noisy signal, with these general considerations: there is basic noise in the signal, with an overall mean and SD for the whole time series, and there are data points that significantly deviate from that noise (peaks).

I've found a good explanation of how to deal with this problem here: https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data

The basic idea is simple: if a datapoint in the series is a given number of standard deviations away from a moving mean, the algorithm gives a signal (equal to 1), which means that the datapoint is emerging from the noisy signal.

This is a Red/Rebol 3 function which illustrates how to do it.

zThresholding: function [
    data      [block! vector!]
    output    [block! vector!]
    lag       [integer!]
    threshold [decimal!]
    influence [decimal!]
][
    ;--mean and stdDev are helper functions (not shown here) that compute the
    ;--average and the standard deviation of lag values of the given series
    sLength: length? data
    filteredY: copy data
    ;--Red
    avgFilter: make vector! reduce ['float! 64 sLength]
    stdFilter: make vector! reduce ['float! 64 sLength]
    ;--R3: use 'decimal! instead of 'float!
    ;avgFilter: make vector! reduce ['decimal! 64 sLength]
    ;stdFilter: make vector! reduce ['decimal! 64 sLength]
    avgFilter/:lag: mean data lag
    stdFilter/:lag: stdDev data lag
    i: lag
    while [i < sLength][
        n:   i + 1          ;--index of the next value
        y:   data/:n
        avg: avgFilter/:i
        std: stdFilter/:i
        v1: absolute (y - avg)
        v2: threshold * std
        either v1 > v2 [
            output/:n: pick [1 -1] y > avg
            filteredY/:n: (influence * y) + ((1 - influence) * filteredY/:i)
        ][
            output/:n: 0
        ]
        avgFilter/:n: mean   (at filteredY i - lag) lag
        stdFilter/:n: stdDev (at filteredY i - lag) lag
        i: i + 1
    ]
    filteredY
]
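
A minimal usage sketch (the values are made up, and it assumes the mean and stdDev helpers mentioned in the comments above):

;--usage sketch with illustrative values
data: [1.0 1.1 0.9 1.0 1.0 1.1 1.0 0.9 4.5 1.0 1.0 0.9 1.0]   ;--noisy signal with one peak
signals: append/dup copy [] 0 length? data                    ;--output: 1/-1 for peaks, 0 otherwise
zThresholding data signals 5 3.5 0.5
probe signals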

And the result for Red:


And Rebol3:




See the code for Red and Rebol:


mardi 6 août 2024

Gems from Rebol (2)

Other useful functions found in Rebol 3: filter and unfilter, for PNG images.


Source image

Filtered image
Unfiltered image
Of course, source and unfiltered images are identical.






samedi 20 juillet 2024

Gems from Rebol

Each version of Rebol includes pearls that make image processing easy.

In Rebol 2, for example, you can find an extremely fast convolution function.  

You'll find the demo of the convolution effect (by Cyphre) here: http://www.rebol.com/view/demos/convolve.r

I remember presenting Rebol at the Hanoi Polytechnic University (https://bachkhoahanoi.edu.vn/) a long time ago, and colleagues were impressed by the speed of a simple interpreted script designed for convolution.

Basically, this function uses a 3 by 3 kernel and offers various filters such as emboss and others. But you can also create your own filter.


These ideas have been implemented in redCV and you can find several examples in the RedCV/samples/image_convolution directory.

Recently, I did a little digging into the Rebol 3 version developed by Oldes (https://github.com/Siskin-framework/Rebol) and also found a few gems. 

The first is blur function which allows a Gaussian blurring of any image.
USAGE:
     BLUR image radius

DESCRIPTION:
     Blur (Gaussian) given image. 
     BLUR is a native! value.

ARGUMENTS:
     image         [image!]   Image to blur (modified).
     radius         [number!]  Blur amount.

The radius must be greater than 1; if radius = 1 you'll get an un-blurred image as a result.
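
A quick usage sketch (the file names are just examples):

img: load %photo.png    ;--any image
blur img 5              ;--Gaussian blur with radius 5 (img is modified in place)
save %blurred.png img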


The second one is rgb-to-hsv function: 

USAGE:
     RGB-TO-HSV rgb

DESCRIPTION:
     Converts RGB value to HSV (hue, saturation, value). 
     RGB-TO-HSV is a native! value.

ARGUMENTS:
     rgb           [tuple!] 
     
Of course you'll also get the inverse function, hsv-to-rgb:
USAGE:
     HSV-TO-RGB hsv

DESCRIPTION:
     Converts HSV (hue, saturation, value) to RGB. 
     HSV-TO-RGB is a native! value.

ARGUMENTS:
     hsv           [tuple!]   
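
A minimal round-trip sketch:

hsv: rgb-to-hsv 255.0.0     ;--pure red -> hue/saturation/value tuple
rgb: hsv-to-rgb hsv         ;--back to 255.0.0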






Thanks to these fabulous developers for offering us such easy-to-use tools.






vendredi 12 juillet 2024

Red 64-bit Image

More than once while developing redCV, I regretted that Red didn't offer the possibility of creating images in 8, 16, 32 and 64 bits, with a channel number from 1 to 4 as is the case with OpenCV.  Such formats are sometimes very useful for speeding up image processing and improving precision. When we developed Matrix with Toomas Vooglaid and Qingtian Xie in 2020, we solved some of the problems. But I was left wanting more, so I did a bit of digging to find out whether Red objects could answer this question. The following code is just one illustration of such an approach.


#!/usr/local/bin/red
Red [
]

rcv: object [
    create: func [
        type     [integer!]     ;--1: byte 2: integer 3: float
        bit      [integer!]     ;--8, 16, 32 or 64-bit
        isize    [pair!]        ;--image size as pair!
        channels [integer!]     ;--1 to 4 channels
        return:  [vector!]      ;--image data
        /local size width height data
    ][
        width: isize/x
        height: isize/y
        size: width * height * channels
        switch bit [
            8  [data: make vector! reduce ['char! 8 (size)]]
            16 [data: make vector! reduce ['integer! 16 (size)]]
            32 [data: make vector! reduce ['integer! 32 (size)]]
            64 [data: make vector! reduce ['float! 64 (size)]]
        ]
        data
    ]
]
;********************************* tests ************************************
iSize: 256x256
random/seed now/precise

img1: rcv/create 3 64 iSize 1                       ;--create a float image with 1 channel
n: length? img1
repeat i n [img1/:i: round/to random 1.0 0.01]      ;--random values [0..1]
repeat i n [img1/:i: img1/:i / 1.0 * 255]           ;--random values [0..255]
bin1: copy #{}                                      ;--binary string
repeat i n [append/dup bin1 to integer! img1/:i 3]  ;--integer values
dest1: make image! reduce [iSize bin1]              ;--a grayscale Red image with 3 channels

img2: rcv/create 3 64 iSize 3                       ;--create a float image with 3 channels
n: length? img2
repeat i n [img2/:i: round/to random 1.0 0.01]      ;--random values [0..1]
repeat i n [img2/:i: img2/:i / 1.0 * 255]           ;--random values [0..255]
bin2: copy #{}                                      ;--binary string
foreach [r g b] img2 [
    append bin2 to-integer r                        ;--red channel
    append bin2 to-integer g                        ;--green channel
    append bin2 to-integer b                        ;--blue channel
]
dest2: make image! reduce [iSize bin2]              ;--a rgb Red image

view [
    title "64-bit image test"
    below
    image dest1
    image dest2
    pad 100x0
    button "Quit" [quit]
]