Sunday, 31 December 2017

Red and Dlib for face processing

Dlib (http://dlib.net/) is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high-performance computing environments. Dlib's open source licensing allows you to use it in any application, free of charge. Dlib includes a valuable face recognition algorithm that is very useful for our experiments on facial restoration after face surgery in children. This algorithm uses 68 facial landmarks and a neural network to identify faces in images.




Although trained on adult faces, the Dlib algorithm is also efficient for face processing in children and in infants less than 3 months old. Dlib also offers the possibility to train a neural network, and we are planning to do so on a new database involving about 500 infants' faces.

Since Dlib is a C++ library, we first had to write a basic C++ program that performs face processing on an image passed as a parameter. The program analyses the image and writes a text file containing the 68 detected landmarks. This program also requires OpenCV in order to simplify image access.

C++ Code

#include <opencv2/opencv.hpp>
#include <iostream>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
#include <dlib/opencv.h>
#include <fstream>
#define NB_PTS 68
using namespace std;
using namespace cv;
using namespace dlib;
int main(int argc, char  *argv[])
{
    if(argc != 2)
    {
        cout<<"Usage : './FacePoints path_to_img' "<<endl;
        return -1;
    }

     Mat image;
     image = imread(argv[1], CV_LOAD_IMAGE_COLOR);
    if(! image.data )                              // Check for invalid input
    {
        cout <<  "Could not open or find the image" << std::endl ;
        return -1;
    }
      
     string out_file="detectedPoints.txt";
     shape_predictor sp;
     array2d<rgb_pixel> img;

     frontal_face_detector detector = get_frontal_face_detector();   // HOG-based face detector
     deserialize("shape_predictor_68_face_landmarks.dat") >> sp;     // 68-landmark model
     load_image(img, argv[1]);
     std::vector<dlib::rectangle> dets = detector(img);              // one rectangle per detected face
     if (dets.empty())                                               // avoid reading dets[0] when no face is found
     {
         cout << "No face detected in the image" << std::endl;
         return -1;
     }
     full_object_detection shape = sp(img, dets[0]);                 // landmarks of the first detected face
     std::cout << shape.num_parts() << std::endl;

    ofstream m_file;
    m_file.open(out_file);
    for(unsigned long i=0; i<shape.num_parts(); i++)
    {
        cv::Point pt=cv::Point((shape.part(i)).x(), (shape.part(i)).y());
        circle(image, pt, 1, Scalar(0,255,0), 2);   // draw the landmark on the OpenCV image (optional visualization)
        m_file<<i<<"\t"<<pt.x<<"\t"<<pt.y<<"\n";    // one line per landmark: index, x, y (tab separated)
    }
    m_file.close();
    return 0;
}


Red Code

It is then really simple with Red to retrieve the result of the Dlib image processing inside a Red GUI application. Here we use the call/wait function to execute the C++ program before processing its result.
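Stripped of the GUI, the core pattern is only a few lines. Here is a minimal sketch (assuming FacePoints, default.jpg and detectedPoints.txt all sit in the current directory): call/wait blocks until the external program has finished, then read/lines brings the landmark file back into Red.

prog: "./FacePoints default.jpg"
call/wait prog                              ; wait for the C++ program to finish
landmarks: read/lines %detectedPoints.txt   ; one line per landmark: index, x, y
foreach line landmarks [
    b: to-block line
    print ["Landmark" b/1 "at" as-pair b/2 b/3]
]

The complete GUI application below builds on this same pattern.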

#! /usr/local/bin/red
Red [
    Title:   "Faces Processing"
    Author:  "Francois Jouen"
    File:    %faces.red
    Needs:   'View
]
;'
appDir: %your_directory
change-dir appDir
lmFile: %landmarks.jpg
conFile: %config.txt 
resultFile: %detectedPoints.txt
srcFile: %default.jpg
srcImg: load %default.jpg
img:    load %default.jpg
canvas1: none
canvas2: none
; images 14x18 cm (3.5x4.5 * 4)
gsize2: as-pair 364 510
gsize1: gsize2 / 2
margins: 10x10

isFile: false
isProcessed: false
nbPMax: 68
prog: " ./FacePoints default.jpg"
prog2: "killall FacePoints"
count: 2
radius: 3
centerXY: 182x255
rot: 0.0
rotRadian: rot * (pi / 180.0)
sFactor: 0.45
transl: 20x25
srcP: dstP: 0x0

; for default image test 
pt1: as-pair srcImg/size/x / 2 0 
pt2: as-pair srcImg/size/x / 2 srcImg/size/y
pt3: as-pair 0 srcImg/size/y / 2 
pt4: as-pair srcImg/size/x srcImg/size/y / 2
centerXY: as-pair pt1/x + srcImg/size/y / 2 pt3/y + srcImg/size/x / 2

plot: []
axes: compose [pen red line-width 2 line (pt1) (pt2) line (pt3) (pt4)]
rotation: compose [scale (sFactor) (sFactor) translate (transl) rotate (rot) (centerXY) image img]


; for all images
points: copy [] ; to store the coordinates
vertex: copy [] ; for the numbered and selected vertices

readConfig: does [
    confLM: read/lines conFile
    n: length? confLM
    if (n > nbPMax) [mall/data: true] ; all vertices are selected
    i: 2 ; skip the first line, which holds the header "Vertex"
    while [i <= n] [
        idx: to-integer confLM/:i
        if all [idx >= 0 idx < nbPMax] [
            f: get to-word rejoin ["m" idx] ; check box m0..m67
            f/data: true                    ; select the stored vertex
        ]
        i: i + 1
    ]
]

saveConfig: does [
    write conFile rejoin ["Vertex" newline]
    i: 0 
    ; which vertices are selected
    
    while [i < nbPMax] [
        m: to-word rejoin ["m" i]
        if select get m 'data [
            ; ' 
            s: rejoin [form i lf]
            write/append conFile s
        ]
        i: i + 1
    ]
]


getAxes: does [
    pt1: as-pair srcImg/size/x / 2 0 
    pt2: as-pair srcImg/size/x / 2 srcImg/size/y
    pt3: as-pair 0 srcImg/size/y / 2 
    pt4: as-pair srcImg/size/x srcImg/size/y / 2
    centerXY: as-pair (pt1/x + srcImg/size/y / 2) (pt3/y + srcImg/size/x / 2)
    axes: compose [pen red line-width 3 line (pt1) (pt2) line (pt3) (pt4)]
]

setAllVertices: function [] [
    m0/data: m1/data: m2/data: m3/data: m4/data: m5/data:
    m6/data: m7/data: m8/data: m9/data: m10/data: m11/data:
    m12/data: m13/data: m14/data: m15/data: m16/data: m17/data:
    m18/data: m19/data: m20/data: m21/data: m22/data: m23/data:
    m24/data: m25/data: m26/data: m27/data: m28/data: m29/data: 
    m30/data: m31/data: m32/data: m33/data: m34/data: m35/data: 
    m36/data: m37/data: m38/data: m39/data: m40/data: m41/data:
    m42/data: m43/data: m44/data: m45/data: m46/data: m47/data: 
    m48/data: m49/data: m50/data: m51/data: m52/data: m53/data: 
    m54/data: m55/data: m56/data: m57/data: m58/data: m59/data: 
    m60/data: m61/data: m62/data: m63/data: m64/data: m65/data: 
    m66/data: m67/data: mall/data
]

; Loads red image
loadImage: does [
    tmp: request-file
    if not none? tmp [
        isFile: false
        canvas1/image: none
        canvas2/image: none
        canvas1/draw: []
        clear list/data
        sb1/text: ""
        sb2/text: ""
        dd1/data: []
        dd2/data: []
        srcFile: tmp
        srcImg: load srcFile
        img: load srcFile
        canvas1/image: srcImg
        prog: copy " ./FacePoints "
        append prog to-string srcFile
        sb1/text: to-string srcFile
        sb11/text: rejoin [form srcImg/size " pixels"]
        getAxes
        rot: 0.0 rotF/text: "0" rotation/7: rot sl/data: 50%
        count: 0
        isFile: true
        isProcessed: false
    ]
]


drawPlot: func [] [
    ; call the draw function to render the image and the identified points
    canvas2/image: draw srcImg  plot
    ; update the image used for the rotation
    img: to-image canvas2
    canvas1/draw: rotation
]


processImage: does [
    sb2/text: "Patience! traitement en cours! " 
    t1: now/time/precise
    isProcessed: false
    canvas1/image: none; for update
    points: copy []
    vertex: copy []
    clear list/data
    dd1/data: copy vertex
    dd2/data: copy vertex
    plot: compose [pen green fill-pen green]
    if count = 1 [call/wait prog] ; first pass: compute the 68 landmarks
    result: read/lines resultFile
    
    i: 1 
    ; which vertices are selected
    while [i <= nbPMax] [
        m: to-word rejoin ["m" i - 1]
        if select get m 'data [
            ; ' 
            append list/data result/:i
            b: to-block result/:i
            append vertex first b       ; the vertex number
            p: as-pair second b third b ; its coordinates
            append points p 
        ]
        i: i + 1
    ]
    
    srcImg: load srcFile ; reload the source image
    
    ; draw the points if requested
    if cb0/data [
        foreach p points [append append append plot 'circle p radius];'
    ]

    ; draw the vertex numbers if requested
    if cb1/data [i: 1 append plot 'pen 
            append plot 'blue 
            foreach v vertex [
            p: points/:i
            append plot reduce ['text (p) form (v)] i: i + 1]
    ]
    ;' draw the axes if requested
    if cb2/data [append plot axes]
    ; call the draw function to render the image and the identified points
    drawPlot
    foreach p list/data [append vertex p] 
    dd1/data: copy vertex
    dd2/data: copy vertex
    if (length? vertex) > 0 [
        dd1/selected: 1
        dd2/selected: 1
    ]
    sb2/text: rejoin ["Traitement terminé: " form (now/time/precise - t1)]
    curs1/offset: 0x0 + canvas2/offset
    isProcessed: true
]


getCoordinates: does [
    plotCopy: copy plot     ; save the plot block
    imageCopy: load srcFile ; save the source image
    b: to-block dd1/text
    srcP: as-pair second b third b
    b: to-block dd2/text
    dstP: as-pair second b third b 
    append append append append append plot 'pen 'green 'line srcP dstP ;'
    drawPlot
]

cancelDraw: does [
    plot: copy plotCopy
    srcImg: imageCopy
    drawPlot
]

p1: 0x0

view win: layout [
    title "CHArt: Facial 1.0"
    style rect: base 255.255.255.240 25x25 loose draw [line-width 2 pen red line 0x0 0x15 15x0 0x0]
    origin margins space margins
    at 740x5 button 100 "Quitter"         [call prog2 Quit]
    return
    apanel: tab-panel 820x665 [
        "Traitement Image" [
            button 120 "Charger Image"      [loadImage]
            sb1: field 535 "default.jpg"
            pad 6x0
            sb11: field 110 
            return
            button 120 "0 Degré" [rot: 0.0 rotF/text: "0" rotation/7: rot sl/data: 50%]
            
            pad 55x0 
            cb0: check "Points" true
            cb1: check "Numéros"
            cb2: check "Axes" 50
            button 120 "Traiter Image"  [count: count + 1 processImage]
            return
            at 10x350  sl: slider 135 [rotF/text: form to integer! face/data * 360 - 180
                             rot:  to integer! rotF/text  rotRadian: rot * (pi / 180.0) rotation/7: rot]
            at 155x350 rotF: field 35 "0"

            at 101x75 base 2x278 blue
            at 0x211 base 300x2 blue
            canvas1: base gsize1 snow ; no image at startup
            canvas2:  base gsize2 white srcImg react [
                    ; correct the cursor coordinates according to the image size
                    xx: to-float curs1/offset/x - canvas2/offset/x
                    xx: (xx / canvas2/size/x) * srcImg/size/x
                    yy: to-float curs1/offset/y - canvas2/offset/y
                    yy: (yy / canvas2/size/y) * srcImg/size/y
                    pco: as-pair xx yy
                    sb3/text: form curs1/offset - canvas2/offset
            ] cursor hand
            
            list: text-list 100x510 data [] [
                i: face/selected 
                sb: to-block face/data/:i
                x: to-float second sb 
                x: (x / srcImg/size/x) * canvas2/size/x
                y: to-float third sb 
                y: (y /  srcImg/size/y) * canvas2/size/y
                p: as-pair x y
                curs1/offset: p + canvas2/offset
            ]
            return
            pad 190x0 sb2: field 365 "" sb3: field 105 
            
            button 120 "Sauver"
            
            at 730x90 curs1: rect
            at 690x120 dd1: drop-down 120x24
            at 690x150 dd2: drop-down 120x24 
            at 690x180 button 120 "Tracer Ligne"    [if isProcessed [getCoordinates]]
            at 690x210 button 120 "Annuler"         [if isProcessed [cancelDraw]] 
            
        ]
        "Sélection Marqueurs" [
            space 10x5
            m0: check 40 "0"  
            m1: check 40 "1" m2: check 40 "2" m3: check 40 "3" 
            m4: check 40 "4" m5: check 40 "5" m6: check 40 "6" m7: check 40 "7" 
            m8: check 40 "8" m9: check 40 "9" m10: check 40 "10" m11: check 40 "11" 
            m12: check 40 "12" m13: check 40 "13" m14: check 40 "14" m15: check 40 "15" 
            return
            m16: check 40 "16" m17: check 40 "17" m18: check 40 "18" m19: check 40 "19" 
            m20: check 40 "20" m21: check 40 "21" m22: check 40 "22" m23: check 40 "23" 
            m24: check 40 "24" m25: check 40 "25" m26: check 40 "26" m27: check 40 "27" 
            m28: check 40 "28" m29: check 40 "29" m30: check 40 "30" m31: check 40 "31" 
            return
            m32: check 40 "32" m33: check 40 "33" m34: check 40 "34" m35: check 40 "35" 
            m36: check 40 "36" m37: check 40 "37" m38: check 40 "38" m39: check 40 "39" 
            m40: check 40 "40" m41: check 40 "41" m42: check 40 "42" m43: check 40 "43" 
            m44: check 40 "44" m45: check 40 "45" m46: check 40 "46" m47: check 40 "47" 
            return
            m48: check 40 "48" m49: check 40 "49" m50: check 40 "50" m51: check 40 "51" 
            m52: check 40 "52" m53: check 40 "53" m54: check 40 "54" m55: check 40 "55" 
            m56: check 40 "56" m57: check 40 "57" m58: check 40 "58" m59: check 40 "59" 
            m60: check 40 "60" m61: check 40 "61" m62: check 40 "62" m63: check 40 "63" 
            return
            m64: check 40 "64" m65: check 40 "65" m66: check 40 "66" m67: check 40 "67" 
            mall: check 100 "Tous"  [setAllVertices] 
            button 200 "Sauver la sélection"[saveConfig]            
            return 
            pad 70x0
            canvas3:  base 640x480 lmFile
        ]
    ]
    do [sb1/text: to-string srcFile sl/data: 0.5 readConfig sb11/text: form (srcImg/size)
        curs1/offset: 0x0 + canvas2/offset
        ]
]

Result








Saturday, 30 December 2017

Red and Video

As is known, video support in Red is still a work in progress. However, USB webcams can easily be accessed with Red thanks to the camera face object. Thanks a lot to Qingtian for developing this object. The camera type is used to display a video camera feed. The camera/data facet lists the connected cameras as a block of strings, and the camera/selected facet selects the camera to display. Lastly, the camera/image facet can be used to save the content of a camera. This last facet is still unstable, and the best way to capture the content of a camera is to use the to-image function.
For experimental purposes, I had to simultaneously record images from four cameras and, as shown in the code below, this is trivial with Red.
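Before the full four-camera example, here is a minimal single-camera sketch (the camera index and the printed message are only illustrative): setting the selected facet starts the feed, and to-image grabs the current frame as a Red image that can then be saved or processed.

Red [
    Title: "Single camera capture sketch"
    Needs: 'View
]

view [
    cam: camera 320x240
    button "Start"   [cam/selected: 1]   ; start the first connected camera
    button "Capture" [
        frame: to-image cam              ; grab the current frame as an image!
        print ["Captured frame of size" frame/size]
    ]
    button "Quit" [quit]
]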

Code sample for accessing 4 webcams

Red [
    Title:   "Test camera Red VID "
    Author:  "Francois Jouen"
    File:    %cam4.red
    Needs:   'View
]

iSize: 320x240
margins: 10x10
cam1: none ; for camera1
cam2: none ; for camera2
cam3: none ; for camera3
cam4: none ; for camera4

render: func [acam alist][
    either acam/selected [acam/selected: none][acam/selected: alist/selected]
]

view win: layout [
    title "Red Cam"
    origin margins space margins
    btnQuit: button "Quit" [quit]
    return
    cam1: camera iSize
    cam2: camera iSize
    return 
    camList1: drop-list 220 on-create [face/data: cam1/data]
    onoff1: button "Start/Stop" on-click [render cam1 camList1]
    camList2: drop-list 220 on-create [face/data: cam2/data]
    onoff2: button "Start/Stop" on-click [render cam2 camList2]
    return
    cam3: camera iSize
    cam4: camera iSize
    return
    camList3: drop-list 220 on-create [face/data: cam3/data]
    onoff3: button "Start/Stop"  on-click [render cam3 camList3]
    camList4: drop-list 220 on-create [face/data: cam4/data]
    onoff4: button "Start/Stop" on-click [render cam4 camList4]
    do [camList1/selected: 1 camList2/selected: 2 camList3/selected: 3 camList4/selected: 4]
]

Result 









Sunday, 17 December 2017

Red and OpenCV: Find faces in image

Red can be used in conjunction with OpenCV (http://opencv.org/) for sophisticated image processing programs. You'll find the code for accessing OpenCV from Red here: https://github.com/ldci/OpenCV3-red.

How to install OpenCV binding for Red

First, you need the latest stable Red version (0.6.3 or newer). Basically, you don't need to install OpenCV: the /DLLs directory contains a 32-bit compiled version of the OpenCV framework (3.0 and 3.1) for the three main operating systems (macOS, Linux and Windows). Just copy the library (.dylib, .so or .dll) somewhere on your computer, then edit the platforms.reds file (in /libs) and adjust the library paths to match your installation.


Red/System and Red for OpenCV

For this binding, most of the 600 OpenCV functions were transformed to Red code with the Red/System DSL. Imported OpenCV functions can be called directly from Red/System programs, or accessed with routines if you use the Red language. Routines are a fantastic tool that allows code written in Red/System to be used inside Red code. Thus, routines make it possible to access the C functions included in the DLL through the binding written in Red/System. This means that you can use either the Red/System DSL or the Red language to write your image processing programs; the result will be the same.

However, there are some differences when writing Red code. First, it is necessary to use the #system directive to include the OpenCV libraries; this is also the place to declare any global variables that will be used by routines. Second, you have to write routines that act as an interface between your Red code and the Red/System functions. OpenCV functions and global Red/System variables are thus called directly inside routines.
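Here is a minimal sketch of this pattern; the routine name, the image file name and the include path are illustrative and must be adapted to your own setup.

Red [
    Title: "Routine pattern sketch"
    Needs: 'View
]

#system [
    #include %../../libs/include.reds           ; imported OpenCV functions
    img: declare CvArr!                          ; global Red/System variable shared by routines
]

; the routine is the bridge between Red code and the Red/System binding
openAndFree: routine [name [string!] /local fName] [
    fName: as c-string! string/rs-head name      ; convert the Red string for the C API
    img: as int-ptr! cvLoadImage fName CV_LOAD_IMAGE_COLOR
    releaseImage img                             ; direct calls to imported OpenCV functions
]

openAndFree "picture.jpg"                        ; called from Red like any function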

Using OpenCV with Red/View 

The advantage of using the Red language is that you can employ the Red/View and Red/Draw DSLs to create GUIs and develop sophisticated interfaces for computer vision, as in the following sample of face processing with Red.


OpenCV's face detection


Object detection using Haar feature-based cascade classifiers is an effective method proposed by Paul Viola and Michael Jones in 2001 (Rapid Object Detection using a Boosted Cascade of Simple Features). It is a machine learning-based approach where a cascade function is trained from many positive and negative images and is then used to detect objects in other images. OpenCV already contains many pre-trained classifiers for faces, eyes, smiles or bodies; these XML files are stored in the opencv/data/haarcascades/ folder.

Using Red 

All the work is done in the findFaces routine, which calls Red/System code that accesses the OpenCV functions.

First, let's load the required XML classifier:
cascade: cvLoadHaarClassifierCascade classifier 20 20
then detect faces with the cvHaarDetectObjects function:
faces: cvHaarDetectObjects pyrImg cascade storage sFactor minNB flag minS/x minS/y maxS/x maxS/y


Where the parameters are (a call with the program's default values is sketched after this list):
pyrImg: an 8-bit matrix containing the image in which objects have to be detected.
cascade: the OpenCV pre-trained face classifier.
storage: an opaque pointer to the memory storage used to retrieve the result of the face detection.
sFactor: the scale factor used to create the scale pyramid. A small resizing step increases the chance that one scale matches the model size during detection.
minNB: how many neighbors each candidate rectangle should have for it to be retained. This parameter affects the quality of the detected faces: a higher value results in fewer detections but of higher quality.
flag: the mode of operation.
CV_HAAR_SCALE_IMAGE: for each scale factor, the function downscales the image rather than "zooming" the feature coordinates in the classifier cascade.
CV_HAAR_DO_CANNY_PRUNING: if set, the function uses the Canny edge detector to reject image regions that contain too few or too many edges and thus cannot contain the searched object. The particular threshold values are tuned for face detection, and in this case the pruning speeds up the processing.
CV_HAAR_FIND_BIGGEST_OBJECT: if set, the function finds the largest object (if any) in the image, so the output sequence contains one (or zero) element(s).
CV_HAAR_DO_ROUGH_SEARCH: used only when CV_HAAR_FIND_BIGGEST_OBJECT is set and min_neighbors > 0. If set, the function does not look for candidates of a smaller size once it has found the object (with enough neighbor candidates) at the current scale. Typically, when min_neighbors is fixed, this mode yields a less accurate (slightly larger) object rectangle than the regular single-object mode (flags=CV_HAAR_FIND_BIGGEST_OBJECT), but it is much faster, up to an order of magnitude. A greater value of min_neighbors may be specified to improve the accuracy.
minS: the minimum possible object size; objects smaller than minS are ignored.
maxS: the maximum possible object size; objects larger than maxS are ignored.
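With the default values used in the code sample below (scale factor 1.1, 3 neighbors, flag value 1 for CV_HAAR_DO_CANNY_PRUNING, and unconstrained minimum and maximum sizes), the routine call reduces to this short sketch:

src: findFaces 1.1 3 1 0x0 0x0      ; returns the address of the processed image
print ["Found faces:" countFaces]   ; number of faces detected by the last call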

Code Sample

Red [
    Title:   "Find Face"
    Author:  "F. Jouen"
    File:    %findFaces.red
    Needs:   'View
]

; import required OpenCV libraries
#system [
    #include %../../libs/include.reds ; all OpenCV  functions
    img: declare CvArr!
    imgCopy: declare CvArr!
    clone: declare CvArr!
    pyrImg: declare CvArr!
    cascade: declare CvHaarClassifierCascade!
    storage: declare CvMemStorage!
    faces: declare CvSeq! 
    faceRect: declare byte-ptr!
    ptr: declare int-ptr!
    roi: declare cvRect!
    nFaces: 0 
    classifier: "/red/OpenCV/cascades/haarcascades/haarcascade_frontalface_default.xml"
]

; global red variables to be passed as parameters to routines or used by red functions

set 'appDir what-dir 
margins: 5x5
clName: "haarcascade_frontalface_default.xml"
scaleFactor: 1.1
minNeighbors: 3
minSize: 0x0
maxSize: 0x0
isFile: false
src: 0
flagValue: 1
nbFaces: 0

; some routines for image conversion from openCV to Red 
#include %../../libs/red/cvroutines.red

; Red Routines for OpenCV access

; release all image pointers
freeOpenCV: routine [] [
    releaseImage img
    releaseImage pyrImg
    releaseImage clone
    releaseImage imgCopy
]

loadTraining: routine [name [string!]/local fName][
    fName: as c-string! string/rs-head name;
    classifier: fName
]

; loads image with faces and returns image address as an integer
loadImg: routine [name [string!] return: [integer!] /local fName tmp isLoaded] [
    isLoaded: 0
    fName: as c-string! string/rs-head name;
    tmp: cvLoadImage fName CV_LOAD_IMAGE_COLOR ; CV_LOAD_IMAGE_ANYDEPTH OR CV_LOAD_IMAGE_ANYCOLOR; 
    img: as int-ptr! tmp
    clone: as int-ptr! cvLoadImage fName CV_LOAD_IMAGE_COLOR 
    imgCopy: as int-ptr! cvLoadImage fName CV_LOAD_IMAGE_COLOR
    pyrImg: as int-ptr! cvCreateImage tmp/width / 2  tmp/height / 2 IPL_DEPTH_8U 3
    storage: cvCreateMemStorage 0
    cvSmooth img img CV_GAUSSIAN 3 3 0.0 0.0      ;gaussian smoothing
    cvPyrDown img pyrImg CV_GAUSSIAN_5x5          ;reduce original size to improve speed in face recognition
    cvCopy img clone null
    cvFlip clone clone -1
    isLoaded: as integer! clone
    isLoaded  
]

; looks for faces 
findFaces: routine [sFactor [float!] minNB [integer!] flag [integer!] minS [pair!] maxS [pair!] return: [integer!] 
    /local c x y wd hg ] [
    cvCopy imgCopy img null
    cascade: cvLoadHaarClassifierCascade classifier 20 20 ;seems OK
    faces: cvHaarDetectObjects pyrImg cascade storage sFactor minNB flag minS/x minS/y maxS/x maxS/y
    nFaces: faces/total ; for faceCount routine
    if faces/total > 0 [
        c: 0
        until [
            faceRect: cvGetSeqElem faces c ; faceRect is a byte-ptr!
            ptr: as int-ptr! faceRect ; we cast to an int-ptr! since we have 4 integers to get here
            ; * 2 due to original image pyrdown
            x: ptr/1 * 2 
            y: ptr/2 * 2 
            wd: (ptr/1 + ptr/3) * 2 
            hg:  (ptr/2 + ptr/4) * 2
            roi: cvRect x y wd hg
            cvRectangle img roi/x roi/y roi/width roi/height 0.0 255.0 0.0 0.0 2 CV_AA 0
            c: c + 1
            c = faces/total
        ]
    ]
    cvCopy img clone null
    cvFlip clone clone -1
    as integer! clone 
]

;returns nb of found faces
countFaces: routine [return: [integer!]][nFaces]


;Red Functions calling routines 

loadImage: does [
    isFile: false
    canvas/image: black
    tmp: request-file 
    if not none? tmp [      
        fileName: to string! to-local-file tmp  
        src: loadImg fileName
        if src <> 0 [
            isFile: true
            win/text: fileName
            ; update faces
            wsz: getIWidth src wsz 
            hsz: getIHeight src hsz
            canvas/image: makeRedImage src wsz hsz
        ]
    ]
]

loadClassifier: does [
    tmp: request-file 
    if not none? tmp [      
        fileName: to string! to-local-file tmp
        info1/data: form second split-path tmp 
        loadTraining fileName
    ]   
]

faces: does [
    t1: now/time/precise
    src: findFaces scaleFactor minNeighbors flagValue minSize maxSize
    t2:  now/time/precise
    s: form countFaces
    append s " in "
    append s third t2 - t1 
    append s " sec"
    sb/data: s
    canvas/image: makeRedImage src wsz hsz
]

;Red GUI Interface
view win: layout [
    title "Find Faces"
    button 50 "Load"            [loadImage faces]
    button 75 "Classifier"      [loadClassifier if isFile [faces]]
    info1: field  220 clname
    text 35 "Flags"
    flag: drop-down 210x24 
        data ["CV_HAAR_DO_CANNY_PRUNING" "CV_HAAR_FIND_BIGGEST_OBJECT"
           "CV_HAAR_DO_ROUGH_SEARCH" "CV_HAAR_SCALE_IMAGE"] 
        select 1  
        on-change [
            if isFile [
                switch flag/selected[
                    1   [flagValue: 1]
                    2   [flagValue: 4]
                    3   [flagValue: 8]
                    4   [flagValue: 2]
                ]
                faces   
            ]
        ]     
    return
    text "Scale Increase"
    sl1: slider 100 [scaleFactor: 1.1 + to float! face/data 
                    tscale/data: 1.1 + face/data if isFile [faces]]
    tscale: field 40 "1.1"
    text  "Min Neighbors"
    field 30 "3" [minNeighbors: to-integer face/data if isFile [faces]]
    text 80 "Size Min Max"
    field 40 "0x0" [minSize: to-pair face/data if isFile [faces]]
    field 40 "0x0" [maxSize: to-pair face/data if isFile [faces]]
    button 50 "Quit" [if isFile [freeOpenCV] Quit]
    return
    canvas: base 640x480 black
    return
    text 100 "Found faces : " sb: field 130
    do [sl1/data: 0.0]
]

Result