
2D Transformation with Built-in Eigenvector Libraries and Coffee – Statistics – December 20, 2023

Let’s open MS Paint and draw a cluster.

This is a high-quality image from a manually drawn 60×60 bmp file.

This file will be fed into our code, and the code will need to convert it.

The direction with the biggest difference is along the x-axis (if I understand correctly).

So some preliminary work: open the file as a resource (freeing any stale resource first), read its pixels, and convert them to samples.

Our sample would look like this: Just one class, nothing fancy.

      class a_sample
        {
      public:
         int x,y,cluster;
         double xd,yd;
         a_sample(void){x=0;y=0;cluster=0;}
        };

Ignore the cluster member; it's left over from other tests that failed.

The inputs we will be working with are:

#include <Canvas\Canvas.mqh>
input string testBMP="test_2.bmp";
input color fillColor=C'255,0,0';
input color BaseColor=clrWheat;
input bool RemoveCovariance=false;

Read the image and replace each pixel that is red (or matches the color input) with a sample.


  if(ResourceCreate("TESTRESOURCE","\\Files\\"+testBMP))
    {
     uint pixels[],width,height;
     if(ResourceReadImage("::TESTRESOURCE",pixels,width,height))
       {
        int total_samples=0;
        uint capture=ColorToARGB(fillColor,255);
        ArrayResize(samples,width*height,0);
        originalX=(int)width;
        originalY=(int)height;
        int co=-1;
        for(int y=0;y<(int)height;y++)
           for(int x=0;x<(int)width;x++)
             {
              co++;
              if(pixels[co]==capture)
                {
                 total_samples++;
                 samples[total_samples-1].x=x;
                 samples[total_samples-1].y=y;
                 samples[total_samples-1].xd=x;
                 samples[total_samples-1].yd=y;
                }
             }
        ArrayResize(samples,total_samples,0);
        Print("Found "+IntegerToString(total_samples)+" samples");
       }
    }
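If the scan logic above is hard to read in MQL5, here is the same idea as a small Python sketch: walk a flat, row-major pixel buffer and collect the (x, y) coordinates of every pixel that matches the capture color. The 3×3 "image" below is made up purely for illustration.

```python
RED = 0xFFFF0000  # ARGB for opaque red, like ColorToARGB(C'255,0,0',255)
BG  = 0xFFFFFFFF  # opaque white background

width, height = 3, 3
pixels = [BG,  RED, BG,
          RED, RED, BG,
          BG,  BG,  RED]

samples = []
co = -1                      # flat index, incremented like the MQL5 co++
for y in range(height):
    for x in range(width):
        co += 1
        if pixels[co] == RED:
            samples.append((x, y))

print(samples)  # each tuple is one (x, y) sample
```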

It’s neat. Now we pass the samples into a resized matrix where each sample is a row and each feature (x,y) is a column.

           matrix original;
           original.Init(total_samples,2);
           for(int i=0;i<total_samples;i++)
             {
              original[i][0]=samples[i].xd;
              original[i][1]=samples[i].yd;
             }

So we initialize (resize) the matrix to total_samples rows and two columns (x, y); in this example a sample is nothing more than its x and y.

Then the MQL5 matrix library, not us, constructs the covariance matrix. We pass false to the parameter because our features are in columns; call this function with true if the features are in rows, i.e. one row is one feature.

matrix covariance_matrix=original.Cov(false);

What is a covariance matrix? It's a feature-by-feature (here 2×2) matrix that measures the covariance of each feature with every other feature. It's neat that you can get it in one line of code.
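Outside MQL5 the same one-liner exists in NumPy, and the rows/columns parameter behaves the same way: since our samples sit in an (n_samples, 2) matrix with features in columns, `rowvar=False` plays the role of `Cov(false)`. The data below is made up so the numbers are easy to check.

```python
import numpy as np

data = np.array([[0.0, 0.0],
                 [2.0, 1.0],
                 [4.0, 2.0],
                 [6.0, 3.0]])       # y = x/2: perfectly correlated features

cov = np.cov(data, rowvar=False)    # features in columns -> 2x2 matrix
print(cov.shape)                    # (2, 2)
print(cov)
```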

Then we need the eigenvectors and eigenvalues of the 2×2 covariance matrix.

The theory, which I don’t quite understand, says that this will indicate the “direction” of most “variance”.

Let's play with it:

           matrix eigenvectors;
           vector eigenvalues;
           if(covariance_matrix.Eig(eigenvectors,eigenvalues))
             {
              Print("Eigenvectors");
              Print(eigenvectors);
              Print("Eigenvalues");
              Print(eigenvalues);
             }
           else
              Print("Can't Eigen");

If I’m not mistaken, the eigenvectors will be a 2×2 matrix, just like the covariance matrix.
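A quick check of that claim, in NumPy for illustration: for a 2×2 covariance matrix the eigenvectors come back as a 2×2 matrix (one unit-length direction per column), and the largest eigenvalue marks the direction of most variance. The toy covariance matrix below is made up; `eigh` is used because a covariance matrix is symmetric.

```python
import numpy as np

cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])          # toy covariance: x varies most

eigenvalues, eigenvectors = np.linalg.eigh(cov)
print(eigenvectors.shape)             # (2, 2), same shape as cov
print(eigenvalues)                    # eigh sorts ascending: [1. 4.]

# The column paired with the biggest eigenvalue is the main direction:
main = eigenvectors[:, np.argmax(eigenvalues)]
print(main)                           # +/-[1, 0]: along the x-axis
```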

Now let’s take the samples and “rotate” them so that the direction with the most variance lands on the x-axis. That’s how the transformation works; it’s not random.

Let’s visualize what we expect here based on the first image posted.

Here’s how I did it so far:

           
           for(int i=0;i<total_samples;i++)
             {
              vector thissample={samples[i].xd,samples[i].yd};
              thissample=thissample.MatMul(eigenvectors);
              samples[i].xd=thissample[0];
              samples[i].yd=thissample[1];
             }

I’m constructing a vector that holds one sample, multiplying that two-element vector with the eigenvector matrix, and then writing the resulting x and y back into the sample.
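Here is the same per-sample multiplication, vectorized in NumPy for illustration (random correlated data, made up for the demo). After the rotation the covariance of the data becomes numerically diagonal, meaning the spread now lies along the axes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
# two correlated features, like our tilted cluster of pixels:
data = np.column_stack([x, 0.5 * x + 0.1 * rng.normal(size=500)])

cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

rotated = data @ eigenvectors          # one MatMul per sample, batched
cov_rot = np.cov(rotated, rowvar=False)
print(np.round(cov_rot, 6))            # off-diagonal terms ~ 0
```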

Then we do some conversions again, so that the data can accommodate further eigenvector passes. You’ll probably get the same idea.

           
             double minX=INT_MAX,maxX=INT_MIN,minY=INT_MAX,maxY=INT_MIN;
             for(int i=0;i<total_samples;i++)
               {
                if(samples[i].xd>maxX)maxX=samples[i].xd;
                if(samples[i].xd<minX)minX=samples[i].xd;
                if(samples[i].yd>maxY)maxY=samples[i].yd;
                if(samples[i].yd<minY)minY=samples[i].yd;
               }
             double rangeX=maxX-minX;
             double rangeY=maxY-minY;
             double allMax=MathMax(maxX,maxY);
             double allMin=MathMin(minX,minY);
             double allRange=allMax-allMin;
             for(int i=0;i<total_samples;i++)
               {
                samples[i].xd=((samples[i].xd-minX)/rangeX)*1000.0;
                samples[i].yd=((samples[i].yd-minY)/rangeY)*1000.0;
                samples[i].x=(int)samples[i].xd;
                samples[i].y=(int)samples[i].yd;
               }
             originalX=1000;
             originalY=1000;
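The rescaling step above is a plain min-max normalization into a fixed 0..1000 plotting range; a Python sketch of one axis (made-up values) looks like this:

```python
xs = [-3.2, 0.0, 4.8, 10.0]   # transformed coordinates on one axis

lo, hi = min(xs), max(xs)
scaled = [(v - lo) / (hi - lo) * 1000.0 for v in xs]
print(scaled)   # the minimum maps to 0.0, the maximum to 1000.0
```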

Then the not-so-important drawing part.

This is the result:

The samples touching the edges come from the rescaling, not from the transformation.

Here’s the full code. The mistake is commented out, and RemoveCovariance doesn’t work.

Please let me know if you find any issues.

Now imagine you have 1000 features. We can’t plot everything, so how do we construct a new feature that is a composite weighted sum of the previous features based on the eigenvectors (if we’re going in the right direction here)? That is the same as principal component analysis.

#property version   "1.00"
#include <Canvas\Canvas.mqh>
input string testBMP="test_2.bmp";
input color fillColor=C'255,0,0';
input color BaseColor=clrWheat;
input bool RemoveCovariance=false;
string system_tag="TEST_";
      class a_sample
        {
      public:
         int x,y,cluster;
         double xd,yd;
         a_sample(void){x=0;y=0;cluster=0;}
        };
int DISPLAY_X,DISPLAY_Y,originalX,originalY;
double PIXEL_RATIO_X=1.0,PIXEL_RATIO_Y=1.0;
bool READY=false;
a_sample samples[];
CCanvas DISPLAY;
int OnInit()
  {
   ArrayFree(samples);
   DISPLAY_X=0;
   DISPLAY_Y=0;
   originalX=0;
   originalY=0;
   PIXEL_RATIO_X=1.0;
   PIXEL_RATIO_Y=1.0;
   READY=false;
   ObjectsDeleteAll(ChartID(),system_tag);
   EventSetMillisecondTimer(44);
   ResourceFree("TESTRESOURCE");
   return(INIT_SUCCEEDED);
  }

void OnTimer()
  {
   EventKillTimer();
   if(ResourceCreate("TESTRESOURCE","\\Files\\"+testBMP))
     {
      uint pixels[],width,height;
      if(ResourceReadImage("::TESTRESOURCE",pixels,width,height))
        {
         int total_samples=0;
         uint capture=ColorToARGB(fillColor,255);
         ArrayResize(samples,width*height,0);
         originalX=(int)width;
         originalY=(int)height;
         int co=-1;
         for(int y=0;y<(int)height;y++)
            for(int x=0;x<(int)width;x++)
              {
               co++;
               if(pixels[co]==capture)
                 {
                  total_samples++;
                  samples[total_samples-1].x=x;
                  samples[total_samples-1].y=y;
                  samples[total_samples-1].xd=x;
                  samples[total_samples-1].yd=y;
                 }
              }
         ArrayResize(samples,total_samples,0);
         Print("Found "+IntegerToString(total_samples)+" samples");

         matrix original;
         original.Init(total_samples,2);
         for(int i=0;i<total_samples;i++)
           {
            original[i][0]=samples[i].xd;
            original[i][1]=samples[i].yd;
           }

         matrix covariance_matrix=original.Cov(false);
         Print("Covariance matrix");
         Print(covariance_matrix);
         matrix eigenvectors;
         vector eigenvalues;
         matrix inverse_covariance_matrix=covariance_matrix.Inv();
         if(covariance_matrix.Eig(eigenvectors,eigenvalues))
           {
            Print("Eigenvectors");
            Print(eigenvectors);
            Print("Eigenvalues");
            Print(eigenvalues);

            for(int i=0;i<total_samples;i++)
              {
               vector thissample={samples[i].xd,samples[i].yd};
               thissample=thissample.MatMul(eigenvectors);
               samples[i].xd=thissample[0];
               samples[i].yd=thissample[1];
               if(RemoveCovariance)
                 {
                  samples[i].xd/=MathSqrt(eigenvalues[0]);
                  samples[i].yd/=MathSqrt(eigenvalues[1]);
                 }
              }

            double minX=INT_MAX,maxX=INT_MIN,minY=INT_MAX,maxY=INT_MIN;
            for(int i=0;i<total_samples;i++)
              {
               if(samples[i].xd>maxX)maxX=samples[i].xd;
               if(samples[i].xd<minX)minX=samples[i].xd;
               if(samples[i].yd>maxY)maxY=samples[i].yd;
               if(samples[i].yd<minY)minY=samples[i].yd;
              }
            double rangeX=maxX-minX;
            double rangeY=maxY-minY;
            double allMax=MathMax(maxX,maxY);
            double allMin=MathMin(minX,minY);
            double allRange=allMax-allMin;
            for(int i=0;i<total_samples;i++)
              {
               samples[i].xd=((samples[i].xd-minX)/rangeX)*1000.0;
               samples[i].yd=((samples[i].yd-minY)/rangeY)*1000.0;
               samples[i].x=(int)samples[i].xd;
               samples[i].y=(int)samples[i].yd;
              }
            originalX=1000;
            originalY=1000;
           }
         else
            Print("Cannot eigen");

         build_deck(originalX,originalY);
         READY=true;
        }
      else
         Print("Cannot read image");
     }
   else
      Print("Cannot load file");
   ExpertRemove();
   Print("DONE");
  }

void build_deck(int img_x,
                int img_y)
  {
   int screen_x=(int)ChartGetInteger(ChartID(),CHART_WIDTH_IN_PIXELS,0);
   int screen_y=(int)ChartGetInteger(ChartID(),CHART_HEIGHT_IN_PIXELS,0);

   int btn_height=40;
   screen_y-=btn_height;

   double img_x_by_y=((double)img_x)/((double)img_y);

   int test_x=screen_x;
   int test_y=(int)(((double)test_x)/img_x_by_y);
   if(test_y>screen_y)
     {
      test_y=screen_y;
      test_x=(int)(((double)test_y)*img_x_by_y);
     }

   int px=(screen_x-test_x)/2;
   int py=(screen_y-test_y)/2;
   DISPLAY.CreateBitmapLabel(ChartID(),0,system_tag+"_display",px,py,test_x,test_y,COLOR_FORMAT_ARGB_NORMALIZE);
   DISPLAY_X=test_x;
   DISPLAY_Y=test_y;
   DISPLAY.Erase(0);
   PIXEL_RATIO_X=((double)(DISPLAY_X))/((double)(img_x));
   PIXEL_RATIO_Y=((double)(DISPLAY_Y))/((double)(img_y));
   PIXEL_RATIO_X=MathMax(1.0,PIXEL_RATIO_X);
   PIXEL_RATIO_Y=MathMax(1.0,PIXEL_RATIO_Y);

   PIXEL_RATIO_X=8.0;
   PIXEL_RATIO_Y=8.0;
   update_deck();
  }

void update_deck()
  {
   DISPLAY.Erase(ColorToARGB(clrBlack,255));
   uint BASECOLOR=ColorToARGB(BaseColor,255);
   for(int i=0;i<ArraySize(samples);i++)
     {
      double newx=(((double)samples[i].x)/((double)originalX))*((double)DISPLAY_X);
      double newy=(((double)samples[i].y)/((double)originalY))*((double)DISPLAY_Y);
      int x1=(int)MathFloor(newx-PIXEL_RATIO_X/2.00);
      int x2=(int)MathFloor(newx+PIXEL_RATIO_X/2.00);
      int y1=(int)MathFloor(newy-PIXEL_RATIO_Y/2.00);
      int y2=(int)MathFloor(newy+PIXEL_RATIO_Y/2.00);
      DISPLAY.FillRectangle(x1,y1,x2,y2,BASECOLOR);
     }
   DISPLAY.Update(true);
   ChartRedraw();
  }

void OnDeinit(const int reason)
  {
  }

void OnTick()
  {
  }

Edit: I found an article about how to construct the new feature.

I’m not sure, but I don’t think you can say you “reduce” 2 features to 1 feature; that expression is incorrect.

Creating one feature from two features leaves you with one new feature.

So if you have 100 features and you do the following, you’ll get one new feature, rather than 99 features.

I think.
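That weighted-sum idea is, as far as I can tell, exactly a PCA projection: take the top eigenvector of the covariance matrix as the weights, and each many-feature sample collapses into one composite feature. A NumPy sketch on made-up 5-feature data (sklearn's PCA(n_components=1) would give the same result up to sign):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 5))           # 200 samples, 5 features
data -= data.mean(axis=0)                  # center before PCA

cov = np.cov(data, rowvar=False)           # 5x5 covariance
eigenvalues, eigenvectors = np.linalg.eigh(cov)
top = eigenvectors[:, -1]                  # eigh sorts ascending: last = top

one_feature = data @ top                   # weighted sum per sample
print(one_feature.shape)                   # (200,): a single new feature
```

The variance of that single new feature equals the largest eigenvalue, which is why PCA keeps the top eigenvectors first.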

https://techntales.medium.com/eigenvalues-and-eigenVectors-and-their-use-in-machine-learning-and-ai-c7a5431ae388#:~:text=Eigenvalues%20and%20eigenVectors%20are%20concepts,machine%20Learning%20and%20Artificial%20Intelligence.
