CUVI by Example

==Motion Detection==
<p>The CUVI library comes with all the image processing essentials needed to build countless applications. For example, the '''Computer Vision''' module of CUVI can be used for motion and intrusion detection in a live video stream, and for tracking an object of interest across a series of cameras installed on a premises. The processing pipeline for motion detection is as follows:</p>
*Read a frame from the camera stream
*Select Strong Features in that Frame using CUVI
*Read next frame
*Track features of first frame in the second frame using CUVI
*Set alarm if motion is detected
<p>The CUVI functions used in this example are '''goodFeaturesToTrack()''' and '''trackFeatures()'''. For simplicity, the host-side I/O has been removed from the code.</p>
{|
|style="font-size:130%;"|
<syntaxhighlight lang="cpp">

#include <cuvi.h>
#include <cmath>

static const int width = 640;   //Width of video frame
static const int height = 480;  //Height of video frame

CuviFilter* f; //CUVI filter used for optional Gaussian smoothing

//Parameters for feature selection
static const int requestedFeatures = 150;    //Number of features to look for
static const float featureQuality = 0.006f;  //Quality of a feature
static const int featureMinDistance = 15;    //Minimum distance between 2 features
static const int blockSize = 3;              //Block size for computing the Eigen matrix
static const float k = -2.0f;                //k for the Harris corner detector

//Parameters for feature tracking
static const int pyramidLevels = 3;                      //Levels of scaling
static const CuviSize trackingWindow = cuviSize(30,30);  //Size of the tracking window
static const float residue = 20.0f;  //Absolute difference between original location window & tracked location window
static const int iterations = 10;    //Maximum number of iterations before a feature is found

//Pre-processing parameters
static const bool smoothBeforeSelecting = false;  //Smooth images before feature selection & tracking
static const bool adjustImage = false;            //Adjust image light before feature selection

//Post-processing parameters
static const float movementThreshold = 0.33f;  //Mark as motion if a feature moves 0.33 pixels


//Checks if the feature has moved from its original location.
//It can be used for intrusion detection; the sensitivity is set through the 'threshold' parameter
bool featureHasMoved(CuviPointValue2D point1, CuviPointValue2D point2, float threshold)
{
    if(point2.val != 0.0f) return false;
    return ((fabsf(point1.x - point2.x) > threshold) || (fabsf(point1.y - point2.y) > threshold));
}


int main()
{
    //Creating a smoothing 3x3 Gaussian filter with standard deviation 0.7
    cuviCreateFilter(&f,3,3);
    f->sigma = 0.7f;
    cuviCreateFilterSpecial(f,CUVI_FILTER_GAUSSIAN);

    //Image size
    CuviSize size = cuviSize(width,height);

    //Buffer images on the GPU
    CuviImage* gFrame = new CuviImage(size,8,3);
    CuviImage* gimg1 = new CuviImage(size,8,1);
    CuviImage* gimg2 = new CuviImage(size,8,1);

    //Region of interest in the video frame
    CuviROI roi = cuviROI(0,0,width,height);

    //Feature buffers, one slot per requested feature
    CuviPointValue2D* features1 = new CuviPointValue2D[requestedFeatures];
    CuviPointValue2D* features2 = new CuviPointValue2D[requestedFeatures];

    int feature_count = 0;

    do
    {
        //Read a video frame into 'frame' on the host (I/O removed for simplicity)
        //and populate the GPU image with it
        gFrame->upload(frame->imageData,frame->widthStep);

        //Convert to a gray image for the computations
        cuvi::colorOperations::RGB2Gray(gFrame,gimg1);

        //Do the same with the next, adjacent frame
        gFrame->upload(frame->imageData,frame->widthStep);
        cuvi::colorOperations::RGB2Gray(gFrame,gimg2);

        feature_count = requestedFeatures; //Reset feature count to original

        //Use this option if the adjacent frames are sensitive to lighting changes
        if(adjustImage){
            cuvi::colorOperations::adjust(gimg1);
            cuvi::colorOperations::adjust(gimg2);
        }

        //Use this option if the images contain a fair amount of noise
        if(smoothBeforeSelecting){
            //Apply the Gaussian smoothing filter on both images
            cuvi::imageFiltering::imageFilter(gimg1,roi,f);
            cuvi::imageFiltering::imageFilter(gimg2,roi,f);
        }

        //Define the feature selection criteria from the parameters
        CuviFeaturesCriteria feature_criteria = cuviFeaturesCriteria(CUVI_FEATURES_HARRIS, featureQuality, featureMinDistance, blockSize, k);

        //Call any feature detector on the first frame (KLT | HARRIS | PETER)
        cuvi::computerVision::goodFeaturesToTrack(gimg1,roi,features1,&feature_count,feature_criteria);

        //Define the tracking criteria from the tracking parameters
        CuviTrackingCriteria tracking_criteria = cuviTrackingCriteria(pyramidLevels, trackingWindow, iterations, residue);

        //Track the features of frame #1 onto frame #2 using the KLT tracker
        cuvi::trackFeatures(gimg1,gimg2,features1,features2,feature_count,tracking_criteria);

        //At this point you can identify whether the features selected in frame one moved in frame two
        for(int i=0; i<feature_count; i++){
            //True only if the feature has moved from its location
            if(featureHasMoved(features1[i],features2[i],movementThreshold)){
                //Motion detected: set the alarm and/or plot the tracked feature on the screen
            }
        }

    }while(video_Frames); //'video_Frames' is the host-side condition: true while frames remain

    //Freeing memory
    delete gFrame;
    delete gimg1;
    delete gimg2;
    delete[] features1;
    delete[] features2;

    return 0;
}

</syntaxhighlight>
|}
{{#ev:vimeo|38484537|500}}
<p>Here is the exact same example applied to the video feed of a webcam.</p>
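<p>For reference, here is a minimal host-side capture sketch. It is not part of CUVI and assumes OpenCV's legacy C API (which matches the '''imageData''' and '''widthStep''' fields used above); it only illustrates where the '''frame''' consumed by the loop could come from.</p>
{|
|style="font-size:130%;"|
<syntaxhighlight lang="cpp">

#include <opencv/highgui.h>

int main()
{
    //Open the first webcam attached to the system
    CvCapture* capture = cvCaptureFromCAM(0);

    IplImage* frame = NULL;
    while((frame = cvQueryFrame(capture)) != NULL)
    {
        //'frame->imageData' and 'frame->widthStep' are what gets uploaded
        //to the GPU image 'gFrame' in the CUVI motion detection loop above
    }

    //Release the camera
    cvReleaseCapture(&capture);
    return 0;
}

</syntaxhighlight>
|}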
==Demosaic Example==
CUVI's demosaic, especially the DFPD version, is one of the most used and sought-after features of the library. The sheer speed of debayering with CUVI's linear debayer approach, and the quality of the resultant image with the DFPD approach, make it the function most in demand among camera manufacturers and video houses alike. In this example, we demonstrate how easy it is to use CUVI's demosaicing with just a few lines of code.
{|
|style="font-size:130%;"|
<syntaxhighlight lang="cpp">
#include <cuvi.h>

int main()
{
    //Bayer pattern of the sensor
    CuviBayerSeq sensorAlignment = CuviBayerSeq::CUVI_BAYER_RGGB;

    //8 bits of data in an 8-bit container. Setting this is very important
    Cuvi32s containerBits = 8;
    Cuvi32s dataBits = 8;

    //Load the image and upload it to the GPU
    CuviImage input("D:/lighthouse_8bit_RGGB.tif", CUVI_LOAD_IMAGE_GRAYSCALE_KEEP_DEPTH);
    input.setDataBits(dataBits);

    //Create a container for the 3-channel output image
    CuviImage output(input.size(), containerBits, 3);

    //Perform the DFPD demosaic
    cuvi::colorOperations::DFPD(input, output, sensorAlignment);

    //Save the resultant image to file
    cuvi::io::saveImage(output, "D:/lighthouse.tif");

    return 0;
}

</syntaxhighlight>
|}
[[File:Lighthouse.jpg|700px]]
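<p>The container and data bit settings above matter most when the sensor output does not fill its container. The following is a hedged sketch of the same pipeline for that case; the 12-bit depth and the file names are hypothetical, while the calls are identical to the 8-bit example above.</p>
{|
|style="font-size:130%;"|
<syntaxhighlight lang="cpp">

#include <cuvi.h>

int main()
{
    //Hypothetical case: 12 bits of sensor data stored in a 16-bit container
    CuviBayerSeq sensorAlignment = CuviBayerSeq::CUVI_BAYER_RGGB;
    Cuvi32s containerBits = 16;
    Cuvi32s dataBits = 12;

    //Load the raw Bayer image and tell CUVI how many of the 16 bits carry data
    CuviImage input("D:/sensor_12bit_RGGB.tif", CUVI_LOAD_IMAGE_GRAYSCALE_KEEP_DEPTH);
    input.setDataBits(dataBits);

    //3-channel output in the same 16-bit container
    CuviImage output(input.size(), containerBits, 3);

    //DFPD demosaic, exactly as in the 8-bit example
    cuvi::colorOperations::DFPD(input, output, sensorAlignment);

    //Save the resultant image to file
    cuvi::io::saveImage(output, "D:/sensor_demosaiced.tif");

    return 0;
}

</syntaxhighlight>
|}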
