World of OpenCV, AI, Computer Vision and Robotics Examples and Tutorials (pythonopencv.com): Learn OpenCV, AI & Robotics with Python and C++

(Easy & Fast) Pre-Compiled OpenCV Libraries and Headers for 3.2 with Visual Studio 2015 x64 Windows 10 Support
Sun, 26 Nov 2017

Hello Friends,

Following our last post on installing OpenCV 3.3 (linked below), I am sure most of you wished that somebody would do you the favor of supplying pre-compiled libraries and headers for an OpenCV version above 3.0. Well, I did exactly that.
I compiled my own OpenCV 3.2 build and am sharing it with you all so that you don't have to go through the pain of setting it up manually. The idea is that it saves you time and lets you concentrate on coding rather than on building the environment.

Here was our original post on this topic.

(Step by Step) Install OpenCV 3.3 with Visual Studio 2015 on Windows 10 x64 (2017 DIY)

Steps:
Well, the steps are really simple: download the two zip files from the links below (scan them with your antivirus if you don't trust us :P), dump them into their respective folders, and start using them in Visual Studio 2015 by linking them. It's that simple.

OpenCV_visualstudio2015_includes_3.2

opencv_visualstudio2015_libs_3.2

I tested it on my system (Windows 10 x64 with Visual Studio 2015) and it works fine.
Following is the list of linker input files (Linker -> Input -> Additional Dependencies) that you need to add. They are all included in the zip file.

opencv_core320.lib
opencv_highgui320.lib
opencv_imgproc320.lib
opencv_imgcodecs320.lib
opencv_features2d320.lib
opencv_video320.lib
opencv_videoio320.lib
opencv_objdetect320.lib
opencv_ml320.lib
opencv_flann320.lib
opencv_shape320.lib
opencv_calib3d320.lib

Remember to add others as per need. Hope this greatly helps you.
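Once the includes and libs are linked, you can sanity-check the setup with a minimal program like the sketch below. It exercises opencv_core320.lib, opencv_imgcodecs320.lib and opencv_highgui320.lib; the image name test.jpg is just a placeholder for any image you have lying around.

// smoke_test.cpp : verify that the pre-compiled OpenCV 3.2 libs link and run
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl; // opencv_core

    cv::Mat img = cv::imread("test.jpg", cv::IMREAD_COLOR);     // opencv_imgcodecs
    if (img.empty())
    {
        std::cout << "No test.jpg found, but linking already worked." << std::endl;
        return 0;
    }
    cv::imshow("Smoke Test", img);                              // opencv_highgui
    cv::waitKey(0);
    return 0;
}

If this compiles, links and prints the version, your Visual Studio 2015 project is wired up correctly.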

Let me know if you have any questions or queries.

(Best and Easy) Support Vector Machines (SVM) + HOG Tutorial | OpenCV 3.0, 3.1, 3.2, 3.3 | Windows 10 | Visual Studio 2015 | x64 | Train & Test
Thu, 23 Nov 2017

Hello Friends,

Have you ever wondered how you can create your own OpenCV 3 Support Vector Machine (SVM) + HOG classifier? Did you want to create your own classifier that actually works? Are you tired of searching Google for a good SVM tutorial, only to find a bunch of example SVM tutorials that do nothing but show a test image with useless circles?
If the answer is Yes!, you have come to the right place, my friend.
I myself struggled a lot to get this working, all because there wasn't a good tutorial anywhere that actually taught SVM.
In this tutorial, we’ll create a simple Car Detector using SVM aka Support Vector Machines.

So, before we get started, let’s look at the pre-requisites.

1) Visual Studio 2015
2) Windows 10 x64 (works best)
3) OpenCV 3.0, 3.1, 3.2, 3.3 and above
4) Patience !!

If you are someone who doesn't really bother with what's going on behind the scenes and wants to implement this goddamn classifier straight away, then without further ado, let's look at the code. (I would prefer that you understand what's what before actually using it, but I know you don't always have that option 🙂 !!!)

Special Note: A lot of things have changed between the OpenCV 2.x and OpenCV 3.x builds with respect to SVM. If you are trying to port code, or to reuse in OpenCV 3.x an existing classifier that you created in OpenCV 2.x, you are better off starting fresh, because things have changed like hell.
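To give you a feel for how much has changed, here is a minimal sketch of the OpenCV 3.x way of training and saving an SVM (the 2.x equivalent used the CvSVM class with a CvSVMParams struct, and a 2.x .yml classifier cannot simply be loaded by the 3.x loader). The file name my_svm.yml is just an example:

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

// OpenCV 3.x style: factory method + setters in the cv::ml namespace.
// labelsMat holds one CV_32S class label per training row.
void train_and_save(const cv::Mat &trainMat, const cv::Mat &labelsMat)
{
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::RBF);
    svm->train(trainMat, cv::ml::ROW_SAMPLE, labelsMat); // rows = samples
    svm->save("my_svm.yml");
}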

Step by Step for Training Phase:
1) Create a new ’empty’ project in Visual Studio 2015
2) Link the OpenCV Includes and Libs with the project (Follow this link on how to install OpenCV 3.3 with Visual Studio and run a test Solution)
3) For Training, Create 3 files namely:
a) Train.cpp
b) Datamanager.cpp
c) Datamanager.h
4) Change the classifier filename in svm->save("firetrain.yml"); to your desired filename
5) Create a folder called “Positive” inside the project and add all the positive images to it. (Size 64×64)
6) Create a folder called “Negative” inside the project and add all the Negative images to it. (Size 64×64)
7) (Optional) Use svm->trainAuto(td); instead of svm->train(td); to have the values of C and gamma chosen automatically, but note that it takes a lot of time

Note: Shortly after this post, I'll create a crop_image tool to assist you with the cropping of images.

Step by Step for Detection Phase:
1) Create a new ’empty’ project in Visual Studio 2015
2) Link the OpenCV Includes and Libs with the project (Follow this link on how to install OpenCV 3.3 with Visual Studio and run a test Solution)
3) For Detection, create 1 file called "DetectCode.cpp" (excluding the above 3 training files from the build), and reference the classifier file that you created in the training phase in #define TRAINED_SVM "firetrain.yml".

Note: There are HOG parameters that you need to tune accordingly. I have plans to create a HOG-based UI tool so that you can easily get all those parameters from your reference image. Stay tuned for updates on it.

Algorithm pipeline (training the SVM):
– > From the image dataset that you have created, randomly separate it into a training and a test set (80%/20% split).
– > Set the HOG window size (64×64) and compute the HOG features for the training image set containing vehicles and non-vehicles.
– > Train an SVM using these HOG features. (Use a linear or RBF SVM.)
– > Test the trained SVM on the separate test image set and evaluate the results and performance.
– > For the real world test, run a sliding window and use the SVM from above to predict vehicle/non-vehicle.

Algorithm pipeline (detecting cars in video):
– > Extract the individual frames of the video.
– > Convert each captured frame into a grayscale image.
– > Discard the upper half containing the sky.
– > Choose a sliding window size, resize each window to the trained size, and use the SVM to predict (see the sketch just below).
– > Filter the detected points for false positives (use template matching only on the detected subsections).
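To make the sliding-window step concrete, here is a minimal conceptual sketch; it assumes the 64×64 training window and the positive label 1 used in the training code below. (The actual detection code further down delegates this loop to hog.detectMultiScale, which does the same thing internally, at multiple scales.)

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <vector>

// Slide a 64x64 window over a grayscale frame and collect SVM hits.
void slideAndPredict(const cv::Mat &gray, const cv::Ptr<cv::ml::SVM> &svm,
                     cv::HOGDescriptor &hog, std::vector<cv::Rect> &hits)
{
    const int win = 64; // must match the HOG/SVM training window size
    const int step = 8; // stride: smaller = denser coverage but slower
    for (int y = 0; y + win <= gray.rows; y += step) {
        for (int x = 0; x + win <= gray.cols; x += step) {
            cv::Mat window = gray(cv::Rect(x, y, win, win)).clone();
            std::vector<float> desc;
            hog.compute(window, desc);
            cv::Mat descMat(1, (int)desc.size(), CV_32FC1, &desc[0]);
            if (svm->predict(descMat) == 1) // 1 = vehicle label used in training
                hits.push_back(cv::Rect(x, y, win, win));
        }
    }
}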

Code:

#Train.cpp

//train.cpp
#include <iostream>
#include <fstream>
#include <vector>

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
//#include "util/imageutils.h"
#include "DataSetManager.h"

using namespace std;
using namespace cv;
using namespace cv::ml;

HOGDescriptor hog(
        Size(64,64), //winSize
        Size(8,8),   //blockSize
        Size(8,8),   //blockStride
        Size(8,8),   //cellSize
                 9,  //nbins
                 1,  //derivAperture
                -1,  //winSigma
                 0,  //histogramNormType
               0.2,  //L2HysThreshold
                 0,  //gamma correction
                64,  //nlevels=64
                 1); //signedGradient




void getSVMParams(SVM *svm)
{
	cout << "Kernel type     : " << svm->getKernelType() << endl;
	cout << "Type            : " << svm->getType() << endl;
	cout << "C               : " << svm->getC() << endl;
	cout << "Degree          : " << svm->getDegree() << endl;
	cout << "Nu              : " << svm->getNu() << endl;
	cout << "Gamma           : " << svm->getGamma() << endl;
}

void SVMtrain(Mat &trainMat, vector<int> &trainLabels, Mat &testResponse, Mat &testMat) {
	Ptr<SVM> svm = SVM::create();
	svm->setGamma(0.50625);
	svm->setC(100);
	svm->setKernel(SVM::RBF);
	svm->setType(SVM::C_SVC);
	Ptr<TrainData> td = TrainData::create(trainMat, ROW_SAMPLE, trainLabels);
	svm->train(td);
	//svm->trainAuto(td);
	svm->save("firetrain.yml");
	svm->predict(testMat, testResponse);
	getSVMParams(svm);

	/*
	Parameters that achieved a 100% rate on my data:
	Descriptor Size : 576
	Kernel type     : 2
	Type            : 100
	C               : 2.5
	Degree          : 0
	Nu              : 0
	Gamma           : 0.03375
	the accuracy is :100
	*/
}

void SVMevaluate(Mat &testResponse, float &count, float &accuracy, vector<int> &testLabels) {

	for (int i = 0; i < testResponse.rows; i++) {
		//cout << testResponse.at<float>(i,0) << " " << testLabels[i] << endl;
		if (testResponse.at<float>(i, 0) == testLabels[i]) {
			count = count + 1;
		}
	}
	accuracy = (count / testResponse.rows) * 100;
}

void computeHOG(vector<Mat> &inputCells, vector<vector<float> > &outputHOG) {

	for (int y = 0; y < (int)inputCells.size(); y++) {
		vector<float> descriptors;
		hog.compute(inputCells[y], descriptors);
		outputHOG.push_back(descriptors);
	}
}

void ConvertVectortoMatrix(vector<vector<float> > &ipHOG, Mat & opMat)
{
	int descriptor_size = ipHOG[0].size();
	for (int i = 0; i < (int)ipHOG.size(); i++) {
		for (int j = 0; j < descriptor_size; j++) {
			opMat.at<float>(i, j) = ipHOG[i][j];
		}
	}
}

int main(int argc, char ** argv)
{
	/**************** user code starts *******************/
	cout << " User code starts" << endl;
	DataSetManager dm;
	dm.addData("Positive", 1);  // positive train data
	dm.addData("Negative", -1); // negative train data
	// you can also provide a full path, e.g. "/home/pankaj/opencv/programs/udacity/carND/cardetection"
	cout << "Total data length : " << dm.getTotalDataNum() << endl;
	dm.distribute();
	dm.display();

	/*********** load the whole dataset into vectors of Mat *********/
	vector<Mat> trainCells;
	vector<Mat> testCells;
	vector<int> trainLabels;
	vector<int> testLabels;
	for (int i = 0; i < (int)dm.TrainData.size(); i++) {
		Mat img = imread(dm.TrainData[i].filename, IMREAD_GRAYSCALE);
		resize(img, img, Size(64, 64)); // make sure every sample matches the HOG window
		trainCells.push_back(img);
		trainLabels.push_back((int)dm.TrainData[i].label);
	}
	for (int i = 0; i < (int)dm.TestData.size(); i++) {
		Mat img = imread(dm.TestData[i].filename, IMREAD_GRAYSCALE);
		resize(img, img, Size(64, 64));
		testCells.push_back(img);
		testLabels.push_back((int)dm.TestData[i].label);
	}

	/******** compute the HOG descriptors ********************/
	std::vector<std::vector<float> > trainHOG;
	std::vector<std::vector<float> > testHOG;

	computeHOG(trainCells, trainHOG);
	computeHOG(testCells, testHOG);

	int descriptor_size = trainHOG[0].size();
	cout << "Descriptor Size : " << descriptor_size << endl;
	/******** HOG descriptor ends ****************************/

	/******** prepare trainData and testData and call the SVM ML algorithm *********/
	Mat trainMat(trainHOG.size(), descriptor_size, CV_32FC1);
	Mat testMat(testHOG.size(), descriptor_size, CV_32FC1);
	ConvertVectortoMatrix(trainHOG, trainMat);
	ConvertVectortoMatrix(testHOG, testMat);

	Mat testResponse;
	SVMtrain(trainMat, trainLabels, testResponse, testMat);

	float count = 0;
	float accuracy = 0;
	SVMevaluate(testResponse, count, accuracy, testLabels);

	cout << "the accuracy is :" << accuracy << endl;

	/**************** user code ends *******************/

	//waitKey(0);
	char ch;
	cin >> ch;
	return 0;
}

#DataSetManager.cpp

//DataSetManager.cpp
#include <iostream>
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include "DataSetManager.h"
#include <opencv2/opencv.hpp>

using namespace cv;
using std::cout;
using std::endl;
using std::string;

#define EN_DEBUG

//constructor
DataSetManager::DataSetManager() :testDataPercent(20), validationDataPercent(0), totalDataNum(0), totalTrainDataNum(0), totalTestDataNum(0) {
	// default parameter initialization here
}
// setter and getter methods

void DataSetManager::setTestDataPercent(float num) { testDataPercent = num; }
void DataSetManager::setValidationDataPercent(float num) { validationDataPercent = num; }
int DataSetManager::getTotalDataNum() { return totalDataNum; }
int DataSetManager::getTotalTrainDataNum() { return totalTrainDataNum; }
int DataSetManager::getTotalTestDataNum() { return totalTestDataNum; }

//main functions
void DataSetManager::addData(std::string folderName, int classlabel) {
	// notice here that we are using OpenCV's embedded "String" class
	std::vector<cv::String> filenames;
	cv::String folder = folderName.c_str(); // converting from std::string -> cv::String
	cv::glob(folder, filenames);
	// for each of these filenames, append them into the DataSet structure with labels
	for (size_t i = 0; i < filenames.size(); ++i) {
		DataSet tempDataset;
		tempDataset.filename = static_cast<std::string>(filenames[i]);
		tempDataset.label = classlabel;
		dataList.push_back(tempDataset);
	}
	totalDataNum = totalDataNum + filenames.size();
}

// the distribute function distributes the whole data into training data and test data.
void DataSetManager::distribute() {
	if (totalDataNum == 0) return;
	int n_test_valid = static_cast<int>(
		(validationDataPercent*totalDataNum / 100) + (testDataPercent*totalDataNum / 100));
	//cout<<" n_test_valid == "<< n_test_valid << endl;
	std::vector<int> rndIndex;
	std::vector<int>::iterator it;
	DataSet tempDataset;
	int counter = 0;
	std::srand(static_cast<unsigned>(std::time(0)));
	// pick n_test_valid unique random indexes for the test set
	while (counter < n_test_valid) {
		int num = std::rand() % totalDataNum;
		it = std::find(rndIndex.begin(), rndIndex.end(), num);
		if (it != rndIndex.end()) continue; // index already picked, try again
		rndIndex.push_back(num);
		tempDataset.filename = static_cast<std::string>(dataList[num].filename);
		tempDataset.label = dataList[num].label;
		TestData.push_back(tempDataset);
		counter++;
	}
	std::sort(rndIndex.begin(), rndIndex.end()); //sort in ascending order
#ifdef EN_DEBUG
	cout << "sortedIndexes: " << endl;
	for (std::vector<int>::iterator it = rndIndex.begin(); it != rndIndex.end(); ++it)
		cout << " " << *it << endl;
	cout << endl;
#endif

	// now fill the TrainData; only exclude the indexes that went to TestData
	int curIdx = 0;
	for (int i = 0; i < totalDataNum; i++) {
		int current = rndIndex.empty() ? -1 : rndIndex[curIdx];
		if (current != i) {
			tempDataset.filename = static_cast<std::string>(dataList[i].filename);
			tempDataset.label = dataList[i].label;
			TrainData.push_back(tempDataset);
		}
		else if ((current == i) && (curIdx < n_test_valid - 1)) {
			curIdx++;
		}
	}
	totalTrainDataNum = (int)TrainData.size();
	totalTestDataNum = (int)TestData.size();
}

// minimal debug display of the distributed sets
void DataSetManager::display() {
	cout << "TrainData size : " << TrainData.size()
	     << "  TestData size : " << TestData.size() << endl;
}

#DataSetManager.h

#ifndef _DATASETMANAGER_H
#define _DATASETMANAGER_H

#include <string>
#include <vector>

struct DataSet {
	std::string filename;
	float label;
};

class DataSetManager
{
private:
	// user defined data member
	float testDataPercent;
	float validationDataPercent;

	// derived or internally calculated
	int totalDataNum;
	int totalTrainDataNum;
	int totalTestDataNum;
	int totalValidationDataNum;

public:
	//constructor
	DataSetManager();

	// setter and getter methods
	void setTestDataPercent(float num);
	void setValidationDataPercent(float num);

	int getTotalDataNum();
	int getTotalTrainDataNum();
	int getTotalTestDataNum();
	int getTotalValidationDataNum();

	// primary functions of the class
	void addData(std::string folderName, int classlabel);
	void read();
	void display();// displays the read file names for debugging
	void distribute();
	// ideally these are private; need to update
	std::vector<DataSet> dataList;
	std::vector<DataSet> TrainData;
	std::vector<DataSet> TestData;
	std::vector<DataSet> ValidationData;
};

#endif

#DetectCode.cpp

//detectcar.cpp
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <cstring>

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

#define WINDOW_NAME "WINDOW"

#define TRAFFIC_VIDEO_FILE "7.avi"
#define TRAINED_SVM "firetrain.yml"
#define	IMAGE_SIZE Size(40,40)
#define save_video true
#define OUT_Video_File "march323230_project_video.avi"

using namespace cv;
using namespace cv::ml;
using namespace std;

bool file_exists(const string &file);

void draw_locations(Mat & img, const vector< Rect > & locations, const Scalar & color);

void readImage(string imgname, Mat & im);
//void test_image(Mat & img, const Size & size);
void test_video(const Size & size);
bool checkIfpatchIsVehicle(Mat & img2check);

void printHOGParams(HOGDescriptor &hog)
{
	cout << "HOG descriptor size is " << hog.getDescriptorSize() << endl;
	cout << "hog.windowSize: " << hog.winSize << endl;
	cout << " cellsize " << hog.cellSize << endl;
	cout << " hog.nbins " << hog.nbins << endl;
	cout << " blockSize " << hog.blockSize << endl;
	cout << " blockStride " << hog.blockStride << endl;
	cout << " hog.nlevels " << hog.nlevels << endl;
	cout << " hog.winSigma " << hog.winSigma << endl;
	cout << " hog.free_coef  " << hog.free_coef << endl;
	cout << " hog.DEFAULT_NLEVELS " << hog.DEFAULT_NLEVELS << endl;

}


int main(int argc, char** argv)
{
	test_video(IMAGE_SIZE);
	return 0;
}

void readImage(string imgname, Mat & im) {

	im = imread(imgname, IMREAD_COLOR);
	if (im.empty())
	{
		cout << " Invalid image read, imgname = argv[1] = " << imgname << endl;
		CV_Assert(false);
	}
	//cout << "****** successfully read image, imgname = " << imgname << endl;
}

void get_svm_detector(const Ptr<SVM>& svm, vector< float > & hog_detector)
{
	// get the support vectors
	Mat sv = svm->getSupportVectors();
	const int sv_total = sv.rows;
	// get the decision function
	Mat alpha, svidx;
	double rho = svm->getDecisionFunction(0, alpha, svidx);
	cout << "alpha = " << alpha << endl;
	//CV_Assert(alpha.total() == 1 && svidx.total() == 1 && sv_total == 1);
	//CV_Assert((alpha.type() == CV_64F && alpha.at<double>(0) == 1.) ||
	//	(alpha.type() == CV_32F && alpha.at<float>(0) == 1.f));
	//CV_Assert(sv.type() == CV_32F);
	hog_detector.clear();

	hog_detector.resize(sv.cols + 1);
	memcpy(&hog_detector[0], sv.ptr(), sv.cols * sizeof(hog_detector[0]));
	hog_detector[sv.cols] = (float)-rho;
}

void draw_locations(Mat & img, const vector< Rect > & locations, const Scalar & color)
{
	if (!locations.empty())
	{
		vector< Rect >::const_iterator loc = locations.begin();
		vector< Rect >::const_iterator end = locations.end();
		for (; loc != end; ++loc)
		{
			rectangle(img, *loc, color, 2);
		}
	}
}

void test_video(const Size & size)
{
	char key = 27;
	Mat img, draw;
	Ptr<SVM> svm;
	HOGDescriptor hog;
	hog.winSize = size;
	vector< Rect > locations;
	vector< Rect > found_filtered;

	// Load the trained SVM.
	svm = StatModel::load<SVM>(TRAINED_SVM);
	// Set the trained svm to my_hog
	vector< float > hog_detector;
	get_svm_detector(svm, hog_detector);
	hog.setSVMDetector(hog_detector);
	printHOGParams(hog);

	VideoCapture video;
	// Open the video file.
	video.open(TRAFFIC_VIDEO_FILE);
	if (!video.isOpened())
	{
		cerr << "Unable to open the device" << endl;
		exit(-1);
	}
	// Get the frame rate
	double rate = video.get(CV_CAP_PROP_FPS);
	cout << " Frame rate : " << rate << endl;
	cout << " Input video codec :" << video.get(CV_CAP_PROP_FOURCC);
	// initilaize the video writer object to write the video output
	std::string outputFile(OUT_Video_File);
	VideoWriter writer;
	int codec = static_cast<int>(video.get(CV_CAP_PROP_FOURCC));
	//int codec = CV_FOURCC('M', 'J', 'P', 'G');
	bool isWriterInitialized = false;

	int num_of_vehicles = 0;
	bool end_of_process = false;
	while (!end_of_process)
	{
		video >> img;
		if (img.empty())
			break;


		draw = img.clone();
		Mat cropped;
		cv::resize(draw, cropped, Size(720, 560));

		Mat temp, temp3;
		cvtColor(cropped, temp, COLOR_BGR2GRAY);
		/*Mat bgr[3];   //destination array
		split(temp3,bgr);//split source
		temp = bgr[0]+bgr[2];
		*/
		if (!isWriterInitialized) {
			//execute only once
			isWriterInitialized = true;
			/*writer.open(outputFile,
			capture.get(CV_CAP_PROP_FOURCC),
			capture.get(CV_CAP_PROP_FPS),
			Size(capture.get(CV_CAP_PROP_FRAME_WIDTH),capture.get(CV_CAP_PROP_FRAME_HEIGHT)),
			true);*/
			writer.open(outputFile, codec, rate, cropped.size(), true);
		}


		locations.clear();
		// Rect(x,y,w,h) w->width=cols; h->rows
		// first remove the upper 50% of the height. Original cropped size = (720,560) = (cols,rows)
		Mat roi = temp(Rect(0, temp.rows*0.5, temp.cols, temp.rows - temp.rows*0.5));
		// size(roi) = (720,280)
		// also cut the bottom 100 rows (dashboard/bonnet) -> fewer false positives
		roi = roi(Rect(0, 0, roi.cols, roi.rows - 100));

		// multi-scale sliding-window detection on the ROI
		hog.detectMultiScale(roi, locations);

		std::vector<Rect>::iterator it = locations.begin();
		std::vector<Rect>::iterator itend = locations.end();
		vector<Rect> actuallocations;
		bool isVehicle = false;
		for (; it != itend; it++)
		{
			Rect current = *it;
			//cout << " Rect current = " << current << endl;
			// map the ROI coordinates back onto the cropped frame
			current.y += (int)(temp.rows*0.5);
			// filter the detected points for false positives: template matching
			// only on the detected subsection
			Mat patch = temp(current);
			isVehicle = checkIfpatchIsVehicle(patch);
			if (isVehicle)
			{
				actuallocations.push_back(current);
				num_of_vehicles++;
			}
		}

		draw_locations(cropped, actuallocations, Scalar(0, 255, 0));
		imshow(WINDOW_NAME, cropped);
		if (save_video)
			writer.write(cropped);

		key = (char)waitKey(10);
		if (key == 27) // press ESC to exit
			end_of_process = true;
	}
	cout << " Total vehicle detections : " << num_of_vehicles << endl;
}

bool checkIfpatchIsVehicle(Mat & img2check)
{
	bool isVehicle = false;
	vector<string> templateList;
	templateList.push_back("temp1.png"); templateList.push_back("temp2.png");
	templateList.push_back("temp3.png"); templateList.push_back("temp4.png");
	templateList.push_back("temp5.png"); templateList.push_back("temp6.png");
	templateList.push_back("temp7.png");
	//templateList.push_back("temp8.png");
	//templateList.push_back("temp9.png"); templateList.push_back("temp10.png");

	Mat matchScore = Mat::zeros(7, 1, CV_32FC1);

	for (int ii = 0; ii < (int)templateList.size(); ii++)
	{
		Mat vehicleTemplate = imread(templateList[ii], IMREAD_GRAYSCALE);
		// skip templates that are missing or larger than the patch
		if (vehicleTemplate.empty() || vehicleTemplate.rows > img2check.rows
			|| vehicleTemplate.cols > img2check.cols)
			continue;
		Mat result;
		matchTemplate(img2check, vehicleTemplate, result, TM_CCOEFF_NORMED);
		double minVal, maxVal;
		Point minLoc, maxLoc;
		minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
		matchScore.at<float>(ii, 0) = (float)maxVal;
		if (maxVal > 0.15)
			isVehicle = true;
	}
	//cout << "MatchScore = " << matchScore << endl;
	return isVehicle;
}

OpenCV 3.3.1 vs 3.3.0 Changes (Release Notes)
Wed, 15 Nov 2017

Hello Friends,

Due to a lot of confusion out there as to which build to choose and which one is better, I have created this blog post.
If you would like instructions on how to build OpenCV from scratch, refer to this article: /step-by-step-install-opencv-3-3-with-visual-studio-2015-on-windows-10-x64-2017-diy/

OK, so what has changed between OpenCV 3.3.1 and OpenCV 3.3.0? Let's have a look.

We’ll first look at the changes that went into 3.3.0:

OpenCV Version 3.3.0

=> opencv_dnn module has been moved from the contribution repository (opencv_contrib) to the main repository (opencv) and was significantly improved:

  • High-level API has been modified and is even more convenient now.
  • The regression tests have been expanded, some new tests have been added. Now, there are 46 of them.
  • Many bugs have been fixed in the Torch and TF loaders, as well as in some processing layers. We now check that, on a certain set of networks, the results from OpenCV DNN match or are very close to the results from the original frameworks. We also check that the results claimed in the papers for such networks are achievable with OpenCV DNN.
  • Performance has been substantially improved. Layer fusion has been implemented and some performance-critical layers have been optimized using AVX, AVX2, SSE and NEON. An external BLAS (OpenBLAS, MKL, ATLAS) is not needed anymore.
  • New samples in C++ and Python have been added.

  • The optional Halide backend has been added. It can accelerate OpenCV DNN on a GPU, when the GPU is fast enough.
  • Upgraded IPPICV from the 2015.12 to the 2017.2 version, bringing ~15% speed improvement into the core and imgproc modules (measured as the geometric mean over the corresponding performance tests).
  • Dynamic dispatching of SSE4.2/AVX/AVX2 code has been implemented. Previously, OpenCV had to be built with SSE4.x/AVX/AVX2 turned on in order to use such optimizations, and that made it incompatible with older hardware. Now the OpenCV binaries automatically adapt to the real hardware and make use of new instructions if they are available, while retaining compatibility with older hardware. All the existing AVX/AVX2 optimizations in OpenCV have been refactored to use this technology. AVX acceleration of DNN also uses dynamic dispatching.

  • OpenCV can now be configured and built as a C++11 library. Pass -DENABLE_CXX11=ON to CMake (e.g. cmake -DENABLE_CXX11=ON <opencv-source-dir>). On some modern Linux distributions, like the latest Fedora, it's enabled by default.

  • Support for hardware-accelerated video encoding/decoding using Intel GPUs through Intel Media SDK has been implemented for Linux (in the form of backends for cv::VideoCapture and cv::VideoWriter).

    • Encoding and decoding of raw H.264 and MPEG1/2 video streams is supported; media containers are not supported yet.
    • Note that the system kernel should have specific support for the hardware, as mentioned in the Media SDK/Server Studio installation guide. In some cases kernel recompilation will be needed.

OpenCV Version 3.3.1

Results of several GSoC 2017 projects have been integrated:

  • multi-language (e.g. C++/Python/Java) tutorials by João Cartucho, mentored by Vincent Rabaud
  • AKAZE acceleration by Jiri Horner, mentored by Bence Magyar
  • End-to-end text detection and recognition by Suman Kumar Ghosh, mentored by Prasanna Krishnasamy

One GSoC 2017 project deserves a dedicated section in the change log:

  • Javascript interface to OpenCV (via Emscripten technology) and interactive Web-based OpenCV tutorials by Gang Song and Congxiang Pan. This small yet powerful team was supervised by Sajjad Taheri, Ningxin Hu and Mohammad R Haghighat.

opencv_dnn has been further improved and extended; new samples have been added:

  • A face detection sample and the light-weight Resnet-10 + SSD based network have been added. See the example for details. The detector runs at around 20-50 FPS on a normal desktop/laptop, and the network is just 10MB (FP32) or even 5MB (FP16).
  • The partial Darknet parser, enough to parse YOLO models, as well as the layers to support a few variations of YOLO object detection networks have been integrated. See the corresponding sample.
  • Preliminary support for FP16 networks has been added. We do not do computations in FP16 yet, we convert FP16 coeffs to FP32 when loading the networks. In the case of Caffe we rely on the following fork, whereas in the case of TF we use the official version.
  • Several new layers have been added to support text detection, image colorization and some other networks.

  • OpenCV has been optimised for PPC64 (64-bit PowerPC) architecture by mapping the universal intrinsics to VSX. Big thanks to Sayed Adel for the patches.
  • OpenCL acceleration path of the bioinspired module has been restored. See the bioinspired-based HDR/Background segmentation example. On Iris Pro HD5200 we get ~5x acceleration over the CPU branch.
  • The KCF tracker has been accelerated by ~40%.

  • Hardware-accelerated video encoding/decoding via MediaSDK is now available on Windows too.


Summary

All in all, OpenCV 3.3.1 adds improvements to text detection, face detection, the KCF tracker, and hardware-accelerated video coding.

Hope this helps

Turn your Raspberry Pi into old school retro gaming console using RetroPie (Windows & MAC)
Wed, 15 Nov 2017

Hello Friends,

This is one of my favorite posts of this season, not only because it shows how a modern microprocessor-based system can let you enjoy vintage gaming, but because it proves how much we still love old school gaming.
In this tutorial, we are going to set up our very own retro gaming console, but this time we are not going to buy one of those fat a$$ consoles from the 90's; instead we'll use a small, modern, microprocessor-based Raspberry Pi to set up the system.

Following is the Inventory List:

MicroSD card, 16GB | 32GB × 1
HDMI cable × 1
USB gamepad × 1
MicroSD card reader × 1
USB keyboard × 1
Raspberry Pi 3 × 1

Here’s how you start:

Step #1 : Download the Retro-Pie SDCard Image

For those who don’t know, RetroPie is a software package for the Raspberry Pi that is based on Raspbian OS, a Linux distribution. It combines a full suite of tools and utilities that will allow you to quickly and easily run ROMs for various vintage gaming platforms. We’re going to do our install using an SD card image — essentially a snapshot of an entire working installation of RetroPie. This makes it really easy to get up and running.

Because the Raspberry Pi doesn’t have an internal hard drive, it uses a microSD card for storage of the entire operating system and all files contained therein.

Download and unzip the latest RetroPie SD-Card Image. There are two versions of the RetroPie SD-Card Image:

  • One for the Raspberry Pi Zero, Zero W, A, B, A+ and B+
  • One for the Raspberry Pi 2 and 3

Select the appropriate image for your Pi.

Download the RetroPie SD-card image

Step #2 : Format and ready your SDCard

First, you’ll need to format the SD card as FAT. Insert the SD card into your SD card reader. Your SD card will now show up as a mounted drive on your computer.

Format Type

If your SD card is 32GB or smaller, we’ll format it as MS-DOS (FAT). If your SD card is 64GB or larger, we’ll format it as ExFAT.

Formatting on Windows

Open up Explorer, locate the SD card, right-click it, and select Format from the context menu. Select the desired format and click the Start button.

Formatting on Mac

Open Disk Utility by navigating to Applications > Utilities > Disk Utility. Select your SD card in the left pane. Click the Erase button, select the desired format, give it a name, and click the Erase button. For OS X Yosemite and older, you’ll need to navigate to the Erase tab first.

Format your SD card to work with Raspberry Pi

Step#3 (a) : Install the IMAGE (MAC OS)

To do this, we’ll use a third-party utility called ApplePi-Baker. Download the most recent version and open the application. ApplePi-Baker requires SUDO (admin) access in order to read/write to your SD card. Therefore, you will be prompted to enter your Mac account password.

After opening the application, select your SD card in the left hand column. Then, click the “Restore Backup” button and select the (unzipped) RetroPie SD-Card Image (.IMG file) that you downloaded earlier.

If you see a message stating “ApplePi-Baker.app can’t be opened because it is from an unidentified developer” when you first open ApplePi-Baker, close the message, navigate to System Preferences > Security & Privacy, and allow apps downloaded from anywhere. Or, click “Open anyways” in this pane.

Install the RetroPie image (using a Mac)

Step#3 (b) : Install the IMAGE (Windows OS)
Download and install the Win32DiskImager utility. Follow the instructions here and select the (unzipped) RetroPie SD-Card Image (.IMG file) that you downloaded earlier.

Step #4: Connect the SDCard with Raspberry PI

Safely eject the SD card and slide it into your Raspberry Pi.

Plug in your keyboard, USB game controller, and HDMI cable. Connect the HDMI cable to a monitor or TV. It’s also possible to configure your Pi without a monitor or keyboard if that’s more convenient for you at this point. This is known as “headless” mode.

Finally, connect the MicroUSB power supply. Always connect the power supply after connecting your other peripherals so that your Pi will detect all of the peripherals properly on boot.

Your Pi will now boot!

Put the SD card into your Raspberry Pi and connect your peripherals

Step #5 : Connect to Internet on Pi

You’ll need to connect your Pi to the Internet in order to add game ROMs (more on that later) and access additional RetroPie features such as game rating/description scraping.

Note: This step is only required if you want to access these additional features or transfer ROMs over your network. If you have a Pi Zero and don’t want to add WiFi, you can also transfer ROMs via USB. If you’re using a Pi Zero W, which has onboard WiFi, you’re already ready to connect to the internet!

There are a few ways to add internet functionality to your Pi:

Ethernet (CAT5) Cable

If you have easy access to your router, you can simply connect your Pi using an Ethernet cable.

Built-in WiFi

Only the Raspberry Pi 3 and Pi Zero Wireless have built-in WiFi.

USB WiFi dongle

You can find a USB WiFi adapter super cheap on Amazon.

RetroPie WiFi Setup

If using one of the WiFi options above: After connecting all your peripherals and booting up your Pi, select the RetroPie menu icon and then select WIFI.

Connect your Pi to the Internet

Step #6 : Expand SD Card Filesystem to get complete Space

If your SD card is larger than 4GB, you must expand it before your Pi can use the remaining space. To do this, you’ll need to launch the Raspberry Pi configuration tool (raspi-config).

You can either press F4 to exit the RetroPie UI and get back to the shell (i.e. command line), enter the following and press enter:

sudo raspi-config

Or, you can use the Retropie interface to do this. Select the RetroPie menu icon and then select RASPI-CONFIG.

Then, choose either Expand Filesystem or expand_rootfs from the menu (this option will vary based on your Raspberry Pi version). You now need to restart your Pi. You may have noticed there’s no reset button (unless you’ve added one).

To safely reboot your Raspberry Pi, use the following Pi reboot command after pressing F4 to return to the shell:

sudo reboot

After your Pi reboots, we want to make sure that all packages are up to date. Press F4 to get back to the shell/command line, and run the following commands:

sudo apt-get update
sudo apt-get upgrade

Reboot your Pi once more.

Expand your SD card to utilize all usable space

Step #7 : Connect to PI

We now need to connect to your Raspberry Pi from your computer so that we can copy over game ROMs and easily edit configuration files.

Again, this step is optional as you can also transfer ROMs via USB and accessing your configuration and other additional features isn’t strictly required.

There are numerous ways to do this; my favorite method is via SSH/SFTP using an FTP client. As far as free FTP clients go, I recommend FileZilla since it’s very well documented and supported and is available for both Mac and Windows.

Download FileZilla from their downloads page and install it. I recommend you uncheck all the “additional components” that FileZilla will ask you to install, such as the Yahoo search page and toolbar crap.

Note: As of the latest version of Raspbian Jessie, SSH is disabled by default for security purposes; you will need to enable SSH on your Pi before proceeding. Thankfully, this process is super easy and painless.

Use the following credentials to connect to your Pi. The default Pi username and password are pi and raspberry, respectively.

Host: your Pi's IP address (see below)
Username: pi
Password: raspberry
Port: 22

For security purposes, I highly recommend you change the default Raspberry Pi password to something else. It only takes a minute.

To find your Pi’s IP, open Terminal (Mac) or Command Prompt (Windows) and enter the following command to ping your Pi and return its network IP:

ping raspberrypi

or, for newer versions of RetroPie, use:

ping retropie

It may take a few tries to get a response. If you see a “Request timeout” response when you run the ping command, then the command has failed. Instead, boot up your Pi, press F4 to get to the shell, and run the following command:

ifconfig

This alternate method will list your Pi's IP immediately after inet addr: under eth0.

Step #8 : Configuring Controller

You’ll now want to configure your USB gamepad to work with your Pi. I recommend the Buffalo Classic USB Gamepad since it’s inexpensive, highly compatible with the Pi, and comes in sweet Japanese packaging. You can find an Amazon link to that controller at the top of this guide.

To configure your controller to work with the menu system and games, boot up your Pi. Your Pi will automatically launch the RetroPie UI where you will be prompted to configure the controller. If you mess up, don’t worry — you can access this configuration menu again later by pressing Start in the RetroPie UI or by typing F4 on your keyboard and then rebooting your Pi.

Configuring your controller

Step #9: Finding ROM’s

A ROM is a complete copy of a particular video game. RetroPie contains a copy of EmulationStation, which both provides the user interface for your new retro gaming rig and interprets these ROMs appropriately. RetroPie comes with a few games preinstalled, such as Quake, Duke Nukem 3D, and Cave Story. These games are best played using a keyboard, however, since the gamepad doesn't have enough buttons to map the controls for some PC-ported games.

A Legal Note

Most vintage games are owned by a company (yes, even the very old ones!) and are protected by copyright laws. Thus, unfortunately, downloading ROMs for those games constitutes piracy.

While you can find tons of ROMs on any Torrent site, keep in mind that you should not download any copyrighted titles.

Free ROMs

Luckily, there are some free ROMs out there that we can use for now! MAMEdev.org has a nice list of these free, legal ROMs. We’ll use these as examples and you can find more ROMs on your own.

Let’s use Gridlee and Super Tank as examples. Download each ROM.

Finding game ROMs

Step #10: Installing ROMs

ROMs can be installed via SSH/SFTP (over your network) or via a USB thumb drive. Additional methods for copying ROMs to RetroPie can be found on the RetroPie Wiki.

I wrote a separate guide on installing RetroPie ROMs using a USB drive. Or, if your Pi is connected to the internet, you can use the instructions below.

Reconnect FileZilla and browse to the following directory:

/home/pi/RetroPie/roms

Unzip each game ROM and upload each game folder into its respective game system folder. For example, if you had a Super Mario Bros 3 ROM, you would upload the game’s folder into the “nes” directory.

Gridlee and Super Tank go in the “mame” directory since MAME handles the arcade emulation for most vintage arcade-style games that don’t belong to a specific home video game system such as the NES, SNES or Atari.

After you’ve copied these directories over, restart your Pi.

Installing game ROMs

Step #11 : Ready for the Vintage Gaming Experience

Your Pi will boot into RetroPie automatically. Bask in the glory of simple graphics, bolstered by highly addictive gameplay.

Note:
Cave Story is actually a pretty sweet game.

You're ready to play!

Additional Steps:

Step#12: Exiting
To exit a game, press the START and SELECT buttons at the same time. This will bring you back to the RetroPie UI.

OpenCV Real-time Graph Plot using Matplotlib or Python-Drawnow
Mon, 13 Nov 2017

Hello Friends,

In this tutorial, we'll see how we can use Matplotlib to generate live graphs for OpenCV processing. We'll look at at least three ways of doing this, with the relevant source code for each.

Method #1 : A simple one
Here’s the first working version of the code (requires at least version Matplotlib 1.1.0 from 2011-11-14):

import numpy as np
import matplotlib.pyplot as plt

plt.axis([0, 10, 0, 1])
plt.ion()

for i in range(10):
    y = np.random.random()
    plt.scatter(i, y)
    plt.pause(0.05)

while True:
    plt.pause(0.05)

Note some of the changes:

Call plt.ion() to enable interactive plotting; with it you no longer need plt.show(block=False).
Call plt.pause(0.05) to both draw the new data and run the GUI's event loop (allowing for mouse interaction).
The while loop at the end keeps the window up after all data is plotted.

Method #2: Real-time plot with Matplotlib Animation API
If you’re interested in realtime plotting, I’d recommend looking into matplotlib’s animation API.
In particular, using blit to avoid redrawing the background on every frame can give you substantial speed gains (~10x):

#!/usr/bin/env python
# NOTE: this example uses Python 2 syntax (xrange, print statement,
# generator .next()) and the GTKAgg backend; adapt both for Python 3.

import numpy as np
import time
import matplotlib
matplotlib.use('GTKAgg')
from matplotlib import pyplot as plt


def randomwalk(dims=(256, 256), n=20, sigma=5, alpha=0.95, seed=1):
    """ A simple random walk with memory """

    r, c = dims
    gen = np.random.RandomState(seed)
    pos = gen.rand(2, n) * ((r,), (c,))
    old_delta = gen.randn(2, n) * sigma

    while True:
        delta = (1. - alpha) * gen.randn(2, n) * sigma + alpha * old_delta
        pos += delta
        for ii in xrange(n):
            if not (0. <= pos[0, ii] < r):
                pos[0, ii] = abs(pos[0, ii] % r)
            if not (0. <= pos[1, ii] < c):
                pos[1, ii] = abs(pos[1, ii] % c)
        old_delta = delta
        yield pos


def run(niter=1000, doblit=True):
    """
    Display the simulation using matplotlib, optionally using blit for speed
    """

    fig, ax = plt.subplots(1, 1)
    ax.set_aspect('equal')
    ax.set_xlim(0, 255)
    ax.set_ylim(0, 255)
    ax.hold(True)
    rw = randomwalk()
    x, y = rw.next()

    plt.show(False)
    plt.draw()

    if doblit:
        # cache the background
        background = fig.canvas.copy_from_bbox(ax.bbox)

    points = ax.plot(x, y, 'o')[0]
    tic = time.time()

    for ii in xrange(niter):

        # update the xy data
        x, y = rw.next()
        points.set_data(x, y)

        if doblit:
            # restore background
            fig.canvas.restore_region(background)

            # redraw just the points
            ax.draw_artist(points)

            # fill in the axes rectangle
            fig.canvas.blit(ax.bbox)

        else:
            # redraw everything
            fig.canvas.draw()

    plt.close(fig)
    print "Blit = %s, average FPS: %.2f" % (
        str(doblit), niter / (time.time() - tic))

if __name__ == '__main__':
    run(doblit=False)
    run(doblit=True)

Output:

Blit = False, average FPS: 54.37
Blit = True, average FPS: 438.27

Method#3 : Without using MatplotLib
There is a package available called drawnow on GitHub as “python-drawnow”.
This provides an interface similar to MATLAB’s drawnow which allows you to easily update a figure.
python-drawnow is a thin wrapper around plt.draw but provides the ability to confirm (or debug) after figure display.

An example

import numpy as np
import matplotlib.pyplot as plt
from drawnow import drawnow

def make_fig():
    plt.scatter(x, y)  # re-draw the full scatter from the updated lists

plt.ion()  # enable interactivity
fig = plt.figure()  # make a figure

x = list()
y = list()

for i in range(1000):
    temp_y = np.random.random()
    x.append(i)
    y.append(temp_y)  # or any arbitrary update to your figure's data
    drawnow(make_fig)
Modern C++ libraries for your OpenCV toolbox
Wed, 25 Oct 2017

Hello Folks,

Following is my personal favorite list of libraries that I frequently use alongside OpenCV. Hope this will assist you in your time of need.
Keep it handy!!

Cross-platform libraries that are free for commercial (or non-commercial) applications are collected in the list linked below.

Links to additional lists of open source C++ libraries:

http://en.cppreference.com/w/cpp/links/libs

Usage of "Assert" in OpenCV World
Wed, 25 Oct 2017

Hello Folks,

Today, we'll learn about the most interesting ASSERT function in the OpenCV world.

assert will terminate the program (usually with a message quoting the assert statement) if its argument turns out to be false. It's commonly used during debugging to make the program fail more obviously if an unexpected condition occurs.

For example:

assert(length >= 0); // die if length is negative.

You can also add a more informative message to be displayed if it fails, like so:

assert(length >= 0 && "Whoops, length can't possibly be negative! (didn't we just check 10 lines ago?) Tell jsmith");

Or else like this:

assert(("Length can't possibly be negative! Tell jsmith", length >= 0));

When you’re doing a release (non-debug) build, you can also remove the overhead of evaluating assert statements by defining the NDEBUG macro, usually with a compiler switch. The corollary of this is that your program should never rely on the assert macro running.

// BAD
assert(x++);

// GOOD
assert(x);
x++;

// Watch out! Depends on the function:
assert(foo());

// Here’s a safer way:
int ret = foo();
assert(ret);

Because a failing assert ends with the program calling abort(), and because asserts are not guaranteed to do anything in release builds, they should only be used to test things that the developer has assumed, rather than things like the user entering a number instead of a letter (which should be handled by other means).
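Since this is the OpenCV world, note that OpenCV ships its own flavor, CV_Assert, which throws a cv::Exception instead of calling abort() and, unlike plain assert, stays active in release builds (CV_DbgAssert is the debug-only variant). A minimal sketch:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::Mat::zeros(480, 640, CV_8UC3);

    // Holds, so execution continues (even with NDEBUG defined).
    CV_Assert(!img.empty() && img.type() == CV_8UC3);

    try
    {
        CV_Assert(img.channels() == 1); // false: throws cv::Exception
    }
    catch (const cv::Exception &e)
    {
        std::cout << "Caught: " << e.what() << std::endl;
    }
    return 0;
}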

(COMPLETE, BEST & EASIEST) ESP8266-01 | ESP8266-12E/F Configuration Guide with Arduino Nano/Mega/UNO for IoT Projects (Includes all ESP Issues)
Thu, 19 Oct 2017

Hello Friends,

Have you been trying real hard to get your good old ESP8266-01 / 12E / 12F working with an Arduino UNO, NANO or Mega? Well, I bet you this guide will fix all your issues and you will be able to run it smoothly.

First things first: the ESP8266 is a great device at a cheap price if you would like to configure your projects for IoT, but getting it to work can be a problem. There are several issues that I have faced myself in getting it to work. Let's start by jotting down all the issues, and then their solutions afterwards.

Issues :

1) AT Commands won’t work
2) ESP8266 reboots indefinitely
3) TCP send doesn’t work
4) Flashing issues
5) Garbage text in Serial Monitor
6) Dependency on USB to TTL or FTDI Converter
7) No lights flashing in ESP
8) ADC PIN of ESP8266 doesn’t work accurately
9) Battery Input Voltage required for ESP8266 models
10) Basic Connection with Arduino
11) Flashing ESP8266 will remove AT? The big question
12) Unable to Flash ESP8266 from Arduino IDE
13) Meaning of Reset Codes and Boot Codes

If you face any of the above issues, this is the guide for you. Let’s start tackling each of the aforementioned issues one by one.

Issue #1 : ESP8266 AT commands don't work
There can be several reasons for this, but the most common one (and the one applicable if you don't see anything at all in the Serial Monitor when connected to the Arduino) is a WIRING issue, i.e. you haven't connected it to the Arduino correctly.
The best way in my experience to judge whether the wiring is correct: connect VCC and CH_PD to 3.3V (if using the ESP8266-12E/12F) or 5V (ESP8266-01; see Issue #9), connect GND to the ground of the Arduino and GPIO15 of the ESP8266-12E/12F to GND, then put the RX pin of the ESP8266 to the TX of the Arduino and the TX pin of the ESP8266 to the RX of the Arduino. Once you are done with these steps, simply power on the Arduino and open the Serial Monitor (try both 115200 and 9600 baud rates with NL/CR line endings). Now gently remove the RX pin connection from the Arduino and quickly put it back in. If you see garbage text, it means the connections are perfect. If you don't see anything, switch the RX and TX pins and repeat the steps. This should assist you in fixing the wiring connections.

Issue #2 : ESP8266 reboots indefinitely
If you have worked with the ESP8266 for a good amount of time after getting past the wiring and other issues, you have undoubtedly experienced the endless resets on power-up. The looping message occurs at about 5 second intervals, which seems to be the default internal watchdog timer time-out period. The message, at 115200 baud, looks something like this:

/////////////////////////////////////////////////
ets Jan 8 2013,rst cause:4, boot mode:(3,7)
wdt reset
load 0x40100000, len 30000, room 16
tail 0
chksum 0x67
load 0x3ffe8000, len 2556, room 8
tail 4
chksum 0xb7
load 0x3ffe8a00, len 3080, room 4
tail 4
chksum 0x59
csum 0x59
/////////////////////////////////////////////////

From my experience, there are two main causes of this endless reboot loop issue:
1) Inadequate Power Supply
2) Flash Errors

For #1: You need to add some components to your wiring scheme. You must include the following three items on the power-supply side of the ESP8266.
a) Sufficient current. A regulated 3.3V source of at least 500mA is essential. Aside from the 300mA peak current needs of the ESP8266, it is essential to also consider the current requirements of the other components in your circuit, like sensors and controls. Generally, speaking from my experience, this shouldn't be an issue if you are using an Arduino.
b) A large capacitor (I suggest 100uF, 220uF or 470uF) across the VCC and GND rails on your breadboard or PCB is a vital ingredient that will minimize reset-inducing voltage fluctuations. So, how do you do this? Take the VCC and GND from the Arduino to a breadboard; next, connect the large capacitor with its long leg to VCC and its short leg to GND, and then connect all the respective pins of the ESP8266 to the same rails.
c) A small 0.1uF decoupling capacitor across the ESP8266's VCC and GND inputs, very close to the pins. If you are good with soldering, I would suggest soldering this capacitor directly to the VCC and GND of the ESP8266. If you can't solder, then put it between the VCC and GND rails. DO NOT SKIP THIS COMPONENT! This cheap yet often overlooked component, when missing, is the root cause of ESP8266 resets. I myself got relief only after putting this component in.

For #2: If you have already completed the above steps and the reboots persist, then you know the issue you are dealing with: a FLASH error.
Try to flash the chip again with a stable firmware and see if that fixes your issue.

Issue #3 : TCP send doesn't work
This is a simple one to tackle. Just check that your TCP address is correct and that the ESP8266 is successfully connected to your modem/router. Try some well-known global addresses to verify connectivity. Also, pay attention to the correct command syntax.

Issue #4 : Flashing issues
Most flashing issues are due to power supply problems. Follow the Issue #2 recommendations and that should provide you with relief.
In case you still have issues after putting all the power controls in place, try a USB-to-TTL or FTDI converter to see if that makes any difference.

Issue #5 : Garbage text in Serial Monitor
This is again a very common error, and surprisingly it is mainly caused by one of two things:
a) Inadequate power supply
b) Flash errors

For both, follow the recommendations under Issue #2 to get relief.

Issue #6 : Dependency on a USB-to-TTL or FTDI converter
I would count this as a dependency only if you want to flash your chip; other than that, the Arduino can handle all sorts of connections and program flashing.

Issue #7 : No lights flashing on the ESP
If you are using the ESP8266-01, the lights indeed flash, but on the ESP8266-12E/12F models the LED only flashes once per power-on and won't light up on subsequent requests. The only other case where the LED flashes is while flashing or writing a program from the Arduino IDE.

Issue #8 : ADC PIN of ESP8266 doesn’t work accurately
ESP8266 has a single ADC channel available to users. It may be used either to read voltage at ADC pin, or to read module supply voltage (VCC).
To read external voltage applied to ADC pin, use analogRead(A0). Input voltage range is 0 — 1.0V.
To read VCC voltage, ADC pin must be kept unconnected. Additionally, the following line has to be added to the sketch:
ADC_MODE(ADC_VCC);
This line has to appear outside of any functions, for instance right after the #include lines of your sketch.

I have struggled myself to get this working. It seems it is not reliable to use the ADC pin of the ESP8266, for two reasons:
1) 90% of the time, you won't get accurate results
2) the voltage cap is very low (readings under 1 volt only)
An alternate solution is using an ADS1115 with the digital pins of the ESP8266.
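For completeness, here is a minimal Arduino-core sketch of the VCC-reading mode described above (the baud rate and the one-second delay are arbitrary choices):

// Minimal ESP8266 Arduino-core sketch: read the module supply voltage.
// ADC_MODE must appear at global scope, outside of any function.
ADC_MODE(ADC_VCC); // reroutes the ADC to VCC; keep the ADC pin unconnected

void setup() {
  Serial.begin(115200);
}

void loop() {
  // ESP.getVcc() returns the supply voltage in millivolts.
  Serial.print("VCC (mV): ");
  Serial.println(ESP.getVcc());
  delay(1000);
}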

Issue #9 : Battery input voltage required for ESP8266 models
A very tricky question; you will find tons of articles online saying use this or that, "it smoked my ESP8266", etc.
In my experience (and don't quote me if anything goes wrong), I have successfully used 5V with the ESP8266-01, as it never worked from the Arduino's 3.3V rail. The ESP8266-12E/12F models have worked successfully for me on 3.3V.

Issue #10 : Basic Connection with Arduino
I think we have already covered this point by now in this post, but to revise it again.

Pin VCC, E/CHPD(E in ESP12 and CHPD in ESP01) to VCC of Arduino
Pin GND to GND of Arduino
Pin GPIO15(only 12E/12F) to GND of arduino
Pin RX to TX of Arduino
Pin TX to RX of Arduino
Pin GPIO0 to GND of Arduino (only in case of Flashing either using Flash Software or Arduino Sketch)

Issue #11 : Will flashing the ESP8266 remove AT? The big question
The answer is sad and simple: YES. Flashing ESP chips with anything other than their original firmware removes access to the AT commands.
This is because when you flash the ESP chips from an Arduino sketch, the flash is written at address 0x00000, which is exactly where the firmware contents that provide the AT commands are stored.
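If you ever want the AT commands back, you can re-flash the original AT firmware at that same address. As a rough sketch, assuming you have Espressif's esptool installed and an AT firmware binary downloaded (the serial port COM3 and the file name below are placeholders for your own setup), the command looks like: esptool.py --port COM3 write_flash 0x00000 esp8266_at_firmware.bin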

Issue #12 : Unable to flash the ESP8266 from the Arduino IDE
Make sure that you have grounded the GPIO0 pin on the ESP. If your other connections are perfect, this is the only reason flashing would not work.

Issue #13 : Meaning of reset codes and boot codes
Following are the meanings of the reset cause and boot device codes that the ESP8266 prints at startup:

reset causes:
0:
1: normal boot
2: reset pin
3: software reset
4: watchdog reset

boot device:
0:
1: ram
3: flash

Hope this helps.
Enjoy!!

How to Easily Install OpenCV in Android Studio?
Tue, 17 Oct 2017

Hello Friends,

A small and sweet post in between all those C, C++ and Python OpenCV tutorials. I have received several emails from folks who were interested in knowing how to easily install/integrate OpenCV with Android Studio.

Below are the steps for using the Android OpenCV SDK in Android Studio.

  1. Download the latest OpenCV SDK for Android from OpenCV.org and decompress the zip file.
  2. Import OpenCV into Android Studio: from File -> New -> Import Module, choose the sdk/java folder in the unzipped OpenCV archive.
  3. Update build.gradle under the imported OpenCV module to match 4 fields of your project's build.gradle: a) compileSdkVersion b) buildToolsVersion c) minSdkVersion and d) targetSdkVersion.
  4. Add the module dependency via Application -> Module Settings, and select the Dependencies tab. Click the + icon at the bottom, choose Module Dependency and select the imported OpenCV module.
    • For Android Studio v1.2.2, to access Module Settings: in the project view, right-click the dependent module -> Open Module Settings.
  5. Copy the libs folder under sdk/native to Android Studio under app/src/main.
  6. In Android Studio, rename the copied libs directory to jniLibs and we are done.

Step (6) is needed since Android Studio expects native libs in app/src/main/jniLibs instead of the older libs folder. For those new to Android OpenCV, don't miss the steps below:

  • Include static { System.loadLibrary("opencv_java"); } (Note: for OpenCV version 3, at this step you should instead load the library opencv_java3.)
  • For step (5), if you leave out any platform libs like x86, make sure your device/emulator is not on that platform.

OpenCV itself is written in C/C++. The available Java wrappers are:

  1. Android OpenCV SDK – the OpenCV.org-maintained Android Java wrapper. I suggest this one.
  2. OpenCV Java – the OpenCV.org-maintained, auto-generated desktop Java wrapper.
  3. JavaCV – a popular Java wrapper maintained by independent developer(s). Not Android specific. This library might get out of sync with newer OpenCV versions.
How to detect a Christmas Tree using C++?
Sun, 08 Oct 2017

Hello Friends,

As a follow-up to our previous post on detecting Christmas trees using Python, in this tutorial we'll detect Christmas trees using C++.
Background: for our new project, let's try to recognize a Christmas tree; remember, the same approach holds true for tree detection in general.
Let's consider the following images…

[Example input images]

Here is the source code and below it, its explanation.

//Christmas Tree Detection
//For Ubuntu use: g++ -Wall -pedantic -ansi -O2 -pipe -s -o christmas_tree christmas_tree.cpp `pkg-config --cflags --libs opencv`

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

int main(int argc,char *argv[])
{
    Mat original,tmp,tmp1;
    vector<vector<Point> > contours;
    Moments m;
    Rect boundrect;
    Point2f center;
    double radius, max_area=0,tmp_area=0;
    unsigned int j, k;
    int i;

    for(i = 1; i < argc; ++i)
    {
        original = imread(argv[i]);
        if(original.empty())
        {
            cerr << "Error reading image " << argv[i] << endl;
            continue;
        }

        // exclude the snow with a simple color filter
        GaussianBlur(original, tmp, Size(3, 3), 0, 0, BORDER_DEFAULT);
        erode(tmp, tmp, Mat(), Point(-1, -1), 10);
        cvtColor(tmp, tmp, CV_BGR2HSV);
        inRange(tmp, Scalar(0, 0, 0), Scalar(180, 255, 200), tmp);

        // find every "bright" pixel
        dilate(original, tmp1, Mat(), Point(-1, -1), 15);
        cvtColor(tmp1, tmp1, CV_BGR2HLS);
        inRange(tmp1, Scalar(0, 185, 0), Scalar(180, 255, 255), tmp1);
        dilate(tmp1, tmp1, Mat(), Point(-1, -1), 10);

        // join the two results
        bitwise_and(tmp, tmp1, tmp1);

        // look for the biggest bright object
        findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        max_area = 0;
        j = 0;
        for(k = 0; k < contours.size(); k++)
        {
            tmp_area = contourArea(contours[k]);
            if(tmp_area > max_area)
            {
                max_area = tmp_area;
                j = k;
            }
        }
        tmp1 = Mat::zeros(original.size(),CV_8U);
        approxPolyDP(contours[j], contours[j], 30, true);
        drawContours(tmp1, contours, j, Scalar(255,255,255), CV_FILLED);

        // build a mask (circle + rectangle) approximating a tree shape
        m = moments(contours[j]);
        boundrect = boundingRect(contours[j]);
        center = Point2f(m.m10/m.m00, m.m01/m.m00);
        radius = (center.y - (boundrect.tl().y))/4.0*3.0;
        Rect heightrect(center.x-original.cols/5, boundrect.tl().y, original.cols/5*2, boundrect.size().height);

        tmp = Mat::zeros(original.size(), CV_8U);
        rectangle(tmp, heightrect, Scalar(255, 255, 255), -1);
        circle(tmp, center, radius, Scalar(255, 255, 255), -1);

        bitwise_and(tmp, tmp1, tmp1);

        // find the final tree contour and draw it on the original picture
        findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        max_area = 0;
        j = 0;
        for(k = 0; k < contours.size(); k++)
        {
            tmp_area = contourArea(contours[k]);
            if(tmp_area > max_area)
            {
                max_area = tmp_area;
                j = k;
            }
        }

        approxPolyDP(contours[j], contours[j], 30, true);
        convexHull(contours[j], contours[j]);

        drawContours(original, contours, j, Scalar(0, 0, 255), 3);

        namedWindow(argv[i], CV_WINDOW_NORMAL|CV_WINDOW_KEEPRATIO|CV_GUI_EXPANDED);
        imshow(argv[i], original);

        waitKey(0);
        destroyWindow(argv[i]);
    }

    return 0;
}

Explanation:

The first step is to detect the brightest pixels in the picture, but we have to distinguish between the tree itself and the snow that reflects its light. Here we try to exclude the snow by applying a really simple filter on the color codes:

GaussianBlur(original, tmp, Size(3, 3), 0, 0, BORDER_DEFAULT);
erode(tmp, tmp, Mat(), Point(-1, -1), 10);
cvtColor(tmp, tmp, CV_BGR2HSV);
inRange(tmp, Scalar(0, 0, 0), Scalar(180, 255, 200), tmp);

Then we find every “bright” pixel:

dilate(original, tmp1, Mat(), Point(-1, -1), 15);
cvtColor(tmp1, tmp1, CV_BGR2HLS);
inRange(tmp1, Scalar(0, 185, 0), Scalar(180, 255, 255), tmp1);
dilate(tmp1, tmp1, Mat(), Point(-1, -1), 10);

Finally we join the two results:

bitwise_and(tmp, tmp1, tmp1);

Now we look for the biggest bright object:

findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
max_area = 0;
j = 0;
for(k = 0; k < contours.size(); k++)
{
    tmp_area = contourArea(contours[k]);
    if(tmp_area > max_area)
    {
        max_area = tmp_area;
        j = k;
    }
}
tmp1 = Mat::zeros(original.size(),CV_8U);
approxPolyDP(contours[j], contours[j], 30, true);
drawContours(tmp1, contours, j, Scalar(255,255,255), CV_FILLED);

Now we are almost done, but there are still some imperfections due to the snow. To cut them off, we'll build a mask using a circle and a rectangle to approximate the shape of a tree and delete the unwanted pieces:

m = moments(contours[j]);
boundrect = boundingRect(contours[j]);
center = Point2f(m.m10/m.m00, m.m01/m.m00);
radius = (center.y - (boundrect.tl().y))/4.0*3.0;
Rect heightrect(center.x-original.cols/5, boundrect.tl().y, original.cols/5*2, boundrect.size().height);

tmp = Mat::zeros(original.size(), CV_8U);
rectangle(tmp, heightrect, Scalar(255, 255, 255), -1);
circle(tmp, center, radius, Scalar(255, 255, 255), -1);

bitwise_and(tmp, tmp1, tmp1);

The last step is to find the contour of our tree and draw it on the original picture.

findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
max_area = 0;
j = 0;
for(k = 0; k < contours.size(); k++)
{
    tmp_area = contourArea(contours[k]);
    if(tmp_area > max_area)
    {
        max_area = tmp_area;
        j = k;
    }
}

approxPolyDP(contours[j], contours[j], 30, true);
convexHull(contours[j], contours[j]);

drawContours(original, contours, j, Scalar(0, 0, 255), 3);

Here are some pictures of the final output:

[Final output images]
/how-to-detect-a-christmas-tree-using-c/feed/ 1