* enabled convolution & activation fusion
* a few more optimizations:
+ optimized the common case where the indices of the max pooling layer are not used; in this case we take the more efficient branch that computes just the maximums over the aperture.
+ optimized the convolution + activation fusion when the activation is ReLU, which is another common case
+ convolution can now be fused with batch norm. It's a zero-cost fusion. If the batch norm is followed by ReLU, all three (conv + batchnorm + ReLU) are fused together. This modification seriously improved ENet performance; see the sketch below.
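Batch-norm folding is zero-cost because inference-time batch norm is just an affine transform per output channel, so it can be absorbed into the convolution's weights and bias once, ahead of inference. A minimal sketch of the arithmetic, with hypothetical names and a flat weight layout (not the actual dnn code):

```cpp
#include <cmath>
#include <vector>

// Fold y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta into the
// convolution itself by rescaling each output channel's filter and bias.
void foldBatchNorm(std::vector<float>& weights,      // outCh filters, flattened
                   std::vector<float>& bias,         // one bias per output channel
                   const std::vector<float>& gamma,
                   const std::vector<float>& beta,
                   const std::vector<float>& mean,
                   const std::vector<float>& var,
                   float eps = 1e-5f)
{
    const size_t outCh = bias.size();
    const size_t filterSize = weights.size() / outCh;
    for (size_t c = 0; c < outCh; ++c)
    {
        const float scale = gamma[c] / std::sqrt(var[c] + eps);
        for (size_t k = 0; k < filterSize; ++k)
            weights[c * filterSize + k] *= scale;          // scale the filter
        bias[c] = scale * (bias[c] - mean[c]) + beta[c];   // shift the bias
    }
}
```

After folding, a following ReLU can still be fused into the convolution's output loop, which is how all three layers collapse into one.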
* hopefully fixed warnings on Windows
* first commit
* first commit
* adjust code layout
* round mean value
* add missing header
* remove useless header
* remove useless header
* first commit
* first commit
* first commit
* Encapsulate function averageHash in a class
* remove export macro
* encapsulate pHash algorithm in a class
* first commit
* fix bugs in createHash and fillBlocks
* 1 : add create function
2 : add overloaded functions
* implement get set functions
* fix bug--destination depth should be CV_32F
* first commit
* first commit
* 1 : fix bug--forgot '"'
2 : forgot to include iostream
* fix warnings
* remove tab
* remove trailing white space
* remove trailing white space
* remove trailing white space
* remove trailing white space
* remove trailing white space
* remove trailing white space
* first commit
* remove trailing white space
* remove trailing white space
* remove trailing white space
* reduce redundant operations
* add explanation of img_hash
* remove useless comments
* remove trailing space
* first commit
* fix missing symbol
* add a new defgroup and change all defgroups to ihash
* fix namespace conflict
* change namespace from ihash to img_hash
* change ihash to img_hash
* change include guard
* forbid implicit conversion
* first commit
* 1 : declare function findFeatureVector
2 : forward declare test class RadialVarHashTester as a friend
* first commit
* replace auto with explicit type
* export some symbols for initialization and testing
* remove trailing white space
* add namespace cv
* fix type cast warning and define default constructor/destructor
* declare and define RadialVarHashTester in namespace
* remove default constructor/destructor
* export function findFeatures and the destructor
* remove trailing white space
* fix bug--wrong definition of destructor
* remove trailing white space
* implement findFeatureVector
* add test case for findFeatureVector
* 1 : fix bug--forgot to allocate space for input
2 : fix bug--compare the results of pixPerLine with wrong matrix
* remove trailing space
* implement hashCalculate
* add test case for hashCalculate
* remove trailing white space
* avoid hiding parameter
* refine code and keep the original range
* adjust expected hash value since the range of the hash changed
* add comment
* reduce scope
* remove trailing white space
* adjust format
* add new function compare
* implement compute functions
* use an array as the buffer of cv::Mat to avoid memory allocation; see the sketch below
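This works because the cv::Mat constructor that takes a data pointer wraps the caller's memory instead of allocating; a small sketch (illustrative only, and the buffer must outlive the Mat):

```cpp
#include <opencv2/core.hpp>

void noAllocScratch()
{
    double buffer[8][8];                    // stack storage, no heap involved
    cv::Mat view(8, 8, CV_64F, buffer);     // wraps `buffer`; no allocation, no copy
    view.setTo(cv::Scalar(0));              // writes go straight to the array
}
```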
* remove trailing whitespace
* 1 : implement cross-correlation rather than using matchTemplate, since the
results of matchTemplate are weird (see the sketch below)
2 : remove the gamma param; although the paper says PHash applies gamma
correction to the image, the code does not actually do that (I think it is
a bug in PHash)
3 : the create function can specify sigma and numOfAngleLine
4 : use blurImg to replace normalizeImg
5 : remove useless parameters related to gamma correction
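A hedged sketch of the peak cross-correlation idea behind item 1 above (hypothetical function, not the module's exact implementation): correlate the two feature vectors at every circular shift and keep the best score, which is also the value later used as the comparison result.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

double peakCrossCorrelation(const std::vector<double>& a,
                            const std::vector<double>& b)
{
    const size_t n = a.size();                 // assume a.size() == b.size(), n > 0
    double meanA = 0, meanB = 0;
    for (size_t i = 0; i < n; ++i) { meanA += a[i]; meanB += b[i]; }
    meanA /= n; meanB /= n;

    double normA = 0, normB = 0;               // centered energies for normalization
    for (size_t i = 0; i < n; ++i)
    {
        normA += (a[i] - meanA) * (a[i] - meanA);
        normB += (b[i] - meanB) * (b[i] - meanB);
    }
    const double denom = std::sqrt(normA * normB);
    if (denom == 0) return 0;

    double peak = -1;
    for (size_t shift = 0; shift < n; ++shift) // try every circular alignment
    {
        double corr = 0;
        for (size_t i = 0; i < n; ++i)
            corr += (a[i] - meanA) * (b[(i + shift) % n] - meanB);
        peak = std::max(peak, corr / denom);
    }
    return peak;                               // in [-1, 1]; closer to 1 = more similar
}
```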
* add example of radial variance hash
* use buffer to avoid memory allocation and use enum to specify hash size
* remove useless header
* fix bug--constructor only accepts two params
* add comments
* transpose the projection matrix; friendlier for cache hits
* use a pointer to access projection values (see the sketch below)
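The two entries above go together: after the transpose, each projection line occupies one contiguous row, so a raw row pointer walks sequential memory instead of striding down a column. A small illustrative sketch (hypothetical function name):

```cpp
#include <opencv2/core.hpp>

double sumAlongLine(const cv::Mat& projections, int angleIdx)
{
    CV_Assert(projections.type() == CV_64F);
    const double* row = projections.ptr<double>(angleIdx); // contiguous row after transpose
    double sum = 0;
    for (int i = 0; i < projections.cols; ++i)
        sum += row[i];                                     // sequential, cache-friendly reads
    return sum;
}
```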
* function able to specify sigma and numOfAngleLine
* add get/set functions
* implement image hash algo--block mean hash
* include block mean hash and add comments
* remove trailing whitespace
* fix warning--comparison between signed and unsigned values
* implement destructor and change mode type to size_t
* add example of block mean hash
* compress the bits of hash
* function--blockMeanHash able to set mode
* fix type cast warning and style
* change expected result to bool
* compress the hash value from 16 bytes to 8 bytes
* update comments
* compress hash from 16 bytes to 8 bytes
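The compression is plain bit packing: each comparison contributes a single bit rather than a whole byte. A hedged sketch of the idea (hypothetical function and shapes, not the module's exact code):

```cpp
#include <cstdint>
#include <opencv2/core.hpp>

// Pack 64 zero/one comparison results (1x64, CV_8U) into an 8-byte hash.
void packBits(const cv::Mat& comparisons, uint8_t outHash[8])
{
    const uchar* bits = comparisons.ptr<uchar>(0);
    for (int byte = 0; byte < 8; ++byte)
    {
        uint8_t packed = 0;
        for (int bit = 0; bit < 8; ++bit)
            packed |= static_cast<uint8_t>((bits[byte * 8 + bit] & 1) << bit);
        outHash[byte] = packed;
    }
}
```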
* use the peak value of the cross-correlation as the comparison result
* add limit header
* first commit
* add group and header file of color moment hash
* should not use auto; it is a C++11 feature
* support python binding
* implement destructor of AverageHash
* support python binding
* support python binding
* support python binding
* change types to InputArray and OutputArray; easier for Python bindings (see the sketch below)
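InputArray/OutputArray are OpenCV's standard proxy types: the caller can pass a cv::Mat, cv::UMat, or std::vector, and the bindings generator knows how to map them, which is what makes the Python wrapper straightforward. A minimal sketch of the signature shape (hypothetical function, placeholder body):

```cpp
#include <opencv2/core.hpp>

void computeHash(cv::InputArray inputArr, cv::OutputArray outputArr)
{
    cv::Mat input = inputArr.getMat();  // a view of whatever the caller passed
    outputArr.create(1, 8, CV_8U);      // let the proxy allocate the destination
    cv::Mat hash = outputArr.getMat();
    hash.setTo(cv::Scalar(0));          // placeholder: real code fills the hash here
    (void)input;
}
```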
* all algorithms support input types CV_8UC4, CV_8UC3 and CV_8UC1
* Provide better instructions
* Make it more pleasant to read
* Add information about speed comparison
* remove trailing white space
* remove useless blank line
* refine comments of example
* fix link error
* refine title
* 1 : implement function "getDefaultName"
2 : adjust style
* Update README.md
1 : Fix wrong branch
2 : Add another way to download img_hash
* img_hash: refactored interfaces to use pImpl
* remove trailing white space
* img_hash: use type-safe pImpl
* change name, easier to find the source file
* 1 : narrow the scope of ImgHashImpl
2 : use static_cast to replace dynamic_cast, because the class hierarchy of
img_hash is very straightforward
* should not declare the ImgHashImpl API in the header file of ImgHashBase; this increases the chance of breaking the ABI
* should not declare the ImgHashImpl API in the header file of ImgHashBase; this increases the chance of breaking the ABI
* fix warning: an unelaborated friend declaration is a C++11 extension
* first commit
* fix bug--except on Windows, other platforms cannot access the private member.
The pImpl is only accessible by the class itself, therefore I think declaring
everything as public is quite safe
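Put together, the pImpl entries above amount to the classic pattern sketched below (hypothetical declarations modeled on the names above, not the module's exact code): the header only forward-declares the impl, its members live in the .cpp, and everything inside it can be public because nothing outside the owning class can name the type anyway.

```cpp
// img_hash_base.hpp -- the header exposes no implementation details
#include <memory>
#include <opencv2/core.hpp>

class ImgHashBase
{
public:
    ImgHashBase();
    ~ImgHashBase();                     // defined where ImgHashImpl is complete
protected:
    struct ImgHashImpl;                 // forward declaration only
    std::unique_ptr<ImgHashImpl> pImpl; // type-safe: no void* and no casts
};

// img_hash_base.cpp -- members here can change without breaking the ABI
struct ImgHashBase::ImgHashImpl
{
    cv::Mat scratch;                    // public is fine: only ImgHashBase can reach it
};

ImgHashBase::ImgHashBase() : pImpl(new ImgHashImpl()) {}
ImgHashBase::~ImgHashBase() = default;
```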
* first commit of comparison and computation charts
* update chart link
* some further optimizations and cleanups in dnn:
+ got rid of dnn::gemm; it's not perf critical anymore (perhaps)
+ embedded col2im functionality into convolution_layer.cpp, since it's not used anywhere else
+ parallel max pooling; even better performance could be achieved if we knew that the max indices are not needed (and they are not needed in most networks) -- see the sketch after this list
+ somewhat optimized deconvolution layer: optimized bias addition (merged it with col2im), optimized col2im slightly.
+ hopefully fixed incorrect memory access in fully-connected layer; restored aligned memory reads (they should work fine now)
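A hedged sketch of the parallel, no-indices pooling path mentioned in this list (hypothetical 2x2 layout, not the dnn layer itself; uses the std::function overload of cv::parallel_for_): output rows are split across threads, and the fast path computes just the maximums over the aperture.

```cpp
#include <opencv2/core.hpp>
#include <algorithm>

void maxPool2x2(const cv::Mat& src, cv::Mat& dst)   // single-channel CV_32F
{
    dst.create(src.rows / 2, src.cols / 2, CV_32F);
    cv::parallel_for_(cv::Range(0, dst.rows), [&](const cv::Range& r)
    {
        for (int y = r.start; y < r.end; ++y)       // each thread owns a row band
        {
            const float* r0 = src.ptr<float>(2 * y);
            const float* r1 = src.ptr<float>(2 * y + 1);
            float* out = dst.ptr<float>(y);
            for (int x = 0; x < dst.cols; ++x)      // max over the 2x2 aperture only
                out[x] = std::max(std::max(r0[2 * x], r0[2 * x + 1]),
                                  std::max(r1[2 * x], r1[2 * x + 1]));
        }
    });
}
```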
* hopefully fixed regressions in ENet performance
* fixed some typos in deconvolution; added SIMD optimization for the max pooling layer
* fixed warnings in SIMD-less build configuration
* rewrote the following layers to be [much] more efficient: convolution, fully connected, activations (ReLU, tanh, ...), LRN. Optional AVX optimizations are used for the first two; see the sketch below.
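As an illustration of the kind of AVX path used for the activations, a minimal ReLU sketch (generic intrinsics, not the actual dnn kernels; requires a build with AVX enabled):

```cpp
#include <immintrin.h>

void reluAVX(const float* src, float* dst, int n)
{
    const __m256 zero = _mm256_setzero_ps();
    int i = 0;
    for (; i <= n - 8; i += 8)                        // 8 floats per iteration
        _mm256_storeu_ps(dst + i,
                         _mm256_max_ps(_mm256_loadu_ps(src + i), zero));
    for (; i < n; ++i)                                // scalar tail
        dst[i] = src[i] > 0.f ? src[i] : 0.f;
}
```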
* eliminated trailing whitespaces