Merge pull request #9114 from pengli:dnn_rebase

add libdnn acceleration to dnn module  (#9114)

* import libdnn code

Signed-off-by: Li Peng <peng.li@intel.com>

* add convolution layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add pooling layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add softmax layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add lrn layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add innerproduct layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add HAVE_OPENCL macro

Signed-off-by: Li Peng <peng.li@intel.com>

* fix for convolution ocl

Signed-off-by: Li Peng <peng.li@intel.com>

* enable getUMat() for multi-dimension Mat

Signed-off-by: Li Peng <peng.li@intel.com>

* use getUMat for ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* use CV_OCL_RUN macro
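The macro gives each layer a uniform OCL-first dispatch. A sketch of the
pattern (mirroring the layer diffs below; CV_OCL_RUN returns from forward()
only when OpenCL is usable, the condition holds and forward_ocl() succeeds):

    void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
    {
        CV_OCL_RUN(preferableTarget == DNN_TARGET_OPENCL,
                   forward_ocl(inputs, outputs, internals))

        // ... CPU implementation runs if the OCL branch was not taken
    }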

Signed-off-by: Li Peng <peng.li@intel.com>

* set OPENCL target when it is available

and disable layer fusion for the OCL target for the time being

Signed-off-by: Li Peng <peng.li@intel.com>

* fix innerproduct accuracy test

Signed-off-by: Li Peng <peng.li@intel.com>

* remove trailing space

Signed-off-by: Li Peng <peng.li@intel.com>

* Fixed tensorflow demo bug.

The root cause is that TensorFlow uses a different algorithm than libdnn
to calculate the convolution output dimension.

libdnn no longer calculates the output dimension itself and just uses the
one passed in by the config.

* split gemm ocl file

split it into gemm_buffer.cl and gemm_image.cl

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix compile failure

Signed-off-by: Li Peng <peng.li@intel.com>

* check env flag for auto tuning

Signed-off-by: Li Peng <peng.li@intel.com>

* switch to new ocl kernels for softmax layer

Signed-off-by: Li Peng <peng.li@intel.com>

* update softmax layer

On some platforms the subgroup extension may not work well; fall back
to the non-subgroup ocl acceleration.

Signed-off-by: Li Peng <peng.li@intel.com>

* fall back to cpu path for fc layer with multiple outputs

Signed-off-by: Li Peng <peng.li@intel.com>

* update output message

Signed-off-by: Li Peng <peng.li@intel.com>

* update fully connected layer

fall back to the gemm API if libdnn returns false

Signed-off-by: Li Peng <peng.li@intel.com>

* Add ReLU OCL implementation

* disable layer fusion for now

Signed-off-by: Li Peng <peng.li@intel.com>

* Add OCL implementation for concat layer

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* libdnn: update license and copyrights

Also refine libdnn coding style

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>

* DNN: Don't link OpenCL library explicitly

* DNN: Make default preferableTarget to DNN_TARGET_CPU

Users should set it to DNN_TARGET_OPENCL explicitly if they want to
use OpenCL acceleration.

Also don't fuse layers when using DNN_TARGET_OPENCL
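A minimal opt-in sketch (the model file names are placeholders):

    #include <opencv2/dnn.hpp>

    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "weights.caffemodel");
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL); // default stays DNN_TARGET_CPU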

* DNN: refine coding style

* Add getOpenCLErrorString

* DNN: Use int32_t/uint32_t instead of aliases

* Use namespace ocl4dnn to include libdnn things

* remove extra copyTo in softmax ocl path

Signed-off-by: Li Peng <peng.li@intel.com>

* update ReLU layer ocl path

Signed-off-by: Li Peng <peng.li@intel.com>

* Add prefer target property for layer class

It is used to indicate the target for layer forwarding,
either the default CPU target or the OCL target.

Signed-off-by: Li Peng <peng.li@intel.com>

* Add cl_event based timer for cv::ocl

* Rename libdnn to ocl4dnn

Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>

* use UMat for ocl4dnn internal buffer

Remove allocateMemory, which used clCreateBuffer directly

Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>

* enable buffer gemm in ocl4dnn innerproduct

Signed-off-by: Li Peng <peng.li@intel.com>

* replace int_tp globally in ocl4dnn kernels.

Signed-off-by: wzw <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>

* create UMat for layer params

Signed-off-by: Li Peng <peng.li@intel.com>

* update sign ocl kernel

Signed-off-by: Li Peng <peng.li@intel.com>

* update image based gemm of inner product layer

Signed-off-by: Li Peng <peng.li@intel.com>

* remove buffer gemm of inner product layer

call cv::gemm API instead

Signed-off-by: Li Peng <peng.li@intel.com>

* change ocl4dnn forward parameter to UMat

Signed-off-by: Li Peng <peng.li@intel.com>

* Refine auto-tuning mechanism.

- Use OPENCV_OCL4DNN_KERNEL_CONFIG_PATH to set the cache directory
  for fine-tuned kernel configurations,
  e.g. export OPENCV_OCL4DNN_KERNEL_CONFIG_PATH=/home/tmp;
  the cache directory will then be /home/tmp/spatialkernels/ on Linux.

- Define the OPENCV_OCL4DNN_ENABLE_AUTO_TUNING environment variable to
  enable auto-tuning.

- OPENCV_OPENCL_ENABLE_PROFILING is only used to enable profiling for the
  OpenCL command queue. This fixes the basic kernel getting a wrong
  running time, i.e. 0 ms.

- If creating the cache directory fails, disable auto-tuning.
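A hypothetical setup sketch in C++ (the path is an example, not a default;
setenv is POSIX, Windows would use _putenv_s), run early in main() before
the first ocl4dnn kernel is built:

    #include <cstdlib>

    setenv("OPENCV_OCL4DNN_ENABLE_AUTO_TUNING", "1", 1);
    setenv("OPENCV_OCL4DNN_KERNEL_CONFIG_PATH", "/home/tmp", 1); // cache: /home/tmp/spatialkernels/
    setenv("OPENCV_OPENCL_ENABLE_PROFILING", "1", 1);            // so kernels report real run times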

* Detect and create cache dir on windows

Signed-off-by: Li Peng <peng.li@intel.com>

* Refine gemm-like convolution kernel.

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix redundant swizzleWeights calls when using a cached kernel config.

* Fix "out of resource" bug when auto-tuning too many kernels.

* replace cl_mem with UMat in ocl4dnnConvSpatial class

* OCL4DNN: reduce the tuning kernel candidates.

This patch reduces the number of tuning candidates by 75% with less
than 2% performance impact on the final result.

Signed-off-by: Zhigang Gong <zhigang.gong@intel.com>

* replace cl_mem with umat in ocl4dnn convolution

Signed-off-by: Li Peng <peng.li@intel.com>

* remove weight_image_ of ocl4dnn inner product

Actually it is unused in the computation

Signed-off-by: Li Peng <peng.li@intel.com>

* Various fixes for ocl4dnn

1. OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel())
2. Ptr<OCL4DNNInnerProduct<float> > innerProductOp
3. Code comments cleanup
4. ignore check on OCL cpu device

Signed-off-by: Li Peng <peng.li@intel.com>

* add build option for log softmax

Signed-off-by: Li Peng <peng.li@intel.com>

* remove unused ocl kernels in ocl4dnn

Signed-off-by: Li Peng <peng.li@intel.com>

* replace ocl4dnnSet with opencv setTo

Signed-off-by: Li Peng <peng.li@intel.com>

* replace ALIGN with cv::alignSize

Signed-off-by: Li Peng <peng.li@intel.com>

* check kernel build options

Signed-off-by: Li Peng <peng.li@intel.com>

* Handle program compilation fail properly.

* Use std::numeric_limits<float>::infinity() for large float number

* check ocl4dnn kernel compilation result

Signed-off-by: Li Peng <peng.li@intel.com>

* remove unused ctx_id

Signed-off-by: Li Peng <peng.li@intel.com>

* change clEnqueueNDRangeKernel to kernel.run()

Signed-off-by: Li Peng <peng.li@intel.com>

* change cl_mem to UMat in image based gemm

Signed-off-by: Li Peng <peng.li@intel.com>

* check intel subgroup support for lrn and pooling layer

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix convolution bug if group is greater than 1

Signed-off-by: Li Peng <peng.li@intel.com>

* Set default layer preferableTarget to be DNN_TARGET_CPU

Signed-off-by: Li Peng <peng.li@intel.com>

* Add ocl perf test for convolution

Signed-off-by: Li Peng <peng.li@intel.com>

* Add more ocl accuracy test

Signed-off-by: Li Peng <peng.li@intel.com>

* replace cl_image with ocl::Image2D

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix build failure in elementwise layer

Signed-off-by: Li Peng <peng.li@intel.com>

* use getUMat() to get blob data

Signed-off-by: Li Peng <peng.li@intel.com>

* replace cl_mem handle with ocl::KernelArg

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(build): don't use C++11, OPENCL_LIBRARIES fix

* dnn(ocl4dnn): remove unused OpenCL kernels

* dnn(ocl4dnn): extract OpenCL code into .cl files

* dnn(ocl4dnn): refine auto-tuning

Auto-tuning is now disabled by default; set the
OPENCV_OCL4DNN_ENABLE_AUTO_TUNING environment variable to enable it.

Use a set of pre-tuned configs as the default when auto-tuning is disabled.
These configs are tuned for Intel GPUs with 48/72 EUs, and for GoogLeNet,
AlexNet and ResNet-50.

If the default config is not suitable, use the first available kernel config
from the candidates. Candidate priority from high to low: gemm-like kernel,
IDLF kernel, basic kernel.

* dnn(ocl4dnn): pooling doesn't use OpenCL subgroups

* dnn(ocl4dnn): fix perf test

OpenCV has a default 3-second time limit for each performance test.
Warm up the OpenCL backend outside of the perf measurement loop.

* use ocl::KernelArg as much as possible

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): fix bias bug for gemm-like kernel

* dnn(ocl4dnn): wrap cl_mem into UMat

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): Refine signature of kernel config

- Use a more readable string as the signature of the kernel config
- Don't include the device name and vendor in the signature string
- Default kernel configurations are tuned for Intel GPUs with
  24/48/72 EUs, and for the GoogLeNet, AlexNet and ResNet-50 net models.

* dnn(ocl4dnn): swap width/height in configuration

* dnn(ocl4dnn): enable configs for Intel OpenCL runtime only

* core: make configuration helper functions accessible from non-core modules

* dnn(ocl4dnn): update kernel auto-tuning behavior

Avoid unwanted creation of directories

* dnn(ocl4dnn): simplify kernel to workaround OpenCL compiler crash

* dnn(ocl4dnn): remove redundant code

* dnn(ocl4dnn): Add a clearer message for SIMD size mismatch.

* dnn(ocl4dnn): add const to const argument

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): force the compiler to use a specific SIMD size for the IDLF kernel

* dnn(ocl4dnn): drop unused tuneLocalSize()

* dnn(ocl4dnn): specify OpenCL queue for Timer and convolve() method

* dnn(ocl4dnn): sanitize file names used for cache

* dnn(perf): enable Network tests with OpenCL

* dnn(ocl4dnn/conv): drop computeGlobalSize()

* dnn(ocl4dnn/conv): drop unused fields

* dnn(ocl4dnn/conv): simplify ctor

* dnn(ocl4dnn/conv): refactor kernelConfig localSize=NULL

* dnn(ocl4dnn/conv): drop unsupported double / untested half types

* dnn(ocl4dnn/conv): drop unused variable

* dnn(ocl4dnn/conv): alignSize/divUp

* dnn(ocl4dnn/conv): use enum values

* dnn(ocl4dnn): drop unused innerproduct variable

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): add a generic function to check cl option support

* dnn(ocl4dnn): run softmax subgroup version kernel first

Signed-off-by: Li Peng <peng.li@intel.com>
commit e340ff9c3a
parent f646f61dad
Author:    pengli
Committer: Alexander Alekhin
50 changed files (changed lines per file):

 modules/core/include/opencv2/core/ocl.hpp                         |   16
 modules/core/include/opencv2/core/utils/configuration.private.hpp |   16
 modules/core/src/ocl.cpp                                          |  270
 modules/core/src/opencl/benchmark.cl                              |   45
 modules/core/src/precomp.hpp                                      |    6
 modules/core/src/system.cpp                                       |    1
 modules/core/src/trace.cpp                                        |    1
 modules/core/src/umatrix.cpp                                      |   27
 modules/dnn/CMakeLists.txt                                        |    2
 modules/dnn/include/opencv2/dnn/dnn.hpp                           |    1
 modules/dnn/perf/opencl/perf_convolution.cpp                      |  118
 modules/dnn/perf/perf_net.cpp                                     |    2
 modules/dnn/src/dnn.cpp                                           |   12
 modules/dnn/src/layers/concat_layer.cpp                           |   52
 modules/dnn/src/layers/convolution_layer.cpp                      |   60
 modules/dnn/src/layers/elementwise_layers.cpp                     |  134
 modules/dnn/src/layers/fully_connected_layer.cpp                  |   83
 modules/dnn/src/layers/layers_common.hpp                          |    4
 modules/dnn/src/layers/lrn_layer.cpp                              |   51
 modules/dnn/src/layers/pooling_layer.cpp                          |   56
 modules/dnn/src/layers/softmax_layer.cpp                          |   88
 modules/dnn/src/ocl4dnn/include/common.hpp                        |   62
 modules/dnn/src/ocl4dnn/include/default_kernel_config.hpp         |  854
 modules/dnn/src/ocl4dnn/include/math_functions.hpp                |   90
 modules/dnn/src/ocl4dnn/include/ocl4dnn.hpp                       |  473
 modules/dnn/src/ocl4dnn/src/common.cpp                            |   57
 modules/dnn/src/ocl4dnn/src/math_functions.cpp                    |  538
 modules/dnn/src/ocl4dnn/src/ocl4dnn_conv_spatial.cpp              | 1568
 modules/dnn/src/ocl4dnn/src/ocl4dnn_inner_product.cpp             |  108
 modules/dnn/src/ocl4dnn/src/ocl4dnn_lrn.cpp                       |  126
 modules/dnn/src/ocl4dnn/src/ocl4dnn_pool.cpp                      |  213
 modules/dnn/src/ocl4dnn/src/ocl4dnn_softmax.cpp                   |  135
 modules/dnn/src/opencl/activations.cl                             |   42
 modules/dnn/src/opencl/batchnorm.cl                               |   26
 modules/dnn/src/opencl/benchmark.cl                               |   45
 modules/dnn/src/opencl/concat.cl                                  |   60
 modules/dnn/src/opencl/conv_layer_spatial.cl                      | 1670
 modules/dnn/src/opencl/conv_spatial_helper.cl                     |   73
 modules/dnn/src/opencl/dummy.cl                                   |   43
 modules/dnn/src/opencl/gemm_image.cl                              |  635
 modules/dnn/src/opencl/math.cl                                    |   55
 modules/dnn/src/opencl/matvec_mul.cl                              |  191
 modules/dnn/src/opencl/ocl4dnn_lrn.cl                             |   96
 modules/dnn/src/opencl/ocl4dnn_pooling.cl                         |  177
 modules/dnn/src/opencl/softmax.cl                                 |    8
 modules/dnn/src/opencl/softmax_loss.cl                            |  182
 modules/dnn/src/precomp.hpp                                       |    2
 modules/dnn/test/test_googlenet.cpp                               |   81
 modules/dnn/test/test_layers.cpp                                  |   77
 modules/dnn/test/test_torch_importer.cpp                          |  119

@@ -665,6 +665,7 @@ CV_EXPORTS const char* convertTypeStr(int sdepth, int ddepth, int cn, char* buf)
CV_EXPORTS const char* typeToStr(int t);
CV_EXPORTS const char* memopTypeToStr(int t);
CV_EXPORTS const char* vecopTypeToStr(int t);
CV_EXPORTS const char* getOpenCLErrorString(int errorCode);
CV_EXPORTS String kernelToStr(InputArray _kernel, int ddepth = -1, const char * name = NULL);
CV_EXPORTS void getPlatfomsInfo(std::vector<PlatformInfo>& platform_info);
@@ -731,6 +732,21 @@ protected:
Impl* p;
};
class CV_EXPORTS Timer
{
public:
Timer(const Queue& q);
~Timer();
void start();
void stop();
float milliSeconds();
float microSeconds();
float seconds();
protected:
struct Impl;
Impl* p;
};
CV_EXPORTS MatAllocator* getOpenCLAllocator();
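A hedged usage sketch for the new cl_event based timer (assumes `kernel` is
a previously created ocl::Kernel and that the queue was created with
profiling enabled, e.g. via OPENCV_OPENCL_ENABLE_PROFILING):

    cv::ocl::Queue q = cv::ocl::Queue::getDefault();
    cv::ocl::Timer timer(q);          // asserts that q is a profiling queue
    timer.start();
    size_t globalSize = 1024;
    kernel.run(1, &globalSize, NULL, false, q);
    timer.stop();
    float ms = timer.milliSeconds();  // GPU time between the two marker events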

@@ -0,0 +1,16 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#ifndef OPENCV_CONFIGURATION_PRIVATE_HPP
#define OPENCV_CONFIGURATION_PRIVATE_HPP
namespace cv { namespace utils {
CV_EXPORTS bool getConfigurationParameterBool(const char* name, bool defaultValue);
CV_EXPORTS size_t getConfigurationParameterSizeT(const char* name, size_t defaultValue);
CV_EXPORTS cv::String getConfigurationParameterString(const char* name, const char* defaultValue);
}} // namespace
#endif // OPENCV_CONFIGURATION_PRIVATE_HPP

@@ -51,7 +51,10 @@
#include <inttypes.h>
#endif
#include <opencv2/core/utils/configuration.private.hpp>
#include "opencv2/core/ocl_genbase.hpp"
#include "opencl_kernels_core.hpp"
#define CV_OPENCL_ALWAYS_SHOW_BUILD_LOG 0
#define CV_OPENCL_SHOW_RUN_ERRORS 0
@@ -4718,6 +4721,102 @@ const char* convertTypeStr(int sdepth, int ddepth, int cn, char* buf)
return buf;
}
const char* getOpenCLErrorString(int errorCode)
{
switch (errorCode)
{
case 0: return "CL_SUCCESS";
case -1: return "CL_DEVICE_NOT_FOUND";
case -2: return "CL_DEVICE_NOT_AVAILABLE";
case -3: return "CL_COMPILER_NOT_AVAILABLE";
case -4: return "CL_MEM_OBJECT_ALLOCATION_FAILURE";
case -5: return "CL_OUT_OF_RESOURCES";
case -6: return "CL_OUT_OF_HOST_MEMORY";
case -7: return "CL_PROFILING_INFO_NOT_AVAILABLE";
case -8: return "CL_MEM_COPY_OVERLAP";
case -9: return "CL_IMAGE_FORMAT_MISMATCH";
case -10: return "CL_IMAGE_FORMAT_NOT_SUPPORTED";
case -11: return "CL_BUILD_PROGRAM_FAILURE";
case -12: return "CL_MAP_FAILURE";
case -13: return "CL_MISALIGNED_SUB_BUFFER_OFFSET";
case -14: return "CL_EXEC_STATUS_ERROR_FOR_EVENTS_IN_WAIT_LIST";
case -15: return "CL_COMPILE_PROGRAM_FAILURE";
case -16: return "CL_LINKER_NOT_AVAILABLE";
case -17: return "CL_LINK_PROGRAM_FAILURE";
case -18: return "CL_DEVICE_PARTITION_FAILED";
case -19: return "CL_KERNEL_ARG_INFO_NOT_AVAILABLE";
case -30: return "CL_INVALID_VALUE";
case -31: return "CL_INVALID_DEVICE_TYPE";
case -32: return "CL_INVALID_PLATFORM";
case -33: return "CL_INVALID_DEVICE";
case -34: return "CL_INVALID_CONTEXT";
case -35: return "CL_INVALID_QUEUE_PROPERTIES";
case -36: return "CL_INVALID_COMMAND_QUEUE";
case -37: return "CL_INVALID_HOST_PTR";
case -38: return "CL_INVALID_MEM_OBJECT";
case -39: return "CL_INVALID_IMAGE_FORMAT_DESCRIPTOR";
case -40: return "CL_INVALID_IMAGE_SIZE";
case -41: return "CL_INVALID_SAMPLER";
case -42: return "CL_INVALID_BINARY";
case -43: return "CL_INVALID_BUILD_OPTIONS";
case -44: return "CL_INVALID_PROGRAM";
case -45: return "CL_INVALID_PROGRAM_EXECUTABLE";
case -46: return "CL_INVALID_KERNEL_NAME";
case -47: return "CL_INVALID_KERNEL_DEFINITION";
case -48: return "CL_INVALID_KERNEL";
case -49: return "CL_INVALID_ARG_INDEX";
case -50: return "CL_INVALID_ARG_VALUE";
case -51: return "CL_INVALID_ARG_SIZE";
case -52: return "CL_INVALID_KERNEL_ARGS";
case -53: return "CL_INVALID_WORK_DIMENSION";
case -54: return "CL_INVALID_WORK_GROUP_SIZE";
case -55: return "CL_INVALID_WORK_ITEM_SIZE";
case -56: return "CL_INVALID_GLOBAL_OFFSET";
case -57: return "CL_INVALID_EVENT_WAIT_LIST";
case -58: return "CL_INVALID_EVENT";
case -59: return "CL_INVALID_OPERATION";
case -60: return "CL_INVALID_GL_OBJECT";
case -61: return "CL_INVALID_BUFFER_SIZE";
case -62: return "CL_INVALID_MIP_LEVEL";
case -63: return "CL_INVALID_GLOBAL_WORK_SIZE";
case -64: return "CL_INVALID_PROPERTY";
case -65: return "CL_INVALID_IMAGE_DESCRIPTOR";
case -66: return "CL_INVALID_COMPILER_OPTIONS";
case -67: return "CL_INVALID_LINKER_OPTIONS";
case -68: return "CL_INVALID_DEVICE_PARTITION_COUNT";
case -69: return "CL_INVALID_PIPE_SIZE";
case -70: return "CL_INVALID_DEVICE_QUEUE";
case -1000: return "CL_INVALID_GL_SHAREGROUP_REFERENCE_KHR";
case -1001: return "CL_PLATFORM_NOT_FOUND_KHR";
case -1002: return "CL_INVALID_D3D10_DEVICE_KHR";
case -1003: return "CL_INVALID_D3D10_RESOURCE_KHR";
case -1004: return "CL_D3D10_RESOURCE_ALREADY_ACQUIRED_KHR";
case -1005: return "CL_D3D10_RESOURCE_NOT_ACQUIRED_KHR";
case -1024: return "clBLAS: Functionality is not implemented";
case -1023: return "clBLAS: Library is not initialized yet";
case -1022: return "clBLAS: Matrix A is not a valid memory object";
case -1021: return "clBLAS: Matrix B is not a valid memory object";
case -1020: return "clBLAS: Matrix C is not a valid memory object";
case -1019: return "clBLAS: Vector X is not a valid memory object";
case -1018: return "clBLAS: Vector Y is not a valid memory object";
case -1017: return "clBLAS: An input dimension (M:N:K) is invalid";
case -1016: return "clBLAS: Leading dimension A must not be less than the "
"size of the first dimension";
case -1015: return "clBLAS: Leading dimension B must not be less than the "
"size of the second dimension";
case -1014: return "clBLAS: Leading dimension C must not be less than the "
"size of the third dimension";
case -1013: return "clBLAS: The increment for a vector X must not be 0";
case -1012: return "clBLAS: The increment for a vector Y must not be 0";
case -1011: return "clBLAS: The memory object for Matrix A is too small";
case -1010: return "clBLAS: The memory object for Matrix B is too small";
case -1009: return "clBLAS: The memory object for Matrix C is too small";
case -1008: return "clBLAS: The memory object for Vector X is too small";
case -1007: return "clBLAS: The memory object for Vector Y is too small";
default: return "Unknown OpenCL error";
}
}
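// Illustrative usage (not part of this hunk): the helper turns raw OpenCL
// status codes into readable error reports, e.g.
//
//   cl_int err = clFinish((cl_command_queue)q.ptr());
//   if (err != CL_SUCCESS)
//       CV_Error(cv::Error::OpenCLApiCallError, cv::ocl::getOpenCLErrorString(err));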
template <typename T>
static std::string kerToStr(const Mat & k)
{
@@ -5134,4 +5233,175 @@ bool internal::isCLBuffer(UMat& u)
return true;
}
struct Timer::Impl
{
const Queue queue;
Impl(const Queue& q)
: queue(q)
, initted_(false)
, running_(false)
, has_run_at_least_once_(false)
{
init();
}
~Impl()
{
clWaitForEvents(1, &start_gpu_cl_);
clWaitForEvents(1, &stop_gpu_cl_);
clReleaseEvent(start_gpu_cl_);
clReleaseEvent(stop_gpu_cl_);
}
void start()
{
#ifdef HAVE_OPENCL
if (!running())
{
clWaitForEvents(1, &start_gpu_cl_);
clReleaseEvent(start_gpu_cl_);
ocl::Kernel kernel("null_kernel_float", ocl::core::benchmark_oclsrc);
float arg = 0;
clSetKernelArg((cl_kernel)kernel.ptr(), 0, sizeof(arg), &arg);
clEnqueueTask((cl_command_queue)queue.ptr(), (cl_kernel)kernel.ptr(), 0,
NULL, &start_gpu_cl_);
clFinish((cl_command_queue)queue.ptr());
running_ = true;
has_run_at_least_once_ = true;
}
#endif
}
void stop()
{
#ifdef HAVE_OPENCL
if (running())
{
clWaitForEvents(1, &stop_gpu_cl_);
clReleaseEvent(stop_gpu_cl_);
ocl::Kernel kernel("null_kernel_float", ocl::core::benchmark_oclsrc);
float arg = 0;
clSetKernelArg((cl_kernel)kernel.ptr(), 0, sizeof(arg), &arg);
clEnqueueTask((cl_command_queue)queue.ptr(), (cl_kernel)kernel.ptr(), 0,
NULL, &stop_gpu_cl_);
clFinish((cl_command_queue)queue.ptr());
running_ = false;
}
#endif
}
float microSeconds()
{
#ifdef HAVE_OPENCL
if (!has_run_at_least_once())
{
return 0;
}
if (running())
{
stop();
}
cl_ulong startTime, stopTime;
clWaitForEvents(1, &stop_gpu_cl_);
clGetEventProfilingInfo(start_gpu_cl_, CL_PROFILING_COMMAND_END,
sizeof startTime, &startTime, NULL);
clGetEventProfilingInfo(stop_gpu_cl_, CL_PROFILING_COMMAND_START,
sizeof stopTime, &stopTime, NULL);
double us = static_cast<double>(stopTime - startTime) / 1000.0;
elapsed_microseconds_ = static_cast<float>(us);
return elapsed_microseconds_;
#else
return 0;
#endif
}
float milliSeconds()
{
#ifdef HAVE_OPENCL
if (!has_run_at_least_once())
{
return 0;
}
if (running())
{
stop();
}
cl_ulong startTime = 0, stopTime = 0;
clGetEventProfilingInfo(start_gpu_cl_, CL_PROFILING_COMMAND_END,
sizeof startTime, &startTime, NULL);
clGetEventProfilingInfo(stop_gpu_cl_, CL_PROFILING_COMMAND_START,
sizeof stopTime, &stopTime, NULL);
double ms = static_cast<double>(stopTime - startTime) / 1000000.0;
elapsed_milliseconds_ = static_cast<float>(ms);
return elapsed_milliseconds_;
#else
return 0;
#endif
}
float seconds()
{
return milliSeconds() / 1000.f;
}
void init()
{
CV_Assert(queue.getImpl() && queue.getImpl()->isProfilingQueue_);
if (!initted())
{
start_gpu_cl_ = 0;
stop_gpu_cl_ = 0;
initted_ = true;
}
}
inline bool initted() { return initted_; }
inline bool running() { return running_; }
inline bool has_run_at_least_once() { return has_run_at_least_once_; }
bool initted_;
bool running_;
bool has_run_at_least_once_;
float elapsed_milliseconds_;
float elapsed_microseconds_;
cl_event start_gpu_cl_;
cl_event stop_gpu_cl_;
};
Timer::Timer(const Queue& q)
{
p = new Impl(q);
}
Timer::~Timer()
{
if(p)
{
delete p;
p = 0;
}
}
void Timer::start()
{
if(p)
p->start();
}
void Timer::stop()
{
if(p)
p->stop();
}
float Timer::microSeconds()
{ return p ? p->microSeconds() : 0; }
float Timer::milliSeconds()
{ return p ? p->milliSeconds() : 0; }
float Timer::seconds()
{ return p ? p->seconds() : 0; }
}}

@@ -0,0 +1,45 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void null_kernel_float(float arg) {
float out = arg;
}

@@ -297,12 +297,6 @@ TLSData<CoreTLSData>& getCoreTlsData();
#define CL_RUNTIME_EXPORT
#endif
namespace utils {
bool getConfigurationParameterBool(const char* name, bool defaultValue);
size_t getConfigurationParameterSizeT(const char* name, size_t defaultValue);
cv::String getConfigurationParameterString(const char* name, const char* defaultValue);
}
extern bool __termination; // skip some cleanups, because process is terminating
// (for example, if ExitProcess() was already called)

@@ -44,6 +44,7 @@
#include "precomp.hpp"
#include <iostream>
#include <opencv2/core/utils/configuration.private.hpp>
#include <opencv2/core/utils/trace.private.hpp>
namespace cv {

@@ -6,6 +6,7 @@
#include <opencv2/core/utils/trace.hpp>
#include <opencv2/core/utils/trace.private.hpp>
#include <opencv2/core/utils/configuration.private.hpp>
#include <cstdarg> // va_start

@@ -267,19 +267,22 @@ UMat Mat::getUMat(int accessFlags, UMatUsageFlags usageFlags) const
UMat hdr;
if(!data)
return hdr;
Size wholeSize;
Point ofs;
locateROI(wholeSize, ofs);
Size sz(cols, rows);
if (ofs.x != 0 || ofs.y != 0)
if (data != datastart)
{
Mat src = *this;
int dtop = ofs.y;
int dbottom = wholeSize.height - src.rows - ofs.y;
int dleft = ofs.x;
int dright = wholeSize.width - src.cols - ofs.x;
src.adjustROI(dtop, dbottom, dleft, dright);
return src.getUMat(accessFlags, usageFlags)(cv::Rect(ofs.x, ofs.y, sz.width, sz.height));
Size wholeSize;
Point ofs;
locateROI(wholeSize, ofs);
Size sz(cols, rows);
if (ofs.x != 0 || ofs.y != 0)
{
Mat src = *this;
int dtop = ofs.y;
int dbottom = wholeSize.height - src.rows - ofs.y;
int dleft = ofs.x;
int dright = wholeSize.width - src.cols - ofs.x;
src.adjustROI(dtop, dbottom, dleft, dright);
return src.getUMat(accessFlags, usageFlags)(cv::Rect(ofs.x, ofs.y, sz.width, sz.height));
}
}
CV_Assert(data == datastart);
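With this restructuring the 2D ROI path (locateROI only supports 2D
matrices) is skipped whenever the Mat owns its full buffer, which is what
enables getUMat() for multi-dimensional Mat. A sketch of the newly
supported case:

    int sz[] = {1, 3, 224, 224};
    cv::Mat blob(4, sz, CV_32F);                // 4D blob: rows/cols are -1
    cv::UMat u = blob.getUMat(cv::ACCESS_READ); // previously failed in locateROI()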

@@ -21,6 +21,8 @@ ocv_warnings_disable(CMAKE_CXX_FLAGS -Wno-shadow -Wno-parentheses -Wmaybe-uninit
)
ocv_warnings_disable(CMAKE_CXX_FLAGS /wd4701 /wd4100)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src/ocl4dnn/include ${OPENCL_INCLUDE_DIRS})
if(MSVC)
add_definitions( -D_CRT_SECURE_NO_WARNINGS=1 )
ocv_warnings_disable(CMAKE_CXX_FLAGS /wd4244 /wd4267 /wd4018 /wd4355 /wd4800 /wd4251 /wd4996 /wd4146

@@ -297,6 +297,7 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN
CV_PROP String name; //!< Name of the layer instance, can be used for logging or other internal purposes.
CV_PROP String type; //!< Type name which was used for creating layer by layer factory.
CV_PROP int preferableTarget; //!< prefer target for layer forwarding
Layer();
explicit Layer(const LayerParams &params); //!< Initializes only #name, #type and #blobs fields.

@@ -0,0 +1,118 @@
#include "../perf_precomp.hpp"
#include "opencv2/ts/ocl_perf.hpp"
#include <opencv2/dnn/shape_utils.hpp>
#ifdef HAVE_OPENCL
namespace cvtest
{
namespace ocl
{
using std::tr1::tuple;
using std::tr1::get;
using std::tr1::make_tuple;
using std::make_pair;
using namespace perf;
using namespace testing;
using namespace cv;
using namespace cv::dnn;
enum {STRIDE_OFF = 1, STRIDE_ON = 2};
CV_ENUM(StrideSize, STRIDE_OFF, STRIDE_ON);
enum {GROUP_OFF = 1, GROUP_2 = 2};
CV_ENUM(GroupSize, GROUP_OFF, GROUP_2);
//Squared Size
#define SSZ(n) cv::Size(n, n)
typedef std::pair<MatShape, int> InpShapeNumOut;
typedef tuple<Size, InpShapeNumOut, GroupSize, StrideSize> ConvParam; //kernel_size, inp shape, groups, stride
typedef TestBaseWithParam<ConvParam> ConvolutionPerfTest;
static inline MatShape blobShape(int count, int nplanes, int height, int width)
{
int data[] = {count, nplanes, height, width};
return MatShape(data, data+4);
}
OCL_PERF_TEST_P( ConvolutionPerfTest, perf, Combine(
Values(Size(1, 1), Size(3, 3), Size(5, 5), Size(11, 11)),
Values(make_pair(blobShape(1, 4, 224, 224), 64),
make_pair(blobShape(1, 64, 112, 122), 128),
make_pair(blobShape(1, 256, 28, 28), 512)),
GroupSize::all(),
StrideSize::all())
)
{
RNG rng(0);
ConvParam params = GetParam();
int ksz = get<0>(params).width;
MatShape inpShape = get<1>(params).first;
int outCn = get<1>(params).second;
int groups = get<2>(params);
int stride = (ksz >= 11) ? 4 : (int)get<3>(params);
int inpCn = inpShape[1];
int wgtSize[] = { outCn, inpCn/groups, ksz, ksz };
int biasSize[] = { outCn, 1, 1, 1 };
const int wtype = CV_32F;
Mat wgtBlob(4, wgtSize, wtype), biasBlob(4, biasSize, wtype);
Mat inpBlob(4, &inpShape[0], wtype);
rng.fill(biasBlob, RNG::UNIFORM, -1, +1);
rng.fill(wgtBlob, RNG::UNIFORM, -1, +1);
rng.fill(inpBlob, RNG::UNIFORM, -1, +1);
LayerParams lp;
lp.set("num_output", outCn);
lp.set("group", groups);
lp.set("stride", stride);
lp.set("kernel_size", ksz);
lp.blobs.reserve(2);
lp.blobs.push_back(wgtBlob);
lp.blobs.push_back(biasBlob);
std::vector<Mat*> inpBlobs(1, &inpBlob);
std::vector<Mat> outBlobs, internalBlobs;
cv::setNumThreads(cv::getNumberOfCPUs());
Ptr<Layer> layer = cv::dnn::LayerFactory::createLayerInstance("Convolution", lp);
std::vector<MatShape> inputShapes(1, shape(inpBlob)), outShapes, internals;
layer->getMemoryShapes(inputShapes, 0, outShapes, internals);
for (int i = 0; i < outShapes.size(); i++)
{
outBlobs.push_back(Mat(outShapes[i], CV_32F));
}
for (int i = 0; i < internals.size(); i++)
{
internalBlobs.push_back(Mat());
if (total(internals[i]))
internalBlobs.back().create(internals[i], CV_32F);
}
layer->finalize(inpBlobs, outBlobs);
layer->preferableTarget = DNN_TARGET_OPENCL;
Mat inpBlob2D = inpBlob.reshape(1, outCn);
Mat wgtBlob2D = wgtBlob.reshape(1, outCn*(inpCn/groups));
Mat outBlob2D = outBlobs[0].reshape(1, outBlobs[0].size[0]);
declare.in(inpBlob2D, wgtBlob2D, WARMUP_RNG).out(outBlob2D).tbb_threads(cv::getNumThreads());
// warmup
layer->forward(inpBlobs, outBlobs, internalBlobs);
TEST_CYCLE()
{
layer->forward(inpBlobs, outBlobs, internalBlobs);
}
SANITY_CHECK_NOTHING();
}
}
}
#endif

@@ -40,7 +40,7 @@ public:
if (backend == DNN_BACKEND_DEFAULT && target == DNN_TARGET_OPENCL)
{
#if 0 //defined(HAVE_OPENCL)
#if defined(HAVE_OPENCL)
if (!cv::ocl::useOpenCL())
#endif
{

@@ -875,7 +875,7 @@ struct Net::Impl
if (preferableBackend == DNN_BACKEND_DEFAULT)
{
CV_Assert(preferableTarget == DNN_TARGET_CPU);
CV_Assert(preferableTarget == DNN_TARGET_CPU || preferableTarget == DNN_TARGET_OPENCL);
return;
}
@@ -1000,6 +1000,7 @@ struct Net::Impl
Ptr<Layer> layerPtr = ld.getLayerInstance();
{
layerPtr->finalize(ld.inputBlobs, ld.outputBlobs);
layerPtr->preferableTarget = preferableTarget;
#if 0
std::cout << "\toutputs:";
size_t noutputs = ld.outputBlobs.size();
@@ -1026,7 +1027,7 @@ struct Net::Impl
void fuseLayers(const std::vector<LayerPin>& blobsToKeep_)
{
if( !fusion || preferableBackend == DNN_BACKEND_HALIDE )
if( !fusion || !(preferableBackend == DNN_BACKEND_DEFAULT && preferableTarget == DNN_TARGET_CPU))
return;
CV_TRACE_FUNCTION();
@@ -1236,7 +1237,6 @@ struct Net::Impl
}
layersTimings.resize(lastLayerId + 1, 0);
fuseLayers(blobsToKeep_);
}
@@ -1402,7 +1402,7 @@ struct Net::Impl
}
else
{
CV_Assert(preferableTarget == DNN_TARGET_CPU);
CV_Assert(preferableTarget == DNN_TARGET_CPU || preferableTarget == DNN_TARGET_OPENCL);
}
return ld.outputBlobs[pin.oid];
}
@@ -1963,12 +1963,12 @@ int64 Net::getPerfProfile(std::vector<double>& timings)
Importer::~Importer() {}
Layer::Layer() {}
Layer::Layer() { preferableTarget = DNN_TARGET_CPU; }
Layer::Layer(const LayerParams &params)
: blobs(params.blobs), name(params.name), type(params.type)
{
preferableTarget = DNN_TARGET_CPU;
}
void Layer::setParamsFrom(const LayerParams &params)

@@ -43,6 +43,7 @@
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
namespace cv
{
@@ -174,11 +175,62 @@ public:
}
};
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
int cAxis = clamp(axis, inputs[0]->dims);
if (!(cAxis == 1 && outputs[0].dims == 4 && !padding))
return false;
int bottom_concat_axis;
int concat_size = inputs[0]->size[2] * inputs[0]->size[3];
int top_concat_axis = outputs[0].size[1];
int offset_concat_axis = 0;
UMat inpMat, outMat;
outMat = outputs[0].getUMat(ACCESS_WRITE);
ocl::Kernel kernel;
String buildopt = String("-DDtype=") + ocl::typeToStr(inputs[0]->type()) + String(" ");
if (!kernel.create("concat", ocl::dnn::concat_oclsrc, buildopt))
return false;
for (size_t i = 0; i < inputs.size(); i++)
{
inpMat = inputs[i]->getUMat(ACCESS_READ);
bottom_concat_axis = inputs[i]->size[1];
size_t nthreads = inputs[i]->total();
kernel.set(0, (int)nthreads);
kernel.set(1, ocl::KernelArg::PtrReadOnly(inpMat));
kernel.set(2, (int)inputs[i]->size[0]);
kernel.set(3, (int)concat_size);
kernel.set(4, (int)top_concat_axis);
kernel.set(5, (int)bottom_concat_axis);
kernel.set(6, (int)offset_concat_axis);
kernel.set(7, ocl::KernelArg::PtrWriteOnly(outMat));
if (!kernel.run(1, &nthreads, NULL, false))
return false;
offset_concat_axis += bottom_concat_axis;
}
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
int cAxis = clamp(axis, inputs[0]->dims);
Mat& outMat = outputs[0];

@@ -47,6 +47,10 @@
#include "opencv2/core/hal/intrin.hpp"
#include <iostream>
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
namespace dnn
@@ -150,6 +154,11 @@ public:
Ptr<BatchNormLayer> bnorm;
Ptr<ScaleLayer> scaleLayer;
#ifdef HAVE_OPENCL
Ptr<OCL4DNNConvSpatial<float> > convolutionOp;
std::vector<UMat> umat_blobs;
#endif
MatShape computeColRowShape(const MatShape &inpShape, const MatShape &outShape) const
{
Size out(outShape[3], outShape[2]);
@@ -636,6 +645,42 @@ public:
}
};
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
int group = inputs[0]->size[1] / umat_blobs[0].size[1];
if (convolutionOp.empty())
{
OCL4DNNConvConfig config;
config.in_shape = shape(*inputs[0]);
config.out_shape = shape(outputs[0]);
config.kernel = kernel;
config.pad = pad;
config.stride = stride;
config.dilation = dilation;
config.group = group;
config.bias_term = (hasBias()) ? true : false;
convolutionOp = Ptr<OCL4DNNConvSpatial<float> >(new OCL4DNNConvSpatial<float>(config));
}
for (size_t ii = 0; ii < outputs.size(); ii++)
{
UMat inpMat, outMat;
inpMat = inputs[ii]->getUMat(ACCESS_READ);
outMat = outputs[ii].getUMat(ACCESS_WRITE);
int batch_size = inpMat.size[0];
if (!convolutionOp->Forward(inpMat, umat_blobs[0], hasBias() ? umat_blobs[1] : UMat(),
outMat, batch_size))
return false;
}
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
@@ -649,6 +694,10 @@ public:
int ngroups = inputs[0]->size[1]/blobs[0].size[1];
CV_Assert(outputs[0].size[1] % ngroups == 0);
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
int k, outCn = blobs[0].size[0];
if( weightsMat.empty() )
@@ -1203,8 +1252,17 @@ static void initConvDeconvLayerFromCaffe(Ptr<BaseConvolutionLayer> l, const Laye
Ptr<BaseConvolutionLayer> ConvolutionLayer::create(const LayerParams &params)
{
Ptr<BaseConvolutionLayer> l(new ConvolutionLayerImpl);
ConvolutionLayerImpl* conv_ptr = new ConvolutionLayerImpl;
Ptr<BaseConvolutionLayer> l(conv_ptr);
initConvDeconvLayerFromCaffe(l, params);
#ifdef HAVE_OPENCL
size_t n = params.blobs.size();
conv_ptr->umat_blobs.resize(n);
for (int i = 0; i < n; i++)
conv_ptr->umat_blobs[i] = params.blobs[i].getUMat(ACCESS_READ);
#endif
return l;
}

@@ -41,9 +41,12 @@
//M*/
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencv2/imgproc.hpp"
#include <opencv2/dnn/shape_utils.hpp>
#include "opencl_kernels_dnn.hpp"
#include <iostream>
namespace cv
{
@@ -158,6 +161,10 @@ public:
{
CV_TRACE_FUNCTION();
CV_OCL_RUN((this->preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
func.applyOCL(inputs, outputs, internals))
for (size_t i = 0; i < inputs.size(); i++)
{
const Mat &src = *inputs[i];
@@ -191,6 +198,13 @@ public:
bool run_parallel;
};
#ifdef HAVE_OPENCL
static String oclGetTMacro(const UMat &m)
{
return String("-DT=") + ocl::typeToStr(m.type()) + String(" ");
}
#endif
struct ReLUFunctor
{
typedef ReLULayer Layer;
@@ -230,6 +244,46 @@ struct ReLUFunctor
}
}
#ifdef HAVE_OPENCL
bool initKernel(ocl::Kernel &ker, const UMat &src) const
{
const char *buildoptSlope = (slope == 0) ? "-DRELU_NO_SLOPE" : "";
String buildopt = oclGetTMacro(src) + buildoptSlope;
if (!ker.create("ReLUForward", ocl::dnn::activations_oclsrc, buildopt))
return false;
if (slope != 0)
ker.set(3, (float)slope);
return true;
}
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
size_t wgSize = ocl::Device::getDefault().maxWorkGroupSize();
for (size_t i = 0; i < inputs.size(); i++)
{
UMat src, dst;
inputs[i]->copyTo(src);
dst = outputs[i].getUMat(ACCESS_WRITE);
CV_Assert(src.isContinuous() && dst.isContinuous() && !src.offset && !dst.offset);
ocl::Kernel ker;
CV_Assert(initKernel(ker, src));
ker.set(0, (int)src.total());
ker.set(1, ocl::KernelArg::PtrReadOnly(src));
ker.set(2, ocl::KernelArg::PtrWriteOnly(dst));
size_t gSize = src.total();
CV_Assert(ker.run(1, &gSize, &wgSize, false));
}
return true;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -293,6 +347,14 @@ struct ReLU6Functor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -320,6 +382,14 @@ struct TanHFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -347,6 +417,14 @@ struct SigmoidFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -376,6 +454,14 @@ struct ELUFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -403,6 +489,14 @@ struct AbsValFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -430,6 +524,14 @@ struct BNLLFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -479,6 +581,14 @@ struct PowerFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -524,18 +634,18 @@ struct ChannelsPReLUFunctor
v_float32x4 s4 = v_setall_f32(s), z = v_setzero_f32();
for( ; i <= len - 16; i += 16 )
{
v_float32x4 x0 = v_load(ptr + i);
v_float32x4 x1 = v_load(ptr + i + 4);
v_float32x4 x2 = v_load(ptr + i + 8);
v_float32x4 x3 = v_load(ptr + i + 12);
v_float32x4 x0 = v_load(srcptr + i);
v_float32x4 x1 = v_load(srcptr + i + 4);
v_float32x4 x2 = v_load(srcptr + i + 8);
v_float32x4 x3 = v_load(srcptr + i + 12);
x0 = v_select(x0 >= z, x0, x0*s4);
x1 = v_select(x1 >= z, x1, x1*s4);
x2 = v_select(x2 >= z, x2, x2*s4);
x3 = v_select(x3 >= z, x3, x3*s4);
v_store(ptr + i, x0);
v_store(ptr + i + 4, x1);
v_store(ptr + i + 8, x2);
v_store(ptr + i + 12, x3);
v_store(dstptr + i, x0);
v_store(dstptr + i + 4, x1);
v_store(dstptr + i + 8, x2);
v_store(dstptr + i + 12, x3);
}
#endif
for( ; i < len; i++ )
@@ -546,6 +656,14 @@ struct ChannelsPReLUFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{

@@ -43,8 +43,13 @@
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
#include <opencv2/dnn/shape_utils.hpp>
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
namespace dnn
@@ -55,6 +60,11 @@ class FullyConnectedLayerImpl : public InnerProductLayer
public:
enum { VEC_ALIGN = 8 };
#ifdef HAVE_OPENCL
Ptr<OCL4DNNInnerProduct<float> > innerProductOp;
std::vector<UMat> umat_blobs;
#endif
FullyConnectedLayerImpl(const LayerParams& params)
{
setParamsFrom(params);
@@ -84,6 +94,12 @@ public:
biasMat = blobs[1] = blobs[1].reshape(1, 1);
else
biasMat = Mat::zeros(1, numOutput, weightsMat.type());
#ifdef HAVE_OPENCL
size_t n = blobs.size();
umat_blobs.resize(n);
for (int i = 0; i < n; i++) umat_blobs[i] = blobs[i].getUMat(ACCESS_READ);
#endif
}
bool getMemoryShapes(const std::vector<MatShape> &inputs,
@@ -238,11 +254,78 @@ public:
bool useAVX2;
};
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &input, std::vector<Mat> &output)
{
int axisCan = clamp(axis, input[0]->dims);
int numOutput = blobs[0].size[0];
int innerSize = blobs[0].size[1];
int outerSize = input[0]->total(0, axisCan);
bool ret = true;
if (innerProductOp.empty())
{
OCL4DNNInnerProductConfig config;
config.num_output = numOutput;
config.bias_term = bias;
config.M = outerSize;
config.K = innerSize;
innerProductOp = Ptr<OCL4DNNInnerProduct<float> >(new OCL4DNNInnerProduct<float>(config));
}
UMat biasOnesMat = UMat::ones(outerSize, 1, umat_blobs[0].type());
for (size_t i = 0; i < input.size(); i++)
{
UMat srcMat, dstMat;
srcMat = input[i]->getUMat(ACCESS_READ);
dstMat = output[i].getUMat(ACCESS_WRITE);
dstMat.setTo(0.0f);
if (!innerProductOp->Forward(srcMat, umat_blobs[0], (bias) ? umat_blobs[1] : UMat(), dstMat))
{
ret = false;
break;
}
if (bias && (outerSize > 1))
{
UMat& biases = umat_blobs[1];
cv::gemm(biasOnesMat, biases, 1, dstMat, 1, dstMat, 0);
}
}
if (ret) return true;
UMat& weights = umat_blobs[0];
for (size_t i = 0; i < input.size(); i++)
{
UMat srcMat, dstMat;
srcMat = input[i]->reshape(1, outerSize).getUMat(ACCESS_READ);
dstMat = output[i].reshape(1, outerSize).getUMat(ACCESS_WRITE);
cv::gemm(srcMat, weights, 1, noArray(), 0, dstMat, GEMM_2_T);
if (bias)
{
UMat& biases = umat_blobs[1];
cv::gemm(biasOnesMat, biases, 1, dstMat, 1, dstMat, 0);
}
}
return true;
}
#endif
void forward(std::vector<Mat*> &input, std::vector<Mat> &output, std::vector<Mat> &)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(input, output))
int axisCan = clamp(axis, input[0]->dims);
int outerSize = input[0]->total(0, axisCan);

@@ -51,6 +51,10 @@
#include "layers/layers_common.simd_declarations.hpp"
#undef CV_CPU_OPTIMIZATION_DECLARATIONS_ONLY
#ifdef HAVE_OPENCL
#include "ocl4dnn.hpp"
#endif
namespace cv
{
namespace dnn

@@ -46,8 +46,13 @@
#include "opencv2/imgproc.hpp"
#include "opencv2/dnn/shape_utils.hpp"
#include "opencv2/core/hal/hal.hpp"
#include "opencl_kernels_dnn.hpp"
#include <algorithm>
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
namespace dnn
@@ -78,18 +83,64 @@ public:
normBySize = params.get<bool>("norm_by_size", true);
}
#ifdef HAVE_OPENCL
Ptr<OCL4DNNLRN<float> > lrnOp;
#endif
virtual bool supportBackend(int backendId)
{
return backendId == DNN_BACKEND_DEFAULT ||
backendId == DNN_BACKEND_HALIDE && haveHalide();
}
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
if (lrnOp.empty())
{
OCL4DNNLRNConfig config;
config.lrn_type = type == CHANNEL_NRM ?
LRNParameter_NormRegion_ACROSS_CHANNELS :
LRNParameter_NormRegion_WITHIN_CHANNEL;
CHECK_EQ(size % 2, 1) << "LRN only supports odd values for local_size";
config.local_size = size;
config.alpha = alpha;
config.beta = beta;
config.k = bias;
CHECK_EQ(4, inputs[0]->dims) << "Input must have 4 axes, "
<< "corresponding to (num, channels, height, width)";
config.batch_size = inputs[0]->size[0];
config.channels = inputs[0]->size[1];
config.height = inputs[0]->size[2];
config.width = inputs[0]->size[3];
config.norm_by_size = normBySize;
lrnOp = Ptr<OCL4DNNLRN<float> >(new OCL4DNNLRN<float>(config));
}
UMat inpMat, outMat;
inpMat = inputs[0]->getUMat(ACCESS_READ);
outMat = outputs[0].getUMat(ACCESS_WRITE);
if (!lrnOp->Forward(inpMat, outMat))
return false;
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_Assert(inputs.size() == outputs.size());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
for (int i = 0; i < inputs.size(); i++)
{
CV_Assert(inputs[i]->dims == 4);

@@ -44,10 +44,14 @@
#include "layers_common.hpp"
#include "opencv2/core/hal/intrin.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
#include <float.h>
#include <algorithm>
using std::max;
using std::min;
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
@@ -81,6 +85,10 @@ public:
ceilMode = params.get<bool>("ceil_mode", true);
}
#ifdef HAVE_OPENCL
Ptr<OCL4DNNPool<float> > poolOp;
#endif
void finalize(const std::vector<Mat*> &inputs, std::vector<Mat> &outputs)
{
CV_Assert(inputs.size() == 1);
@@ -104,11 +112,59 @@
type == PoolingLayer::AVE && !pad.width && !pad.height);
}
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
if (poolOp.empty())
{
OCL4DNNPoolConfig config;
config.in_shape = shape(*inputs[0]);
config.out_shape = shape(outputs[0]);
config.kernel = kernel;
config.pad = pad;
config.stride = stride;
config.channels = inputs[0]->size[1];
config.pool_method = type == MAX ? LIBDNN_POOLING_METHOD_MAX :
(type == AVE ? LIBDNN_POOLING_METHOD_AVE :
LIBDNN_POOLING_METHOD_STO);
poolOp = Ptr<OCL4DNNPool<float> >(new OCL4DNNPool<float>(config));
}
for (size_t ii = 0; ii < inputs.size(); ii++)
{
UMat inpMat, outMat, maskMat;
inpMat = inputs[ii]->getUMat(ACCESS_READ);
if (type == MAX)
{
outMat = outputs[2 * ii].getUMat(ACCESS_WRITE);
maskMat = outputs[2 * ii + 1].getUMat(ACCESS_WRITE);
} else {
outMat = outputs[ii].getUMat(ACCESS_WRITE);
maskMat = UMat();
}
CV_Assert(inpMat.offset == 0 && outMat.offset == 0);
if (!poolOp->Forward(inpMat, outMat, maskMat))
return false;
}
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
for (size_t ii = 0; ii < inputs.size(); ii++)
{
switch (type)

@@ -43,9 +43,13 @@
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
#include <algorithm>
#include <stdlib.h>
using std::max;
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
@@ -63,6 +67,10 @@ public:
setParamsFrom(params);
}
#ifdef HAVE_OPENCL
Ptr<OCL4DNNSoftmax<float> > softmaxOp;
#endif
bool getMemoryShapes(const std::vector<MatShape> &inputs,
const int requiredOutputs,
std::vector<MatShape> &outputs,
@@ -82,11 +90,91 @@ public:
backendId == DNN_BACKEND_HALIDE && haveHalide() && axisRaw == 1;
}
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
if (softmaxOp.empty())
{
OCL4DNNSoftmaxConfig config;
config.in_shape = shape(*inputs[0]);
config.axis = axisRaw;
config.channels = inputs[0]->size[axisRaw];
softmaxOp = Ptr<OCL4DNNSoftmax<float> >(new OCL4DNNSoftmax<float>(config));
}
UMat srcMat, dstMat;
srcMat = inputs[0]->getUMat(ACCESS_READ);
dstMat = outputs[0].getUMat(ACCESS_WRITE);
if (!logSoftMax && softmaxOp->Forward(srcMat, dstMat))
return true;
const Mat &src = *inputs[0];
UMat bufMat = internals[0].getUMat(ACCESS_WRITE);
srcMat.copyTo(dstMat);
int axis = clamp(axisRaw, src.dims);
size_t outerSize = src.total(0, axis);
size_t channels = src.size[axis];
size_t innerSize = src.total(axis + 1);
String buildOpts = String("-DT=") + ocl::typeToStr(src.type());
ocl::Kernel kmax, ksub, ksum, kdiv;
if (!kmax.create("kernel_channel_max", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
if (!ksub.create("kernel_channel_subtract", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
if (!ksum.create("kernel_channel_sum", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
if (logSoftMax) buildOpts += " -DLOG_SOFTMAX ";
if (!kdiv.create("kernel_channel_div", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
size_t wgSize = ocl::Device::getDefault().maxWorkGroupSize();
size_t bufSize = internals[0].total();
size_t totalSize = src.total();
kmax.args((int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(dstMat), ocl::KernelArg::PtrReadWrite(bufMat));
if (!kmax.run(1, &bufSize, &wgSize, false))
return false;
ksub.args((int)totalSize, (int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(bufMat), ocl::KernelArg::PtrReadWrite(dstMat));
if (!ksub.run(1, &totalSize, &wgSize, false))
return false;
cv::exp(dstMat, dstMat);
ksum.args((int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(dstMat), ocl::KernelArg::PtrReadWrite(bufMat));
if (!ksum.run(1, &bufSize, &wgSize, false))
return false;
kdiv.args((int)totalSize, (int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(bufMat), ocl::KernelArg::PtrReadWrite(dstMat));
if (!kdiv.run(1, &totalSize, &wgSize, false))
return false;
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
const Mat &src = *inputs[0];
Mat &dst = outputs[0];

@@ -0,0 +1,62 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef _OPENCV_LIBDNN_COMMON_HPP_
#define _OPENCV_LIBDNN_COMMON_HPP_
#include "../../precomp.hpp"
#include "../../caffe/glog_emulator.hpp"
#include <opencv2/core/opencl/runtime/opencl_core.hpp>
#ifdef HAVE_OPENCL
// Macro to select the single (_float) or double (_double) precision kernel
#define CL_KERNEL_SELECT(kernel) kernel "_float"
#define OCL_CHECK(condition) \
do { \
cl_int error = (condition); \
CHECK_EQ(error, CL_SUCCESS) << " " << cv::ocl::getOpenCLErrorString(error); \
} while (0)
bool clOptionSupport(cv::String option);
#endif // HAVE_OPENCL
#endif
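A usage sketch for the OCL_CHECK macro above (illustrative):

    // Fails loudly, via the glog-emulated CHECK_EQ, with a readable
    // error string on anything other than CL_SUCCESS.
    OCL_CHECK(clFinish((cl_command_queue)cv::ocl::Queue::getDefault().ptr()));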

@@ -0,0 +1,854 @@
#ifndef _OPENCV_OCL4DNN_DEFAULT_KERNEL_CONFIG_HPP_
#define _OPENCV_OCL4DNN_DEFAULT_KERNEL_CONFIG_HPP_
const char *default_kernel_config_intel[] = {
// Below is the information for OpenCL based on which these configurations tuned
/*******************************************************************************
Number of platforms 1
Platform Name Intel(R) OpenCL
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 2.0
Platform Profile FULL_PROFILE
Platform Extensions cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_device_side_avc_motion_estimation cl_intel_driver_diagnostics cl_intel_media_block_io cl_intel_motion_estimation cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_subgroups cl_intel_subgroups_short cl_intel_va_api_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_fp16 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups
Platform Extensions function suffix INTEL
Platform Name Intel(R) OpenCL
Number of devices 1
Device Name Intel(R) HD Graphics
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 2.0
Driver Version r4.1.61547
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Profile FULL_PROFILE
Max compute units 72
Max clock frequency 950MHz
Device Partition (core)
Max number of sub-devices 0
Supported partition types by <unknown> (0x7FE000000000)
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Preferred work group size multiple 32
Preferred / native vector sizes
char 16 / 16
short 8 / 8
int 4 / 4
long 1 / 1
half 8 / 8 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Address bits 64, Little-Endian
Global memory size 26887677543 (25.04GiB)
Error Correction support No
Max memory allocation 4294959103 (4GiB)
Unified memory for Host and Device Yes
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing No
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 64 bytes
Global 64 bytes
Local 64 bytes
Max size for global variable 65536 (64KiB)
Preferred total size of global vars 4294959103 (4GiB)
Global Memory cache type Read/Write
Global Memory cache size 1572864
Global Memory cache line 64 bytes
Image support Yes
Max number of samplers per kernel 16
Max size for 1D images from buffer 268434943 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 4 bytes
Pitch alignment for 2D image buffers 4 bytes
Max 2D image size 16384x16384 pixels
Max 3D image size 16384x16384x2048 pixels
Max number of read image args 128
Max number of write image args 128
Max number of read/write image args 128
Max number of pipe args 16
Max active pipe reservations 1
Max pipe packet size 1024
Local memory type Local
Local memory size 65536 (64KiB)
Max constant buffer size 4294959103 (4GiB)
Max number of constant args 8
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 131072 (128KiB)
Max size 67108864 (64MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Profiling timer resolution 83ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
SPIR versions 1.2
printf() buffer size 4194304 (4MiB)
Built-in kernels block_motion_estimate_intel;block_advanced_motion_estimate_check_intel;block_advanced_motion_estimate_bidirectional_check_intel
Motion Estimation accelerator version (Intel) 2
Device Available Yes
Compiler Available Yes
Linker Available Yes
Device Extensions cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_device_side_avc_motion_estimation cl_intel_driver_diagnostics cl_intel_media_block_io cl_intel_motion_estimation cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_subgroups cl_intel_subgroups_short cl_intel_va_api_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_fp16 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [INTEL]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
********************************************************************************/
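// Key/value format (decoded from the fields of OCL4DNNConvConfig and the
// setupKernelByConfig() declaration in ocl4dnn.hpp; informative, not normative):
//   EU<units>_k<kernel WxH>_cn<input channels>_g<group>_s<stride>_d<dilation>
//   _b<bias_term>_in<input WxH>_p<pad>_num<batch>_M<output channels per group>
// The nine numbers in each value string appear to map onto
// setupKernelByConfig(x, y, z, kernelType, lx, ly, lz, swizzle, nullLocal).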
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","4 6 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M32","12 2 8 2 1 1 8 1 0 ",
"EU72_k7x7_cn3_g1_s2x2_d1x1_b1_in224x224_p3x3_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU72_k5x5_cn48_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M128","4 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn64_g1_s1x1_d1x1_b1_in64x64_p1x1_num2_M192","2 7 16 2 1 1 16 1 0 ",
"EU72_k5x5_cn16_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M48","4 3 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","4 6 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M96","1 8 32 5 1 8 1 1 0 ",
"EU72_k11x7_cn3_g1_s3x4_d1x1_b1_in64x64_p3x2_num1_M64","4 1 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M64","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","4 6 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn4_g1_s1x1_d1x1_b1_in256x256_p1x1_num1_M4","14 1 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M4","4 4 8 2 1 1 8 1 0 ",
"EU72_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M128","4 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M192","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M192","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn96_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M208","2 6 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M384","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn160_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M320","2 5 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M256","1 8 32 5 1 8 1 1 0 ",
"EU72_k5x1_cn32_g1_s1x1_d1x1_b0_in64x64_p2x0_num1_M32","4 6 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn16_g1_s1x1_d1x1_b0_in256x256_p0x0_num1_M4","12 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn64_g1_s1x1_d1x1_b1_in64x64_p0x0_num1_M64","2 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M16","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn32_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M128","1 16 32 5 1 16 1 1 0 ",
"EU72_k3x3_cn32_g1_s1x1_d2x2_b1_in64x64_p2x2_num1_M32","3 6 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn32_g1_s1x1_d16x16_b1_in64x64_p16x16_num1_M32","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn128_g1_s1x1_d1x1_b0_in32x32_p0x0_num1_M512","2 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn192_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M384","2 7 16 2 1 1 16 1 0 ",
"EU72_k5x4_cn6_g3_s3x2_d1x1_b1_in128x80_p1x0_num2_M4","1 1 1 4 1 1 1 0 1 ",
"EU72_k5x5_cn32_g1_s1x1_d1x1_b1_in32x32_p2x2_num2_M96","4 5 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn64_g1_s1x1_d1x1_b1_in64x64_p1x1_num1_M192","10 2 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn128_g1_s1x1_d1x1_b1_in32x32_p1x1_num1_M192","6 4 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn4_g1_s1x1_d1x1_b0_in256x256_p0x0_num1_M16","2 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M96","8 3 8 2 1 1 8 1 0 ",
"EU72_k5x5_cn16_g1_s1x1_d1x1_b1_in32x32_p2x2_num1_M32","8 1 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M384","4 7 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn128_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M256","2 6 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn96_g1_s1x1_d1x1_b1_in32x32_p1x1_num1_M128","6 4 16 2 1 1 16 1 0 ",
"EU72_k5x5_cn24_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M64","4 4 16 2 1 1 16 1 0 ",
"EU72_k5x5_cn16_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M48","4 3 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M5","2 3 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M24","8 2 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn128_g1_s1x1_d1x1_b0_in32x32_p1x1_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn96_g1_s1x1_d1x1_b1_in32x32_p1x1_num2_M128","2 7 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M32","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M112","8 2 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","1 8 32 5 1 8 1 1 0 ",
"EU72_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M128","4 3 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn64_g1_s1x1_d1x1_b1_in64x64_p0x0_num2_M64","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn64_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M16","2 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M144","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","8 2 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn16_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M64","1 16 32 5 1 16 1 1 0 ",
"EU72_k3x3_cn112_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M224","2 7 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M256","4 6 8 2 1 1 8 1 0 ",
"EU72_k5x5_cn32_g1_s1x1_d1x1_b1_in32x32_p2x2_num1_M96","4 3 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn512_g1_s2x2_d1x1_b0_in32x32_p0x0_num1_M256","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn128_g1_s1x1_d1x1_b1_in32x32_p1x1_num2_M192","10 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M64","12 2 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn384_g2_s1x1_d1x1_b1_in16x16_p1x1_num1_M128","2 5 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M48","4 6 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M48","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M256","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","2 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn144_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M288","2 5 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn1024_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M256","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M96","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn512_g1_s2x2_d1x1_b0_in32x32_p0x0_num1_M1024","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn2048_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M512","4 6 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn512_g1_s1x1_d1x1_b0_in16x16_p1x1_num1_M512","2 5 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M16","8 2 8 2 1 1 8 1 0 ",
"EU72_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M64","4 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","8 3 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn144_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M288","2 7 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn16_g1_s1x1_d1x1_b1_in128x128_p1x1_num1_M16","2 5 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn32_g1_s1x1_d8x8_b1_in64x64_p8x8_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn64_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M4","8 3 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn128_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M256","2 7 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn256_g1_s1x1_d1x1_b0_in16x16_p1x1_num1_M256","2 5 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn112_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M224","2 5 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU72_k2x2_cn16_g1_s2x2_d1x1_b0_in256x256_p0x0_num1_M16","6 4 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M192","4 6 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn1024_g1_s2x2_d1x1_b0_in16x16_p0x0_num1_M512","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn384_g2_s1x1_d1x1_b1_in16x16_p1x1_num1_M192","2 5 16 2 1 1 16 1 0 ",
"EU72_k5x5_cn96_g2_s1x1_d1x1_b1_in32x32_p2x2_num1_M128","4 3 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","8 2 8 2 1 1 8 1 0 ",
"EU72_k2x2_cn64_g1_s2x2_d1x1_b0_in128x128_p0x0_num1_M32","8 3 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn64_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M256","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M32","12 2 8 2 1 1 8 1 0 ",
"EU72_k5x5_cn16_g1_s1x1_d1x1_b1_in32x32_p2x2_num2_M32","4 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M16","12 1 8 2 1 1 8 1 0 ",
"EU72_k11x11_cn3_g1_s4x4_d1x1_b1_in224x224_p0x0_num1_M96","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M256","4 7 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn192_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M384","2 5 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M16","12 1 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn256_g1_s2x2_d1x1_b0_in64x64_p0x0_num1_M512","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M16","2 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M192","4 6 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M96","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","12 1 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M64","12 2 8 2 1 1 8 1 0 ",
"EU72_k3x3_cn256_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M384","2 7 16 2 1 1 16 1 0 ",
"EU72_k5x5_cn24_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M64","4 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M24","12 1 8 2 1 1 8 1 0 ",
"EU72_k5x5_cn48_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M128","4 2 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","4 6 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M144","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn32_g1_s1x1_d4x4_b1_in64x64_p4x4_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn3_g1_s2x2_d1x1_b1_in256x256_p1x1_num1_M13","1 1 1 4 1 1 1 0 1 ",
"EU72_k3x3_cn32_g1_s1x1_d1x1_b1_in64x64_p1x1_num1_M32","6 4 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M32","1 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn64_g1_s1x1_d1x1_b0_in64x64_p1x1_num1_M64","2 7 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M1024","2 8 32 5 1 8 1 1 0 ",
"EU72_k3x3_cn160_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M320","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x5_cn32_g1_s1x1_d1x1_b1_in64x64_p0x2_num1_M32","4 6 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn64_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","1 16 32 5 1 16 1 1 0 ",
"EU72_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","4 6 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b0_in32x32_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M64","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M64","12 2 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M128","2 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","8 3 8 2 1 1 8 1 0 ",
"EU72_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M32","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M112","1 8 32 5 1 8 1 1 0 ",
"EU72_k4x4_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M2","1 3 16 2 1 1 16 1 0 ",
"EU72_k1x1_cn1024_g1_s2x2_d1x1_b0_in16x16_p0x0_num1_M2048","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn256_g1_s2x2_d1x1_b0_in64x64_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU72_k7x7_cn3_g1_s2x2_d1x1_b1_in224x224_p3x3_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU72_k1x1_cn512_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M2048","1 8 32 5 1 8 1 1 0 ",
"EU72_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M64","8 1 16 2 1 1 16 1 0 ",
"EU72_k3x3_cn96_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M208","2 7 16 2 1 1 16 1 0 ",
// Below is the OpenCL platform/device information on which these configurations were tuned
/*******************************************************************************
Number of platforms 1
Platform Name Intel(R) OpenCL
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 2.0
Platform Profile FULL_PROFILE
Platform Extensions cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_driver_diagnostics cl_intel_motion_estimation cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_subgroups cl_intel_subgroups_short cl_intel_va_api_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_fp16 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups
Platform Extensions function suffix INTEL
Platform Name Intel(R) OpenCL
Number of devices 1
Device Name Intel(R) HD Graphics
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 2.0
Driver Version 16.5.56875
Device OpenCL C Version OpenCL C 2.0 ( using IGC )
Device Type GPU
Device Profile FULL_PROFILE
Max compute units 48
Max clock frequency 950MHz
Device Partition (core)
Max number of sub-devices 0
Supported partition types by <unknown> (0x7F4B00000000)
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Preferred work group size multiple 32
Preferred / native vector sizes
char 16 / 16
short 8 / 8
int 4 / 4
long 1 / 1
half 8 / 8 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Address bits 64, Little-Endian
Global memory size 13361912218 (12.44GiB)
Error Correction support No
Max memory allocation 4294959103 (4GiB)
Unified memory for Host and Device Yes
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing No
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 64 bytes
Global 64 bytes
Local 64 bytes
Max size for global variable 65536 (64KiB)
Preferred total size of global vars 4294959103 (4GiB)
Global Memory cache type Read/Write
Global Memory cache size 1048576
Global Memory cache line 64 bytes
Image support Yes
Max number of samplers per kernel 16
Max size for 1D images from buffer 268434943 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 4 bytes
Pitch alignment for 2D image buffers 4 bytes
Max 2D image size 16384x16384 pixels
Max 3D image size 16384x16384x2048 pixels
Max number of read image args 128
Max number of write image args 128
Max number of read/write image args 128
Max number of pipe args 16
Max active pipe reservations 1
Max pipe packet size 1024
Local memory type Local
Local memory size 65536 (64KiB)
Max constant buffer size 4294959103 (4GiB)
Max number of constant args 8
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 131072 (128KiB)
Max size 67108864 (64MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Profiling timer resolution 83ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
SPIR versions 1.2
printf() buffer size 4194304 (4MiB)
Built-in kernels block_motion_estimate_intel;block_advanced_motion_estimate_check_intel;block_advanced_motion_estimate_bidirectional_check_intel
Motion Estimation accelerator version (Intel) 2
Device Available Yes
Compiler Available Yes
Linker Available Yes
Device Extensions cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_driver_diagnostics cl_intel_motion_estimation cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_subgroups cl_intel_subgroups_short cl_intel_va_api_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_fp16 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [INTEL]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
********************************************************************************/
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","8 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M64","8 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn32_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M128","1 16 32 5 1 16 1 1 0 ",
"EU48_k5x5_cn16_g1_s1x1_d1x1_b1_in32x32_p2x2_num1_M32","8 1 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M144","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M96","1 16 32 5 1 16 1 1 0 ",
"EU48_k3x3_cn128_g1_s1x1_d1x1_b0_in32x32_p1x1_num1_M128","6 4 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M128","2 8 32 5 1 8 1 1 0 ",
"EU48_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M64","8 1 16 2 1 1 16 1 0 ",
"EU48_k2x2_cn16_g1_s2x2_d1x1_b0_in256x256_p0x0_num1_M16","2 7 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn4_g1_s1x1_d1x1_b1_in256x256_p1x1_num1_M4","6 4 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn128_g1_s1x1_d1x1_b0_in32x32_p0x0_num1_M512","2 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M112","8 3 8 2 1 1 8 1 0 ",
"EU48_k3x3_cn512_g1_s1x1_d1x1_b0_in16x16_p1x1_num1_M512","2 7 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M64","2 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M384","4 6 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M16","8 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M96","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M1024","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M192","4 7 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn160_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M320","2 7 16 2 1 1 16 1 0 ",
"EU48_k7x7_cn3_g1_s2x2_d1x1_b1_in224x224_p3x3_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU48_k5x5_cn16_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M48","4 2 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M256","2 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn64_g1_s1x1_d1x1_b1_in64x64_p1x1_num1_M192","2 8 16 2 1 1 16 1 0 ",
"EU48_k11x11_cn3_g1_s4x4_d1x1_b1_in224x224_p0x0_num1_M96","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M112","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","12 1 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s2x2_d1x1_b0_in32x32_p0x0_num1_M256","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","12 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M64","8 2 8 2 1 1 8 1 0 ",
"EU48_k3x3_cn128_g1_s1x1_d1x1_b1_in32x32_p1x1_num2_M192","2 7 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn128_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M256","2 5 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn16_g1_s1x1_d1x1_b0_in256x256_p0x0_num1_M4","8 3 8 2 1 1 8 1 0 ",
"EU48_k1x5_cn32_g1_s1x1_d1x1_b1_in64x64_p0x2_num1_M32","4 7 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M256","4 7 8 2 1 1 8 1 0 ",
"EU48_k3x3_cn3_g1_s2x2_d1x1_b1_in256x256_p1x1_num1_M13","1 1 1 4 1 1 1 0 1 ",
"EU48_k11x7_cn3_g1_s3x4_d1x1_b1_in64x64_p3x2_num1_M64","4 1 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M96","8 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M16","1 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn32_g1_s1x1_d2x2_b1_in64x64_p2x2_num1_M32","3 3 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn32_g1_s1x1_d8x8_b1_in64x64_p8x8_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M96","2 8 32 5 1 8 1 1 0 ",
"EU48_k2x2_cn64_g1_s2x2_d1x1_b0_in128x128_p0x0_num1_M32","4 4 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","4 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b0_in32x32_p0x0_num1_M128","2 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn16_g1_s1x1_d1x1_b1_in128x128_p1x1_num1_M16","2 7 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn4_g1_s1x1_d1x1_b0_in256x256_p0x0_num1_M16","2 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn384_g2_s1x1_d1x1_b1_in16x16_p1x1_num1_M128","6 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M4","4 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M144","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M384","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn256_g1_s2x2_d1x1_b0_in64x64_p0x0_num1_M128","1 16 32 5 1 16 1 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M192","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn1024_g1_s2x2_d1x1_b0_in16x16_p0x0_num1_M2048","1 16 32 5 1 16 1 1 0 ",
"EU48_k3x3_cn192_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M384","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M128","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn16_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M64","1 16 32 5 1 16 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","4 7 8 2 1 1 8 1 0 ",
"EU48_k3x3_cn384_g2_s1x1_d1x1_b1_in16x16_p1x1_num1_M192","2 5 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn96_g1_s1x1_d1x1_b1_in32x32_p1x1_num1_M128","6 4 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","8 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M64","12 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M64","2 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn2048_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M512","4 7 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M64","12 2 8 2 1 1 8 1 0 ",
"EU48_k3x3_cn112_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M224","1 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn256_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M384","2 7 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn32_g1_s1x1_d4x4_b1_in64x64_p4x4_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M256","1 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn192_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M384","2 4 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn144_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M288","2 4 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M48","4 6 8 2 1 1 8 1 0 ",
"EU48_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M64","8 1 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","12 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","2 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M192","4 6 8 2 1 1 8 1 0 ",
"EU48_k5x5_cn96_g2_s1x1_d1x1_b1_in32x32_p2x2_num1_M128","4 5 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn256_g1_s1x1_d1x1_b0_in16x16_p1x1_num1_M256","2 6 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","8 3 8 2 1 1 8 1 0 ",
"EU48_k5x5_cn16_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M48","4 2 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn64_g1_s1x1_d1x1_b0_in64x64_p1x1_num1_M64","10 2 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","4 6 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","4 5 8 2 1 1 8 1 0 ",
"EU48_k3x3_cn96_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M208","2 5 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M256","4 6 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M2048","2 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M48","4 6 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn64_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M256","1 16 32 5 1 16 1 1 0 ",
"EU48_k3x3_cn112_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M224","2 7 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M32","1 8 32 5 1 8 1 1 0 ",
"EU48_k5x1_cn32_g1_s1x1_d1x1_b0_in64x64_p2x0_num1_M32","2 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn64_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn144_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M288","2 7 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn128_g1_s1x1_d1x1_b1_in32x32_p1x1_num1_M192","2 7 16 2 1 1 16 1 0 ",
"EU48_k5x5_cn16_g1_s1x1_d1x1_b1_in32x32_p2x2_num2_M32","4 3 16 2 1 1 16 1 0 ",
"EU48_k5x5_cn32_g1_s1x1_d1x1_b1_in32x32_p2x2_num2_M96","4 2 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn96_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M208","2 5 16 2 1 1 16 1 0 ",
"EU48_k5x5_cn32_g1_s1x1_d1x1_b1_in32x32_p2x2_num1_M96","4 2 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M24","12 1 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn64_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M16","4 7 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn256_g1_s2x2_d1x1_b0_in64x64_p0x0_num1_M512","2 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn1024_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M256","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M32","1 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn160_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M320","2 8 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn64_g1_s1x1_d1x1_b1_in64x64_p1x1_num2_M192","6 4 16 2 1 1 16 1 0 ",
"EU48_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M128","4 3 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","4 6 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","1 8 32 5 1 8 1 1 0 ",
"EU48_k3x3_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M5","2 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M192","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M32","8 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn64_g1_s1x1_d1x1_b1_in64x64_p0x0_num2_M64","1 16 32 5 1 16 1 1 0 ",
"EU48_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M128","8 2 16 2 1 1 16 1 0 ",
"EU48_k7x7_cn3_g1_s2x2_d1x1_b1_in224x224_p3x3_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","4 6 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M32","1 16 32 5 1 16 1 1 0 ",
"EU48_k1x1_cn1024_g1_s2x2_d1x1_b0_in16x16_p0x0_num1_M512","1 8 32 5 1 8 1 1 0 ",
"EU48_k5x5_cn24_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M64","4 4 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M32","8 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","12 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M64","8 3 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn512_g1_s2x2_d1x1_b0_in32x32_p0x0_num1_M1024","1 8 32 5 1 8 1 1 0 ",
"EU48_k5x4_cn6_g3_s3x2_d1x1_b1_in128x80_p1x0_num2_M4","1 1 1 4 1 1 1 0 1 ",
"EU48_k3x3_cn128_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M256","2 7 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M24","8 2 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M16","12 1 8 2 1 1 8 1 0 ",
"EU48_k3x3_cn96_g1_s1x1_d1x1_b1_in32x32_p1x1_num2_M128","10 2 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn64_g1_s1x1_d1x1_b1_in64x64_p0x0_num1_M64","1 16 32 5 1 16 1 1 0 ",
"EU48_k3x3_cn32_g1_s1x1_d16x16_b1_in64x64_p16x16_num1_M32","1 16 32 5 1 16 1 1 0 ",
"EU48_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","4 7 8 2 1 1 8 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M16","12 2 8 2 1 1 8 1 0 ",
"EU48_k4x4_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M2","1 4 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn64_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M4","8 2 8 2 1 1 8 1 0 ",
"EU48_k5x5_cn24_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M64","4 2 16 2 1 1 16 1 0 ",
"EU48_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M16","1 8 32 5 1 8 1 1 0 ",
"EU48_k5x5_cn48_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M128","4 2 16 2 1 1 16 1 0 ",
"EU48_k3x3_cn32_g1_s1x1_d1x1_b1_in64x64_p1x1_num1_M32","2 8 16 2 1 1 16 1 0 ",
"EU48_k5x5_cn48_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M128","4 2 16 2 1 1 16 1 0 ",
// Below is the OpenCL platform/device information on which these configurations were tuned
/*******************************************************************************
Number of platforms 1
Platform Name Intel(R) OpenCL
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 2.0
Platform Profile FULL_PROFILE
Platform Extensions cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_device_side_avc_motion_estimation cl_intel_driver_diagnostics cl_intel_media_block_io cl_intel_motion_estimation cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_subgroups cl_intel_subgroups_short cl_intel_va_api_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_fp16 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups
Platform Extensions function suffix INTEL
Platform Name Intel(R) OpenCL
Number of devices 1
Device Name Intel(R) HD Graphics
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 2.0
Driver Version 16.5.59288
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Profile FULL_PROFILE
Max compute units 24
Max clock frequency 1050MHz
Device Partition (core)
Max number of sub-devices 0
Supported partition types by <unknown> (0x7F5100000000)
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Preferred work group size multiple 32
Preferred / native vector sizes
char 16 / 16
short 8 / 8
int 4 / 4
long 1 / 1
half 8 / 8 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Address bits 64, Little-Endian
Global memory size 6588802663 (6.136GiB)
Error Correction support No
Max memory allocation 3294401331 (3.068GiB)
Unified memory for Host and Device Yes
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing No
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 64 bytes
Global 64 bytes
Local 64 bytes
Max size for global variable 65536 (64KiB)
Preferred total size of global vars 3294401331 (3.068GiB)
Global Memory cache type Read/Write
Global Memory cache size 524288
Global Memory cache line 64 bytes
Image support Yes
Max number of samplers per kernel 16
Max size for 1D images from buffer 205900083 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 4 bytes
Pitch alignment for 2D image buffers 4 bytes
Max 2D image size 16384x16384 pixels
Max 3D image size 16384x16384x2048 pixels
Max number of read image args 128
Max number of write image args 128
Max number of read/write image args 128
Max number of pipe args 16
Max active pipe reservations 1
Max pipe packet size 1024
Local memory type Local
Local memory size 65536 (64KiB)
Max constant buffer size 3294401331 (3.068GiB)
Max number of constant args 8
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 131072 (128KiB)
Max size 67108864 (64MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Profiling timer resolution 83ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
SPIR versions 1.2
printf() buffer size 4194304 (4MiB)
Built-in kernels block_motion_estimate_intel;block_advanced_motion_estimate_check_intel;block_advanced_motion_estimate_bidirectional_check_intel
Motion Estimation accelerator version (Intel) 2
Device Available Yes
Compiler Available Yes
Linker Available Yes
Device Extensions cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_device_side_avc_motion_estimation cl_intel_driver_diagnostics cl_intel_media_block_io cl_intel_motion_estimation cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_subgroups cl_intel_subgroups_short cl_intel_va_api_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_fp16 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [INTEL]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
********************************************************************************/
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M64","2 8 32 5 1 8 1 1 0 ",
"EU24_k5x1_cn32_g1_s1x1_d1x1_b0_in64x64_p2x0_num1_M32","4 6 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU24_k5x5_cn48_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M128","4 2 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn112_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M224","2 5 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU24_k2x2_cn16_g1_s2x2_d1x1_b0_in256x256_p0x0_num1_M16","1 8 32 5 1 8 1 1 0 ",
"EU24_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M128","4 3 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M256","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn192_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M384","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn256_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M384","2 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn2048_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M512","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M16","2 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn384_g2_s1x1_d1x1_b1_in16x16_p1x1_num1_M128","2 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M192","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn112_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M224","2 7 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn32_g1_s1x1_d8x8_b1_in64x64_p8x8_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn96_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M208","2 7 16 2 1 1 16 1 0 ",
"EU24_k11x11_cn3_g1_s4x4_d1x1_b1_in224x224_p0x0_num1_M96","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","1 8 32 5 1 8 1 1 0 ",
"EU24_k7x7_cn3_g1_s2x2_d1x1_b1_in224x224_p3x3_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn32_g1_s1x1_d2x2_b1_in64x64_p2x2_num1_M32","3 3 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M24","8 3 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn128_g1_s1x1_d1x1_b0_in32x32_p1x1_num1_M128","6 4 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M144","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn1024_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M256","2 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn96_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M208","2 7 16 2 1 1 16 1 0 ",
"EU24_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M128","4 3 16 2 1 1 16 1 0 ",
"EU24_k5x5_cn16_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M48","4 2 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M2048","4 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M16","2 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn64_g1_s1x1_d1x1_b1_in64x64_p1x1_num1_M192","6 4 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M128","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b0_in16x16_p0x0_num1_M1024","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn32_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M128","1 16 32 5 1 16 1 1 0 ",
"EU24_k1x1_cn4_g1_s1x1_d1x1_b0_in256x256_p0x0_num1_M16","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn192_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M384","2 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M256","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn128_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","4 6 8 2 1 1 8 1 0 ",
"EU24_k5x5_cn48_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M128","4 4 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","2 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","8 2 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn1024_g1_s2x2_d1x1_b0_in16x16_p0x0_num1_M2048","1 16 32 5 1 16 1 1 0 ",
"EU24_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M64","4 3 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M384","1 8 32 5 1 8 1 1 0 ",
"EU24_k5x4_cn6_g3_s3x2_d1x1_b1_in128x80_p1x0_num2_M4","1 1 1 4 1 1 1 0 1 ",
"EU24_k3x3_cn128_g1_s1x1_d1x1_b1_in32x32_p1x1_num2_M192","6 4 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn256_g1_s1x1_d1x1_b0_in16x16_p1x1_num1_M256","2 7 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn160_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M320","2 8 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s2x2_d1x1_b0_in32x32_p0x0_num1_M256","2 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn64_g1_s1x1_d1x1_b1_in64x64_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M192","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M256","2 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn128_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M256","2 5 16 2 1 1 16 1 0 ",
"EU24_k5x5_cn24_g1_s1x1_d1x1_b1_in16x16_p2x2_num2_M64","4 3 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M16","8 3 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","2 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M112","2 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn64_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M16","2 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M96","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn64_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M256","2 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M128","1 8 32 5 1 8 1 1 0 ",
"EU24_k5x5_cn16_g1_s1x1_d1x1_b1_in32x32_p2x2_num2_M32","4 2 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M96","8 3 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn16_g1_s1x1_d1x1_b1_in128x128_p1x1_num1_M16","6 3 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M112","1 8 32 5 1 8 1 1 0 ",
"EU24_k5x5_cn32_g1_s1x1_d1x1_b1_in32x32_p2x2_num2_M96","4 3 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","8 2 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M192","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn144_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M288","2 8 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn144_g1_s1x1_d1x1_b1_in16x16_p1x1_num1_M288","2 7 16 2 1 1 16 1 0 ",
"EU24_k7x7_cn3_g1_s2x2_d1x1_b1_in224x224_p3x3_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b0_in32x32_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn4_g1_s1x1_d1x1_b1_in256x256_p1x1_num1_M4","10 2 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn32_g1_s1x1_d16x16_b1_in64x64_p16x16_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M16","8 2 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M64","1 16 32 5 1 16 1 1 0 ",
"EU24_k1x5_cn32_g1_s1x1_d1x1_b1_in64x64_p0x2_num1_M32","4 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","8 3 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn384_g2_s1x1_d1x1_b1_in16x16_p1x1_num1_M192","2 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M32","4 6 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","4 6 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn32_g1_s1x1_d4x4_b1_in64x64_p4x4_num1_M32","1 8 32 5 1 8 1 1 0 ",
"EU24_k2x2_cn64_g1_s2x2_d1x1_b0_in128x128_p0x0_num1_M32","2 8 32 5 1 8 1 1 0 ",
"EU24_k5x5_cn96_g2_s1x1_d1x1_b1_in32x32_p2x2_num1_M128","4 3 16 2 1 1 16 1 0 ",
"EU24_k5x5_cn16_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M48","8 1 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn16_g1_s1x1_d1x1_b0_in256x256_p0x0_num1_M4","8 3 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M256","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M144","2 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn96_g1_s1x1_d1x1_b1_in32x32_p1x1_num1_M128","6 4 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn256_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M32","2 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn128_g1_s1x1_d1x1_b1_in32x32_p1x1_num1_M192","2 7 16 2 1 1 16 1 0 ",
"EU24_k5x5_cn32_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M64","4 2 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M160","1 8 32 5 1 8 1 1 0 ",
"EU24_k5x5_cn32_g1_s1x1_d1x1_b1_in32x32_p2x2_num1_M96","4 4 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","4 6 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M32","2 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn32_g1_s1x1_d1x1_b1_in64x64_p1x1_num1_M32","2 8 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn96_g1_s1x1_d1x1_b1_in32x32_p1x1_num2_M128","10 2 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn160_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M320","2 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M32","8 3 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn64_g1_s1x1_d1x1_b0_in64x64_p1x1_num1_M64","2 8 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M5","2 3 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn16_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M64","1 16 32 5 1 16 1 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M48","4 6 8 2 1 1 8 1 0 ",
"EU24_k5x5_cn24_g1_s1x1_d1x1_b1_in16x16_p2x2_num1_M64","4 2 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn64_g1_s1x1_d1x1_b0_in128x128_p0x0_num1_M4","8 2 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M64","8 2 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M96","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn64_g1_s1x1_d1x1_b0_in64x64_p0x0_num1_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M192","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M48","4 6 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn128_g1_s1x1_d1x1_b1_in16x16_p1x1_num2_M256","2 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M4","4 4 16 2 1 1 16 1 0 ",
"EU24_k4x4_cn3_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M2","1 3 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M96","1 8 32 5 1 8 1 1 0 ",
"EU24_k3x3_cn512_g1_s1x1_d1x1_b0_in16x16_p1x1_num1_M512","2 7 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn256_g1_s2x2_d1x1_b0_in64x64_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s2x2_d1x1_b0_in32x32_p0x0_num1_M1024","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k11x7_cn3_g1_s3x4_d1x1_b1_in64x64_p3x2_num1_M64","4 1 16 2 1 1 16 1 0 ",
"EU24_k3x3_cn64_g1_s1x1_d1x1_b1_in64x64_p1x1_num2_M192","6 4 16 2 1 1 16 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M64","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn64_g1_s1x1_d1x1_b1_in64x64_p0x0_num1_M64","1 16 32 5 1 16 1 1 0 ",
"EU24_k1x1_cn192_g1_s1x1_d1x1_b1_in32x32_p0x0_num1_M16","8 3 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn128_g1_s1x1_d1x1_b0_in32x32_p0x0_num1_M512","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn1024_g1_s2x2_d1x1_b0_in16x16_p0x0_num1_M512","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M128","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn832_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M384","4 7 8 2 1 1 8 1 0 ",
"EU24_k1x1_cn528_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M160","1 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn480_g1_s1x1_d1x1_b1_in16x16_p0x0_num1_M64","8 3 8 2 1 1 8 1 0 ",
"EU24_k3x3_cn3_g1_s2x2_d1x1_b1_in256x256_p1x1_num1_M13","1 1 1 4 1 1 1 0 1 ",
"EU24_k1x1_cn256_g1_s2x2_d1x1_b0_in64x64_p0x0_num1_M512","2 8 32 5 1 8 1 1 0 ",
"EU24_k1x1_cn512_g1_s1x1_d1x1_b1_in16x16_p0x0_num2_M24","8 3 8 2 1 1 8 1 0 ",
"EU24_k5x5_cn16_g1_s1x1_d1x1_b1_in32x32_p2x2_num1_M32","4 3 16 2 1 1 16 1 0 ",
};
#endif // _OPENCV_OCL4DNN_DEFAULT_KERNEL_CONFIG_HPP_

@@ -0,0 +1,90 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef _OPENCV_GREENTEA_MATH_FUNCTIONS_HPP_
#define _OPENCV_GREENTEA_MATH_FUNCTIONS_HPP_
#include "../../precomp.hpp"
#include "common.hpp"
namespace cv
{
namespace dnn
{
namespace ocl4dnn
{
#ifdef HAVE_OPENCL
enum CBLAS_TRANSPOSE {CblasNoTrans=111, CblasTrans=112, CblasConjTrans=113};
template<typename Dtype>
bool ocl4dnnGEMMCommon(const CBLAS_TRANSPOSE TransB,
const int32_t M, const int32_t N, const int32_t K,
const UMat A, const UMat B,
const UMat B_image, UMat C,
const size_t max_image_size);
template<typename Dtype>
ocl::Image2D ocl4dnnGEMMCopyBufferToImage(UMat buffer, int offset,
bool is_matrix_a, bool transpose,
bool padding, int padded_height,
int padded_width, int height,
int width, int ld);
template<typename Dtype>
bool ocl4dnnGEMV(const CBLAS_TRANSPOSE TransA,
const int32_t M, const int32_t N, const Dtype alpha,
const UMat A, const int32_t offA, const UMat x,
const int32_t offx, const Dtype beta, UMat y,
const int32_t offy);
template<typename Dtype>
bool ocl4dnnAXPY(const int32_t N, const Dtype alpha,
const UMat x, const int32_t offx, UMat y,
const int32_t offy);
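// Semantics suggested by the BLAS-style names (a sketch; all offsets such as
// offA, offx, offy are element offsets into the corresponding UMat buffers):
//   ocl4dnnGEMV: y := alpha * op(A) * x + beta * y, with op() chosen by TransA
//   ocl4dnnAXPY: y := alpha * x + y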
#endif // HAVE_OPENCL
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif // _OPENCV_GREENTEA_MATH_FUNCTIONS_HPP_

@@ -0,0 +1,473 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef _OPENCV_LIBDNN_HPP_
#define _OPENCV_LIBDNN_HPP_
#include "../../precomp.hpp"
#include <iomanip>
#include <map>
#include <memory>
#include <string>
#include <vector>
#include "common.hpp"
namespace cv { namespace dnn { namespace ocl4dnn {
#ifdef HAVE_OPENCL
struct OCL4DNNConvConfig
{
OCL4DNNConvConfig() :
kernel(1, 1),
pad(0, 0),
stride(1, 1),
dilation(1, 1),
group(1),
bias_term(false)
{}
MatShape in_shape;
MatShape out_shape;
Size kernel;
Size pad;
Size stride;
Size dilation;
int group; // = 1;
bool bias_term; // = false;
};
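// Illustrative configuration (a sketch; shape() is the dnn helper from
// shape_utils.hpp, and out_shape must be supplied by the caller):
//   OCL4DNNConvConfig cfg;
//   cfg.in_shape  = shape(1, 3, 224, 224);   // NCHW input
//   cfg.out_shape = shape(1, 64, 112, 112);  // NCHW output
//   cfg.kernel = Size(7, 7); cfg.pad = Size(3, 3); cfg.stride = Size(2, 2);
//   OCL4DNNConvSpatial<float> conv(cfg);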
template<typename Dtype>
class OCL4DNNConvSpatial
{
public:
explicit OCL4DNNConvSpatial(OCL4DNNConvConfig config);
~OCL4DNNConvSpatial();
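// Runs the convolution for batch_size images; returns false when no OpenCL
// kernel could be set up or executed, letting the caller fall back to the
// CPU path (behavior inferred from the call sites, noted here informally).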
bool Forward(const UMat& bottom_data, const UMat& weight,
const UMat& bias,
UMat& top_data, int32_t batch_size);
private:
struct kernelConfig
{
std::string kernelName;
float executionTime;
size_t local_work_size[3];
size_t global_work_size[3];
int32_t workItem_output[3];
bool verified;
bool tested;
bool swizzle_weights;
bool use_null_local;
int32_t kernelType;
kernelConfig() :
    executionTime(0), verified(false), tested(false),
    swizzle_weights(false), use_null_local(false), kernelType(0)
{}
kernelConfig(const std::string& name, const size_t* global_size, const size_t* local_size,
const int32_t* workItem,
bool swizzle,
int32_t type = 0)
: executionTime(0)
{
kernelName = name;
for (int32_t x = 0; x < 3; x++)
{
local_work_size[x] = local_size ? local_size[x] : 1;
global_work_size[x] = global_size[x];
workItem_output[x] = workItem[x];
}
swizzle_weights = swizzle;
use_null_local = local_size == NULL;
verified = false;
tested = false;
kernelType = type;
}
};
struct tunerParam
{
int kernelType;
int blockWidth;
int blockHeight;
int blockDepth;
tunerParam(int type, int w, int h, int d)
{
kernelType = type;
blockWidth = w;
blockHeight = h;
blockDepth = d;
}
};
inline void addDef(const char* name)
{
options_ << " -D " << name;
}
inline void addDef(const char* name, const int value)
{
options_ << " -D " << name << "=" << value;
}
inline void addDef(const char* name, const float value)
{
options_ << " -D " << name << "=(float)" << value;
}
inline void addDef(const char* name, const double value)
{
options_ << " -D " << name << "=(double)" << value;
}
inline void addDef(const char* name, const char* value)
{
options_ << " -D " << name << "=" << value;
}
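// Each addDef() call appends one "-D" preprocessor define to the kernel
// build options. Illustrative example (values are hypothetical):
//
//   addDef("KERNEL_WIDTH", 3);
//   addDef("APPLY_BIAS");
//   // options_ now holds: " -D KERNEL_WIDTH=3 -D APPLY_BIAS"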
void useFirstAvailable(const UMat &bottom,
UMat &top,
const UMat &weight,
const UMat &bias,
int32_t numImages,
UMat &verifyTop);
void setupKernel();
void collectCommonInformation();
void setupKernelDetails(int32_t kernelType,
int32_t blockM,
int32_t blockK,
int32_t blockN);
ocl::Program compileKernel();
typedef std::map<std::string, ocl::Program> phash_t;
phash_t phash;
void calculateBenchmark(const UMat &bottom, UMat &verifyTop,
const UMat &weight, const UMat &bias,
int32_t numImages);
void setupConvolution(const UMat &bottom,
UMat &top,
const UMat &weight,
const UMat &bias,
int32_t numImages,
UMat &verifyTop);
bool createConvolutionKernel(int32_t kernelType,
int32_t blockWidth,
int32_t blockHeight,
int32_t blockDepth);
bool setupIDLF(int32_t blockWidth,
int32_t blockHeight,
int32_t blockDepth);
bool createBasicKernel(int32_t blockWidth,
int32_t blockHeight,
int32_t blockDepth);
bool createGEMMLikeConvKernel(int32_t blockWidth,
int32_t blockHeight,
int32_t blockDepth);
void CreateSubBuffer(const UMat& buffer, UMat& sub_buffer,
int32_t offset, int32_t size, bool write_only);
bool convolve(const UMat &bottom, UMat &top,
const UMat &weight, const UMat &bias,
int32_t numImages,
kernelConfig* config,
const cv::ocl::Queue& queue);
float timedConvolve(const UMat &bottom, UMat &top,
const UMat &weight, const UMat &bias,
int32_t numImages, kernelConfig* config);
bool verifyResult(const UMat &bottom,
UMat &top,
const UMat &weight,
const UMat &bias,
int32_t numImages,
kernelConfig* config,
UMat &verifyTop);
bool swizzleWeight(const UMat &weight,
int32_t swizzled_factor,
bool interleave = false);
void generateKey();
std::string generateSpecificKey(int32_t type, int32_t blockWidth,
int32_t blockHeight,
int32_t blockDepth);
void cacheTunedConfig();
bool loadTunedConfig();
void saveTunedConfig();
bool loadCachedConfig();
void unloadProgram(const std::string& kernelName);
void prepareKernel(const UMat &bottom, UMat &top,
const UMat &weight, const UMat &bias,
int32_t numImages);
bool setupKernelByConfig(int x, int y, int z, int type,
int lx, int ly, int lz,
bool swizzle, bool nullLocal);
void generateTunerItems(std::vector< cv::Ptr<tunerParam> > &tunerItems);
int32_t group_;
bool bias_term_;
UMat swizzled_weights_umat;
int32_t bottom_index_;
int32_t output_h_;
int32_t output_w_;
int32_t kernel_h_;
int32_t kernel_w_;
int32_t height_;
int32_t width_;
int32_t pad_h_;
int32_t pad_w_;
int32_t stride_h_;
int32_t stride_w_;
int32_t dilation_h_;
int32_t dilation_w_;
/// M_ is the channel dimension of the output for a single group, which is the
/// leading dimension of the filter matrix.
int32_t M_;
bool tuned_;
std::string key_, key_sanitized_;
std::string short_key_;
std::string kernel_name_;
std::string cache_path_;
bool use_cache_path_; // true if cache_path_ directory exists
bool force_auto_tuning_;
int32_t kernel_index_;
std::vector< cv::Ptr<kernelConfig> > kernelQueue;
cv::Ptr<kernelConfig> bestKernelConfig;
int32_t bottom_dim_;
int32_t top_dim_;
int32_t num_;
int32_t channels_;
int32_t num_output_;
int32_t kernelType_;
int32_t blockM_;
int32_t blockK_;
int32_t blockN_;
std::stringstream options_;
cv::ocl::ProgramSource src_;
int32_t prev_kernel_type_;
};
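// Minimal usage sketch (illustrative only; the shapes and the shape() helper
// from dnn's shape utilities are assumptions, not part of this header):
//
//   OCL4DNNConvConfig cfg;
//   cfg.in_shape  = shape(1, 16, 56, 56);  // NCHW bottom shape
//   cfg.out_shape = shape(1, 32, 56, 56);  // output shape supplied by caller
//   cfg.kernel    = Size(3, 3);
//   cfg.pad       = Size(1, 1);
//   cfg.bias_term = true;
//   OCL4DNNConvSpatial<float> conv(cfg);
//   bool ok = conv.Forward(bottom, weights, bias, top, 1);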
typedef enum {
LIBDNN_POOLING_METHOD_MAX = 0,
LIBDNN_POOLING_METHOD_AVE = 1,
LIBDNN_POOLING_METHOD_STO = 2
} ocl4dnnPoolingMethod_t;
struct OCL4DNNPoolConfig
{
OCL4DNNPoolConfig() :
kernel(1, 1),
pad(0, 0),
stride(1, 1),
dilation(1, 1),
channels(0),
pool_method(LIBDNN_POOLING_METHOD_MAX),
global_pooling(false)
{}
MatShape in_shape;
MatShape out_shape;
Size kernel;
Size pad;
Size stride;
Size dilation;
int channels;
ocl4dnnPoolingMethod_t pool_method; // = LIBDNN_POOLING_METHOD_MAX;
bool global_pooling; // = false;
};
template<typename Dtype>
class OCL4DNNPool
{
public:
explicit OCL4DNNPool(OCL4DNNPoolConfig config);
~OCL4DNNPool();
bool Forward(const UMat& bottom_data,
UMat& top_data,
UMat& top_mask);
private:
UMat mask_idx_;
// Pooling parameters
std::vector<int32_t> pad_;
std::vector<int32_t> stride_;
std::vector<int32_t> kernel_shape_;
std::vector<int32_t> im_in_shape_;
std::vector<int32_t> im_out_shape_;
ocl4dnnPoolingMethod_t pool_method_;
int32_t count_;
int32_t batch_size_;
int32_t channels_;
int32_t kernel_h_;
int32_t kernel_w_;
int32_t stride_h_;
int32_t stride_w_;
int32_t pad_h_;
int32_t pad_w_;
int32_t height_;
int32_t width_;
int32_t pooled_height_;
int32_t pooled_width_;
};
struct OCL4DNNInnerProductConfig
{
OCL4DNNInnerProductConfig() :
num_output(0), M(0), K(0),
bias_term(false), transpose(false), phase_test(true)
{}
int num_output;
int M;
int K;
bool bias_term;
bool transpose; // = false;
bool phase_test; // = true;
};
template<typename Dtype>
class OCL4DNNInnerProduct
{
public:
explicit OCL4DNNInnerProduct(OCL4DNNInnerProductConfig config);
~OCL4DNNInnerProduct();
bool Forward(const UMat& bottom_data,
const UMat& weight,
const UMat& bias,
UMat& top_data);
private:
OCL4DNNInnerProductConfig config_;
int32_t axis_;
int32_t num_output_;
int32_t M_;
int32_t N_;
int32_t K_;
bool bias_term_;
bool transpose_;
bool image_copied_;
bool phase_test_;
};
typedef enum {
LRNParameter_NormRegion_ACROSS_CHANNELS = 0,
LRNParameter_NormRegion_WITHIN_CHANNEL = 1
} LRNParameter_NormRegion_WITHIN_CHANNEL_t;
struct OCL4DNNLRNConfig
{
OCL4DNNLRNConfig() :
phase_test(true)
{}
MatShape in_shape;
LRNParameter_NormRegion_WITHIN_CHANNEL_t lrn_type;
bool phase_test; // = true;
int local_size;
float alpha;
float beta;
float k;
bool norm_by_size;
int32_t batch_size;
int32_t channels;
int32_t height;
int32_t width;
};
template<typename Dtype>
class OCL4DNNLRN
{
public:
explicit OCL4DNNLRN(OCL4DNNLRNConfig config);
bool Forward(const UMat& bottom_data, UMat& top_data);
private:
bool crossChannelForward(const UMat& bottom_data, UMat& top_data);
LRNParameter_NormRegion_WITHIN_CHANNEL_t lrn_type_;
bool phase_test_;
int32_t size_;
Dtype alpha_;
Dtype beta_;
Dtype k_;
int32_t num_;
int32_t channels_;
int32_t height_;
int32_t width_;
bool norm_by_size_;
};
struct OCL4DNNSoftmaxConfig
{
OCL4DNNSoftmaxConfig()
{}
MatShape in_shape;
int axis;
int channels;
};
template<typename Dtype>
class OCL4DNNSoftmax
{
public:
explicit OCL4DNNSoftmax(OCL4DNNSoftmaxConfig config);
~OCL4DNNSoftmax();
bool Forward(const UMat& bottom_data, UMat& top_data);
private:
int32_t softmax_axis_;
int32_t inner_num_;
int32_t outer_num_;
int32_t channels_;
int32_t count_;
bool use_slm_;
UMat scale_data_;
};
#endif // HAVE_OPENCL
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif

@@ -0,0 +1,57 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include "common.hpp"
#include "opencl_kernels_dnn.hpp"
using namespace cv;
#ifdef HAVE_OPENCL
bool clOptionSupport(cv::String option)
{
cv::String errmsg;
ocl::Program program = ocl::Context::getDefault().getProg(ocl::dnn::dummy_oclsrc, option, errmsg);
return program.ptr() ? true : false;
}
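// Illustrative usage: probe an optional compiler flag before adding it to a
// kernel's build options (the LRN and softmax paths elsewhere in ocl4dnn use
// this pattern):
//
//   String opts = clOptionSupport("-cl-no-subgroup-ifp") ? " -cl-no-subgroup-ifp " : "";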
#endif // HAVE_OPENCL

@@ -0,0 +1,538 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include "common.hpp"
#include "math_functions.hpp"
#include <vector>
#include "opencl_kernels_dnn.hpp"
namespace cv
{
namespace dnn
{
namespace ocl4dnn
{
#ifdef HAVE_OPENCL
// Create an image for GEMM's matrix A or B and copy the buffer contents
// into it (or wrap the buffer directly when no repacking is needed).
// It is the caller's responsibility to release the created image.
template<typename Dtype>
ocl::Image2D ocl4dnnGEMMCopyBufferToImage(UMat buffer, int offset,
bool is_matrix_a, bool transpose,
bool padding, int padded_height,
int padded_width, int height,
int width, int ld)
{
ocl::Context ctx = ocl::Context::getDefault();
ocl::Queue queue = ocl::Queue::getDefault();
ocl::Image2D image;
if (!is_matrix_a && transpose)
{
if (ld == width)
{
image = ocl::Image2D(buffer);
} else {
// A transposed matrix B needs different handling: since the sub group
// block read cannot easily fetch a row, we have to use a CL_FLOAT image
// with read_imagef to read rows instead.
UMat mat(height, width, CV_32FC1);
image = ocl::Image2D(mat);
ocl::Kernel oclk_gemm_copy("gemm_buffer_copy_image_transpose_float", ocl::dnn::gemm_image_oclsrc);
size_t global_copy[2];
global_copy[0] = width;
global_copy[1] = height;
oclk_gemm_copy.set(0, ocl::KernelArg::PtrReadOnly(buffer));
oclk_gemm_copy.set(1, image);
oclk_gemm_copy.set(2, offset);
oclk_gemm_copy.set(3, width);
oclk_gemm_copy.set(4, height);
oclk_gemm_copy.set(5, ld);
oclk_gemm_copy.run(2, global_copy, NULL, false);
}
} else {
if (!padding)
{
// copy without padding.
image = ocl::Image2D(buffer);
} else {
UMat mat(padded_height, padded_width, CV_8UC4);
image = ocl::Image2D(mat);
ocl::Kernel oclk_gemm_copy("gemm_buffer_copy_image_no_transpose_float",
ocl::dnn::gemm_image_oclsrc);
size_t global_copy[2];
global_copy[0] = padded_width;
global_copy[1] = padded_height;
oclk_gemm_copy.set(0, ocl::KernelArg::PtrReadOnly(buffer));
oclk_gemm_copy.set(1, image);
oclk_gemm_copy.set(2, offset);
oclk_gemm_copy.set(3, width);
oclk_gemm_copy.set(4, height);
oclk_gemm_copy.set(5, ld);
oclk_gemm_copy.run(2, global_copy, NULL, false);
}
}
return image;
}
template
ocl::Image2D ocl4dnnGEMMCopyBufferToImage<float>(UMat buffer, int offset,
bool is_matrix_a, bool transpose,
bool padding, int padded_height,
int padded_width, int height,
int width, int ld);
enum gemm_type_t
{
GEMM_TYPE_NONE = 0,
GEMM_TYPE_FAST_IMAGE_32_1,
GEMM_TYPE_FAST_IMAGE_32_2,
GEMM_TYPE_FAST_IMAGE_B_IMAGE,
GEMM_TYPE_MAX
};
template<typename Dtype>
static bool ocl4dnnFastImageGEMM(const CBLAS_TRANSPOSE TransA,
const CBLAS_TRANSPOSE TransB, const int32_t M,
const int32_t N, const int32_t K, const Dtype alpha,
const UMat A, const int32_t offA, const UMat B,
const int32_t offB, const Dtype beta, UMat C,
const int32_t offC, bool is_image_a, bool is_image_b,
enum gemm_type_t gemm_type,
const size_t max_image_size)
{
CHECK_EQ(gemm_type == GEMM_TYPE_FAST_IMAGE_32_1 || gemm_type == GEMM_TYPE_FAST_IMAGE_32_2 ||
gemm_type == GEMM_TYPE_FAST_IMAGE_B_IMAGE, true) << "Invalid fast image gemm type." << std::endl;
if (is_image_a)
{
CHECK_EQ(offA, 0) << "Invalid input image offset." << std::endl;
return false;
}
if (is_image_b)
{
CHECK_EQ(offB, 0) << "Invalid input image offset." << std::endl;
return false;
}
int widthA = (TransA == CblasNoTrans) ? K : M;
int heightA = (TransA == CblasNoTrans) ? M : K;
int widthB = (TransB == CblasNoTrans) ? N : K;
int heightB = (TransB == CblasNoTrans) ? K : N;
int ldA = widthA;
int ldB = widthB;
int ldC = N;
int A_start_x = 0, A_start_y = 0, B_start_x = 0;
int B_start_y = 0, C_start_x = 0, C_start_y = 0;
int blocksize = 1024;
if (gemm_type == GEMM_TYPE_FAST_IMAGE_B_IMAGE)
blocksize = max_image_size;
int blockA_width = blocksize;
int blockA_height = blocksize;
int blockB_width = blocksize;
int blockB_height = blocksize;
int blockC_width = blocksize;
int blockC_height = blocksize;
int use_buffer_indicator = 8;
// To fix the edge problem caused by the sub group block read, we have to
// pad the image if its size is not a multiple of the tile. Padding just
// one line is enough, as the sub group block read clamps to the edge
// according to the spec.
ocl::Context ctx = ocl::Context::getDefault();
ocl::Queue queue = ocl::Queue::getDefault();
ocl::Image2D ImA;
ocl::Image2D ImB;
std::string kernel_name("gemm_");
if (gemm_type == GEMM_TYPE_FAST_IMAGE_32_1 || gemm_type == GEMM_TYPE_FAST_IMAGE_B_IMAGE)
kernel_name += "32_1_";
else
kernel_name += "32_2_";
if (TransA == CblasNoTrans)
kernel_name += "N";
else
kernel_name += "T";
if (TransB == CblasNoTrans)
{
kernel_name += "N_";
} else {
kernel_name += "T_";
if (is_image_b || (K % use_buffer_indicator != 0))
{
kernel_name += "SCALAR_";
} else {
kernel_name += "BUFFER_";
}
}
if (alpha == 1)
kernel_name += "1_";
else
kernel_name += "0_";
if (beta == 0)
kernel_name += "0";
else
kernel_name += "1";
kernel_name += "_float";
ocl::Kernel oclk_gemm_float(kernel_name.c_str(), ocl::dnn::gemm_image_oclsrc);
if (oclk_gemm_float.empty())
return false;
while (C_start_y < M)
{
blockC_width = std::min(static_cast<int>(N) - C_start_x, blocksize);
blockC_height = std::min(static_cast<int>(M) - C_start_y, blocksize);
int isFirstColBlock = 1;
for (int k = 0; k < K; k += blocksize)
{
blockA_width = std::min(widthA - A_start_x, blocksize);
blockA_height = std::min(heightA - A_start_y, blocksize);
blockB_width = std::min(widthB - B_start_x, blocksize);
blockB_height = std::min(heightB - B_start_y, blocksize);
int block_Ksize = std::min(static_cast<int>(K) - k, blocksize);
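// Round block_Ksize up to the next multiple of 8 so the sub group block
// reads never step past the image edge.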
int padded_k = block_Ksize + ((block_Ksize & 7) ? (8 - (block_Ksize & 7)) : 0);
int imageA_w = (TransA == CblasNoTrans) ? padded_k : blockA_width;
int imageA_h = (TransA == CblasNoTrans) ? blockA_height : padded_k;
int imageB_w = (TransB == CblasNoTrans) ? blockB_width : padded_k;
int imageB_h = (TransB == CblasNoTrans) ? padded_k : blockB_height;
int blockA_offset = offA + A_start_y * ldA + A_start_x;
int blockB_offset = offB + B_start_y * ldB + B_start_x;
int blockC_offset = offC + C_start_y * ldC + C_start_x;
if (TransB == CblasNoTrans)
{
bool padding_A = false;
bool padding_B = false;
if (!is_image_a && !is_image_b)
{
if (M * K < N * K)
padding_B = true;
else
padding_A = true;
}
if (!is_image_a)
{
ImA = ocl4dnnGEMMCopyBufferToImage<Dtype>(A, blockA_offset,
true, TransA != CblasNoTrans,
padding_A, imageA_h, imageA_w,
blockA_height, blockA_width, ldA);
}
if (!is_image_b)
{
ImB = ocl4dnnGEMMCopyBufferToImage<Dtype>(B, blockB_offset,
false, false,
padding_B, imageB_h, imageB_w,
blockB_height, blockB_width, ldB);
}
} else {
// We will use normal read_imagef to read image B when B has transpose,
// so image B does not need padding here.
if (!is_image_a)
{
bool padding = !is_image_b;
ImA = ocl4dnnGEMMCopyBufferToImage<Dtype>(A, blockA_offset,
true, TransA != CblasNoTrans,
padding, imageA_h, imageA_w,
blockA_height, blockA_width, ldA);
}
if (!is_image_b && (K % use_buffer_indicator != 0))
{
ImB = ocl4dnnGEMMCopyBufferToImage<Dtype>(B, blockB_offset,
false, true, false, imageB_h, imageB_w,
blockB_height, blockB_width, ldB);
}
}
size_t global[2];
if (gemm_type == GEMM_TYPE_FAST_IMAGE_32_1 || gemm_type == GEMM_TYPE_FAST_IMAGE_B_IMAGE)
{
global[0] = (size_t)( blockC_width + 7 ) & ~7;
} else {
global[0] = (size_t)( (blockC_width / 2 ) + 7 ) & ~7;
}
global[1] = (size_t)(blockC_height + 31) / 32;
size_t local[2];
local[0] = 8;
local[1] = 1;
cl_uint arg_idx = 0;
if (is_image_a)
oclk_gemm_float.set(arg_idx++, ocl::KernelArg::PtrReadOnly(A));
else
oclk_gemm_float.set(arg_idx++, ImA);
if (TransB == CblasNoTrans || is_image_b || (K % use_buffer_indicator != 0))
{
if (is_image_b)
oclk_gemm_float.set(arg_idx++, ocl::KernelArg::PtrReadOnly(B));
else
oclk_gemm_float.set(arg_idx++, ImB);
} else {
oclk_gemm_float.set(arg_idx++, ocl::KernelArg::PtrReadOnly(B));
oclk_gemm_float.set(arg_idx++, blockB_offset);
oclk_gemm_float.set(arg_idx++, ldB);
}
oclk_gemm_float.set(arg_idx++, ocl::KernelArg::PtrWriteOnly(C));
oclk_gemm_float.set(arg_idx++, blockC_offset);
oclk_gemm_float.set(arg_idx++, blockC_height);
oclk_gemm_float.set(arg_idx++, blockC_width);
oclk_gemm_float.set(arg_idx++, ldC);
oclk_gemm_float.set(arg_idx++, alpha);
oclk_gemm_float.set(arg_idx++, beta);
oclk_gemm_float.set(arg_idx++, padded_k);
if (TransB != CblasNoTrans)
oclk_gemm_float.set(arg_idx++, block_Ksize);
oclk_gemm_float.set(arg_idx++, isFirstColBlock);
if (!oclk_gemm_float.run(2, global, local, false))
return false;
if (TransA == CblasNoTrans)
A_start_x += blockA_width;
else
A_start_y += blockA_height;
if (TransB == CblasNoTrans)
B_start_y += blockB_height;
else
B_start_x += blockB_width;
isFirstColBlock = 0;
}
C_start_x += blockC_width;
if (TransA == CblasNoTrans)
A_start_x = 0;
else
A_start_y = 0;
if (TransB == CblasNoTrans)
{
B_start_x += blockB_width;
B_start_y = 0;
} else {
B_start_y += blockB_height;
B_start_x = 0;
}
if (C_start_x >= N)
{
C_start_x = 0;
B_start_x = 0;
B_start_y = 0;
C_start_y += blockC_height;
if (TransA == CblasNoTrans)
A_start_y += blockA_height;
else
A_start_x += blockA_width;
}
}
return true;
}
template<typename Dtype>
bool ocl4dnnGEMMCommon(const CBLAS_TRANSPOSE TransB,
const int32_t M, const int32_t N, const int32_t K,
const UMat A, const UMat B,
const UMat B_image, UMat C,
const size_t max_image_size)
{
gemm_type_t gemm_type = GEMM_TYPE_FAST_IMAGE_32_1;
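// gemm_type is currently fixed to GEMM_TYPE_FAST_IMAGE_32_1, so only the
// first branch below is taken; the B-image branch is kept for future use.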
if (gemm_type == GEMM_TYPE_FAST_IMAGE_32_1 ||
gemm_type == GEMM_TYPE_FAST_IMAGE_32_2)
{
return ocl4dnnFastImageGEMM<Dtype>(CblasNoTrans, TransB, M, N, K,
(Dtype)1., A, 0, B, 0, (Dtype)0., C,
0, false, false, gemm_type, max_image_size);
}
else if (gemm_type == GEMM_TYPE_FAST_IMAGE_B_IMAGE)
{
return ocl4dnnFastImageGEMM<Dtype>(CblasNoTrans, TransB, M, N, K,
(Dtype)1., A, 0, B_image, 0, (Dtype)0., C,
0, false, true,
GEMM_TYPE_FAST_IMAGE_B_IMAGE,
max_image_size);
}
return false;
}
template bool ocl4dnnGEMMCommon<float>(const CBLAS_TRANSPOSE TransB,
const int32_t M, const int32_t N, const int32_t K,
const UMat A, const UMat B,
const UMat B_image, UMat C,
const size_t max_image_size);
template<typename Dtype>
bool ocl4dnnGEMV(const CBLAS_TRANSPOSE TransA,
const int32_t M, const int32_t N, const Dtype alpha,
const UMat A, const int32_t offA, const UMat x,
const int32_t offx, const Dtype beta, UMat y,
const int32_t offy)
{
return false;
}
template<>
bool ocl4dnnGEMV<float>(const CBLAS_TRANSPOSE TransA,
const int32_t M, const int32_t N, const float alpha,
const UMat A, const int32_t offA, const UMat x,
const int32_t offx, const float beta, UMat y,
const int32_t offy)
{
ocl::Queue queue = ocl::Queue::getDefault();
bool ret = false;
if (TransA == CblasNoTrans)
{
ocl::Kernel k(CL_KERNEL_SELECT("matvec_mul4"), cv::ocl::dnn::matvec_mul_oclsrc);
if (k.empty())
return false;
uint row_size = M;
uint col_size = N;
size_t localsize[] = { 128 };
size_t globalsize[] = { row_size / 4 * localsize[0] };
uint argId = 0;
k.set(argId++, ocl::KernelArg::PtrReadOnly(A));
k.set(argId++, offA);
k.set(argId++, cl_uint(col_size));
k.set(argId++, cl_uint(col_size%4));
k.set(argId++, ocl::KernelArg::PtrReadOnly(x));
k.set(argId++, offx);
k.set(argId++, alpha);
k.set(argId++, beta);
k.set(argId++, ocl::KernelArg::PtrWriteOnly(y));
k.set(argId++, offy);
k.set(argId++, NULL, localsize[0] * sizeof(cl_float4));
ret = k.run(1, globalsize, localsize, false);
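// matvec_mul4 handles rows in groups of 4; if row_size is not a multiple
// of 4, finish the remaining rows with the scalar matvec_mul1 kernel,
// starting at row_offset.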
if ((row_size % 4) != 0 && ret)
{
ocl::Kernel k_1(CL_KERNEL_SELECT("matvec_mul1"), cv::ocl::dnn::matvec_mul_oclsrc);
size_t localsize[] = { 128 };
size_t globalsize[] = { row_size % 4 * localsize[0] };
uint row_offset = row_size - (row_size % 4);
uint argId = 0;
k_1.set(argId++, ocl::KernelArg::PtrReadOnly(A));
k_1.set(argId++, offA);
k_1.set(argId++, cl_uint(col_size));
k_1.set(argId++, cl_uint(row_offset));
k_1.set(argId++, cl_uint(col_size%4));
k_1.set(argId++, ocl::KernelArg::PtrReadOnly(x));
k_1.set(argId++, offx);
k_1.set(argId++, alpha);
k_1.set(argId++, beta);
k_1.set(argId++, ocl::KernelArg::PtrWriteOnly(y));
k_1.set(argId++, offy);
k_1.set(argId++, NULL, localsize[0] * sizeof(cl_float));
ret = k_1.run(1, globalsize, localsize, false);
}
}
return ret;
}
template<typename Dtype>
bool ocl4dnnAXPY(const int32_t N, const Dtype alpha,
const UMat X, const int32_t offX, UMat Y,
const int32_t offY)
{
ocl::Context ctx = ocl::Context::getDefault();
ocl::Kernel oclk_axpy(CL_KERNEL_SELECT("axpy"), cv::ocl::dnn::math_oclsrc);
if (oclk_axpy.empty())
return false;
size_t global[] = { 128 * 128 };
size_t local[] = { 128 };
cl_uint argIdx = 0;
oclk_axpy.set(argIdx++, N);
oclk_axpy.set(argIdx++, alpha);
oclk_axpy.set(argIdx++, ocl::KernelArg::PtrReadOnly(X));
oclk_axpy.set(argIdx++, offX);
oclk_axpy.set(argIdx++, ocl::KernelArg::PtrWriteOnly(Y));
oclk_axpy.set(argIdx++, offY);
return oclk_axpy.run(1, global, local, false);
}
template bool ocl4dnnAXPY<float>(const int32_t N, const float alpha,
const UMat X, const int32_t offX,
UMat Y, const int32_t offY);
#endif // HAVE_OPENCL
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv

File diff suppressed because it is too large

@@ -0,0 +1,108 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "math_functions.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNInnerProduct<Dtype>::OCL4DNNInnerProduct(OCL4DNNInnerProductConfig config)
{
bias_term_ = config.bias_term;
transpose_ = config.transpose;
N_ = num_output_ = config.num_output;
M_ = config.M;
K_ = config.K;
phase_test_ = config.phase_test;
image_copied_ = false;
}
template<typename Dtype>
OCL4DNNInnerProduct<Dtype>::~OCL4DNNInnerProduct()
{
}
template<typename Dtype>
bool OCL4DNNInnerProduct<Dtype>::Forward(const UMat& bottom,
const UMat& weight,
const UMat& bias,
UMat& top)
{
bool ret;
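// M_ == 1 means a single input vector, which maps to a matrix-vector
// product; larger batches use the image-based GEMM when the matrices fit
// into a 2D image and Intel subgroups are available.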
if (M_ == 1)
{
ret = ocl4dnnGEMV<Dtype>(CblasNoTrans, N_, K_, (Dtype) 1.,
weight, 0, bottom, 0, (Dtype) 0., top, 0);
if (bias_term_ && ret)
ret = ocl4dnnAXPY<Dtype>(N_, 1, bias, 0, top, 0);
return ret;
}
else
{
ret = false;
size_t max_image_size = std::min(ocl::Device::getDefault().image2DMaxWidth(),
ocl::Device::getDefault().image2DMaxHeight());
if (M_ <= max_image_size &&
N_ <= max_image_size &&
K_ <= max_image_size &&
cv::traits::Depth<Dtype>::value == CV_32F &&
ocl::Device::getDefault().intelSubgroupsSupport())
{
ret = ocl4dnnGEMMCommon<Dtype>(transpose_ ? CblasNoTrans : CblasTrans,
M_, N_, K_, bottom, weight, UMat(), top,
max_image_size);
}
return ret;
}
}
template class OCL4DNNInnerProduct<float>;
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif // HAVE_OPENCL

@@ -0,0 +1,126 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "opencl_kernels_dnn.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNLRN<Dtype>::OCL4DNNLRN(OCL4DNNLRNConfig config)
{
lrn_type_ = config.lrn_type;
phase_test_ = config.phase_test;
size_ = config.local_size;
CHECK_EQ(size_ % 2, 1) << "LRN only supports odd values for local_size";
alpha_ = config.alpha;
beta_ = config.beta;
k_ = config.k;
norm_by_size_ = config.norm_by_size;
num_ = config.batch_size;
channels_ = config.channels;
height_ = config.height;
width_ = config.width;
}
template<typename Dtype>
bool OCL4DNNLRN<Dtype>::Forward(const UMat& bottom, UMat& top)
{
bool ret = true;
if (!ocl::Device::getDefault().intelSubgroupsSupport())
return false;
switch (lrn_type_)
{
case LRNParameter_NormRegion_ACROSS_CHANNELS:
ret = crossChannelForward(bottom, top);
break;
case LRNParameter_NormRegion_WITHIN_CHANNEL:
//TODO
//WithinChannelForward(bottom_data, top_data);
ret = false;
break;
default:
ret = false;
LOG(FATAL)<< "Unknown normalization region.";
}
return ret;
}
template<typename Dtype>
bool OCL4DNNLRN<Dtype>::crossChannelForward(const UMat& bottom, UMat& top)
{
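// Standard across-channels LRN:
//   top[i] = bottom[i] * (k + (alpha / n) * sum_j bottom[j]^2)^(-beta)
// where j runs over a window of size_ channels around i. The kernel is
// passed alpha already divided by the window size (when norm_by_size_)
// and -beta as the exponent.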
ocl::Queue queue = ocl::Queue::getDefault();
CHECK_EQ(phase_test_, true) << "Only support forward inference.";
cl_uint argIdx = 0;
int32_t n_threads = num_ * height_ * width_;
size_t global_work_size_[1] = {(size_t)n_threads};
String opts = clOptionSupport("-cl-no-subgroup-ifp") ? " -cl-no-subgroup-ifp " : "";
ocl::Kernel oclk_lrn_fill;
if (!oclk_lrn_fill.create(CL_KERNEL_SELECT("lrn_full_no_scale"), ocl::dnn::ocl4dnn_lrn_oclsrc, opts))
return false;
oclk_lrn_fill.set(argIdx++, n_threads);
oclk_lrn_fill.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_lrn_fill.set(argIdx++, num_);
oclk_lrn_fill.set(argIdx++, channels_);
oclk_lrn_fill.set(argIdx++, height_);
oclk_lrn_fill.set(argIdx++, width_);
oclk_lrn_fill.set(argIdx++, size_);
int size_norm_factor = norm_by_size_ ? size_ : 1;
oclk_lrn_fill.set(argIdx++, alpha_ / size_norm_factor);
oclk_lrn_fill.set(argIdx++, k_);
oclk_lrn_fill.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
oclk_lrn_fill.set(argIdx++, -beta_);
return oclk_lrn_fill.run(1, global_work_size_, NULL, false);
}
template class OCL4DNNLRN<float>;
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif // HAVE_OPENCL

@@ -0,0 +1,213 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include <string>
#include <vector>
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "opencl_kernels_dnn.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNPool<Dtype>::OCL4DNNPool(OCL4DNNPoolConfig config)
{
int dims = config.in_shape.size();
int spatial_dims = 2;
batch_size_ = config.in_shape[0];
channels_ = config.channels;
pool_method_ = config.pool_method;
for (int i = 0; i < spatial_dims; ++i)
{
kernel_shape_.push_back(i == 0 ? config.kernel.height : config.kernel.width);
pad_.push_back(i == 0 ? config.pad.height : config.pad.width);
stride_.push_back(i == 0 ? config.stride.height : config.stride.width);
im_in_shape_.push_back(config.in_shape[dims - spatial_dims + i]);
im_out_shape_.push_back(config.out_shape[dims - spatial_dims + i]);
}
kernel_h_ = kernel_shape_[0];
kernel_w_ = kernel_shape_[1];
stride_h_ = stride_[0];
stride_w_ = stride_[1];
pad_h_ = pad_[0];
pad_w_ = pad_[1];
height_ = im_in_shape_[0];
width_ = im_in_shape_[1];
pooled_height_ = im_out_shape_[0];
pooled_width_ = im_out_shape_[1];
count_ = 1;
for (int i = 0; i < config.out_shape.size(); ++i)
{
count_ *= config.out_shape[i];
}
}
template<typename Dtype>
OCL4DNNPool<Dtype>::~OCL4DNNPool()
{
mask_idx_.release();
}
template<typename Dtype>
bool OCL4DNNPool<Dtype>::Forward(const UMat& bottom,
UMat& top,
UMat& top_mask)
{
bool ret = true;
ocl::Queue queue = ocl::Queue::getDefault();
size_t global[] = { 128 * 128 };
size_t local[] = { 128 };
cl_uint argIdx = 0;
// Only the 2D pooling case is supported.
switch (pool_method_)
{
case LIBDNN_POOLING_METHOD_MAX:
{
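// If the caller did not pass an explicit top_mask, keep an internal
// buffer so the kernel can still record the locations of the maxima.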
if (top_mask.empty() && mask_idx_.empty())
{
mask_idx_.create(1, count_, CV_32FC1);
}
ocl::Kernel oclk_max_pool_forward(CL_KERNEL_SELECT("max_pool_forward"),
cv::ocl::dnn::ocl4dnn_pooling_oclsrc);
if (oclk_max_pool_forward.empty())
return false;
argIdx = 0;
oclk_max_pool_forward.set(argIdx++, count_);
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_max_pool_forward.set(argIdx++, batch_size_);
oclk_max_pool_forward.set(argIdx++, channels_);
oclk_max_pool_forward.set(argIdx++, height_);
oclk_max_pool_forward.set(argIdx++, width_);
oclk_max_pool_forward.set(argIdx++, pooled_height_);
oclk_max_pool_forward.set(argIdx++, pooled_width_);
oclk_max_pool_forward.set(argIdx++, kernel_h_);
oclk_max_pool_forward.set(argIdx++, kernel_w_);
oclk_max_pool_forward.set(argIdx++, stride_h_);
oclk_max_pool_forward.set(argIdx++, stride_w_);
oclk_max_pool_forward.set(argIdx++, pad_h_);
oclk_max_pool_forward.set(argIdx++, pad_w_);
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
oclk_max_pool_forward.set(argIdx++, mask_idx_.empty() ? 0 : 1);
if (mask_idx_.empty())
oclk_max_pool_forward.set(argIdx++, (void *)NULL);
else
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(mask_idx_));
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top_mask));
ret = oclk_max_pool_forward.run(1, global, local, false);
}
break;
case LIBDNN_POOLING_METHOD_AVE:
{
ocl::Kernel oclk_ave_pool_forward(CL_KERNEL_SELECT("ave_pool_forward"),
cv::ocl::dnn::ocl4dnn_pooling_oclsrc);
if (oclk_ave_pool_forward.empty())
return false;
argIdx = 0;
oclk_ave_pool_forward.set(argIdx++, count_);
oclk_ave_pool_forward.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_ave_pool_forward.set(argIdx++, batch_size_);
oclk_ave_pool_forward.set(argIdx++, channels_);
oclk_ave_pool_forward.set(argIdx++, height_);
oclk_ave_pool_forward.set(argIdx++, width_);
oclk_ave_pool_forward.set(argIdx++, pooled_height_);
oclk_ave_pool_forward.set(argIdx++, pooled_width_);
oclk_ave_pool_forward.set(argIdx++, kernel_h_);
oclk_ave_pool_forward.set(argIdx++, kernel_w_);
oclk_ave_pool_forward.set(argIdx++, stride_h_);
oclk_ave_pool_forward.set(argIdx++, stride_w_);
oclk_ave_pool_forward.set(argIdx++, pad_h_);
oclk_ave_pool_forward.set(argIdx++, pad_w_);
oclk_ave_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
ret = oclk_ave_pool_forward.run(1, global, local, false);
}
break;
case LIBDNN_POOLING_METHOD_STO:
{
ocl::Kernel oclk_sto_pool_forward(CL_KERNEL_SELECT("sto_pool_forward_test"),
cv::ocl::dnn::ocl4dnn_pooling_oclsrc);
if (oclk_sto_pool_forward.empty())
return false;
argIdx = 0;
oclk_sto_pool_forward.set(argIdx++, count_);
oclk_sto_pool_forward.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_sto_pool_forward.set(argIdx++, batch_size_);
oclk_sto_pool_forward.set(argIdx++, channels_);
oclk_sto_pool_forward.set(argIdx++, height_);
oclk_sto_pool_forward.set(argIdx++, width_);
oclk_sto_pool_forward.set(argIdx++, pooled_height_);
oclk_sto_pool_forward.set(argIdx++, pooled_width_);
oclk_sto_pool_forward.set(argIdx++, kernel_h_);
oclk_sto_pool_forward.set(argIdx++, kernel_w_);
oclk_sto_pool_forward.set(argIdx++, stride_h_);
oclk_sto_pool_forward.set(argIdx++, stride_w_);
oclk_sto_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
ret = oclk_sto_pool_forward.run(1, global, local, false);
}
break;
default:
{
ret = false;
LOG(FATAL)<< "Unknown pooling method.";
}
}
return ret;
}
template class OCL4DNNPool<float>;
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif // HAVE_OPENCL

@@ -0,0 +1,135 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include <vector>
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "opencl_kernels_dnn.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNSoftmax<Dtype>::OCL4DNNSoftmax(OCL4DNNSoftmaxConfig config)
{
softmax_axis_ = config.axis;
channels_ = config.channels;
inner_num_ = 1;
outer_num_ = 1;
count_ = 1;
int32_t scale_sz = 1;
for (int32_t i = softmax_axis_ + 1; i < config.in_shape.size(); i++)
inner_num_ *= config.in_shape[i];
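// Use the shared-local-memory kernel only when one softmax slice
// (channels * inner_num_ data plus 17 * inner_num_ scratch floats) fits
// within 8192 floats, presumably a 32 KB local memory budget.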
use_slm_ = (config.in_shape[softmax_axis_] * inner_num_ + inner_num_ * 17) <= 8192;
for (int32_t i = 0; i < softmax_axis_; i++)
outer_num_ *= config.in_shape[i];
count_ = inner_num_ + outer_num_;
std::vector<int32_t> scale_dims = config.in_shape;
scale_dims[softmax_axis_] = use_slm_ ? 1 : 17;
for (int32_t i = 0; i < scale_dims.size(); i++)
scale_sz *= scale_dims[i];
scale_data_.create(1, scale_sz, CV_32FC1);
}
template<typename Dtype>
OCL4DNNSoftmax<Dtype>::~OCL4DNNSoftmax()
{
scale_data_.release();
}
template<typename Dtype>
bool OCL4DNNSoftmax<Dtype>::Forward(const UMat& bottom, UMat& top)
{
bool ret = false;
ocl::Queue queue = ocl::Queue::getDefault();
bool intel_subgroup = ocl::Device::getDefault().intelSubgroupsSupport();
if (intel_subgroup && inner_num_ < 128)
{
String opts = clOptionSupport("-cl-no-subgroup-ifp") ? " -cl-no-subgroup-ifp " : "";
String kname;
ocl::Kernel oclk_softmax_forward_kernel;
if (use_slm_)
kname = CL_KERNEL_SELECT("softmax_forward_slm");
else
kname = CL_KERNEL_SELECT("softmax_forward");
if (!oclk_softmax_forward_kernel.create(kname.c_str(), ocl::dnn::softmax_loss_oclsrc, opts))
return false;
size_t global_size[] = { 256, (size_t)outer_num_, 1 };
size_t local_size[] = { 256, 1, 1 };
cl_uint argIdx = 0;
if (use_slm_)
{
oclk_softmax_forward_kernel.set(argIdx++, outer_num_);
oclk_softmax_forward_kernel.set(argIdx++, channels_);
oclk_softmax_forward_kernel.set(argIdx++, inner_num_);
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(scale_data_));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
oclk_softmax_forward_kernel.set(argIdx++, NULL, channels_ * inner_num_* sizeof(Dtype));
oclk_softmax_forward_kernel.set(argIdx++, NULL, inner_num_* sizeof(Dtype));
oclk_softmax_forward_kernel.set(argIdx++, NULL, 16 * inner_num_* sizeof(Dtype));
}
else
{
oclk_softmax_forward_kernel.set(argIdx++, outer_num_);
oclk_softmax_forward_kernel.set(argIdx++, channels_);
oclk_softmax_forward_kernel.set(argIdx++, inner_num_);
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(scale_data_));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
}
ret = oclk_softmax_forward_kernel.run(3, global_size, local_size, false);
}
return ret;
}
template class OCL4DNNSoftmax<float>;
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif // HAVE_OPENCL

@@ -1,3 +1,45 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void ReLUForward(const int count, __global const T* in, __global T* out
#ifndef RELU_NO_SLOPE
, T negative_slope

@@ -0,0 +1,26 @@
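// Per-channel affine batch normalization, one work-item per element:
//   dst = (src - meanMat[c] * varMeanScale) * weight[c] * invStdMat[c] + bias[c]
// with weight and bias optional (hasWeight / hasBias).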
__kernel void batchnorm(__global const T *src, int src_offset,
__global const float *meanMat,
float varMeanScale,
__global const float *invStdMat,
__global const float *weight,
__global const float *bias,
int hasWeight, int hasBias,
int width, int height, int channel,
__global T *dst, int dst_offset)
{
int x = get_global_id(0);
int y = get_global_id(1);
int c = get_global_id(2);
if (x >= width || y >= height || c >= channel)
return;
float mean = meanMat[c] * varMeanScale;
float invstd = invStdMat[c];
float w = hasWeight ? weight[c] : 1;
float b = hasBias ? bias[c] : 0;
int index = y * width + x + c * width * height;
T val = (src[index + src_offset] - mean) * w * invstd + b;
dst[index + dst_offset] = val;
}

@@ -0,0 +1,45 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
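// A no-op kernel; presumably used as a minimal launch for warm-up and
// timing purposes in the convolution auto-tuner.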
__kernel void null_kernel_float(float arg) {
float out = arg;
}

@@ -0,0 +1,60 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
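// Copies one bottom blob into its slice of the top blob along the concat
// axis. Illustrative example: concatenating blobs of 3 and 5 channels along
// axis 1 gives top_concat_axis == 8, with bottom_concat_axis == 3 then 5 and
// offset_concat_axis == 0 then 3 across the two launches.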
__kernel void concat(const int nthreads,
__global const Dtype* in_data,
const int num_concats,
const int concat_size,
const int top_concat_axis,
const int bottom_concat_axis,
const int offset_concat_axis,
__global Dtype* out_data) {
for (int index = get_global_id(0); index < nthreads;
index += get_global_size(0)) {
const int total_concat_size = concat_size * bottom_concat_axis;
const int concat_num = index / total_concat_size;
const int concat_index = index % total_concat_size;
const int top_index = concat_index
+ (concat_num * top_concat_axis + offset_concat_axis) * concat_size;
out_data[top_index] = in_data[index];
}
}

File diff suppressed because it is too large

@@ -0,0 +1,73 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
#define Dtype float
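// Reorders convolution weights from [outputs][channels][kernel_h][kernel_w]
// into a layout where groups of swizzleFactor filters are interleaved along
// the innermost dimension, matching the access pattern of kernels that set
// swizzle_weights.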
__kernel void TEMPLATE(copyWeightsSwizzled, Dtype)
(__global Dtype* weightIn,
__global Dtype* weightOut,
const int kernel_w,
const int kernel_h,
const int channels,
const int outputs,
const int swizzleFactor) {
unsigned int sX = get_global_id(0);
// Map the flat global id back to (filter, channel, y, x) in the original
// layout, then scatter into the swizzled output layout.
int outputSublayer = channels / swizzleFactor;      // currently unused
int outputSublayerIndex = channels % swizzleFactor; // currently unused
int filter = sX / (kernel_w*kernel_h*channels);
int kernel_X = sX % kernel_w;
int kernel_Y = (sX / kernel_w) % kernel_h;
int kernel_C = (sX / (kernel_w * kernel_h)) % channels;
int FP = filter / swizzleFactor;
int F1 = filter % swizzleFactor;
weightOut[FP*(kernel_w*kernel_h*channels*swizzleFactor) + kernel_C*(kernel_w*kernel_h*swizzleFactor) + kernel_Y*(kernel_w*swizzleFactor) + kernel_X*swizzleFactor + F1]
= weightIn[filter*(kernel_w*kernel_h*channels) + kernel_C*(kernel_w*kernel_h) + kernel_Y*kernel_w + kernel_X];
}

@@ -0,0 +1,43 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void dummy_kernel()
{
}

@@ -0,0 +1,635 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
// Types used for parameters, offset computations and so on
#define int_tp int
#define uint_tp unsigned int
#define Dtype float
#define Dtype2 float2
#define Dtype4 float4
#define Dtype8 float8
#define as_Dtype as_float
#define as_Dtype2 as_float2
#define as_Dtype4 as_float4
#define as_Dtype8 as_float8
#define KERNEL_ARG_DTYPE float
#if defined(cl_intel_subgroups)
#pragma OPENCL EXTENSION cl_intel_subgroups : enable
#endif
#define TILE_M 32
#define TILE_K 8
// Common block to calculate (alpha * A*B + beta * C) and write the result to the destination image.
#define SUBGROUP_BLOCK_READ8( __image, __coord ) intel_sub_group_block_read8( __image, __coord )
#define SHUFFLE_TYPE2(val) val
#define SHUFFLE_TYPE8(val) val
#define READ_IMAGE(__image, __coord) read_imagef(__image, sampler, __coord)
#define SIZE_OF_ELEMENT sizeof(uint)
#define SIMD_SIZE_GEMM 8
#define TILE_N 8
//#define USE_IMAGE_C
#ifdef USE_IMAGE_C
#define BLOCKC_READ8( _C, _coordC ) as_Dtype8( intel_sub_group_block_read8( _C, _coordC ) )
#define BLOCKC_WRITE8( _C, _coordC, _val ) intel_sub_group_block_write8( _C, _coordC, as_uint8( _val ) )
#define MATC_PARAMETER __read_only image2d_t C, __write_only image2d_t dst
#define GEMM_OUTPUT(ALPHA1, BETA_NOT0) GEMM_OUTPUT_EXT(ALPHA1, BETA_NOT0, C, dst, sizeof(uint))
#else
#define BLOCKC_READ8( _C, _coordC ) \
(Dtype8) ( (_coordC.x + get_local_id(0) < N && _coordC.y < M) ? _C[ _coordC.y * ldc + _coordC.x + get_local_id(0) ] : 0, \
(_coordC.x + get_local_id(0) < N && _coordC.y + 1 < M) ? _C[ ( _coordC.y + 1 ) * ldc + _coordC.x + get_local_id(0) ] : 0, \
(_coordC.x + get_local_id(0) < N && _coordC.y + 2 < M) ? _C[ ( _coordC.y + 2 ) * ldc + _coordC.x + get_local_id(0) ] : 0, \
(_coordC.x + get_local_id(0) < N && _coordC.y + 3 < M) ? _C[ ( _coordC.y + 3 ) * ldc + _coordC.x + get_local_id(0) ] : 0, \
(_coordC.x + get_local_id(0) < N && _coordC.y + 4 < M) ? _C[ ( _coordC.y + 4 ) * ldc + _coordC.x + get_local_id(0) ] : 0, \
(_coordC.x + get_local_id(0) < N && _coordC.y + 5 < M) ? _C[ ( _coordC.y + 5 ) * ldc + _coordC.x + get_local_id(0) ] : 0, \
(_coordC.x + get_local_id(0) < N && _coordC.y + 6 < M) ? _C[ ( _coordC.y + 6 ) * ldc + _coordC.x + get_local_id(0) ] : 0, \
(_coordC.x + get_local_id(0) < N && _coordC.y + 7 < M) ? _C[ ( _coordC.y + 7 ) * ldc + _coordC.x + get_local_id(0) ] : 0)
#define BLOCKC_WRITE8( _C, _coordC, _val) do {\
if (_coordC.x + get_local_id(0) < N) { \
if (_coordC.y < M) \
_C[ _coordC.y * ldc + _coordC.x + get_local_id(0) ] = _val.s0; \
if (_coordC.y + 1 < M) \
_C[ ( _coordC.y + 1 )* ldc + _coordC.x + get_local_id(0) ] = _val.s1; \
if (_coordC.y + 2 < M) \
_C[ ( _coordC.y + 2 )* ldc + _coordC.x + get_local_id(0) ] = _val.s2; \
if (_coordC.y + 3 < M) \
_C[ ( _coordC.y + 3 )* ldc + _coordC.x + get_local_id(0) ] = _val.s3; \
if (_coordC.y + 4 < M) \
_C[ ( _coordC.y + 4 )* ldc + _coordC.x + get_local_id(0) ] = _val.s4; \
if (_coordC.y + 5 < M) \
_C[ ( _coordC.y + 5 )* ldc + _coordC.x + get_local_id(0) ] = _val.s5; \
if (_coordC.y + 6 < M) \
_C[ ( _coordC.y + 6 )* ldc + _coordC.x + get_local_id(0) ] = _val.s6; \
if (_coordC.y + 7 < M) \
_C[ ( _coordC.y + 7 )* ldc + _coordC.x + get_local_id(0) ] = _val.s7; \
}} while(0)
#define MATC_PARAMETER __global Dtype * C, const int offC, const int M, const int N, const int ldc
#define GEMM_OUTPUT(ALPHA1, BETA_NOT0) GEMM_OUTPUT_EXT(ALPHA1, BETA_NOT0, (C + offC), (C + offC), 1)
#endif
#define GEMM_OUTPUT_EXT(ALPHA1, BETA_NOT0, _C, _dst, _C_step) \
int2 coordDst = (int2)( ( group_x * TILE_N ) * _C_step, ( group_y * TILE_M ) ); \
int2 coordC = coordDst; \
Dtype8 blockC00; \
Dtype8 blockC01; \
Dtype8 blockC02; \
Dtype8 blockC03; \
if (BETA_NOT0) { \
blockC00 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); coordC.y += 8; \
blockC01 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); coordC.y += 8; \
blockC02 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); coordC.y += 8; \
blockC03 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); \
if (!ALPHA1) { \
blockC00 = mad(blockAxB00, (Dtype8)alpha, blockC00); \
blockC01 = mad(blockAxB01, (Dtype8)alpha, blockC01); \
blockC02 = mad(blockAxB02, (Dtype8)alpha, blockC02); \
blockC03 = mad(blockAxB03, (Dtype8)alpha, blockC03); \
} else { \
blockC00 += blockAxB00; \
blockC01 += blockAxB01; \
blockC02 += blockAxB02; \
blockC03 += blockAxB03; \
} \
} else { \
blockC00 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); coordC.y += 8; \
blockC01 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); coordC.y += 8; \
blockC02 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); coordC.y += 8; \
blockC03 = isFirstColBlock ? BLOCKC_READ8( _C, coordC ) * beta : BLOCKC_READ8( _C, coordC ); \
if (!ALPHA1) { \
blockC00 = mad(blockAxB00, (Dtype8)alpha, blockC00); \
blockC01 = mad(blockAxB01, (Dtype8)alpha, blockC01); \
blockC02 = mad(blockAxB02, (Dtype8)alpha, blockC02); \
blockC03 = mad(blockAxB03, (Dtype8)alpha, blockC03); \
} else { \
blockC00 += blockAxB00; \
blockC01 += blockAxB01; \
blockC02 += blockAxB02; \
blockC03 += blockAxB03; \
} \
} \
BLOCKC_WRITE8( _dst, coordDst, blockC00 ); coordDst.y += 8; \
BLOCKC_WRITE8( _dst, coordDst, blockC01 ); coordDst.y += 8; \
BLOCKC_WRITE8( _dst, coordDst, blockC02 ); coordDst.y += 8; \
BLOCKC_WRITE8( _dst, coordDst, blockC03 );
// Get the specified column of the 8x8 block.
#define TRANSPOSE_BLOCK_8( _block, _col ) \
(Dtype8)( intel_sub_group_shuffle( _block.s0, _col ), \
intel_sub_group_shuffle( _block.s1, _col ), \
intel_sub_group_shuffle( _block.s2, _col ), \
intel_sub_group_shuffle( _block.s3, _col ), \
intel_sub_group_shuffle( _block.s4, _col ), \
intel_sub_group_shuffle( _block.s5, _col ), \
intel_sub_group_shuffle( _block.s6, _col ), \
intel_sub_group_shuffle( _block.s7, _col ) );
// Multiply A's column block by B's row block.
#define MULTIPLY_BLOCKS_8x8( _result, _blockA, _blockB ) \
{ \
const Dtype8 acol0 = TRANSPOSE_BLOCK_8( _blockA, 0 ); \
const Dtype8 acol1 = TRANSPOSE_BLOCK_8( _blockA, 1 ); \
const Dtype8 acol2 = TRANSPOSE_BLOCK_8( _blockA, 2 ); \
const Dtype8 acol3 = TRANSPOSE_BLOCK_8( _blockA, 3 ); \
const Dtype8 acol4 = TRANSPOSE_BLOCK_8( _blockA, 4 ); \
const Dtype8 acol5 = TRANSPOSE_BLOCK_8( _blockA, 5 ); \
const Dtype8 acol6 = TRANSPOSE_BLOCK_8( _blockA, 6 ); \
const Dtype8 acol7 = TRANSPOSE_BLOCK_8( _blockA, 7 ); \
_result = mad( (Dtype8)(_blockB.s0), acol0, _result ); \
_result = mad( (Dtype8)(_blockB.s1), acol1, _result ); \
_result = mad( (Dtype8)(_blockB.s2), acol2, _result ); \
_result = mad( (Dtype8)(_blockB.s3), acol3, _result ); \
_result = mad( (Dtype8)(_blockB.s4), acol4, _result ); \
_result = mad( (Dtype8)(_blockB.s5), acol5, _result ); \
_result = mad( (Dtype8)(_blockB.s6), acol6, _result ); \
_result = mad( (Dtype8)(_blockB.s7), acol7, _result ); \
}
#define GEMM_NN(ALPHA1, BETA_NOT0) \
__attribute__((intel_reqd_sub_group_size(SIMD_SIZE_GEMM))) \
__attribute__((reqd_work_group_size(SIMD_SIZE_GEMM, 1, 1))) \
__kernel void TEMPLATE(gemm_32_1_NN_ ##ALPHA1 ##_ ##BETA_NOT0, Dtype)( \
__read_only image2d_t A, \
__read_only image2d_t B, \
MATC_PARAMETER, \
KERNEL_ARG_DTYPE alpha_in, \
KERNEL_ARG_DTYPE beta_in, \
int width0, \
int isFirstColBlock) \
{ \
const Dtype alpha = (Dtype)alpha_in; \
const Dtype beta = (Dtype)beta_in; \
const int group_x = get_group_id(0); \
const int group_y = get_group_id(1); \
Dtype8 blockAxB00 = 0.0f; \
Dtype8 blockAxB01 = 0.0f; \
Dtype8 blockAxB02 = 0.0f; \
Dtype8 blockAxB03 = 0.0f; \
int2 coordA = (int2)( 0, group_y * TILE_M ); \
int2 coordB = (int2)( ( group_x * TILE_N ) * SIZE_OF_ELEMENT, 0 ); \
do \
{ \
int2 coordBTemp = coordB; \
Dtype8 blockB00 = as_Dtype8( SUBGROUP_BLOCK_READ8( B, coordBTemp ) ); coordB.y += TILE_K; \
int2 coordATemp = coordA; \
Dtype8 blockA00 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.y += 8; \
Dtype8 blockA01 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.y += 8; \
Dtype8 blockA02 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.y += 8; \
Dtype8 blockA03 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordA.x += TILE_K * SIZE_OF_ELEMENT; \
MULTIPLY_BLOCKS_8x8( blockAxB00, blockA00, blockB00 ); \
MULTIPLY_BLOCKS_8x8( blockAxB01, blockA01, blockB00 ); \
MULTIPLY_BLOCKS_8x8( blockAxB02, blockA02, blockB00 ); \
MULTIPLY_BLOCKS_8x8( blockAxB03, blockA03, blockB00 ); \
} \
while( coordB.y < width0 ); \
GEMM_OUTPUT(ALPHA1, BETA_NOT0); \
}
GEMM_NN(1, 0) // ALPHA == 1, BETA == 0
GEMM_NN(1, 1) // ALPHA == 1, BETA != 0
GEMM_NN(0, 0) // ALPHA != 1, BETA == 0
GEMM_NN(0, 1) // ALPHA != 1, BETA != 0
#undef TRANSPOSE_BLOCK_8
#undef MULTIPLY_BLOCKS_8x8
#undef GEMM_NN
// Gather eight consecutive lanes of a row into a column block.
#define TRANSPOSE_BLOCK_8(_vec, _col) \
(Dtype8)( intel_sub_group_shuffle(_vec, _col + 0), \
intel_sub_group_shuffle(_vec, _col + 1), \
intel_sub_group_shuffle(_vec, _col + 2), \
intel_sub_group_shuffle(_vec, _col + 3), \
intel_sub_group_shuffle(_vec, _col + 4), \
intel_sub_group_shuffle(_vec, _col + 5), \
intel_sub_group_shuffle(_vec, _col + 6), \
intel_sub_group_shuffle(_vec, _col + 7) )
#define MULTIPLY_BLOCKS_8x8( _result, _blockA, _blockB, _col ) \
{ \
_result = mad( (Dtype8)(_blockB.s0), TRANSPOSE_BLOCK_8(_blockA.s0, _col), _result ); \
_result = mad( (Dtype8)(_blockB.s1), TRANSPOSE_BLOCK_8(_blockA.s1, _col), _result ); \
_result = mad( (Dtype8)(_blockB.s2), TRANSPOSE_BLOCK_8(_blockA.s2, _col), _result ); \
_result = mad( (Dtype8)(_blockB.s3), TRANSPOSE_BLOCK_8(_blockA.s3, _col), _result ); \
_result = mad( (Dtype8)(_blockB.s4), TRANSPOSE_BLOCK_8(_blockA.s4, _col), _result ); \
_result = mad( (Dtype8)(_blockB.s5), TRANSPOSE_BLOCK_8(_blockA.s5, _col), _result ); \
_result = mad( (Dtype8)(_blockB.s6), TRANSPOSE_BLOCK_8(_blockA.s6, _col), _result ); \
_result = mad( (Dtype8)(_blockB.s7), TRANSPOSE_BLOCK_8(_blockA.s7, _col), _result ); \
}
#define GEMM_TN(ALPHA1, BETA_NOT0) \
__attribute__((intel_reqd_sub_group_size(SIMD_SIZE_GEMM))) \
__attribute__((reqd_work_group_size(SIMD_SIZE_GEMM, 1, 1))) \
__kernel void TEMPLATE(gemm_32_1_TN_ ##ALPHA1 ##_ ##BETA_NOT0,Dtype)( \
__read_only image2d_t A, \
__read_only image2d_t B, \
MATC_PARAMETER, \
KERNEL_ARG_DTYPE alpha_in, \
KERNEL_ARG_DTYPE beta_in, \
int width0, \
int isFirstColBlock) \
{ \
const Dtype alpha = (Dtype)alpha_in; \
const Dtype beta = (Dtype)beta_in; \
const int group_x = get_group_id(0);\
const int group_y = get_group_id(1);\
Dtype8 blockAxB00 = 0.0f;\
Dtype8 blockAxB01 = 0.0f;\
Dtype8 blockAxB02 = 0.0f;\
Dtype8 blockAxB03 = 0.0f;\
int2 coordA = (int2)( group_y * TILE_M * SIZE_OF_ELEMENT, 0 );\
int2 coordB = (int2)( ( group_x * TILE_N ) * SIZE_OF_ELEMENT, 0 );\
do\
{\
int2 coordBTemp = coordB;\
Dtype8 blockB00 = as_Dtype8( SUBGROUP_BLOCK_READ8( B, coordBTemp ) ); coordB.y += TILE_K;\
int2 coordATemp = coordA;\
Dtype8 blockA00 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.x += 8 * SIZE_OF_ELEMENT;\
Dtype8 blockA01 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.x += 8 * SIZE_OF_ELEMENT;\
Dtype8 blockA02 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.x += 8 * SIZE_OF_ELEMENT;\
Dtype8 blockA03 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordA.y += TILE_K;\
MULTIPLY_BLOCKS_8x8( blockAxB00, blockA00, blockB00, 0 ); \
MULTIPLY_BLOCKS_8x8( blockAxB01, blockA01, blockB00, 0 ); \
MULTIPLY_BLOCKS_8x8( blockAxB02, blockA02, blockB00, 0 ); \
MULTIPLY_BLOCKS_8x8( blockAxB03, blockA03, blockB00, 0 ); \
} \
while( coordB.y < width0 ); \
GEMM_OUTPUT(ALPHA1, BETA_NOT0); \
}
GEMM_TN(1, 0) // ALPHA == 1, BETA == 0
GEMM_TN(1, 1) // ALPHA == 1, BETA != 0
GEMM_TN(0, 0) // ALPHA != 1, BETA == 0
GEMM_TN(0, 1) // ALPHA != 1, BETA != 0
#undef MULTIPLY_BLOCKS_8x8
#undef TRANSPOSE_BLOCK_8
#undef GEMM_TN
// The same as GEMM_NN
#define TRANSPOSE_BLOCK_8( _block, _col ) \
(Dtype8)( intel_sub_group_shuffle( _block.s0, _col), \
intel_sub_group_shuffle( _block.s1, _col), \
intel_sub_group_shuffle( _block.s2, _col), \
intel_sub_group_shuffle( _block.s3, _col), \
intel_sub_group_shuffle( _block.s4, _col), \
intel_sub_group_shuffle( _block.s5, _col), \
intel_sub_group_shuffle( _block.s6, _col), \
intel_sub_group_shuffle( _block.s7, _col) )
#define MULTIPLY_BLOCKS_8x8( _result, _blockA, _blockB ) \
{ \
const Dtype8 acol0 = TRANSPOSE_BLOCK_8( _blockA, 0 ); \
const Dtype8 acol1 = TRANSPOSE_BLOCK_8( _blockA, 1 ); \
const Dtype8 acol2 = TRANSPOSE_BLOCK_8( _blockA, 2 ); \
const Dtype8 acol3 = TRANSPOSE_BLOCK_8( _blockA, 3 ); \
const Dtype8 acol4 = TRANSPOSE_BLOCK_8( _blockA, 4 ); \
const Dtype8 acol5 = TRANSPOSE_BLOCK_8( _blockA, 5 ); \
const Dtype8 acol6 = TRANSPOSE_BLOCK_8( _blockA, 6 ); \
const Dtype8 acol7 = TRANSPOSE_BLOCK_8( _blockA, 7 ); \
_result = mad( (Dtype8)_blockB.s0, acol0, _result ); \
_result = mad( (Dtype8)_blockB.s1, acol1, _result ); \
_result = mad( (Dtype8)_blockB.s2, acol2, _result ); \
_result = mad( (Dtype8)_blockB.s3, acol3, _result ); \
_result = mad( (Dtype8)_blockB.s4, acol4, _result ); \
_result = mad( (Dtype8)_blockB.s5, acol5, _result ); \
_result = mad( (Dtype8)_blockB.s6, acol6, _result ); \
_result = mad( (Dtype8)_blockB.s7, acol7, _result ); \
}
#define GEMM_NT(ALPHA1, BETA_NOT0, VECSCALAR, VECSIZE) \
__attribute__((intel_reqd_sub_group_size(SIMD_SIZE_GEMM))) \
__attribute__((reqd_work_group_size(SIMD_SIZE_GEMM, 1, 1))) \
__kernel void TEMPLATE(gemm_32_1_NT_ ##VECSCALAR ##_ ##ALPHA1 ##_ ##BETA_NOT0,Dtype)( \
__read_only image2d_t A, \
MATB_PARAMETER, \
MATC_PARAMETER, \
KERNEL_ARG_DTYPE alpha_in, \
KERNEL_ARG_DTYPE beta_in, \
int padded_k, \
int k, \
int isFirstColBlock) \
{ \
const Dtype alpha = (Dtype)alpha_in; \
const Dtype beta = (Dtype)beta_in; \
const int group_x = get_group_id(0); \
const int group_y = get_group_id(1); \
Dtype8 blockAxB00 = 0.0f; \
Dtype8 blockAxB01 = 0.0f; \
Dtype8 blockAxB02 = 0.0f; \
Dtype8 blockAxB03 = 0.0f; \
int2 coordA = (int2)( 0, group_y * TILE_M ); \
int2 coordB = (int2)( 0, ( group_x * TILE_N )); \
const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST; \
do \
{ \
Dtype8 blockB00; \
BLOCKB_READ8(blockB00, B, coordB); \
int2 coordATemp = coordA; \
Dtype8 blockA00 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.y += 8; \
Dtype8 blockA01 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.y += 8; \
Dtype8 blockA02 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.y += 8; \
Dtype8 blockA03 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordA.x += TILE_K * SIZE_OF_ELEMENT; \
MULTIPLY_BLOCKS_8x8( blockAxB00, blockA00, blockB00 ); \
MULTIPLY_BLOCKS_8x8( blockAxB01, blockA01, blockB00 ); \
MULTIPLY_BLOCKS_8x8( blockAxB02, blockA02, blockB00 ); \
MULTIPLY_BLOCKS_8x8( blockAxB03, blockA03, blockB00 ); \
} \
while( coordB.x < padded_k / VECSIZE ); \
GEMM_OUTPUT(ALPHA1, BETA_NOT0); \
}
#define BLOCKB_READ8(_blockb, _B, _coordB) \
int2 _coordBTemp = _coordB; \
_coordBTemp.y += get_local_id(0); \
_blockb.s0123 = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s4567 = READ_IMAGE(_B, _coordBTemp); _coordB.x += 2;
#define MATB_PARAMETER __read_only image2d_t B
GEMM_NT(1, 0, VEC4, 4) // ALPHA == 1, BETA == 0
GEMM_NT(1, 1, VEC4, 4) // ALPHA == 1, BETA != 0
GEMM_NT(0, 0, VEC4, 4) // ALPHA != 1, BETA == 0
GEMM_NT(0, 1, VEC4, 4) // ALPHA != 1, BETA != 0
#undef BLOCKB_READ8
#undef MATB_PARAMETER
#define BLOCKB_READ8(_blockb, _B, _coordB) \
int2 _coordBTemp = _coordB; \
_coordBTemp.y += get_local_id(0); \
const __global Dtype *B_read = (__global Dtype *)(_B + (_coordBTemp.y * ldb) + _coordBTemp.x + offB); \
_blockb = vload8(0, B_read); \
_coordB.x += TILE_K;
#define MATB_PARAMETER __global Dtype *B, int offB, int ldb
GEMM_NT(1, 0, BUFFER, 1) // ALPHA == 1, BETA == 0
GEMM_NT(1, 1, BUFFER, 1) // ALPHA == 1, BETA != 0
GEMM_NT(0, 0, BUFFER, 1) // ALPHA != 1, BETA == 0
GEMM_NT(0, 1, BUFFER, 1) // ALPHA != 1, BETA != 0
#undef BLOCKB_READ8
#undef MATB_PARAMETER
#define BLOCKB_READ8(_blockb, _B, _coordB) \
int2 _coordBTemp = _coordB; \
_coordBTemp.y += get_local_id(0); \
Dtype4 temp; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s0 = temp.s0; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s1 = temp.s0; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s2 = temp.s0; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s3 = temp.s0; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s4 = temp.s0; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s5 = temp.s0; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s6 = temp.s0; \
temp = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s7 = temp.s0; \
_coordB.x += 8;
#define MATB_PARAMETER __read_only image2d_t B
GEMM_NT(1, 0, SCALAR, 1) // ALPHA == 1, BETA == 0
GEMM_NT(1, 1, SCALAR, 1) // ALPHA == 1, BETA != 0
GEMM_NT(0, 0, SCALAR, 1) // ALPHA != 1, BETA == 0
GEMM_NT(0, 1, SCALAR, 1) // ALPHA != 1, BETA != 0
#undef BLOCKB_READ8
#undef MATB_PARAMETER
#undef MULTIPLY_BLOCKS_8x8
#undef TRANSPOSE_BLOCK_8
#undef GEMM_NT
// The same as GEMM_TN.
#define TRANSPOSE_BLOCK_8(_vec, _col) \
(Dtype8)( intel_sub_group_shuffle(_vec, _col + 0), \
intel_sub_group_shuffle(_vec, _col + 1), \
intel_sub_group_shuffle(_vec, _col + 2), \
intel_sub_group_shuffle(_vec, _col + 3), \
intel_sub_group_shuffle(_vec, _col + 4), \
intel_sub_group_shuffle(_vec, _col + 5), \
intel_sub_group_shuffle(_vec, _col + 6), \
intel_sub_group_shuffle(_vec, _col + 7) );
#define MULTIPLY_BLOCKS_8x8( _result, _blockA, _blockB, _col ) \
{ \
const Dtype8 acol0 = TRANSPOSE_BLOCK_8( _blockA.s0, _col ); \
const Dtype8 acol1 = TRANSPOSE_BLOCK_8( _blockA.s1, _col ); \
const Dtype8 acol2 = TRANSPOSE_BLOCK_8( _blockA.s2, _col ); \
const Dtype8 acol3 = TRANSPOSE_BLOCK_8( _blockA.s3, _col ); \
const Dtype8 acol4 = TRANSPOSE_BLOCK_8( _blockA.s4, _col ); \
const Dtype8 acol5 = TRANSPOSE_BLOCK_8( _blockA.s5, _col ); \
const Dtype8 acol6 = TRANSPOSE_BLOCK_8( _blockA.s6, _col ); \
const Dtype8 acol7 = TRANSPOSE_BLOCK_8( _blockA.s7, _col ); \
_result = mad( (Dtype8)_blockB.s0, acol0, _result ); \
_result = mad( (Dtype8)_blockB.s1, acol1, _result ); \
_result = mad( (Dtype8)_blockB.s2, acol2, _result ); \
_result = mad( (Dtype8)_blockB.s3, acol3, _result ); \
_result = mad( (Dtype8)_blockB.s4, acol4, _result ); \
_result = mad( (Dtype8)_blockB.s5, acol5, _result ); \
_result = mad( (Dtype8)_blockB.s6, acol6, _result ); \
_result = mad( (Dtype8)_blockB.s7, acol7, _result ); \
}
#define GEMM_TT(ALPHA1, BETA_NOT0, VECSCALAR, VECSIZE) \
__attribute__((intel_reqd_sub_group_size(SIMD_SIZE_GEMM))) \
__attribute__((reqd_work_group_size(SIMD_SIZE_GEMM, 1, 1))) \
__kernel void TEMPLATE(gemm_32_1_TT_ ##VECSCALAR ##_ ##ALPHA1 ##_ ##BETA_NOT0, Dtype)( \
__read_only image2d_t A, \
MATB_PARAMETER, \
MATC_PARAMETER, \
KERNEL_ARG_DTYPE alpha_in, \
KERNEL_ARG_DTYPE beta_in, \
int padded_k, \
int k, \
int isFirstColBlock) \
{ \
const Dtype alpha = (Dtype)alpha_in; \
const Dtype beta = (Dtype)beta_in; \
const int group_x = get_group_id(0); \
const int group_y = get_group_id(1); \
Dtype8 blockAxB00 = 0.0f; \
Dtype8 blockAxB01 = 0.0f; \
Dtype8 blockAxB02 = 0.0f; \
Dtype8 blockAxB03 = 0.0f; \
int2 coordA = (int2)( group_y * TILE_M * SIZE_OF_ELEMENT, 0 ); \
int2 coordB = (int2)( 0, ( group_x * TILE_N )); \
const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST; \
do \
{ \
Dtype8 blockB00; \
BLOCKB_READ8(blockB00, B, coordB); \
int2 coordATemp = coordA; \
Dtype8 blockA00 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.x += 8 * SIZE_OF_ELEMENT; \
Dtype8 blockA01 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.x += 8 * SIZE_OF_ELEMENT; \
Dtype8 blockA02 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordATemp.x += 8 * SIZE_OF_ELEMENT; \
Dtype8 blockA03 = as_Dtype8( SUBGROUP_BLOCK_READ8( A, coordATemp ) ); coordA.y += TILE_K; \
MULTIPLY_BLOCKS_8x8( blockAxB00, blockA00 , blockB00, 0 ); \
MULTIPLY_BLOCKS_8x8( blockAxB01, blockA01 , blockB00, 0 ); \
MULTIPLY_BLOCKS_8x8( blockAxB02, blockA02 , blockB00, 0 ); \
MULTIPLY_BLOCKS_8x8( blockAxB03, blockA03 , blockB00, 0 ); \
} \
while( coordB.x < padded_k / VECSIZE ); \
GEMM_OUTPUT(ALPHA1, BETA_NOT0);\
}
#define BLOCKB_READ8(_blockb, _B, _coordB) \
int2 _coordBTemp = _coordB; \
_coordBTemp.y += get_local_id(0); \
_blockb.s0123 = READ_IMAGE(_B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s4567 = READ_IMAGE(_B, _coordBTemp); _coordB.x += 2;
#define MATB_PARAMETER __read_only image2d_t B
GEMM_TT(1, 0, VEC4, 4) // ALPHA == 1, BETA == 0
GEMM_TT(1, 1, VEC4, 4) // ALPHA == 1, BETA != 0
GEMM_TT(0, 0, VEC4, 4) // ALPHA != 1, BETA == 0
GEMM_TT(0, 1, VEC4, 4) // ALPHA != 1, BETA != 0
#undef BLOCKB_READ8
#undef MATB_PARAMETER
#define BLOCKB_READ8(_blockb, _B, _coordB) \
int2 _coordBTemp = _coordB; \
_coordBTemp.y += get_local_id(0); \
const __global Dtype *B_read = (__global Dtype *)(_B + (_coordBTemp.y * k) + _coordBTemp.x + offB); \
_blockb = vload8(0, B_read); \
_coordB.x += TILE_K;
#define MATB_PARAMETER __global Dtype *B, int offB, int ldb
GEMM_TT(1, 0, BUFFER, 1) // ALPHA == 1, BETA == 0
GEMM_TT(1, 1, BUFFER, 1) // ALPHA == 1, BETA != 0
GEMM_TT(0, 0, BUFFER, 1) // ALPHA != 1, BETA == 0
GEMM_TT(0, 1, BUFFER, 1) // ALPHA != 1, BETA != 0
#undef BLOCKB_READ8
#undef MATB_PARAMETER
#define BLOCKB_READ8(_blockb, _B, _coordB) \
int2 _coordBTemp = _coordB; \
_coordBTemp.y += get_local_id(0); \
Dtype4 temp; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s0 = temp.s0; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s1 = temp.s0; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s2 = temp.s0; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s3 = temp.s0; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s4 = temp.s0; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s5 = temp.s0; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s6 = temp.s0; \
temp = READ_IMAGE(B, _coordBTemp); _coordBTemp.x += 1; \
_blockb.s7 = temp.s0; \
_coordB.x += 8;
#define MATB_PARAMETER __read_only image2d_t B
GEMM_TT(1, 0, SCALAR, 1) // ALPHA == 1, BETA == 0
GEMM_TT(1, 1, SCALAR, 1) // ALPHA == 1, BETA != 0
GEMM_TT(0, 0, SCALAR, 1) // ALPHA != 1, BETA == 0
GEMM_TT(0, 1, SCALAR, 1) // ALPHA != 1, BETA != 0
#undef BLOCKB_READ8
#undef MATB_PARAMETER
#undef MULTIPLY_BLOCKS_8x8
#undef TRANSPOSE_BLOCK_8
#undef GEMM_TT
#undef TILE_M
#undef TILE_K
#undef TILE_N
#undef SUBGROUP_BLOCK_READ8
#undef READ_IMAGE
#undef SIZE_OF_ELEMENT
__kernel void TEMPLATE(gemm_buffer_copy_image_transpose,Dtype)(
__global Dtype* A,
__write_only image2d_t ImA,
int offA,
int width,
int height,
int ldA)
{
const int gidx = get_global_id(0);
const int gidy = get_global_id(1);
int2 coord_dst = (int2)(gidx, gidy);
__global Dtype* A_off = A + offA;
Dtype srcA = A_off[gidy * ldA + gidx];
write_imagef(ImA, coord_dst, (Dtype4)srcA);
}
__kernel void TEMPLATE(gemm_buffer_copy_image_no_transpose,Dtype)(
__global Dtype* A,
__write_only image2d_t ImA,
int offA,
int width,
int height,
int ldA)
{
const int gidx = get_global_id(0);
const int gidy = get_global_id(1);
int2 coord_dst = (int2)(gidx, gidy);
if (gidx >= width || gidy >= height) {
write_imageui(ImA, coord_dst, (uint4)0);
return;
}
__global Dtype* A_off = A + offA;
uint4 srcA = convert_uint4(as_uchar4(A_off[gidy * ldA + gidx]));
write_imageui(ImA, coord_dst, srcA);
}
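All of the gemm_32_1_{NN,TN,NT,TT} variants above compute the same C = alpha*A*B + beta*C over 32x8 tiles; they differ only in how A and B are addressed (image vs. buffer, transposed vs. not). As a mental model, and as one way to validate kernel output, here is an untiled C++ reference; row-major layout and direct pointer indexing are assumptions of this sketch, since the kernels read A and B through images:

// Naive reference GEMM: C = alpha * op(A) * op(B) + beta * C.
void gemm_ref(int M, int N, int K, float alpha,
              const float* A, int lda, bool transA,
              const float* B, int ldb, bool transB,
              float beta, float* C, int ldc)
{
    for (int m = 0; m < M; ++m)
        for (int n = 0; n < N; ++n)
        {
            float acc = 0.f;
            for (int k = 0; k < K; ++k)
            {
                float a = transA ? A[k * lda + m] : A[m * lda + k];
                float b = transB ? B[n * ldb + k] : B[k * ldb + n];
                acc += a * b;
            }
            C[m * ldc + n] = alpha * acc + beta * C[m * ldc + n];
        }
}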

@@ -0,0 +1,55 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
#define Dtype float
__kernel void TEMPLATE(axpy,Dtype)(const int n, const Dtype alpha, __global const Dtype* x,
const int offx, __global Dtype* y,
const int offy) {
for (int index = get_global_id(0); index < n; index += get_global_size(0)) {
Dtype src = x[offx + index];
Dtype dst = y[offy + index];
y[offy + index] = alpha * src + dst;
}
}
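The axpy kernel is a grid-strided form of the BLAS operation y = alpha * x + y; sequentially it reduces to this sketch (axpy_ref is an illustrative name):

// Sequential equivalent of the axpy kernel above.
void axpy_ref(int n, float alpha, const float* x, int offx,
              float* y, int offy)
{
    for (int i = 0; i < n; ++i)
        y[offy + i] = alpha * x[offx + i] + y[offy + i];
}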

@@ -0,0 +1,191 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
#define Dtype float
__kernel void TEMPLATE(matvec_mul4,Dtype)(
__global const float * A,
int offA,
unsigned int A_col_size,
unsigned int trail_item,
__global const float * v,
int offv,
float alpha,
float beta,
__global float4 * result,
int offr,
__local float4 * work)
{
unsigned int row_gid = get_group_id(0);
unsigned int lid = get_local_id(0);
const __global float *src0_read = A + row_gid * 4 * A_col_size + offA;
const __global float *src1_read = v + offv;
result = (__global float4*)((__global float*)result + offr);
float4 dot0 = (float4)(0.f);
float4 dot1 = (float4)(0.f);
float4 dot2 = (float4)(0.f);
float4 dot3 = (float4)(0.f);
unsigned int i = lid;
while( i < A_col_size / 4) {
const float4 a0 = vload4(i, src0_read);
const float4 a1 = vload4(i, src0_read + A_col_size);
const float4 a2 = vload4(i, src0_read + 2 * A_col_size);
const float4 a3 = vload4(i, src0_read + 3 * A_col_size);
const float4 b0 = vload4(i, src1_read);
dot0 += a0 * b0;
dot1 += a1 * b0;
dot2 += a2 * b0;
dot3 += a3 * b0;
i += get_local_size(0);
}
work[lid].s0 = dot0.x + dot0.y + dot0.z + dot0.w;
work[lid].s1 = dot1.x + dot1.y + dot1.z + dot1.w;
work[lid].s2 = dot2.x + dot2.y + dot2.z + dot2.w;
work[lid].s3 = dot3.x + dot3.y + dot3.z + dot3.w;
if(i == A_col_size / 4)
{
if(trail_item != 0)
{
const __global float *src0_trail = src0_read + i * 4;
const __global float *src1_trail = src1_read + i * 4;
for(unsigned int i = 0; i < trail_item; ++i) {
const float at0 = src0_trail[i];
const float at1 = src0_trail[i + A_col_size];
const float at2 = src0_trail[i + 2 * A_col_size];
const float at3 = src0_trail[i + 3 * A_col_size];
const float bt = src1_trail[i];
work[lid].s0 += at0 * bt;
work[lid].s1 += at1 * bt;
work[lid].s2 += at2 * bt;
work[lid].s3 += at3 * bt;
}
}
}
for(unsigned int stride=get_local_size(0)/2 ; stride>0 ; stride>>=1) {
barrier(CLK_LOCAL_MEM_FENCE);
if(lid < stride)
work[lid] += work[lid+stride];
}
if(lid == 0) {
if(beta == (Dtype)0)
result[row_gid] = alpha * work[0];
else
result[row_gid] = alpha * work[0] + beta * result[row_gid];
}
}
/* This kernel is used for the trailing rows when rows_of_A % 4 != 0 */
__kernel void TEMPLATE(matvec_mul1,Dtype)(
__global const float * A,
int offA,
unsigned int A_col_size,
unsigned int row_offset,
unsigned int trail_item,
__global const float * v,
int offv,
float alpha,
float beta,
__global float * result,
int offr,
__local float * work)
{
unsigned int row_gid = get_group_id(0);
unsigned int lid = get_local_id(0);
const __global float *src0_read = A + (row_offset + row_gid) * A_col_size + offA;
const __global float *src1_read = v + offv;
result = result + offr;
float4 dot0 = (float4)(0.f);
unsigned int i = lid;
while( i < A_col_size / 4)
{
const float4 a0 = vload4(i, src0_read);
const float4 b0 = vload4(i, src1_read);
dot0 += a0 * b0;
i += get_local_size(0);
}
work[lid] = dot0.x + dot0.y + dot0.z + dot0.w;
if(i == A_col_size / 4)
{
if(trail_item != 0)
{
const __global float *src0_trail = src0_read + i * 4;
const __global float *src1_trail = src1_read + i * 4;
for(unsigned int i = 0; i < trail_item; ++i) {
const float at0 = src0_trail[i];
const float bt = src1_trail[i];
work[lid] += at0 * bt;
}
}
}
for(unsigned int stride=get_local_size(0)/2 ; stride>0 ; stride>>=1) {
barrier(CLK_LOCAL_MEM_FENCE);
if(lid < stride)
work[lid] += work[lid+stride];
}
if(lid == 0) {
if(beta == (Dtype)0) {
result[row_gid+row_offset] = alpha * work[0];
} else {
result[row_gid+row_offset] *= beta;
result[row_gid+row_offset] += alpha * work[0];
}
}
}
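Together, matvec_mul4 (four rows per work-group) and matvec_mul1 (the up-to-three trailing rows) implement a GEMV: result = alpha * A * v + beta * result. A scalar C++ reference of that contract, assuming row-major A of size rows x cols (names are illustrative):

// Reference matrix-vector product matching the two kernels combined.
void matvec_ref(const float* A, int rows, int cols, const float* v,
                float alpha, float beta, float* result)
{
    for (int r = 0; r < rows; ++r)
    {
        float dot = 0.f;
        for (int c = 0; c < cols; ++c)
            dot += A[r * cols + c] * v[c];
        // Like the kernels, avoid reading result[] when beta == 0.
        result[r] = (beta == 0.f) ? alpha * dot
                                  : alpha * dot + beta * result[r];
    }
}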

@@ -0,0 +1,96 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
#define Dtype float
__kernel void TEMPLATE(lrn_full_no_scale,Dtype)(const int nthreads, __global const Dtype* in,
const int num, const int channels,
const int height, const int width, const int size,
const Dtype alpha_over_size, const Dtype k,
__global Dtype* const out,
const Dtype negative_beta) {
for (int index = get_global_id(0); index < nthreads;
index += get_global_size(0)) {
// find out the local offset
const int w = index % width;
const int h = (index / width) % height;
const int n = index / width / height;
const int offset = (n * channels * height + h) * width + w;
const int step = height * width;
__global const Dtype* in_off = in + offset;
__global Dtype* out_off = out + offset;
Dtype scale_val;
int head = 0;
const int pre_pad = (size - 1) / 2;
const int post_pad = size - pre_pad - 1;
Dtype accum_scale = 0;
// fill the scale at [n, :, h, w]
// accumulate values
while (head < post_pad && head < channels) {
accum_scale += in_off[head * step] * in_off[head * step];
++head;
}
// both add and subtract
while (head < channels) {
accum_scale += in_off[head * step] * in_off[head * step];
if (head - size >= 0) {
accum_scale -= in_off[(head - size) * step]
* in_off[(head - size) * step];
}
scale_val = k + accum_scale * alpha_over_size;
out_off[(head - post_pad) * step] = in_off[(head - post_pad) * step] * (Dtype)native_powr((float)scale_val, (float)negative_beta);
++head;
}
// subtract only
while (head < channels + post_pad) {
if (head - size >= 0) {
accum_scale -= in_off[(head - size) * step]
* in_off[(head - size) * step];
}
scale_val = k + accum_scale * alpha_over_size;
out_off[(head - post_pad) * step] = in_off[(head - post_pad) * step] * (Dtype)native_powr((float)scale_val, (float)negative_beta);
++head;
}
}
}
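The sliding accum_scale above evaluates across-channel LRN in O(channels) per spatial position; it is equivalent to the direct O(channels * size) form below. In this sketch in[]/out[] index along the channel axis only (the `step` stride is folded away), and lrn_ref is an illustrative name:

#include <algorithm>
#include <cmath>

// out[c] = in[c] * pow(k + alpha_over_size * S, -beta), where S sums
// in[j]^2 over the channel window [c - pre_pad, c + post_pad], clamped.
void lrn_ref(const float* in, float* out, int channels, int size,
             float alpha_over_size, float k, float beta)
{
    const int pre_pad = (size - 1) / 2;
    for (int c = 0; c < channels; ++c)
    {
        float accum = 0.f;
        const int lo = std::max(0, c - pre_pad);
        const int hi = std::min(channels - 1, c - pre_pad + size - 1);
        for (int j = lo; j <= hi; ++j)
            accum += in[j] * in[j];
        out[c] = in[c] * std::pow(k + alpha_over_size * accum, -beta);
    }
}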

@@ -0,0 +1,177 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
#define Dtype float
void TEMPLATE(max_pool_forward_impl, Dtype)(
const int nthreads, __global const Dtype* bottom_data, const int num,
const int channels, const int height, const int width,
const int pooled_height, const int pooled_width, const int kernel_h,
const int kernel_w, const int stride_h, const int stride_w, const int pad_h,
const int pad_w,
__global Dtype* top_data,
const int use_mask, __global int* mask, __global Dtype* top_mask, bool no_mask)
{
for (int index = get_global_id(0); index < nthreads;
index += get_global_size(0))
{
const int pw = index % pooled_width;
const int ph = (index / pooled_width) % pooled_height;
const int c = (index / pooled_width / pooled_height) % channels;
const int n = index / pooled_width / pooled_height / channels;
int hstart = ph * stride_h - pad_h;
int wstart = pw * stride_w - pad_w;
const int hend = min(hstart + kernel_h, height);
const int wend = min(wstart + kernel_w, width);
hstart = max(hstart, (int)0);
wstart = max(wstart, (int)0);
Dtype maxval = -FLT_MAX;
int maxidx = -1;
__global const Dtype* bottom_slice = bottom_data
+ (n * channels + c) * height * width;
for (int h = hstart; h < hend; ++h) {
for (int w = wstart; w < wend; ++w) {
if (bottom_slice[h * width + w] > maxval) {
maxidx = h * width + w;
maxval = bottom_slice[maxidx];
}
}
}
top_data[index] = maxval;
if (!no_mask) {
if (use_mask == 1) {
mask[index] = maxidx;
} else {
top_mask[index] = maxidx;
}
}
}
}
__kernel void TEMPLATE(max_pool_forward, Dtype)(
const int nthreads, __global const Dtype* bottom_data, const int num,
const int channels, const int height, const int width,
const int pooled_height, const int pooled_width, const int kernel_h,
const int kernel_w, const int stride_h, const int stride_w, const int pad_h,
const int pad_w,
__global Dtype* top_data,
const int use_mask, __global int* mask, __global Dtype* top_mask)
{
TEMPLATE(max_pool_forward_impl, Dtype)(
nthreads, bottom_data, num, channels, height, width,
pooled_height, pooled_width, kernel_h,
kernel_w, stride_h, stride_w, pad_h, pad_w, top_data, use_mask, mask, top_mask, false
);
}
__kernel void TEMPLATE(ave_pool_forward, Dtype)(
const int nthreads, __global const Dtype* const bottom_data, const int num,
const int channels, const int height, const int width,
const int pooled_height, const int pooled_width, const int kernel_h,
const int kernel_w, const int stride_h, const int stride_w, const int pad_h,
const int pad_w, __global Dtype* top_data)
{
for (int index = get_global_id(0); index < nthreads;
index += get_global_size(0))
{
{
const int pw = index % pooled_width;
const int ph = (index / pooled_width) % pooled_height;
const int c = (index / pooled_width / pooled_height) % channels;
const int n = index / pooled_width / pooled_height / channels;
int hstart = ph * stride_h - pad_h;
int wstart = pw * stride_w - pad_w;
int hend = min(hstart + kernel_h, height + pad_h);
int wend = min(wstart + kernel_w, width + pad_w);
const int pool_size = (hend - hstart) * (wend - wstart);
hstart = max(hstart, (int)0);
wstart = max(wstart, (int)0);
hend = min(hend, height);
wend = min(wend, width);
Dtype aveval = 0;
__global const Dtype* bottom_slice = bottom_data
+ (n * channels + c) * height * width;
for (int h = hstart; h < hend; ++h) {
for (int w = wstart; w < wend; ++w) {
aveval += bottom_slice[h * width + w];
}
}
top_data[index] = aveval / pool_size;
}
}
}
__kernel void TEMPLATE(sto_pool_forward_test,Dtype)(
const int nthreads, __global const Dtype* const bottom_data, const int num,
const int channels, const int height, const int width,
const int pooled_height, const int pooled_width, const int kernel_h,
const int kernel_w, const int stride_h, const int stride_w,
__global Dtype* top_data)
{
for (int index = get_global_id(0); index < nthreads;
index += get_global_size(0))
{
const int pw = index % pooled_width;
const int ph = (index / pooled_width) % pooled_height;
const int c = (index / pooled_width / pooled_height) % channels;
const int n = index / pooled_width / pooled_height / channels;
const int hstart = ph * stride_h;
const int hend = min(hstart + kernel_h, height);
const int wstart = pw * stride_w;
const int wend = min(wstart + kernel_w, width);
// We initialize cumsum to FLT_MIN (not 0) to avoid divide-by-zero problems
Dtype cumsum = FLT_MIN;
Dtype cumvalues = 0.;
__global const Dtype* bottom_slice = bottom_data
+ (n * channels + c) * height * width;
// First pass: get sum
for (int h = hstart; h < hend; ++h) {
for (int w = wstart; w < wend; ++w) {
cumsum += bottom_slice[h * width + w];
cumvalues += bottom_slice[h * width + w] * bottom_slice[h * width + w];
}
}
top_data[index] = cumvalues / cumsum;
}
}
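For a single output cell (n, c, ph, pw), max_pool_forward reduces to the sketch below. Note a subtlety preserved from the code above: ave_pool_forward computes pool_size before clamping the window to the image, so padded positions count toward the divisor.

#include <algorithm>
#include <cfloat>

// One output cell of max pooling; `bottom` points at the (n, c) plane.
float max_pool_cell_ref(const float* bottom, int height, int width,
                        int ph, int pw, int kernel_h, int kernel_w,
                        int stride_h, int stride_w, int pad_h, int pad_w)
{
    const int hstart = std::max(ph * stride_h - pad_h, 0);
    const int wstart = std::max(pw * stride_w - pad_w, 0);
    const int hend   = std::min(ph * stride_h - pad_h + kernel_h, height);
    const int wend   = std::min(pw * stride_w - pad_w + kernel_w, width);
    float maxval = -FLT_MAX;
    for (int h = hstart; h < hend; ++h)
        for (int w = wstart; w < wend; ++w)
            maxval = std::max(maxval, bottom[h * width + w]);
    return maxval;
}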

@@ -70,6 +70,10 @@ __kernel void kernel_channel_div(const int count,
if(index < count) {
int n = index / channels / spatial_dim;
int s = index % spatial_dim;
data[index] /= channel_sum[n * spatial_dim + s];
T v = data[index] / channel_sum[n * spatial_dim + s];
#ifdef LOG_SOFTMAX
v = log(v);
#endif
data[index] = v;
}
}
}
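The hunk above makes kernel_channel_div double as the last stage of log-softmax: at this point data[] holds exp(x - max) and channel_sum[] holds the per-column sums, so taking log(v) yields log of the softmax probability directly. A scalar C++ sketch of the full log-softmax path these kernels implement (log_softmax_ref is an illustrative name):

#include <algorithm>
#include <cmath>
#include <vector>

// Numerically stable log-softmax over one channel column.
std::vector<float> log_softmax_ref(const std::vector<float>& x)
{
    const float mx = *std::max_element(x.begin(), x.end());
    float sum = 0.f;
    for (float v : x) sum += std::exp(v - mx);
    std::vector<float> out(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        out[i] = (x[i] - mx) - std::log(sum);  // log(exp(x_i - mx) / sum)
    return out;
}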

@@ -0,0 +1,182 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
#define Dtype float
#if defined(cl_intel_subgroups)
#pragma OPENCL EXTENSION cl_intel_subgroups : enable
#endif
__kernel void TEMPLATE(softmax_forward_slm,Dtype)(const int num, const int channels,
const int spatial_dim,
__global Dtype* scale,
__global const Dtype* data,
__global Dtype* out,
__local Dtype *out_tmp,
__local Dtype *scale_tmp,
__local Dtype *group_tmp) {
int n = get_global_id(1);
for (int index = get_global_id(0), s = 0; index < spatial_dim * get_local_size(0); index +=
get_global_size(0), ++s) {
float maxval = -FLT_MAX;
for (int c = get_global_id(0); c < channels; c += get_global_size(0)) {
Dtype tmp = data[(n * channels + c) * spatial_dim + s];
maxval = max((Dtype)tmp, (Dtype)maxval);
}
maxval = sub_group_reduce_max(maxval * 100000);
//if (get_sub_group_local_id() == 0)
group_tmp[get_sub_group_id() * spatial_dim + s] = maxval;
}
barrier(CLK_LOCAL_MEM_FENCE);
for (int index = get_global_id(0); index < spatial_dim * get_max_sub_group_size(); index +=
get_global_size(0)) {
int s = index / get_max_sub_group_size();
Dtype maxval = sub_group_reduce_max(group_tmp[get_sub_group_local_id() * spatial_dim + s]);
//if (get_sub_group_local_id() == 0)
scale_tmp[s] = maxval / 100000;
}
barrier(CLK_LOCAL_MEM_FENCE);
for (int index = get_global_id(0); index < channels * spatial_dim;
index += get_global_size(0)) {
int s = index % spatial_dim;
out_tmp[index] = exp(data[n * channels * spatial_dim + index] - scale_tmp[s]);
}
barrier(CLK_LOCAL_MEM_FENCE);
for (int index = get_global_id(0), s = 0; index < spatial_dim * get_local_size(0); index +=
get_global_size(0), ++s) {
Dtype sum = 0;
for (int c = get_global_id(0); c < channels; c += get_global_size(0)) {
sum += out_tmp[c * spatial_dim + s];
}
sum = sub_group_reduce_add(sum * 100000);
group_tmp[get_sub_group_id() * spatial_dim + s] = sum;
}
barrier(CLK_LOCAL_MEM_FENCE);
for (int index = get_global_id(0); index < spatial_dim * get_max_sub_group_size(); index +=
get_global_size(0)) {
int s = index / get_max_sub_group_size();
Dtype sum = sub_group_reduce_add(group_tmp[get_sub_group_local_id() * spatial_dim + s]);
//if (get_sub_group_local_id() == 0)
scale_tmp[s] = sum / 100000;
}
barrier(CLK_LOCAL_MEM_FENCE);
for (int index = get_global_id(0); index < channels * spatial_dim;
index += get_global_size(0)) {
int s = index % spatial_dim;
out[n * channels * spatial_dim + index] = out_tmp[index] / scale_tmp[s];
}
}
__kernel void TEMPLATE(softmax_forward,Dtype)(const int num, const int channels,
const int spatial_dim,
__global Dtype* scale,
__global const Dtype* data,
__global Dtype* out) {
int n = get_global_id(1);
__global Dtype *group_tmp = scale + spatial_dim * num + n * get_max_sub_group_size() * spatial_dim;
for (int index = get_global_id(0), s = 0; index < spatial_dim * get_local_size(0); index +=
get_global_size(0), ++s) {
float maxval = -FLT_MAX;
for (int c = get_global_id(0); c < channels; c += get_global_size(0)) {
Dtype tmp = data[(n * channels + c) * spatial_dim + s];
maxval = max((Dtype)tmp, (Dtype)maxval);
}
maxval = sub_group_reduce_max(maxval * 100000);
//if (get_sub_group_local_id() == 0)
group_tmp[get_sub_group_id() * spatial_dim + s] = maxval;
}
barrier(CLK_GLOBAL_MEM_FENCE);
for (int index = get_global_id(0); index < spatial_dim * get_max_sub_group_size(); index +=
get_global_size(0)) {
int s = index / get_max_sub_group_size();
Dtype maxval = sub_group_reduce_max(group_tmp[get_sub_group_local_id() * spatial_dim + s]);
//if (get_sub_group_local_id() == 0)
scale[n * spatial_dim + s] = maxval / 100000;
}
barrier(CLK_GLOBAL_MEM_FENCE);
for (int index = get_global_id(0); index < channels * spatial_dim;
index += get_global_size(0)) {
int s = index % spatial_dim;
out[n * channels * spatial_dim + index] = exp(data[n * channels * spatial_dim + index] - scale[n * spatial_dim + s]);
}
barrier(CLK_GLOBAL_MEM_FENCE);
for (int index = get_global_id(0), s = 0; index < spatial_dim * get_local_size(0); index +=
get_global_size(0), ++s) {
Dtype sum = 0;
for (int c = get_global_id(0); c < channels; c += get_global_size(0)) {
sum += out[n * channels * spatial_dim + c * spatial_dim + s];
}
sum = sub_group_reduce_add(sum * 100000);
group_tmp[get_sub_group_id() * spatial_dim + s] = sum;
}
barrier(CLK_GLOBAL_MEM_FENCE);
for (int index = get_global_id(0); index < spatial_dim * get_max_sub_group_size(); index +=
get_global_size(0)) {
int s = index / get_max_sub_group_size();
Dtype sum = sub_group_reduce_add(group_tmp[get_sub_group_local_id() * spatial_dim + s]);
//if (get_sub_group_local_id() == 0)
scale[n * spatial_dim + s] = sum / 100000;
}
barrier(CLK_GLOBAL_MEM_FENCE);
for (int index = get_global_id(0); index < channels * spatial_dim;
index += get_global_size(0)) {
int s = index % spatial_dim;
out[n * channels * spatial_dim + index] /= scale[n * spatial_dim + s];
}
}
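Setting aside the subgroup reductions and the *100000 / 100000 round trip, both kernels compute the standard numerically stable softmax independently for each (n, s) column: subtract the channel max, exponentiate, then normalize by the sum. A scalar C++ sketch (softmax_ref is an illustrative name):

#include <algorithm>
#include <cmath>
#include <vector>

// Stable softmax over one column of `channels` values.
std::vector<float> softmax_ref(const std::vector<float>& in)
{
    const float maxval = *std::max_element(in.begin(), in.end());
    std::vector<float> out(in.size());
    float sum = 0.f;
    for (size_t c = 0; c < in.size(); ++c)
        sum += (out[c] = std::exp(in[c] - maxval));
    for (size_t c = 0; c < out.size(); ++c)
        out[c] /= sum;
    return out;
}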

@@ -40,6 +40,8 @@
//M*/
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/core/opencl/ocl_defs.hpp>
#include <opencv2/core/utils/trace.hpp>
#include <opencv2/core/softfloat.hpp> // int32_t (MSVS 2010-2013)
#include "cvconfig.h"

@@ -73,6 +73,26 @@ TEST(Reproducibility_GoogLeNet, Accuracy)
normAssert(out, ref);
}
OCL_TEST(Reproducibility_GoogLeNet, Accuracy)
{
Net net = readNetFromCaffe(findDataFile("dnn/bvlc_googlenet.prototxt", false),
findDataFile("dnn/bvlc_googlenet.caffemodel", false));
net.setPreferableBackend(DNN_BACKEND_DEFAULT);
net.setPreferableTarget(DNN_TARGET_OPENCL);
std::vector<Mat> inpMats;
inpMats.push_back( imread(_tf("googlenet_0.png")) );
inpMats.push_back( imread(_tf("googlenet_1.png")) );
ASSERT_TRUE(!inpMats[0].empty() && !inpMats[1].empty());
net.setInput(blobFromImages(inpMats, 1.0f, Size(), Scalar(), false), "data");
Mat out = net.forward("prob");
Mat ref = blobFromNPY(_tf("googlenet_prob.npy"));
normAssert(out, ref);
}
TEST(IntermediateBlobs_GoogLeNet, Accuracy)
{
Net net = readNetFromCaffe(findDataFile("dnn/bvlc_googlenet.prototxt", false),
@@ -99,6 +119,35 @@ TEST(IntermediateBlobs_GoogLeNet, Accuracy)
}
}
OCL_TEST(IntermediateBlobs_GoogLeNet, Accuracy)
{
Net net = readNetFromCaffe(findDataFile("dnn/bvlc_googlenet.prototxt", false),
findDataFile("dnn/bvlc_googlenet.caffemodel", false));
net.setPreferableBackend(DNN_BACKEND_DEFAULT);
net.setPreferableTarget(DNN_TARGET_OPENCL);
std::vector<String> blobsNames;
blobsNames.push_back("conv1/7x7_s2");
blobsNames.push_back("conv1/relu_7x7");
blobsNames.push_back("inception_4c/1x1");
blobsNames.push_back("inception_4c/relu_1x1");
std::vector<Mat> outs;
Mat in = blobFromImage(imread(_tf("googlenet_0.png")), 1.0f, Size(), Scalar(), false);
net.setInput(in, "data");
net.forward(outs, blobsNames);
CV_Assert(outs.size() == blobsNames.size());
for (int i = 0; i < blobsNames.size(); i++)
{
std::string filename = blobsNames[i];
std::replace( filename.begin(), filename.end(), '/', '#');
Mat ref = blobFromNPY(_tf("googlenet_" + filename + ".npy"));
normAssert(outs[i], ref, "", 1E-4, 1E-2);
}
}
TEST(SeveralCalls_GoogLeNet, Accuracy)
{
Net net = readNetFromCaffe(findDataFile("dnn/bvlc_googlenet.prototxt", false),
@@ -128,4 +177,36 @@ TEST(SeveralCalls_GoogLeNet, Accuracy)
normAssert(outs[0], ref, "", 1E-4, 1E-2);
}
OCL_TEST(SeveralCalls_GoogLeNet, Accuracy)
{
Net net = readNetFromCaffe(findDataFile("dnn/bvlc_googlenet.prototxt", false),
findDataFile("dnn/bvlc_googlenet.caffemodel", false));
net.setPreferableBackend(DNN_BACKEND_DEFAULT);
net.setPreferableTarget(DNN_TARGET_OPENCL);
std::vector<Mat> inpMats;
inpMats.push_back( imread(_tf("googlenet_0.png")) );
inpMats.push_back( imread(_tf("googlenet_1.png")) );
ASSERT_TRUE(!inpMats[0].empty() && !inpMats[1].empty());
net.setInput(blobFromImages(inpMats, 1.0f, Size(), Scalar(), false), "data");
Mat out = net.forward();
Mat ref = blobFromNPY(_tf("googlenet_prob.npy"));
normAssert(out, ref);
std::vector<String> blobsNames;
blobsNames.push_back("conv1/7x7_s2");
std::vector<Mat> outs;
Mat in = blobFromImage(inpMats[0], 1.0f, Size(), Scalar(), false);
net.setInput(in, "data");
net.forward(outs, blobsNames);
CV_Assert(outs.size() == blobsNames.size());
ref = blobFromNPY(_tf("googlenet_conv1#7x7_s2.npy"));
normAssert(outs[0], ref, "", 1E-4, 1E-2);
}
}

@@ -98,7 +98,8 @@ void runLayer(Ptr<Layer> layer, std::vector<Mat> &inpBlobs, std::vector<Mat> &ou
}
void testLayerUsingCaffeModels(String basename, bool useCaffeModel = false, bool useCommonInputBlob = true)
void testLayerUsingCaffeModels(String basename, int targetId = DNN_TARGET_CPU,
bool useCaffeModel = false, bool useCommonInputBlob = true)
{
String prototxt = _tf(basename + ".prototxt");
String caffemodel = _tf(basename + ".caffemodel");
@@ -111,6 +112,9 @@ void testLayerUsingCaffeModels(String basename, bool useCaffeModel = false, bool
Net net = readNetFromCaffe(prototxt, (useCaffeModel) ? caffemodel : String());
ASSERT_FALSE(net.empty());
net.setPreferableBackend(DNN_BACKEND_DEFAULT);
net.setPreferableTarget(targetId);
Mat inp = blobFromNPY(inpfile);
Mat ref = blobFromNPY(outfile);
@@ -122,47 +126,82 @@ void testLayerUsingCaffeModels(String basename, bool useCaffeModel = false, bool
TEST(Layer_Test_Softmax, Accuracy)
{
testLayerUsingCaffeModels("layer_softmax");
testLayerUsingCaffeModels("layer_softmax");
}
OCL_TEST(Layer_Test_Softmax, Accuracy)
{
testLayerUsingCaffeModels("layer_softmax", DNN_TARGET_OPENCL);
}
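
Because targetId was inserted as the second parameter with a DNN_TARGET_CPU default, existing single-argument call sites keep compiling unchanged; only the new OCL variants, and calls that pass the trailing bools positionally, spell the target out. The three call forms seen in this hunk:

    testLayerUsingCaffeModels("layer_softmax");                            // CPU, call site unchanged
    testLayerUsingCaffeModels("layer_softmax", DNN_TARGET_OPENCL);         // new OpenCL variant
    testLayerUsingCaffeModels("layer_convolution", DNN_TARGET_CPU, true);  // trailing bools now follow the target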
TEST(Layer_Test_LRN_spatial, Accuracy)
{
testLayerUsingCaffeModels("layer_lrn_spatial");
testLayerUsingCaffeModels("layer_lrn_spatial");
}
OCL_TEST(Layer_Test_LRN_spatial, Accuracy)
{
testLayerUsingCaffeModels("layer_lrn_spatial", DNN_TARGET_OPENCL);
}
TEST(Layer_Test_LRN_channels, Accuracy)
{
testLayerUsingCaffeModels("layer_lrn_channels");
testLayerUsingCaffeModels("layer_lrn_channels");
}
OCL_TEST(Layer_Test_LRN_channels, Accuracy)
{
testLayerUsingCaffeModels("layer_lrn_channels", DNN_TARGET_OPENCL);
}
TEST(Layer_Test_Convolution, Accuracy)
{
testLayerUsingCaffeModels("layer_convolution", true);
testLayerUsingCaffeModels("layer_convolution", DNN_TARGET_CPU, true);
}
OCL_TEST(Layer_Test_Convolution, Accuracy)
{
testLayerUsingCaffeModels("layer_convolution", DNN_TARGET_OPENCL, true);
}
TEST(Layer_Test_DeConvolution, Accuracy)
{
testLayerUsingCaffeModels("layer_deconvolution", true, false);
testLayerUsingCaffeModels("layer_deconvolution", DNN_TARGET_CPU, true, false);
}
TEST(Layer_Test_InnerProduct, Accuracy)
{
testLayerUsingCaffeModels("layer_inner_product", true);
testLayerUsingCaffeModels("layer_inner_product", DNN_TARGET_CPU, true);
}
OCL_TEST(Layer_Test_InnerProduct, Accuracy)
{
testLayerUsingCaffeModels("layer_inner_product", DNN_TARGET_OPENCL, true);
}
TEST(Layer_Test_Pooling_max, Accuracy)
{
testLayerUsingCaffeModels("layer_pooling_max");
testLayerUsingCaffeModels("layer_pooling_max");
}
OCL_TEST(Layer_Test_Pooling_max, Accuracy)
{
testLayerUsingCaffeModels("layer_pooling_max", DNN_TARGET_OPENCL);
}
TEST(Layer_Test_Pooling_ave, Accuracy)
{
testLayerUsingCaffeModels("layer_pooling_ave");
testLayerUsingCaffeModels("layer_pooling_ave");
}
OCL_TEST(Layer_Test_Pooling_ave, Accuracy)
{
testLayerUsingCaffeModels("layer_pooling_ave", DNN_TARGET_OPENCL);
}
TEST(Layer_Test_MVN, Accuracy)
{
testLayerUsingCaffeModels("layer_mvn");
testLayerUsingCaffeModels("layer_mvn");
}
void testReshape(const MatShape& inputShape, const MatShape& targetShape,
@@ -207,22 +246,32 @@ TEST(Layer_Test_Reshape, Accuracy)
TEST(Layer_Test_BatchNorm, Accuracy)
{
testLayerUsingCaffeModels("layer_batch_norm", true);
testLayerUsingCaffeModels("layer_batch_norm", DNN_TARGET_CPU, true);
}
TEST(Layer_Test_ReLU, Accuracy)
{
testLayerUsingCaffeModels("layer_relu");
testLayerUsingCaffeModels("layer_relu");
}
OCL_TEST(Layer_Test_ReLU, Accuracy)
{
testLayerUsingCaffeModels("layer_relu", DNN_TARGET_OPENCL);
}
TEST(Layer_Test_Dropout, Accuracy)
{
testLayerUsingCaffeModels("layer_dropout");
testLayerUsingCaffeModels("layer_dropout");
}
TEST(Layer_Test_Concat, Accuracy)
{
testLayerUsingCaffeModels("layer_concat");
testLayerUsingCaffeModels("layer_concat");
}
OCL_TEST(Layer_Test_Concat, Accuracy)
{
testLayerUsingCaffeModels("layer_concat", DNN_TARGET_OPENCL);
}
//template<typename XMat>

@@ -44,6 +44,7 @@
#include "test_precomp.hpp"
#include "npy_blob.hpp"
#include <opencv2/dnn/shape_utils.hpp>
#include <opencv2/ts/ocl_test.hpp>
namespace cvtest
{
@@ -70,7 +71,7 @@ TEST(Torch_Importer, simple_read)
ASSERT_FALSE(net.empty());
}
-static void runTorchNet(String prefix, String outLayerName = "",
+static void runTorchNet(String prefix, int targetId = DNN_TARGET_CPU, String outLayerName = "",
                         bool check2ndBlob = false, bool isBinary = false)
{
String suffix = (isBinary) ? ".dat" : ".txt";
@@ -78,6 +79,9 @@ static void runTorchNet(String prefix, String outLayerName = "",
Net net = readNetFromTorch(_tf(prefix + "_net" + suffix), isBinary);
ASSERT_FALSE(net.empty());
net.setPreferableBackend(DNN_BACKEND_DEFAULT);
net.setPreferableTarget(targetId);
Mat inp, outRef;
ASSERT_NO_THROW( inp = readTorchBlob(_tf(prefix + "_input" + suffix), isBinary) );
ASSERT_NO_THROW( outRef = readTorchBlob(_tf(prefix + "_output" + suffix), isBinary) );
@@ -103,9 +107,19 @@ TEST(Torch_Importer, run_convolution)
runTorchNet("net_conv");
}
OCL_TEST(Torch_Importer, run_convolution)
{
runTorchNet("net_conv", DNN_TARGET_OPENCL);
}
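
runTorchNet gets the same treatment, but since targetId was inserted ahead of outLayerName, call sites that passed the later arguments positionally had to be updated even when they stay on the CPU. The pattern, using run_pool_max as the example:

    runTorchNet("net_pool_max", "", true);                     // old positional form (pre-patch)
    runTorchNet("net_pool_max", DNN_TARGET_CPU, "", true);     // same behaviour after the patch
    runTorchNet("net_pool_max", DNN_TARGET_OPENCL, "", true);  // new OpenCL variant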
TEST(Torch_Importer, run_pool_max)
{
runTorchNet("net_pool_max", "", true);
runTorchNet("net_pool_max", DNN_TARGET_CPU, "", true);
}
OCL_TEST(Torch_Importer, run_pool_max)
{
runTorchNet("net_pool_max", DNN_TARGET_OPENCL, "", true);
}
TEST(Torch_Importer, run_pool_ave)
@@ -113,12 +127,17 @@ TEST(Torch_Importer, run_pool_ave)
runTorchNet("net_pool_ave");
}
OCL_TEST(Torch_Importer, run_pool_ave)
{
runTorchNet("net_pool_ave", DNN_TARGET_OPENCL);
}
TEST(Torch_Importer, run_reshape)
{
runTorchNet("net_reshape");
runTorchNet("net_reshape_batch");
runTorchNet("net_reshape_single_sample");
runTorchNet("net_reshape_channels", "", false, true);
runTorchNet("net_reshape_channels", DNN_TARGET_CPU, "", false, true);
}
TEST(Torch_Importer, run_linear)
@@ -128,13 +147,19 @@ TEST(Torch_Importer, run_linear)
TEST(Torch_Importer, run_paralel)
{
runTorchNet("net_parallel", "l5_torchMerge");
runTorchNet("net_parallel", DNN_TARGET_CPU, "l5_torchMerge");
}
TEST(Torch_Importer, run_concat)
{
runTorchNet("net_concat", "l5_torchMerge");
runTorchNet("net_depth_concat", "", false, true);
runTorchNet("net_concat", DNN_TARGET_CPU, "l5_torchMerge");
runTorchNet("net_depth_concat", DNN_TARGET_CPU, "", false, true);
}
OCL_TEST(Torch_Importer, run_concat)
{
runTorchNet("net_concat", DNN_TARGET_OPENCL, "l5_torchMerge");
runTorchNet("net_depth_concat", DNN_TARGET_OPENCL, "", false, true);
}
TEST(Torch_Importer, run_deconv)
@@ -163,37 +188,49 @@ TEST(Torch_Importer, net_softmax)
runTorchNet("net_softmax_spatial");
}
OCL_TEST(Torch_Importer, net_softmax)
{
runTorchNet("net_softmax", DNN_TARGET_OPENCL);
runTorchNet("net_softmax_spatial", DNN_TARGET_OPENCL);
}
TEST(Torch_Importer, net_logsoftmax)
{
runTorchNet("net_logsoftmax");
runTorchNet("net_logsoftmax_spatial");
}
OCL_TEST(Torch_Importer, net_logsoftmax)
{
runTorchNet("net_logsoftmax", DNN_TARGET_OPENCL);
runTorchNet("net_logsoftmax_spatial", DNN_TARGET_OPENCL);
}
TEST(Torch_Importer, net_lp_pooling)
{
runTorchNet("net_lp_pooling_square", "", false, true);
runTorchNet("net_lp_pooling_power", "", false, true);
runTorchNet("net_lp_pooling_square", DNN_TARGET_CPU, "", false, true);
runTorchNet("net_lp_pooling_power", DNN_TARGET_CPU, "", false, true);
}
TEST(Torch_Importer, net_conv_gemm_lrn)
{
runTorchNet("net_conv_gemm_lrn", "", false, true);
runTorchNet("net_conv_gemm_lrn", DNN_TARGET_CPU, "", false, true);
}
TEST(Torch_Importer, net_inception_block)
{
runTorchNet("net_inception_block", "", false, true);
runTorchNet("net_inception_block", DNN_TARGET_CPU, "", false, true);
}
TEST(Torch_Importer, net_normalize)
{
runTorchNet("net_normalize", "", false, true);
runTorchNet("net_normalize", DNN_TARGET_CPU, "", false, true);
}
TEST(Torch_Importer, net_padding)
{
runTorchNet("net_padding", "", false, true);
runTorchNet("net_spatial_zero_padding", "", false, true);
runTorchNet("net_padding", DNN_TARGET_CPU, "", false, true);
runTorchNet("net_spatial_zero_padding", DNN_TARGET_CPU, "", false, true);
}
TEST(Torch_Importer, ENet_accuracy)
@@ -245,6 +282,62 @@ TEST(Torch_Importer, OpenFace_accuracy)
normAssert(out, outRef);
}
OCL_TEST(Torch_Importer, OpenFace_accuracy)
{
const string model = findDataFile("dnn/openface_nn4.small2.v1.t7", false);
Net net = readNetFromTorch(model);
net.setPreferableBackend(DNN_BACKEND_DEFAULT);
net.setPreferableTarget(DNN_TARGET_OPENCL);
Mat sample = imread(findDataFile("cv/shared/lena.png", false));
Mat sampleF32(sample.size(), CV_32FC3);
sample.convertTo(sampleF32, sampleF32.type());
sampleF32 /= 255;
resize(sampleF32, sampleF32, Size(96, 96), 0, 0, INTER_NEAREST);
Mat inputBlob = blobFromImage(sampleF32);
net.setInput(inputBlob);
Mat out = net.forward();
Mat outRef = readTorchBlob(_tf("net_openface_output.dat"), true);
normAssert(out, outRef);
}
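
The explicit convertTo/resize sequence above keeps the input bit-comparable with the stored reference. blobFromImage can fold the scaling and resizing into one call, but it resizes with INTER_LINEAR rather than the INTER_NEAREST used here, so the result would differ slightly from the reference; a sketch of that shorter form, not an exact equivalent:

    // One-call preprocessing; NOT bit-identical to the test above because
    // blobFromImage resizes with INTER_LINEAR, not INTER_NEAREST.
    Mat inputBlob = blobFromImage(sample, 1.0 / 255, Size(96, 96));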
OCL_TEST(Torch_Importer, ENet_accuracy)
{
Net net;
{
const string model = findDataFile("dnn/Enet-model-best.net", false);
Ptr<Importer> importer = createTorchImporter(model, true);
ASSERT_TRUE(importer != NULL);
importer->populateNet(net);
}
net.setPreferableBackend(DNN_BACKEND_DEFAULT);
net.setPreferableTarget(DNN_TARGET_OPENCL);
Mat sample = imread(_tf("street.png", false));
Mat inputBlob = blobFromImage(sample, 1./255);
net.setInput(inputBlob, "");
Mat out = net.forward();
Mat ref = blobFromNPY(_tf("torch_enet_prob.npy", false));
// Due to numerical instability in Pooling-Unpooling layers (index jittering),
// the thresholds for ENet must be relaxed. Accuracy of results was checked on
// the Cityscapes dataset; the difference in mIOU relative to Torch is 10E-4%.
normAssert(ref, out, "", 0.00044, 0.44);
const int N = 3;
for (int i = 0; i < N; i++)
{
net.setInput(inputBlob, "");
Mat out = net.forward();
normAssert(ref, out, "", 0.00044, 0.44);
}
}
}
#endif
