mirror of https://github.com/opencv/opencv.git
Merge pull request #20406 from MarkGHX:gsoc_2021_webnn
[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN

* Add WebNN backend for OpenCV DNN Module: add WebNN header files into OpenCV 3rd-party files, create webnn.hpp, update CMake files, complete README and add OpenCVDetectWebNN.cmake, add webnn.cpp (can successfully compile the code for creating an MLContext), add source webnn_cpp.cpp and library libwebnn_proc.so, implement op_webnn, and solve the problems of released variables.
* Implement ReLU, ReLU6, SoftMax, Reshape, Permute, and Pooling layers by the WebNN API, add more detailed logs for the pooling layer, remove redundant code, and fix indentation and build issues.
* Implement BatchNorm layer by the WebNN API
* Update convolution_layer.cpp (temporary file for the Conv2d layer implementation)
* Integrate some general functions into op_webnn.cpp & hpp
* Update const_layer.cpp
* Update conv2d layer and fc layer (still have some problems to be fixed)
* Update constLayer, conv layer, fc layer (still some bugs to be fixed)
* Update concat_layer.cpp (still has some bugs to be fixed)
* Update conv2d layer, fully connected layer and const layer
* Update convolution_layer.cpp
* Add OpenCV.js DNN module WebNN backend (both using webnn-polyfill and Electron)
* Delete bib19450.aux
* Update dnn.cpp
* Fix error in dnn.cpp
* Resolve duplication in conditions in convolution_layer.cpp
* Fix the issues in the comments
* Fix building issue
* Update tutorial
* Address the comments
* Update CMakeLists.txt
* Offer more accurate perf test on native
* Add better perf tests for both native and web
* Modify perf tests for better results
* Use a more recent version of Electron
* Support latest WebNN Clamp op
* Add definition of HAVE_WEBNN macro
* Support group convolution
* Implement Scale_layer using WebNN
* Add Softmax option for native classification example
* Fix comments
parent 12c1e1d149
commit 1fcf7ba5bc
32 changed files with 2261 additions and 24 deletions
@@ -0,0 +1,49 @@
if(NOT EMSCRIPTEN)
  if(WITH_WEBNN)
    ocv_check_environment_variables(WEBNN_HEADER_DIRS)
    ocv_check_environment_variables(WEBNN_INCLUDE_DIRS)
    ocv_check_environment_variables(WEBNN_LIBRARIES)
    if(NOT DEFINED WEBNN_HEADER_DIRS)
      set(WEBNN_HEADER_DIRS "$ENV{WEBNN_NATIVE_DIR}/gen/src/include")
    endif()
    if(NOT DEFINED WEBNN_INCLUDE_DIRS)
      set(WEBNN_INCLUDE_DIRS "$ENV{WEBNN_NATIVE_DIR}/../../src/include")
    endif()
    if(NOT DEFINED WEBNN_LIBRARIES)
      set(WEBNN_LIBRARIES "$ENV{WEBNN_NATIVE_DIR}/libwebnn_native.so;$ENV{WEBNN_NATIVE_DIR}/libwebnn_proc.so")
    endif()
  endif()
  try_compile(VALID_WEBNN
    "${OpenCV_BINARY_DIR}"
    SOURCES "${OpenCV_SOURCE_DIR}/cmake/checks/webnn.cpp"
            "$ENV{WEBNN_NATIVE_DIR}/gen/src/webnn/webnn_cpp.cpp"
    CMAKE_FLAGS "-DINCLUDE_DIRECTORIES:STRING=${WEBNN_INCLUDE_DIRS}\;${WEBNN_HEADER_DIRS}"
                "-DLINK_LIBRARIES:STRING=${WEBNN_LIBRARIES}"
    OUTPUT_VARIABLE TRY_OUT
  )
else()
  try_compile(VALID_WEBNN
    "${OpenCV_BINARY_DIR}"
    SOURCES "${OpenCV_SOURCE_DIR}/cmake/checks/webnn.cpp"
    OUTPUT_VARIABLE TRY_OUT
  )
endif()

if(NOT VALID_WEBNN)
  if(NOT EMSCRIPTEN)
    message(WARNING "Can't use WebNN-native")
    return()
  else()
    message(WARNING "Can't use WebNN")
    return()
  endif()
else()
  set(HAVE_WEBNN ON)
  message(STATUS "Set HAVE_WEBNN = ${HAVE_WEBNN}")
endif()

if(NOT EMSCRIPTEN)
  message(AUTHOR_WARNING "Use WebNN-native")
else()
  message(AUTHOR_WARNING "Use WebNN")
endif()
@@ -0,0 +1,23 @@
#include <webnn/webnn_cpp.h>
#include <webnn/webnn.h>
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#include <emscripten/html5.h>
#include <emscripten/html5_webnn.h>
#else
#include <webnn/webnn_proc.h>
#include <webnn_native/WebnnNative.h>
#endif


int main(int /*argc*/, char** /*argv*/)
{
#ifdef __EMSCRIPTEN__
    ml::Context ml_context = ml::Context(emscripten_webnn_create_context());
#else
    WebnnProcTable backendProcs = webnn_native::GetProcs();
    webnnProcSetProcs(&backendProcs);
    ml::Context ml_context = ml::Context(webnn_native::CreateContext());
#endif
    return 0;
}
@@ -0,0 +1,269 @@
<!DOCTYPE html>
<html>

<head>
<meta charset="utf-8">
<title>Image Classification Example</title>
<link href="js_example_style.css" rel="stylesheet" type="text/css" />
<script src="./webnn-polyfill.js"></script>
</head>

<body>
<h2>Image Classification Example</h2>
<p>
    This tutorial shows you how to write an image classification example with OpenCV.js.<br>
    To try the example you should click the <b>modelFile</b> button (and the <b>configFile</b> button if needed) to upload the inference model.
    You can find the model URLs and parameters in the <a href="#appendix">model info</a> section.
    Then you should change the parameters in the first code snippet according to the uploaded model.
    Finally, click the <b>Try it</b> button to see the result. You can choose any other images.<br>
</p>

<div class="control"><button id="tryIt" disabled>Try it</button></div>
<div>
    <table cellpadding="0" cellspacing="0" width="0" border="0">
        <tr>
            <td>
                <canvas id="canvasInput" width="400" height="400"></canvas>
            </td>
            <td>
                <table style="visibility: hidden;" id="result">
                    <thead>
                        <tr>
                            <th scope="col">#</th>
                            <th scope="col" width=300>Label</th>
                            <th scope="col">Probability</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr>
                            <th scope="row">1</th>
                            <td id="label0" align="center"></td>
                            <td id="prob0" align="center"></td>
                        </tr>
                        <tr>
                            <th scope="row">2</th>
                            <td id="label1" align="center"></td>
                            <td id="prob1" align="center"></td>
                        </tr>
                        <tr>
                            <th scope="row">3</th>
                            <td id="label2" align="center"></td>
                            <td id="prob2" align="center"></td>
                        </tr>
                    </tbody>
                </table>
                <p id='status' align="left"></p>
            </td>
        </tr>
        <tr>
            <td>
                <div class="caption">
                    canvasInput <input type="file" id="fileInput" name="file" accept="image/*">
                </div>
            </td>
            <td></td>
        </tr>
        <tr>
            <td>
                <div class="caption">
                    modelFile <input type="file" id="modelFile">
                </div>
            </td>
        </tr>
        <tr>
            <td>
                <div class="caption">
                    configFile <input type="file" id="configFile">
                </div>
            </td>
        </tr>
    </table>
</div>

<div>
    <p class="err" id="errorMessage"></p>
</div>

<div>
    <h3>Helper functions</h3>
    <p>1. The parameters for model inference, which you can modify to investigate more models.</p>
    <textarea class="code" rows="13" cols="100" id="codeEditor" spellcheck="false"></textarea>
    <p>2. Main loop, which reads the image from the canvas and does inference once.</p>
    <textarea class="code" rows="17" cols="100" id="codeEditor1" spellcheck="false"></textarea>
    <p>3. Load labels from a txt file and process them into an array.</p>
    <textarea class="code" rows="7" cols="100" id="codeEditor2" spellcheck="false"></textarea>
    <p>4. Get a blob from the image as input for the net, and standardize it with <b>mean</b> and <b>std</b>.</p>
    <textarea class="code" rows="17" cols="100" id="codeEditor3" spellcheck="false"></textarea>
    <p>5. Fetch the model file and save it to the Emscripten file system once the input button is clicked.</p>
    <textarea class="code" rows="17" cols="100" id="codeEditor4" spellcheck="false"></textarea>
    <p>6. The post-processing: apply softmax if needed and get the top classes from the output vector.</p>
    <textarea class="code" rows="35" cols="100" id="codeEditor5" spellcheck="false"></textarea>
</div>

<div id="appendix">
    <h2>Model Info:</h2>
</div>

<script src="utils.js" type="text/javascript"></script>
<script src="js_dnn_example_helper.js" type="text/javascript"></script>

<script id="codeSnippet" type="text/code-snippet">
inputSize = [224,224];
mean = [104, 117, 123];
std = 1;
swapRB = false;

// record whether the softmax function is needed for post-processing
needSoftmax = false;

// URL for the label file; can be local or from the Internet
labelsUrl = "https://raw.githubusercontent.com/opencv/opencv/master/samples/data/dnn/classification_classes_ILSVRC2012.txt";
</script>

<script id="codeSnippet1" type="text/code-snippet">
main = async function() {
    const labels = await loadLables(labelsUrl);
    const input = getBlobFromImage(inputSize, mean, std, swapRB, 'canvasInput');
    let net = cv.readNet(configPath, modelPath);
    net.setPreferableBackend(6);  // 6: DNN_BACKEND_WEBNN
    net.setInput(input);
    let result = net.forward();
    const start = performance.now();
    for (let i = 0; i < 200; i++)
    {
        result = net.forward();
    }
    const time = performance.now() - start;
    const probs = softmax(result);
    const classes = getTopClasses(probs, labels);

    updateResult(classes, time/200);
    input.delete();
    net.delete();
    result.delete();
}
</script>

<script id="codeSnippet5" type="text/code-snippet">
softmax = function(result) {
    let arr = result.data32F;
    if (needSoftmax) {
        const maxNum = Math.max(...arr);
        const expSum = arr.map((num) => Math.exp(num - maxNum)).reduce((a, b) => a + b);
        return arr.map((value, index) => {
            return Math.exp(value - maxNum) / expSum;
        });
    } else {
        return arr;
    }
}
</script>

<script type="text/javascript">
let jsonUrl = "js_image_classification_model_info.json";
drawInfoTable(jsonUrl, 'appendix');

let utils = new Utils('errorMessage');
utils.loadCode('codeSnippet', 'codeEditor');
utils.loadCode('codeSnippet1', 'codeEditor1');

let loadLablesCode = 'loadLables = ' + loadLables.toString();
document.getElementById('codeEditor2').value = loadLablesCode;
let getBlobFromImageCode = 'getBlobFromImage = ' + getBlobFromImage.toString();
document.getElementById('codeEditor3').value = getBlobFromImageCode;
let loadModelCode = 'loadModel = ' + loadModel.toString();
document.getElementById('codeEditor4').value = loadModelCode;

utils.loadCode('codeSnippet5', 'codeEditor5');
let getTopClassesCode = 'getTopClasses = ' + getTopClasses.toString();
document.getElementById('codeEditor5').value += '\n' + '\n' + getTopClassesCode;

let canvas = document.getElementById('canvasInput');
let ctx = canvas.getContext('2d');
let img = new Image();
img.crossOrigin = 'anonymous';
img.src = 'space_shuttle.jpg';
img.onload = function() {
    ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
};

let tryIt = document.getElementById('tryIt');
tryIt.addEventListener('click', () => {
    initStatus();
    document.getElementById('status').innerHTML = 'Running function main()...';
    utils.executeCode('codeEditor');
    utils.executeCode('codeEditor1');
    if (modelPath === "") {
        document.getElementById('status').innerHTML = 'Running failed.';
        utils.printError('Please upload the model file by clicking the button first.');
    } else {
        setTimeout(main, 1);
    }
});

let fileInput = document.getElementById('fileInput');
fileInput.addEventListener('change', (e) => {
    initStatus();
    loadImageToCanvas(e, 'canvasInput');
});

let configPath = "";
let configFile = document.getElementById('configFile');
configFile.addEventListener('change', async (e) => {
    initStatus();
    configPath = await loadModel(e);
    document.getElementById('status').innerHTML = `The config file '${configPath}' is created successfully.`;
});

let modelPath = "";
let modelFile = document.getElementById('modelFile');
modelFile.addEventListener('change', async (e) => {
    initStatus();
    modelPath = await loadModel(e);
    document.getElementById('status').innerHTML = `The model file '${modelPath}' is created successfully.`;
    configPath = "";
    configFile.value = "";
});

utils.loadOpenCv(() => {
    tryIt.removeAttribute('disabled');
});

var main = async function() {};
var softmax = function(result){};
var getTopClasses = function(mat, labels, topK = 3){};

utils.executeCode('codeEditor1');
utils.executeCode('codeEditor2');
utils.executeCode('codeEditor3');
utils.executeCode('codeEditor4');
utils.executeCode('codeEditor5');

function updateResult(classes, time) {
    try {
        classes.forEach((c, i) => {
            let labelElement = document.getElementById('label'+i);
            let probElement = document.getElementById('prob'+i);
            labelElement.innerHTML = c.label;
            probElement.innerHTML = c.prob + '%';
        });
        let result = document.getElementById('result');
        result.style.visibility = 'visible';
        document.getElementById('status').innerHTML = `<b>Model:</b> ${modelPath}<br>
            <b>Inference time:</b> ${time.toFixed(2)} ms`;
    } catch(e) {
        console.log(e);
    }
}

function initStatus() {
    document.getElementById('status').innerHTML = '';
    document.getElementById('result').style.visibility = 'hidden';
    utils.clearError();
}

</script>

</body>

</html>
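The `softmax` helper in the `codeSnippet5` block above subtracts the array maximum before exponentiating, which keeps `Math.exp` from overflowing on large logits without changing the result. A standalone sketch of the same computation on a plain array (the name `softmaxStable` is illustrative, not part of the tutorial):

```javascript
// Numerically stable softmax over a plain array of logits.
// Mirrors the max-subtraction trick used by the tutorial's softmax helper.
function softmaxStable(arr) {
  const maxNum = Math.max(...arr);            // shift so the largest exponent is 0
  const exps = arr.map((v) => Math.exp(v - maxNum));
  const expSum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / expSum);         // probabilities summing to 1
}

// Larger logits map to larger probabilities; the shift does not change them.
const probs = softmaxStable([1, 2, 3]);
```

Without the shift, logits around 1000 would produce `Math.exp(1000) === Infinity` and the ratios would become `NaN`; with it, only relative differences matter.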
@@ -0,0 +1,268 @@
<!DOCTYPE html>
<html>

<head>
<meta charset="utf-8">
<title>Image Classification Example</title>
<link href="js_example_style.css" rel="stylesheet" type="text/css" />
</head>

<body>
<h2>Image Classification Example</h2>
<p>
    This tutorial shows you how to write an image classification example with OpenCV.js.<br>
    To try the example you should click the <b>modelFile</b> button (and the <b>configFile</b> button if needed) to upload the inference model.
    You can find the model URLs and parameters in the <a href="#appendix">model info</a> section.
    Then you should change the parameters in the first code snippet according to the uploaded model.
    Finally, click the <b>Try it</b> button to see the result. You can choose any other images.<br>
</p>

<div class="control"><button id="tryIt" disabled>Try it</button></div>
<div>
    <table cellpadding="0" cellspacing="0" width="0" border="0">
        <tr>
            <td>
                <canvas id="canvasInput" width="400" height="400"></canvas>
            </td>
            <td>
                <table style="visibility: hidden;" id="result">
                    <thead>
                        <tr>
                            <th scope="col">#</th>
                            <th scope="col" width=300>Label</th>
                            <th scope="col">Probability</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr>
                            <th scope="row">1</th>
                            <td id="label0" align="center"></td>
                            <td id="prob0" align="center"></td>
                        </tr>
                        <tr>
                            <th scope="row">2</th>
                            <td id="label1" align="center"></td>
                            <td id="prob1" align="center"></td>
                        </tr>
                        <tr>
                            <th scope="row">3</th>
                            <td id="label2" align="center"></td>
                            <td id="prob2" align="center"></td>
                        </tr>
                    </tbody>
                </table>
                <p id='status' align="left"></p>
            </td>
        </tr>
        <tr>
            <td>
                <div class="caption">
                    canvasInput <input type="file" id="fileInput" name="file" accept="image/*">
                </div>
            </td>
            <td></td>
        </tr>
        <tr>
            <td>
                <div class="caption">
                    modelFile <input type="file" id="modelFile">
                </div>
            </td>
        </tr>
        <tr>
            <td>
                <div class="caption">
                    configFile <input type="file" id="configFile">
                </div>
            </td>
        </tr>
    </table>
</div>

<div>
    <p class="err" id="errorMessage"></p>
</div>

<div>
    <h3>Helper functions</h3>
    <p>1. The parameters for model inference, which you can modify to investigate more models.</p>
    <textarea class="code" rows="13" cols="100" id="codeEditor" spellcheck="false"></textarea>
    <p>2. Main loop, which reads the image from the canvas and does inference once.</p>
    <textarea class="code" rows="17" cols="100" id="codeEditor1" spellcheck="false"></textarea>
    <p>3. Load labels from a txt file and process them into an array.</p>
    <textarea class="code" rows="7" cols="100" id="codeEditor2" spellcheck="false"></textarea>
    <p>4. Get a blob from the image as input for the net, and standardize it with <b>mean</b> and <b>std</b>.</p>
    <textarea class="code" rows="17" cols="100" id="codeEditor3" spellcheck="false"></textarea>
    <p>5. Fetch the model file and save it to the Emscripten file system once the input button is clicked.</p>
    <textarea class="code" rows="17" cols="100" id="codeEditor4" spellcheck="false"></textarea>
    <p>6. The post-processing: apply softmax if needed and get the top classes from the output vector.</p>
    <textarea class="code" rows="35" cols="100" id="codeEditor5" spellcheck="false"></textarea>
</div>

<div id="appendix">
    <h2>Model Info:</h2>
</div>

<script src="utils_webnn_electron.js" type="text/javascript"></script>
<script src="js_dnn_example_helper.js" type="text/javascript"></script>

<script id="codeSnippet" type="text/code-snippet">
inputSize = [224,224];
mean = [104, 117, 123];
std = 1;
swapRB = false;

// record whether the softmax function is needed for post-processing
needSoftmax = false;

// URL for the label file; can be local or from the Internet
labelsUrl = "https://raw.githubusercontent.com/opencv/opencv/master/samples/data/dnn/classification_classes_ILSVRC2012.txt";
</script>

<script id="codeSnippet1" type="text/code-snippet">
main = async function() {
    const labels = await loadLables(labelsUrl);
    const input = getBlobFromImage(inputSize, mean, std, swapRB, 'canvasInput');
    let net = cv.readNet(configPath, modelPath);
    net.setPreferableBackend(6);  // 6: DNN_BACKEND_WEBNN
    net.setInput(input);
    let result = net.forward();
    const start = performance.now();
    for (let i = 0; i < 200; i++)
    {
        result = net.forward();
    }
    const time = performance.now() - start;
    const probs = softmax(result);
    const classes = getTopClasses(probs, labels);

    updateResult(classes, time/200);
    input.delete();
    net.delete();
    result.delete();
}
</script>

<script id="codeSnippet5" type="text/code-snippet">
softmax = function(result) {
    let arr = result.data32F;
    if (needSoftmax) {
        const maxNum = Math.max(...arr);
        const expSum = arr.map((num) => Math.exp(num - maxNum)).reduce((a, b) => a + b);
        return arr.map((value, index) => {
            return Math.exp(value - maxNum) / expSum;
        });
    } else {
        return arr;
    }
}
</script>

<script type="text/javascript">
let jsonUrl = "js_image_classification_model_info.json";
drawInfoTable(jsonUrl, 'appendix');

let utils = new Utils('errorMessage');
utils.loadCode('codeSnippet', 'codeEditor');
utils.loadCode('codeSnippet1', 'codeEditor1');

let loadLablesCode = 'loadLables = ' + loadLables.toString();
document.getElementById('codeEditor2').value = loadLablesCode;
let getBlobFromImageCode = 'getBlobFromImage = ' + getBlobFromImage.toString();
document.getElementById('codeEditor3').value = getBlobFromImageCode;
let loadModelCode = 'loadModel = ' + loadModel.toString();
document.getElementById('codeEditor4').value = loadModelCode;

utils.loadCode('codeSnippet5', 'codeEditor5');
let getTopClassesCode = 'getTopClasses = ' + getTopClasses.toString();
document.getElementById('codeEditor5').value += '\n' + '\n' + getTopClassesCode;

let canvas = document.getElementById('canvasInput');
let ctx = canvas.getContext('2d');
let img = new Image();
img.crossOrigin = 'anonymous';
img.src = 'space_shuttle.jpg';
img.onload = function() {
    ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
};

let tryIt = document.getElementById('tryIt');
tryIt.addEventListener('click', () => {
    initStatus();
    document.getElementById('status').innerHTML = 'Running function main()...';
    utils.executeCode('codeEditor');
    utils.executeCode('codeEditor1');
    if (modelPath === "") {
        document.getElementById('status').innerHTML = 'Running failed.';
        utils.printError('Please upload the model file by clicking the button first.');
    } else {
        setTimeout(main, 1);
    }
});

let fileInput = document.getElementById('fileInput');
fileInput.addEventListener('change', (e) => {
    initStatus();
    loadImageToCanvas(e, 'canvasInput');
});

let configPath = "";
let configFile = document.getElementById('configFile');
configFile.addEventListener('change', async (e) => {
    initStatus();
    configPath = await loadModel(e);
    document.getElementById('status').innerHTML = `The config file '${configPath}' is created successfully.`;
});

let modelPath = "";
let modelFile = document.getElementById('modelFile');
modelFile.addEventListener('change', async (e) => {
    initStatus();
    modelPath = await loadModel(e);
    document.getElementById('status').innerHTML = `The model file '${modelPath}' is created successfully.`;
    configPath = "";
    configFile.value = "";
});

utils.loadOpenCv(() => {
    tryIt.removeAttribute('disabled');
});

var main = async function() {};
var softmax = function(result){};
var getTopClasses = function(mat, labels, topK = 3){};

utils.executeCode('codeEditor1');
utils.executeCode('codeEditor2');
utils.executeCode('codeEditor3');
utils.executeCode('codeEditor4');
utils.executeCode('codeEditor5');

function updateResult(classes, time) {
    try {
        classes.forEach((c, i) => {
            let labelElement = document.getElementById('label'+i);
            let probElement = document.getElementById('prob'+i);
            labelElement.innerHTML = c.label;
            probElement.innerHTML = c.prob + '%';
        });
        let result = document.getElementById('result');
        result.style.visibility = 'visible';
        document.getElementById('status').innerHTML = `<b>Model:</b> ${modelPath}<br>
            <b>Inference time:</b> ${time.toFixed(2)} ms`;
    } catch(e) {
        console.log(e);
    }
}

function initStatus() {
    document.getElementById('status').innerHTML = '';
    document.getElementById('result').style.visibility = 'hidden';
    utils.clearError();
}

</script>

</body>

</html>
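Both sample pages time one warm-up `net.forward()` call and then average 200 timed runs when reporting inference time, so JIT and first-run allocation costs are excluded. That measurement pattern can be factored into a small helper; `benchmark` and its parameters are illustrative names, not part of the sample:

```javascript
// Run fn once as a warm-up, then time `runs` iterations and return the
// average per-iteration duration in milliseconds (same scheme as main()).
// `now` is injectable so the helper works with performance.now() or Date.now().
function benchmark(fn, runs, now = () => Date.now()) {
  fn();                        // warm-up call, excluded from the timing
  const start = now();
  for (let i = 0; i < runs; i++) {
    fn();
  }
  return (now() - start) / runs;
}
```

In the browser samples `now` would be `() => performance.now()`; averaging over many runs smooths out scheduler jitter that a single measurement would pick up.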
@@ -0,0 +1,56 @@
// Modules to control application life and create native browser window
const {app, BrowserWindow} = require('electron')
const path = require('path')

// Keep a global reference of the window object; if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected.
let mainWindow = {}

function createWindow() {
    // Create the browser window.
    mainWindow = new BrowserWindow({
        width: 1220,
        height: 840,
        webPreferences: {
            nodeIntegration: true,
            contextIsolation: false,
            preload: app.getAppPath() + "/node_setup.js"
        }
    })

    // Load index.html with 'numRunsParm' to run inference multiple times.
    let url = `file://${__dirname}/js_image_classification_webnn_electron.html`
    const numRunsParm = '?' + process.argv[2]
    mainWindow.loadURL(url + numRunsParm)

    // Emitted when the window is closed.
    mainWindow.on('closed', function() {
        // Dereference the window object. Usually you would store windows
        // in an array if your app supports multiple windows; this is the time
        // when you should delete the corresponding element.
        mainWindow = null
    })
}

// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
// Some APIs can only be used after this event occurs.
app.on('ready', createWindow)

// Quit when all windows are closed.
app.on('window-all-closed', function() {
    // On macOS it is common for applications and their menu bar
    // to stay active until the user quits explicitly with Cmd + Q
    if (process.platform !== 'darwin') app.quit()
})

app.on('activate', function() {
    // On macOS it's common to re-create a window in the app when the
    // dock icon is clicked and there are no other windows open.
    if (mainWindow === null) createWindow()
})

// In this file you can include the rest of your app's specific main process
// code. You can also put them in separate files and require them here.
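The Electron main process above forwards `process.argv[2]` to the page as a raw query string (`'?' + process.argv[2]`), so the renderer side has to parse the run count back out of `location.search`. A hedged sketch of that parsing, falling back to the 200 runs the sample hard-codes; `parseNumRuns` is an illustrative name, not a function from the sample:

```javascript
// Extract the run count from a query string like "?50"; fall back to a
// default when the parameter is missing or not a positive integer
// (e.g. "?undefined" when no CLI argument was given).
function parseNumRuns(search, defaultRuns = 200) {
  const raw = (search || '').replace(/^\?/, '');
  const n = parseInt(raw, 10);
  return Number.isInteger(n) && n > 0 ? n : defaultRuns;
}
```

Inside the renderer this would be called as `parseNumRuns(location.search)`.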
@@ -0,0 +1,12 @@
const cv = require('./opencv');
const webnn = require(process.env.WEBNN_NATIVE_DIR + '/../../node/lib/webnn');
// navigator is undefined in node.js, but defined in electron.js.
if (global.navigator === undefined) {
    global.navigator = {};
}
global.navigator.ml = webnn.ml;
global.MLContext = webnn.MLContext
global.MLGraphBuilder = webnn.MLGraphBuilder
global.MLGraph = webnn.MLGraph
global.MLOperand = webnn.MLOperand
global.cv = cv;
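This preload script works by copying the WebNN binding's exports onto `global`, so that browser-style code (`navigator.ml`, `MLGraphBuilder`, and friends) resolves inside Electron's renderer. The same wiring can be expressed as a reusable helper; `exposeWebNN` and the mock module in the usage below are illustrative, not part of the sample:

```javascript
// Copy the names a WebNN-consuming page expects onto a global-like object,
// creating `navigator` when it does not exist (as in plain Node.js).
function exposeWebNN(globalObj, webnn) {
  if (globalObj.navigator === undefined) {
    globalObj.navigator = {};
  }
  globalObj.navigator.ml = webnn.ml;
  for (const name of ['MLContext', 'MLGraphBuilder', 'MLGraph', 'MLOperand']) {
    globalObj[name] = webnn[name];
  }
  return globalObj;
}
```

Taking the target object as a parameter instead of touching `global` directly keeps the wiring testable with a mock binding.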
@@ -0,0 +1,14 @@
{
    "name": "image_classification",
    "version": "0.0.1",
    "description": "An Electron.js example of image_classification using webnn-native",
    "main": "main.js",
    "author": "WebNN-native Authors",
    "license": "Apache-2.0",
    "scripts": {
        "start": "electron ."
    },
    "dependencies": {
        "electron": "^15.1.2"
    }
}
@ -0,0 +1,159 @@
function Utils(errorOutputId) { // eslint-disable-line no-unused-vars
    let self = this;
    this.errorOutput = document.getElementById(errorOutputId);

    const OPENCV_URL = 'opencv.js';
    this.loadOpenCv = async function(onloadCallback) {
        if (cv.getBuildInformation)
        {
            console.log(cv.getBuildInformation());
            onloadCallback();
        }
        else
        {
            // WASM
            if (cv instanceof Promise) {
                cv = await cv;
                console.log(cv.getBuildInformation());
                onloadCallback();
            } else {
                cv['onRuntimeInitialized'] = () => {
                    console.log(cv.getBuildInformation());
                    onloadCallback();
                };
            }
        }
    };

    this.createFileFromUrl = function(path, url, callback) {
        let request = new XMLHttpRequest();
        request.open('GET', url, true);
        request.responseType = 'arraybuffer';
        request.onload = function(ev) {
            if (request.readyState === 4) {
                if (request.status === 200) {
                    let data = new Uint8Array(request.response);
                    cv.FS_createDataFile('/', path, data, true, false, false);
                    callback();
                } else {
                    self.printError('Failed to load ' + url + ' status: ' + request.status);
                }
            }
        };
        request.send();
    };

    this.loadImageToCanvas = function(url, canvasId) {
        let canvas = document.getElementById(canvasId);
        let ctx = canvas.getContext('2d');
        let img = new Image();
        img.crossOrigin = 'anonymous';
        img.onload = function() {
            canvas.width = img.width;
            canvas.height = img.height;
            ctx.drawImage(img, 0, 0, img.width, img.height);
        };
        img.src = url;
    };

    this.executeCode = function(textAreaId) {
        try {
            this.clearError();
            let code = document.getElementById(textAreaId).value;
            eval(code);
        } catch (err) {
            this.printError(err);
        }
    };

    this.clearError = function() {
        this.errorOutput.innerHTML = '';
    };

    this.printError = function(err) {
        if (typeof err === 'undefined') {
            err = '';
        } else if (typeof err === 'number') {
            if (!isNaN(err)) {
                if (typeof cv !== 'undefined') {
                    err = 'Exception: ' + cv.exceptionFromPtr(err).msg;
                }
            }
        } else if (typeof err === 'string') {
            let ptr = Number(err.split(' ')[0]);
            if (!isNaN(ptr)) {
                if (typeof cv !== 'undefined') {
                    err = 'Exception: ' + cv.exceptionFromPtr(ptr).msg;
                }
            }
        } else if (err instanceof Error) {
            err = err.stack.replace(/\n/g, '<br>');
        }
        this.errorOutput.innerHTML = err;
    };

    this.loadCode = function(scriptId, textAreaId) {
        let scriptNode = document.getElementById(scriptId);
        let textArea = document.getElementById(textAreaId);
        if (scriptNode.type !== 'text/code-snippet') {
            throw Error('Unknown code snippet type');
        }
        textArea.value = scriptNode.text.replace(/^\n/, '');
    };

    this.addFileInputHandler = function(fileInputId, canvasId) {
        let inputElement = document.getElementById(fileInputId);
        inputElement.addEventListener('change', (e) => {
            let files = e.target.files;
            if (files.length > 0) {
                let imgUrl = URL.createObjectURL(files[0]);
                self.loadImageToCanvas(imgUrl, canvasId);
            }
        }, false);
    };

    function onVideoCanPlay() {
        if (self.onCameraStartedCallback) {
            self.onCameraStartedCallback(self.stream, self.video);
        }
    }

    this.startCamera = function(resolution, callback, videoId) {
        const constraints = {
            'qvga': {width: {exact: 320}, height: {exact: 240}},
            'vga': {width: {exact: 640}, height: {exact: 480}}};
        let video = document.getElementById(videoId);
        if (!video) {
            video = document.createElement('video');
        }

        let videoConstraint = constraints[resolution];
        if (!videoConstraint) {
            videoConstraint = true;
        }

        navigator.mediaDevices.getUserMedia({video: videoConstraint, audio: false})
            .then(function(stream) {
                video.srcObject = stream;
                video.play();
                self.video = video;
                self.stream = stream;
                self.onCameraStartedCallback = callback;
                video.addEventListener('canplay', onVideoCanPlay, false);
            })
            .catch(function(err) {
                self.printError('Camera Error: ' + err.name + ' ' + err.message);
            });
    };

    this.stopCamera = function() {
        if (this.video) {
            this.video.pause();
            this.video.srcObject = null;
            this.video.removeEventListener('canplay', onVideoCanPlay);
        }
        if (this.stream) {
            this.stream.getVideoTracks()[0].stop();
        }
    };
}
@ -0,0 +1,249 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include <fstream>
#include "op_webnn.hpp"

#include <opencv2/core/utils/configuration.private.hpp>
#include <opencv2/core/utils/logger.hpp>

#include "opencv2/core/utils/filesystem.hpp"
#include "opencv2/core/utils/filesystem.private.hpp"

#include <opencv2/dnn/shape_utils.hpp>

namespace cv { namespace dnn {

#ifdef HAVE_WEBNN

namespace webnn {
ml::Operand BuildConstant(const ml::GraphBuilder& builder,
                          const std::vector<int32_t>& dimensions,
                          const void* value,
                          size_t size,
                          ml::OperandType type) {
    ml::OperandDescriptor desc;
    desc.type = type;
    desc.dimensions = dimensions.data();
    desc.dimensionsCount = (uint32_t)dimensions.size();
    ml::ArrayBufferView resource;
    resource.buffer = const_cast<void*>(value);
    resource.byteLength = size;
    return builder.Constant(&desc, &resource);
}
}

static std::string kDefaultInpLayerName = "opencv_webnn_empty_inp_layer_name";

static std::vector<Ptr<WebnnBackendWrapper> >
webnnWrappers(const std::vector<Ptr<BackendWrapper> >& ptrs)
{
    std::vector<Ptr<WebnnBackendWrapper> > wrappers(ptrs.size());
    for (size_t i = 0; i < ptrs.size(); ++i)
    {
        CV_Assert(!ptrs[i].empty());
        wrappers[i] = ptrs[i].dynamicCast<WebnnBackendWrapper>();
        CV_Assert(!wrappers[i].empty());
    }
    return wrappers;
}

// WebnnNet
WebnnNet::WebnnNet()
{
    hasNetOwner = false;
    device_name = "CPU";

#ifdef __EMSCRIPTEN__
    context = ml::Context(emscripten_webnn_create_context());
#else
    WebnnProcTable backendProcs = webnn_native::GetProcs();
    webnnProcSetProcs(&backendProcs);
    context = ml::Context(webnn_native::CreateContext());
#endif
    builder = ::ml::CreateGraphBuilder(context);
    namedOperands = ::ml::CreateNamedOperands();
}

void WebnnNet::addOutput(const std::string& name)
{
    requestedOutputs.push_back(name);
}

void WebnnNet::createNet(Target targetId) {
    init(targetId);
}

void WebnnNet::init(Target targetId)
{
    switch (targetId)
    {
        case DNN_TARGET_CPU:
            device_name = "CPU";
            break;
        case DNN_TARGET_OPENCL:
            device_name = "GPU";
            break;
        default:
            CV_Error(Error::StsNotImplemented, "Unknown target");
    }

    graph = builder.Build(namedOperands);
    CV_Assert(graph != nullptr);
    isInit = true;
}

std::vector<ml::Operand> WebnnNet::setInputs(const std::vector<cv::Mat>& inputs,
                                             const std::vector<std::string>& names) {
    CV_Assert_N(inputs.size() == names.size());
    std::vector<ml::Operand> current_inp;
    for (size_t i = 0; i < inputs.size(); i++)
    {
        auto& m = inputs[i];

        std::vector<int32_t> dimensions = webnn::getShape(m);
        ml::OperandDescriptor descriptor;
        descriptor.dimensions = dimensions.data();
        descriptor.dimensionsCount = dimensions.size();
        if (m.type() == CV_32F)
        {
            descriptor.type = ml::OperandType::Float32;
        }
        else
        {
            CV_Error(Error::StsNotImplemented, format("Unsupported data type %s", typeToString(m.type()).c_str()));
        }
        ml::Operand inputOperand = builder.Input(names[i].c_str(), &descriptor);
        current_inp.push_back(std::move(inputOperand));
    }
    inputNames = names;
    return current_inp;
}

void WebnnNet::setUnconnectedNodes(Ptr<WebnnBackendNode>& node) {
    outputNames.push_back(node->name);
    namedOperands.Set(outputNames.back().c_str(), node->operand);
}

bool WebnnNet::isInitialized()
{
    return isInit;
}

void WebnnNet::reset()
{
    allBlobs.clear();
    isInit = false;
}

void WebnnNet::addBlobs(const std::vector<cv::Ptr<BackendWrapper> >& ptrs)
{
    auto wrappers = webnnWrappers(ptrs);
    for (const auto& wrapper : wrappers)
    {
        std::string name = wrapper->name;
        name = name.empty() ? kDefaultInpLayerName : name;
        allBlobs.insert({name, wrapper});
    }
}

void WebnnNet::forward(const std::vector<Ptr<BackendWrapper> >& outBlobsWrappers, bool isAsync)
{
    CV_LOG_DEBUG(NULL, "WebnnNet::forward(" << (isAsync ? "async" : "sync") << ")");
    ml::NamedInputs named_inputs = ::ml::CreateNamedInputs();
    std::vector<ml::Input> inputs(inputNames.size());
    for (size_t i = 0; i < inputNames.size(); ++i) {
        const std::string& name = inputNames[i];
        ml::Input& input = inputs[i];
        auto blobIt = allBlobs.find(name);
        CV_Assert(blobIt != allBlobs.end());
        const Ptr<WebnnBackendWrapper> wrapper = blobIt->second;
        input.resource.buffer = wrapper->host->data;
        input.resource.byteLength = wrapper->size;
        named_inputs.Set(name.c_str(), &input);
    }
    std::vector<Ptr<WebnnBackendWrapper> > outs = webnnWrappers(outBlobsWrappers);
    ml::NamedOutputs named_outputs = ::ml::CreateNamedOutputs();
    std::vector<ml::ArrayBufferView> outputs(outs.size());
    for (size_t i = 0; i < outs.size(); ++i) {
        const std::string& name = outs[i]->name;
        ml::ArrayBufferView& output = outputs[i];
        output.buffer = outs[i]->host->data;
        output.byteLength = outs[i]->size;
        named_outputs.Set(name.c_str(), &output);
    }
    ml::ComputeGraphStatus status = graph.Compute(named_inputs, named_outputs);
    if (status != ::ml::ComputeGraphStatus::Success) {
        CV_Error(Error::StsAssert, format("Failed to compute: %d", int(status)));
    }
}

// WebnnBackendNode
WebnnBackendNode::WebnnBackendNode(ml::Operand&& _operand)
    : BackendNode(DNN_BACKEND_WEBNN), operand(std::move(_operand)) {}

WebnnBackendNode::WebnnBackendNode(ml::Operand& _operand)
    : BackendNode(DNN_BACKEND_WEBNN), operand(_operand) {}

// WebnnBackendWrapper
WebnnBackendWrapper::WebnnBackendWrapper(int targetId, cv::Mat& m)
    : BackendWrapper(DNN_BACKEND_WEBNN, targetId)
{
    size = m.total() * m.elemSize();
    if (m.type() == CV_32F)
    {
        descriptor.type = ml::OperandType::Float32;
    }
    else
    {
        CV_Error(Error::StsNotImplemented, format("Unsupported data type %s", typeToString(m.type()).c_str()));
    }
    host = &m;
}

WebnnBackendWrapper::~WebnnBackendWrapper()
{
    // nothing
}

void WebnnBackendWrapper::copyToHost()
{
    CV_LOG_DEBUG(NULL, "WebnnBackendWrapper::copyToHost()");
}

void WebnnBackendWrapper::setHostDirty()
{
    CV_LOG_DEBUG(NULL, "WebnnBackendWrapper::setHostDirty()");
}

void forwardWebnn(const std::vector<Ptr<BackendWrapper> >& outBlobsWrappers,
                  Ptr<BackendNode>& node, bool isAsync)
{
    CV_Assert(!node.empty());
    Ptr<WebnnBackendNode> webnnNode = node.dynamicCast<WebnnBackendNode>();
    CV_Assert(!webnnNode.empty());
    webnnNode->net->forward(outBlobsWrappers, isAsync);
}

#else
void forwardWebnn(const std::vector<Ptr<BackendWrapper> >& outBlobsWrappers,
                  Ptr<BackendNode>& operand, bool isAsync)
{
    CV_Assert(false && "WebNN is not enabled in this OpenCV build");
}

#endif

}}  // namespace cv::dnn
@ -0,0 +1,171 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#ifndef __OPENCV_DNN_OP_WEBNN_HPP__
#define __OPENCV_DNN_OP_WEBNN_HPP__

#include "opencv2/core/cvdef.h"
#include "opencv2/core/cvstd.hpp"
#include "opencv2/dnn.hpp"

#ifdef HAVE_WEBNN

#include <webnn/webnn_cpp.h>
#include <webnn/webnn.h>
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#include <emscripten/html5.h>
#include <emscripten/html5_webnn.h>
#else
#include <webnn/webnn_proc.h>
#include <webnn_native/WebnnNative.h>
#endif

#include <unordered_map>
#include <unordered_set>

#endif  // HAVE_WEBNN

namespace cv { namespace dnn {

constexpr bool haveWebnn() {
#ifdef HAVE_WEBNN
    return true;
#else
    return false;
#endif
}

#ifdef HAVE_WEBNN

class WebnnBackendNode;
class WebnnBackendWrapper;

namespace webnn {
inline std::vector<int32_t> getShape(const Mat& mat)
{
    std::vector<int32_t> result(mat.dims);
    for (int i = 0; i < mat.dims; i++)
        result[i] = (int32_t)mat.size[i];
    return result;
}

ml::Operand BuildConstant(const ml::GraphBuilder& builder,
                          const std::vector<int32_t>& dimensions,
                          const void* value,
                          size_t size,
                          ml::OperandType type);

struct Pool2dOptions {
public:
    std::vector<int32_t> windowDimensions;
    std::vector<int32_t> padding;
    std::vector<int32_t> strides;
    std::vector<int32_t> dilations;
    ml::AutoPad autoPad = ml::AutoPad::Explicit;
    ml::InputOperandLayout layout = ml::InputOperandLayout::Nchw;

    const ml::Pool2dOptions* AsPtr() {
        if (!windowDimensions.empty()) {
            mOptions.windowDimensionsCount = windowDimensions.size();
            mOptions.windowDimensions = windowDimensions.data();
        }
        if (!padding.empty()) {
            mOptions.paddingCount = padding.size();
            mOptions.padding = padding.data();
        }
        if (!strides.empty()) {
            mOptions.stridesCount = strides.size();
            mOptions.strides = strides.data();
        }
        if (!dilations.empty()) {
            mOptions.dilationsCount = dilations.size();
            mOptions.dilations = dilations.data();
        }
        mOptions.layout = layout;
        mOptions.autoPad = autoPad;
        return &mOptions;
    }

private:
    ml::Pool2dOptions mOptions;
};
}

class WebnnNet
{
public:
    WebnnNet();

    void addOutput(const std::string& name);

    bool isInitialized();
    void init(Target targetId);

    void forward(const std::vector<Ptr<BackendWrapper> >& outBlobsWrappers, bool isAsync);

    std::vector<ml::Operand> setInputs(const std::vector<cv::Mat>& inputs, const std::vector<std::string>& names);

    void setUnconnectedNodes(Ptr<WebnnBackendNode>& node);
    void addBlobs(const std::vector<cv::Ptr<BackendWrapper> >& ptrs);

    void createNet(Target targetId);

    void reset();

    ml::GraphBuilder builder;
    ml::Context context;
    ml::Graph graph;

    std::unordered_map<std::string, cv::Ptr<WebnnBackendWrapper>> allBlobs;

    bool hasNetOwner;
    std::string device_name;
    bool isInit = false;

    std::vector<std::string> requestedOutputs;

    std::vector<std::string> inputNames;
    std::vector<std::string> outputNames;
    ml::NamedOperands namedOperands;
};

class WebnnBackendNode : public BackendNode
{
public:
    WebnnBackendNode(ml::Operand&& operand);
    WebnnBackendNode(ml::Operand& operand);

    std::string name;
    ml::Operand operand;
    Ptr<WebnnNet> net;
};

class WebnnBackendWrapper : public BackendWrapper
{
public:
    WebnnBackendWrapper(int targetId, Mat& m);
    ~WebnnBackendWrapper();

    virtual void copyToHost() CV_OVERRIDE;
    virtual void setHostDirty() CV_OVERRIDE;

    std::string name;
    Mat* host;
    std::unique_ptr<char> buffer;
    size_t size;
    std::vector<int32_t> dimensions;
    ml::OperandDescriptor descriptor;
};

#endif  // HAVE_WEBNN

void forwardWebnn(const std::vector<Ptr<BackendWrapper> >& outBlobsWrappers,
                  Ptr<BackendNode>& node, bool isAsync);

}}  // namespace cv::dnn

#endif  // __OPENCV_DNN_OP_WEBNN_HPP__
@ -0,0 +1,11 @@
## Build Instructions

### Build WebNN-native and set the environment variable

Refer to [WebNN's build instructions](https://github.com/webmachinelearning/webnn-native) to complete the build of WebNN-native.

Set the environment variable `WEBNN_NATIVE_DIR` to enable the native DNN_BACKEND_WEBNN build: `export WEBNN_NATIVE_DIR=${PATH_TO_WebNN}`. Make sure `WEBNN_NATIVE_DIR` points to the output directory of the webnn-native build (e.g. webnn-native/out/Release).

### Test native DNN_BACKEND_WEBNN backend

Add `-DWITH_WEBNN=ON` to the cmake command to build the WebNN module, for example:

`cmake -DWITH_WEBNN=ON ../opencv` (according to the @ref tutorial_linux_install)
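Putting the steps above together, an end-to-end sketch might look like the following. The directory names are illustrative only, not part of the instructions above:

```shell
# 1. Point at the webnn-native build output (example path).
export WEBNN_NATIVE_DIR=${HOME}/webnn-native/out/Release

# 2. Configure OpenCV out-of-tree with the WebNN backend enabled.
mkdir -p build && cd build
cmake -DWITH_WEBNN=ON ../opencv

# 3. Build; check the cmake summary to confirm WebNN was detected.
make -j"$(nproc)"
```

If `WEBNN_NATIVE_DIR` is unset or wrong, the WebNN detection step in cmake will not find the library and the backend is silently left out of the build, so it is worth checking the configuration summary before compiling.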