Merge branch 'develop' into enhance_slide

Bobholamovic, 2 years ago, commit 635ea1d010
Files changed (30), with the number of changed lines per file in parentheses:

1. README.md (18)
2. docs/apis/data.md (16)
3. docs/apis/infer.md (23)
4. docs/apis/train.md (72)
5. docs/data/tools.md (24)
6. docs/dev/dev_guide.md (9)
7. docs/images/whole_picture.png (BIN)
8. docs/intro/data_prep.md (9)
9. docs/intro/transforms.md (2)
10. examples/README.md (26)
11. paddlers/rs_models/cd/bit.py (2)
12. paddlers/rs_models/seg/farseg.py (39)
13. paddlers/tasks/change_detector.py (53)
14. paddlers/tasks/classifier.py (37)
15. paddlers/tasks/object_detector.py (34)
16. paddlers/tasks/restorer.py (31)
17. paddlers/tasks/segmenter.py (45)
18. paddlers/utils/checkpoint.py (20)
19. test_tipc/README.md (1)
20. test_tipc/configs/seg/farseg/farseg_rsseg.yaml (11)
21. test_tipc/configs/seg/farseg/train_infer_python.txt (53)
22. test_tipc/docs/test_train_inference_python.md (2)
23. tests/deploy/test_predictor.py (8)
24. tests/rs_models/test_cd_models.py (29)
25. tests/rs_models/test_seg_models.py (6)
26. tools/prepare_dataset/common.py (2)
27. tutorials/train/README.md (1)
28. tutorials/train/semantic_segmentation/deeplabv3p.py (2)
29. tutorials/train/semantic_segmentation/farseg.py (94)
30. tutorials/train/semantic_segmentation/unet.py (2)

@@ -64,30 +64,35 @@ PaddleRS has the following five key features:

<li>ResNet50-vd</li>
<li>MobileNetV3</li>
<li>HRNet</li>
<li>...</li>
</ul>
<b>Semantic Segmentation</b><br>
<ul>
-<li>UNet</li>
<li>FarSeg</li>
+<li>UNet</li>
<li>DeepLab V3+</li>
<li>...</li>
</ul>
<b>Object Detection</b><br>
<ul>
<li>PP-YOLO</li>
<li>Faster R-CNN</li>
<li>YOLOv3</li>
<li>...</li>
</ul>
<b>Image Restoration</b><br>
<ul>
<li>DRNet</li>
<li>LESRCNN</li>
<li>ESRGAN</li>
<li>...</li>
</ul>
<b>Change Detection</b><br>
<ul>
<li>DSIFN</li>
<li>STANet</li>
<li>ChangeStar</li>
<li>...</li>
</ul>
</td>
<td>

@@ -114,6 +119,7 @@ PaddleRS has the following five key features:

<li>ReduceDim</li>
<li>SelectBand</li>
<li>RandomSwap</li>
<li>...</li>
</ul>
</td>
<td>

@@ -122,12 +128,15 @@ PaddleRS has the following five key features:

<li>coco to mask</li>
<li>mask to shpfile</li>
<li>mask to geojson</li>
<li>...</li>
</ul>
<b>Data Preprocessing</b><br>
<ul>
<li>Image slicing</li>
<li>Image registration</li>
<li>Band selection</li>
<li>Radiometric correction</li>
<li>...</li>
</ul>
</td>
<td>

@@ -135,7 +144,7 @@ PaddleRS has the following five key features:

<ul>
<li>To be updated</li>
</ul>
-<b>Remote Sensing Semantic Segmentation</b><br>
+<b>Remote Sensing Image Segmentation</b><br>
<ul>
<li>To be updated</li>
</ul>

@@ -147,7 +156,7 @@ PaddleRS has the following five key features:

<ul>
<li>To be updated</li>
</ul>
-<b>Remote Sensing Image Super-Resolution</b><br>
+<b>Remote Sensing Image Restoration</b><br>
<ul>
<li>To be updated</li>
</ul>

@@ -191,8 +200,9 @@ The key parts of the PaddleRS directory tree are as follows:

* [Interactive annotation tool EISeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/EISeg)
* [Remote sensing image processing toolkit](./docs/data/tools.md)
* Component overview
-* [Data preprocessing / data augmentation](./docs/intro/transforms.md)
+* [Dataset preprocessing scripts](./docs/intro/data_prep.md)
* [Model zoo](./docs/intro/model_zoo.md)
+* [Data transform operators](./docs/intro/transforms.md)
* Model training
* [Model training API reference](./docs/apis/train.md)
* Model deployment

@@ -86,6 +86,22 @@

### Image restoration dataset `ResDataset`

`ResDataset` is defined in: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/datasets/res_dataset.py

Its initialization parameters are as follows:

|Parameter|Type|Description|Default|
|-------|----|--------|-----|
|`data_dir`|`str`|Directory that stores the dataset.||
|`file_list`|`str`|Path of the file list. A file list is a text file in which each line contains the path information of one sample. See below for the specific requirements `ResDataset` imposes on the file list.||
|`transforms`|`paddlers.transforms.Compose`|Data transform operators applied to the input data.||
|`num_workers`|`int` \| `str`|Number of auxiliary processes used when loading data. If set to `'auto'`, the number of processes is determined as follows: when the number of CPU cores exceeds 16, 8 data-loading workers are used; otherwise, half the number of CPU cores is used.|`'auto'`|
|`shuffle`|`bool`|Whether to randomly shuffle the samples in the dataset.|`False`|
|`sr_factor`|`int` \| `None`|For super-resolution reconstruction tasks, the super-resolution factor; for all other tasks, `None`.|`None`|

`ResDataset` imposes the following requirement on the file list:

- Each line of the file list should contain 2 space-separated items, giving, in order, the path of the input image (e.g. the low-resolution image in a super-resolution task) relative to `data_dir` and the path of the target image (e.g. the high-resolution image in a super-resolution task) relative to `data_dir`.
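To make the file-list format above concrete, here is a minimal, hypothetical construction sketch. The directory layout, file names, and the transform pipeline (including `ArrangeRestorer`, assumed here to be the restoration counterpart of `ArrangeSegmenter`) are illustrative assumptions, not content of this commit:

```python
import paddlers as pdrs
from paddlers import transforms as T

# Hypothetical layout: each line of train.txt reads "lr/0001.png hr/0001.png",
# i.e. the input (low-resolution) and target (high-resolution) paths
# relative to data_dir, separated by a space.
train_transforms = T.Compose([
    T.DecodeImg(),
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    T.ArrangeRestorer('train'),  # assumption: see lead-in above
])

train_dataset = pdrs.datasets.ResDataset(
    data_dir='./data/sr_demo/',
    file_list='./data/sr_demo/train.txt',
    transforms=train_transforms,
    num_workers=0,
    shuffle=True,
    sr_factor=4)  # None for restoration tasks other than super-resolution
```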
### Image segmentation dataset `SegDataset`

@@ -89,7 +89,28 @@ def predict(self, img_file, transforms=None):

#### `BaseRestorer.predict()`

Interface:

```python
def predict(self, img_file, transforms=None):
```

Input parameters:

|Parameter|Type|Description|Default|
|-------|----|--------|-----|
|`img_file`|`list[str\|np.ndarray]` \| `str` \| `np.ndarray`|Input image data (as a NumPy array) or path of the input image. To predict a batch of images at once, wrap their data or paths in a list (one element per image).||
|`transforms`|`paddlers.transforms.Compose` \| `None`|Data transform operators applied to the input data. If `None`, the operators used by the trainer during validation are used.|`None`|

Return format:

If `img_file` is a string or a NumPy array, the returned object is a dict containing the following key-value pair:

```
{"res_map": restored or reconstructed image predicted by the model (in [h, w, c] layout)}
```

If `img_file` is a list, the returned object is a list of the same length as `img_file`, where each element is a dict (with the key-value pair shown above), in the same order as the elements of `img_file`.
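As a usage sketch of the interface documented above (the `load_model()` call and the model path are assumptions; any trained `BaseRestorer` subclass object would behave the same way):

```python
import paddlers as pdrs

# Assumption: load_model() restores a previously trained BaseRestorer subclass
# from its save directory.
model = pdrs.tasks.load_model('./output/lesrcnn/best_model')

# Single input -> a dict with a "res_map" entry ([h, w, c] NumPy array).
result = model.predict('demo_lr.png')
print(result['res_map'].shape)

# List input -> a list of such dicts, in the same order as the inputs.
results = model.predict(['a_lr.png', 'b_lr.png'])
print(len(results))
```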
#### `BaseSegmenter.predict()`

@@ -194,7 +215,7 @@ def predict(self,

|Parameter|Type|Description|Default|
|-------|----|--------|-----|
-|`img_file`|`list[str\|tuple\|np.ndarray]` \| `str` \| `tuple` \| `np.ndarray`|For scene classification, object detection, and image segmentation tasks, this can be a single image path, a decoded image (a float32 NumPy array in [h, w, c] layout), or a list of image paths or np.ndarray objects; for change detection tasks, it can be a 2-tuple of image paths (for the pre- and post-change images), a 2-tuple of two decoded images, or a list of either kind of 2-tuple.||
+|`img_file`|`list[str\|tuple\|np.ndarray]` \| `str` \| `tuple` \| `np.ndarray`|For scene classification, object detection, image restoration, and image segmentation tasks, this can be a single image path, a decoded image (a float32 NumPy array in [h, w, c] layout), or a list of image paths or np.ndarray objects; for change detection tasks, it can be a 2-tuple of image paths (for the pre- and post-change images), a 2-tuple of two decoded images, or a list of either kind of 2-tuple.||
|`topk`|`int`|Used when predicting with scene classification models; the `topk` classes with the highest predicted probabilities are taken as the final result.|`1`|
|`transforms`|`paddlers.transforms.Compose`\|`None`|Data transform operators applied to the input data. If `None`, the operators read from `model.yml` are used.|`None`|
|`warmup_iters`|`int`|Number of warm-up rounds used to assess model inference and pre/post-processing speed. If greater than 1, inference is repeated `warmup_iters` times in advance before the formal prediction and its speed assessment begin.|`0`|
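For context, a minimal sketch of how these parameters might be passed to the exported-model `predict()` documented here. The `Predictor` constructor call and the model directory are assumptions and are not part of the table above:

```python
from paddlers.deploy import Predictor

# Assumption: './inference_model' contains a model exported for deployment.
predictor = Predictor('./inference_model')

# Scene classification / detection / restoration / segmentation: a path or array.
out = predictor.predict('demo.png', warmup_iters=0)

# Change detection: a 2-tuple (pre-change, post-change), or a list of such tuples.
out_cd = predictor.predict(('t1.png', 't2.png'))
```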

@@ -1,6 +1,6 @@

# PaddleRS Training API Reference

-**Trainers** encapsulate model training, validation, quantization, and dynamic-graph inference logic, and are defined in the files under `paddlers/tasks/`. For ease of use, PaddleRS provides, for every supported model, a trainer inheriting from the parent class [`BaseModel`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/base.py) that exposes several APIs. The trainer types for change detection, scene classification, image segmentation, and object detection tasks are `BaseChangeDetector`, `BaseClassifier`, `BaseDetector`, and `BaseSegmenter`, respectively. This document describes the trainers' initialization functions and their `train()` and `evaluate()` APIs.
+**Trainers** encapsulate model training, validation, quantization, and dynamic-graph inference logic, and are defined in the files under `paddlers/tasks/`. For ease of use, PaddleRS provides, for every supported model, a trainer inheriting from the parent class [`BaseModel`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/base.py) that exposes several APIs. The trainer types for change detection, scene classification, object detection, image restoration, and image segmentation tasks are `BaseChangeDetector`, `BaseClassifier`, `BaseDetector`, `BaseRestorer`, and `BaseSegmenter`, respectively. This document describes the trainers' initialization functions and their `train()` and `evaluate()` APIs.

## Initializing a trainer

@@ -10,27 +10,33 @@

- The `num_classes`, `use_mixed_loss`, and `in_channels` parameters are generally supported, giving the number of model output classes, whether to use the built-in mixed loss, and the number of input channels, respectively. Some subclasses, such as `DSIFN`, do not yet support setting `in_channels`.
- The `use_mixed_loss` parameter will be deprecated in the future, so its use is not recommended.
- The loss function used during training can be specified through the `losses` parameter. `losses` must be a dict in which the values of the `'types'` key and the `'coef'` key are two equal-length lists giving the loss function objects (callables) and the loss weights, respectively. For example, `losses={'types': [LossType1(), LossType2()], 'coef': [1.0, 0.5]}` is equivalent, during training, to computing the loss `1.0*LossType1()(logits, labels)+0.5*LossType2()(logits, labels)`, where `logits` and `labels` are the model output and the ground-truth labels. A construction sketch follows this list.
- Different subclasses support model-specific input parameters. For details, please refer to the [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/cd) and the [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py).
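A minimal sketch of building such a `losses` dict. The particular loss classes and the `BIT` trainer are only illustrative assumptions; `seg_losses` is the module imported by the change detection trainer elsewhere in this commit:

```python
import paddlers as pdrs
from paddlers.models import seg_losses  # same import used by the CD trainer

# 'types' and 'coef' must be equal-length lists; during training the total loss is
# 1.0 * CrossEntropyLoss()(logits, labels) + 0.5 * DiceLoss()(logits, labels).
losses = {
    'types': [seg_losses.CrossEntropyLoss(), seg_losses.DiceLoss()],
    'coef': [1.0, 0.5],
}

# BIT is used here only as a representative BaseChangeDetector subclass.
model = pdrs.tasks.cd.BIT(num_classes=2, losses=losses)
```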
### Initializing a `BaseClassifier` subclass object

- The `num_classes` and `use_mixed_loss` parameters are generally supported, giving the number of model output classes and whether to use the built-in mixed loss, respectively.
- The `use_mixed_loss` parameter will be deprecated in the future, so its use is not recommended.
- The loss function used during training can be specified through the `losses` parameter; the argument passed in must be an object of type `paddlers.models.clas_losses.CombinedLoss`.
- Different subclasses support model-specific input parameters. For details, please refer to the [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/clas) and the [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/classifier.py).

### Initializing a `BaseDetector` subclass object

- The `num_classes` and `backbone` parameters are generally supported, giving the number of model output classes and the type of backbone network used. Compared with other tasks, object detection trainers support a larger set of initialization parameters, covering the network structure, loss functions, post-processing strategies, and so on.
- Unlike the segmentation, classification, and change detection tasks, detection tasks do not support specifying the loss function through a `losses` parameter. However, for some trainers such as `PPYOLO`, the loss function can be customized through parameters such as `use_iou_loss`.
- Different subclasses support model-specific input parameters. For details, please refer to the [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/det) and the [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/object_detector.py).

### Initializing a `BaseRestorer` subclass object

- The `sr_factor` parameter is generally supported, giving the super-resolution factor; for models that do not support super-resolution reconstruction, set `sr_factor` to `None`.
- The loss function used during training can be specified through the `losses` parameter; the argument passed in must be a callable object or a dict. A manually specified `losses` must have the same format as the return value of the subclass's `default_loss()` method.
- Different subclasses support model-specific input parameters. For details, please refer to the [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/res) and the [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/restorer.py).
### Initializing a `BaseSegmenter` subclass object

- The `in_channels`, `num_classes`, and `use_mixed_loss` parameters are generally supported, giving the number of input channels, the number of output classes, and whether to use the built-in mixed loss. Some models, such as `FarSeg`, do not yet support setting `in_channels`.
- The `use_mixed_loss` parameter will be deprecated in the future, so its use is not recommended.
- The loss function used during training can be specified through the `losses` parameter. `losses` must be a dict in which the values of the `'types'` key and the `'coef'` key are two equal-length lists giving the loss function objects (callables) and the loss weights, respectively. For example, `losses={'types': [LossType1(), LossType2()], 'coef': [1.0, 0.5]}` is equivalent, during training, to computing the loss `1.0*LossType1()(logits, labels)+0.5*LossType2()(logits, labels)`, where `logits` and `labels` are the model output and the ground-truth labels.
- Different subclasses support model-specific input parameters. For details, please refer to the [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/seg) and the [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmentor.py).

## `train()`

@@ -176,6 +182,46 @@ def train(self,

### `BaseRestorer.train()`
Interface:

```python
def train(self,
          num_epochs,
          train_dataset,
          train_batch_size=2,
          eval_dataset=None,
          optimizer=None,
          save_interval_epochs=1,
          log_interval_steps=2,
          save_dir='output',
          pretrain_weights='CITYSCAPES',
          learning_rate=0.01,
          lr_decay_power=0.9,
          early_stop=False,
          early_stop_patience=5,
          use_vdl=True,
          resume_checkpoint=None):
```

The parameters are as follows:

|Parameter|Type|Description|Default|
|-------|----|--------|-----|
|`num_epochs`|`int`|Number of epochs to train.||
|`train_dataset`|`paddlers.datasets.ResDataset`|Training dataset.||
|`train_batch_size`|`int`|Batch size used during training.|`2`|
|`eval_dataset`|`paddlers.datasets.ResDataset` \| `None`|Validation dataset.|`None`|
|`optimizer`|`paddle.optimizer.Optimizer` \| `None`|Optimizer used during training. If `None`, the default optimizer is used.|`None`|
|`save_interval_epochs`|`int`|Interval, in epochs, at which the model is saved during training.|`1`|
|`log_interval_steps`|`int`|Interval, in steps (i.e. iterations), at which logs are printed during training.|`2`|
|`save_dir`|`str`|Directory in which to save the model.|`'output'`|
|`pretrain_weights`|`str` \| `None`|Name/path of the pretrained weights. If `None`, no pretrained weights are used.|`'CITYSCAPES'`|
|`learning_rate`|`float`|Learning rate used during training, for the default optimizer.|`0.01`|
|`lr_decay_power`|`float`|Learning rate decay coefficient, for the default optimizer.|`0.9`|
|`early_stop`|`bool`|Whether to enable the early stopping policy during training.|`False`|
|`early_stop_patience`|`int`|`patience` parameter used when early stopping is enabled (see [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py)).|`5`|
|`use_vdl`|`bool`|Whether to enable VisualDL logging.|`True`|
|`resume_checkpoint`|`str` \| `None`|Path of a checkpoint. PaddleRS supports resuming training from a checkpoint (which contains the model and optimizer weights stored during an earlier run), but note that `resume_checkpoint` and `pretrain_weights` must not both be set to values other than `None` at the same time.|`None`|
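A minimal, hypothetical call sketch tying the table above together. The `LESRCNN` trainer name and all hyperparameter values are assumptions; `train_dataset` and `eval_dataset` are `ResDataset` objects as documented in data.md:

```python
import paddlers as pdrs

# Assumption: LESRCNN is available as a BaseRestorer subclass trainer.
model = pdrs.tasks.res.LESRCNN(sr_factor=4)

model.train(
    num_epochs=10,
    train_dataset=train_dataset,
    train_batch_size=2,
    eval_dataset=eval_dataset,
    save_dir='./output/lesrcnn/',
    pretrain_weights=None,  # do not load pretrained weights in this sketch
    learning_rate=0.001,
    use_vdl=True)
```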
### `BaseSegmenter.train()`

@@ -280,7 +326,7 @@ def evaluate(self, eval_dataset, batch_size=1, return_details=False):

```
{"top1": top-1 accuracy,
-"top5": `top-5 accuracy}
+"top5": top-5 accuracy}
```

### `BaseDetector.evaluate()`

@@ -320,6 +366,26 @@ def evaluate(self,

### `BaseRestorer.evaluate()`
Interface:

```python
def evaluate(self, eval_dataset, batch_size=1, return_details=False):
```

Input parameters:

|Parameter|Type|Description|Default|
|-------|----|--------|-----|
|`eval_dataset`|`paddlers.datasets.ResDataset`|Evaluation dataset.||
|`batch_size`|`int`|Batch size used during evaluation (for multi-GPU training, this is the total batch size over all devices).|`1`|
|`return_details`|`bool`|*Do not set this parameter manually in the current version.*|`False`|

The output is a `collections.OrderedDict` object containing the following key-value pairs:

```
{"psnr": PSNR metric,
 "ssim": SSIM metric}
```
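Continuing the hypothetical restorer sketch above, the returned `OrderedDict` can be consumed as follows:

```python
# model and eval_dataset come from the earlier BaseRestorer.train() sketch.
eval_metrics = model.evaluate(eval_dataset, batch_size=1)
print('PSNR: {:.2f}, SSIM: {:.4f}'.format(
    eval_metrics['psnr'], eval_metrics['ssim']))
```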
### `BaseSegmenter.evaluate()`

@@ -8,8 +8,9 @@ PaddleRS provides a rich set of remote sensing image processing tools in the `tools` directory, including

- `match.py`: registers (aligns) two images.
- `split.py`: slices large-format images into tiles.
- `coco_tools/`: a collection of COCO tools for processing and gathering statistics on COCO-format annotation files.
- `prepare_dataset/`: a collection of dataset preprocessing scripts.

-## Usage examples
+## Usage instructions

First, make sure you have downloaded PaddleRS to your local machine. Enter the `tools` directory:

@@ -101,3 +102,24 @@ python split.py --image_path {path of input image} [--mask_path {path of ground-truth labels

- `json_Merge.py`: merges multiple JSON files into one.

For detailed usage, see the [coco_tools documentation](coco_tools.md).

### prepare_dataset

The `prepare_dataset` directory contains a series of data preprocessing scripts. They are mainly used to preprocess open-source remote sensing datasets that have been downloaded locally, so that they conform to the standards PaddleRS expects for training, validation, and testing.

Before running a script, you can use the `--help` option to get help information. For example:

```shell
python prepare_dataset/prepare_levircd.py --help
```

The command-line options commonly found in these scripts are listed below:

- `--in_dataset_dir`: Path of the downloaded, original dataset. Example: `--in_dataset_dir downloads/LEVIR-CD`.
- `--out_dataset_dir`: Path in which to store the processed dataset. Example: `--out_dataset_dir data/levircd`.
- `--crop_size`: For datasets that support image tiling, the size of the image tiles to crop. Example: `--crop_size 256`.
- `--crop_stride`: For datasets that support image tiling, the stride of the sliding window used for cropping. Example: `--crop_stride 256`.
- `--seed`: Random seed. It can be used to fix the pseudo-random number sequence and thereby obtain a reproducible dataset split. Example: `--seed 1919810`.
- `--ratios`: For datasets that support random subset splitting, the sample ratios of the subsets to split. Example: `--ratios 0.7 0.2 0.1`.

You can check which datasets PaddleRS provides preprocessing scripts for in [this document](https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/intro/data_prep.md).

@@ -70,6 +70,15 @@ Args:

4. Add the class name of the new trainer to the global variable `__all__`.

Note that for image restoration tasks, both the forward and the backward logic of the model are implemented in the trainer definition. For models such as GANs that use multiple networks, please follow these conventions when writing the trainer (a hypothetical skeleton follows the list):

- Override the `build_net()` method and use `GANAdapter` to maintain all networks. A `GANAdapter` object takes two lists as input when constructed: the first list contains all generators, with the main generator as the first element; the second list contains all discriminators.
- Override the `default_loss()` method to build the loss functions. If multiple loss functions are needed during training, it is recommended to organize them as a dict.
- Override the `default_optimizer()` method to build one or more optimizers. When `build_net()` returns an object of type `GANAdapter`, the `parameters` argument is a dict: `parameters['params_g']` is a list containing the state dicts of the generators, in order, and `parameters['params_d']` is a list containing the state dicts of the discriminators, in order. If multiple optimizers are built, wrap them with `OptimizerAdapter` when returning.
- Override the `run_gan()` method, which takes the four arguments `net`, `inputs`, `mode`, and `gan_mode`, to perform one of the sub-tasks of the training process, e.g. the forward computation of a generator or of a discriminator.
- Override the `train_step()` method and implement the concrete logic of one training iteration in it. The usual approach is to call `run_gan()` repeatedly, constructing different `inputs` and working in different `gan_mode` settings as needed on each call, extracting the useful fields (such as the individual losses) from the `outputs` dict returned by each call, and aggregating them into the final result.

For a concrete example of a GAN trainer, see `ESRGAN`.
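Below is a deliberately simplified, hypothetical skeleton that only illustrates the conventions listed above. The placeholder networks, optimizer settings, and the exact method signatures are assumptions; `ESRGAN` in `paddlers/tasks/restorer.py` remains the authoritative reference:

```python
import paddle
import paddle.nn as nn

from paddlers.tasks.restorer import BaseRestorer
from paddlers.tasks.utils.res_adapters import GANAdapter, OptimizerAdapter


class _ToyGenerator(nn.Layer):
    """Placeholder generator; a real trainer would build the actual network."""

    def __init__(self, in_ch=3):
        super().__init__()
        self.conv = nn.Conv2D(in_ch, 3, 3, padding=1)

    def forward(self, x):
        return self.conv(x)


class _ToyDiscriminator(nn.Layer):
    """Placeholder discriminator."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2D(3, 1, 3, padding=1)

    def forward(self, x):
        return self.conv(x)


class ToyGANRestorer(BaseRestorer):
    """Skeleton only; the GAN-specific bodies are intentionally left unimplemented."""

    def build_net(self, **params):
        # GANAdapter takes two lists: all generators (main generator first)
        # and all discriminators.
        return GANAdapter([_ToyGenerator()], [_ToyDiscriminator()])

    def default_loss(self):
        # Multiple losses organized as a dict, as recommended above.
        return {'pixel': nn.L1Loss(), 'gan': nn.BCEWithLogitsLoss()}

    def default_optimizer(self, parameters, *args, **kwargs):
        # With a GANAdapter, `parameters` is a dict whose 'params_g' and
        # 'params_d' lists describe the generators and discriminators in order.
        opt_g = paddle.optimizer.Adam(1e-4, parameters=parameters['params_g'][0])
        opt_d = paddle.optimizer.Adam(1e-4, parameters=parameters['params_d'][0])
        # Multiple optimizers are wrapped with OptimizerAdapter before returning.
        return OptimizerAdapter(opt_g, opt_d)

    def run_gan(self, net, inputs, mode, gan_mode):
        # One sub-task per call, e.g. the generator or discriminator forward pass.
        raise NotImplementedError

    def train_step(self, step, data, net):
        # Would call run_gan() several times with different `inputs`/`gan_mode`
        # values and aggregate the loss terms from the returned `outputs` dicts.
        raise NotImplementedError
```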
## 2 Adding a data preprocessing / data augmentation function or operator

### 2.1 Adding a data preprocessing / data augmentation function

Binary file changed (not shown): docs/images/whole_picture.png. Size before: 229 KiB; after: 225 KiB.

@@ -0,0 +1,9 @@

# Dataset Preprocessing Scripts

## Dataset preprocessing scripts currently provided by PaddleRS

| Task | Dataset | Dataset link | Preprocessing script |
|-----|-----------|----------|----------|
| Change detection | LEVIR-CD | https://justchenhao.github.io/LEVIR/ | [prepare_levircd.py](https://github.com/PaddlePaddle/PaddleRS/blob/develop/tools/prepare_dataset/prepare_levircd.py) |
| Change detection | Season-varying | https://paperswithcode.com/dataset/cdd-dataset-season-varying | [prepare_svcd.py](https://github.com/PaddlePaddle/PaddleRS/blob/develop/tools/prepare_dataset/prepare_svcd.py) |
| Object detection | RSOD | https://github.com/RSIA-LIESMARS-WHU/RSOD-Dataset- | [prepare_rsod.py](https://github.com/PaddlePaddle/PaddleRS/blob/develop/tools/prepare_dataset/prepare_rsod.py) |

@@ -1,4 +1,4 @@

-# Data Preprocessing / Data Augmentation
+# Data Transform Operators

## Data transform operators currently supported by PaddleRS

@@ -8,9 +8,31 @@ PaddleRS provides rich examples spanning scientific research to industrial applications, in the hope of helping remote sensing

## 2 Community-contributed examples

-[AI Studio](https://aistudio.baidu.com/aistudio/index) is an AI learning and hands-on training community built on Baidu's deep learning platform PaddlePaddle, offering an online programming environment, free GPU compute, a large collection of open-source algorithms, and open datasets to help developers quickly create and deploy models. You can explore more ways to use PaddleRS on AI Studio:
-[PaddleRS projects on AI Studio](https://aistudio.baidu.com/aistudio/projectoverview/public?kw=PaddleRS)
+### 2.1 Remote sensing interpretation platforms built on PaddleRS

#### 小桨神瞳

<p>
<img src="https://user-images.githubusercontent.com/21275753/188320924-99c2915e-7371-4dc6-a50e-92fe11fc05a6.gif", width="400", hspace="50"> <img src="https://user-images.githubusercontent.com/21275753/188320957-f82348ee-c4cf-4799-b006-8389cb5e9380.gif", width="400">
</p>

- Author: 白菜
- Code repository: https://github.com/CrazyBoyM/webRS
- Demo video: https://www.bilibili.com/video/BV1W14y1s7fs?vd_source=0de109a09b98176090b8aa3295a45bb6

#### Intelligent remote sensing image interpretation platform

<p>
<img src="https://user-images.githubusercontent.com/21275753/187441111-e992e0ff-93d1-4fb3-90b2-79ff698db8d8.gif", width="400", hspace="50"> <img src="https://user-images.githubusercontent.com/21275753/187441219-08668c78-8426-4e19-ad7d-d1a22e1def49.gif", width="400">
</p>

- Author: HHU-河马海牛队
- Code repository: https://github.com/terayco/Intelligent-RS-System
- Demo video: https://www.bilibili.com/video/BV1eY4y1u7Eq/?vd_source=75a73fc15a4e8b25195728ee93a5b322

### 2.2 PaddleRS projects on AI Studio

[AI Studio](https://aistudio.baidu.com/aistudio/index) is an AI learning and hands-on training community built on Baidu's deep learning platform PaddlePaddle, offering an online programming environment, free GPU compute, a large collection of open-source algorithms, and open datasets to help developers quickly create and deploy models. You can [explore more ways to use PaddleRS on AI Studio](https://aistudio.baidu.com/aistudio/projectoverview/public?kw=PaddleRS).

This document collects some high-quality projects contributed by open-source enthusiasts:

@@ -56,7 +56,7 @@ class BIT(nn.Layer):

            Default: 2.
        enc_with_pos (bool, optional): Whether to add learned positional embedding to the input feature sequence of the
            encoder. Default: True.
-        enc_depth (int, optional): Number of attention blocks used in the encoder. Default: 1
+        enc_depth (int, optional): Number of attention blocks used in the encoder. Default: 1.
        enc_head_dim (int, optional): Embedding dimension of each encoder head. Default: 64.
        dec_depth (int, optional): Number of attention blocks used in the decoder. Default: 8.
        dec_head_dim (int, optional): Embedding dimension of each decoder head. Default: 8.

@@ -11,11 +11,10 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""
-This code is based on https://github.com/Z-Zheng/FarSeg
-Ths copyright of Z-Zheng/FarSeg is as follows:
-Apache License [see LICENSE for details]
-"""
+# This code is based on https://github.com/Z-Zheng/FarSeg
+# The copyright of Z-Zheng/FarSeg is as follows:
+# Apache License (see https://github.com/Z-Zheng/FarSeg/blob/master/LICENSE for details).

 import math

@@ -164,7 +163,7 @@ class SceneRelation(nn.Layer):
         return refined_feats


-class AssymetricDecoder(nn.Layer):
+class AsymmetricDecoder(nn.Layer):
     def __init__(self,
                  in_channels,
                  out_channels,

@@ -172,7 +171,7 @@ class AssymetricDecoder(nn.Layer):
                  out_feat_output_stride=4,
                  norm_fn=nn.BatchNorm2D,
                  num_groups_gn=None):
-        super(AssymetricDecoder, self).__init__()
+        super(AsymmetricDecoder, self).__init__()
         if norm_fn == nn.BatchNorm2D:
             norm_fn_args = dict(num_features=out_channels)
         elif norm_fn == nn.GroupNorm:

@@ -215,9 +214,12 @@ class AssymetricDecoder(nn.Layer):
 class ResNet50Encoder(nn.Layer):
-    def __init__(self, pretrained=True):
+    def __init__(self, in_ch=3, pretrained=True):
         super(ResNet50Encoder, self).__init__()
         self.resnet = resnet50(pretrained=pretrained)
+        if in_ch != 3:
+            self.resnet.conv1 = nn.Conv2D(
+                in_ch, 64, kernel_size=7, stride=2, padding=3, bias_attr=False)

     def forward(self, inputs):
         x = inputs

@@ -237,22 +239,32 @@ class FarSeg(nn.Layer):
     The FarSeg implementation based on PaddlePaddle.
     The original article refers to
-    Zheng, Zhuo, et al. "Foreground-Aware Relation Network for Geospatial Object
-    Segmentation in High Spatial Resolution Remote Sensing Imagery"
+    Zheng, Zhuo, et al. "Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution
+    Remote Sensing Imagery"
     (https://openaccess.thecvf.com/content_CVPR_2020/papers/Zheng_Foreground-Aware_Relation_Network_for_Geospatial_Object_Segmentation_in_High_Spatial_CVPR_2020_paper.pdf)

+    Args:
+        in_channels (int, optional): Number of bands of the input images. Default: 3.
+        num_classes (int, optional): Number of target classes. Default: 16.
+        fpn_ch_list (list[int]|tuple[int], optional): Channel list of the FPN. Default: (256, 512, 1024, 2048).
+        mid_ch (int, optional): Output channels of the FPN. Default: 256.
+        out_ch (int, optional): Output channels of the decoder. Default: 128.
+        sr_ch_list (list[int]|tuple[int], optional): Channel list of the foreground-scene relation module. Default: (256, 256, 256, 256).
+        pretrained_encoder (bool, optional): Whether to use a pretrained encoder. Default: True.
     """

     def __init__(self,
+                 in_channels=3,
                  num_classes=16,
                  fpn_ch_list=(256, 512, 1024, 2048),
                  mid_ch=256,
                  out_ch=128,
                  sr_ch_list=(256, 256, 256, 256),
-                 encoder_pretrained=True):
+                 pretrained_encoder=True):
         super(FarSeg, self).__init__()
-        self.en = ResNet50Encoder(encoder_pretrained)
+        self.en = ResNet50Encoder(in_channels, pretrained_encoder)
         self.fpn = FPN(in_channels_list=fpn_ch_list, out_channels=mid_ch)
-        self.decoder = AssymetricDecoder(
+        self.decoder = AsymmetricDecoder(
             in_channels=mid_ch, out_channels=out_ch)
         self.cls_pred_conv = nn.Conv2D(out_ch, num_classes, 1)
         self.upsample4x_op = nn.UpsamplingBilinear2D(scale_factor=4)

@@ -273,5 +285,4 @@ class FarSeg(nn.Layer):
         final_feat = self.decoder(refined_fpn_feat_list)
         cls_pred = self.cls_pred_conv(final_feat)
         cls_pred = self.upsample4x_op(cls_pred)
-        cls_pred = F.softmax(cls_pred, axis=1)
         return [cls_pred]

@@ -31,7 +31,7 @@ import paddlers.utils.logging as logging
 from paddlers.models import seg_losses
 from paddlers.transforms import Resize, decode_image
 from paddlers.utils import get_single_card_bs
-from paddlers.utils.checkpoint import seg_pretrain_weights_dict
+from paddlers.utils.checkpoint import cd_pretrain_weights_dict
 from .base import BaseModel
 from .utils import seg_metrics as metrics
 from .utils.infer_nets import InferCDNet

@@ -276,7 +276,7 @@ class BaseChangeDetector(BaseModel):
                 exit=True)
         if pretrain_weights is not None and resume_checkpoint is not None:
             logging.error(
-                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                "`pretrain_weights` and `resume_checkpoint` cannot be set simultaneously.",
                 exit=True)
         self.labels = train_dataset.labels
         if self.losses is None:

@@ -290,22 +290,29 @@ class BaseChangeDetector(BaseModel):
         else:
             self.optimizer = optimizer

-        if pretrain_weights is not None and not osp.exists(pretrain_weights):
-            if pretrain_weights not in seg_pretrain_weights_dict[
-                    self.model_name]:
-                logging.warning(
-                    "Path of pretrain_weights('{}') does not exist!".format(
-                        pretrain_weights))
-                logging.warning("Pretrain_weights is forcibly set to '{}'. "
-                                "If don't want to use pretrain weights, "
-                                "set pretrain_weights to be None.".format(
-                                    seg_pretrain_weights_dict[self.model_name][
-                                        0]))
-                pretrain_weights = seg_pretrain_weights_dict[self.model_name][0]
-        elif pretrain_weights is not None and osp.exists(pretrain_weights):
-            if osp.splitext(pretrain_weights)[-1] != '.pdparams':
-                logging.error(
-                    "Invalid pretrain weights. Please specify a '.pdparams' file.",
-                    exit=True)
+        if pretrain_weights is not None:
+            if not osp.exists(pretrain_weights):
+                if self.model_name not in cd_pretrain_weights_dict:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = None
+                elif pretrain_weights not in cd_pretrain_weights_dict[
+                        self.model_name]:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = cd_pretrain_weights_dict[
+                        self.model_name][0]
+                    logging.warning(
+                        "`pretrain_weights` is forcibly set to '{}'. "
+                        "If you don't want to use pretrained weights, "
+                        "please set `pretrain_weights` to None.".format(
+                            pretrain_weights))
+            else:
+                if osp.splitext(pretrain_weights)[-1] != '.pdparams':
+                    logging.error(
+                        "Invalid pretrained weights. Please specify a .pdparams file.",
+                        exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
         is_backbone_weights = pretrain_weights == 'IMAGENET'

@@ -410,18 +417,18 @@ class BaseChangeDetector(BaseModel):
                 key-value pairs:
                 For binary change detection (number of classes == 2), the key-value
                 pairs are like:
-                {"iou": `intersection over union for the change class`,
-                "f1": `F1 score for the change class`,
-                "oacc": `overall accuracy`,
-                "kappa": ` kappa coefficient`}.
+                {"iou": intersection over union for the change class,
+                "f1": F1 score for the change class,
+                "oacc": overall accuracy,
+                "kappa": kappa coefficient}.
                 For multi-class change detection (number of classes > 2), the key-value
                 pairs are like:
-                {"miou": `mean intersection over union`,
-                "category_iou": `category-wise mean intersection over union`,
-                "oacc": `overall accuracy`,
-                "category_acc": `category-wise accuracy`,
-                "kappa": ` kappa coefficient`,
-                "category_F1-score": `F1 score`}.
+                {"miou": mean intersection over union,
+                "category_iou": category-wise mean intersection over union,
+                "oacc": overall accuracy,
+                "category_acc": category-wise accuracy,
+                "kappa": kappa coefficient,
+                "category_F1-score": F1 score}.
         """
         self._check_transforms(eval_dataset.transforms, 'eval')

@@ -246,7 +246,7 @@ class BaseClassifier(BaseModel):
                 exit=True)
         if pretrain_weights is not None and resume_checkpoint is not None:
             logging.error(
-                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                "`pretrain_weights` and `resume_checkpoint` cannot be set simultaneously.",
                 exit=True)
         self.labels = train_dataset.labels
         if self.losses is None:

@@ -262,25 +262,32 @@ class BaseClassifier(BaseModel):
         else:
             self.optimizer = optimizer

-        if pretrain_weights is not None and not osp.exists(pretrain_weights):
-            if pretrain_weights not in cls_pretrain_weights_dict[
-                    self.model_name]:
-                logging.warning(
-                    "Path of pretrain_weights('{}') does not exist!".format(
-                        pretrain_weights))
-                logging.warning("Pretrain_weights is forcibly set to '{}'. "
-                                "If don't want to use pretrain weights, "
-                                "set pretrain_weights to be None.".format(
-                                    cls_pretrain_weights_dict[self.model_name][
-                                        0]))
-                pretrain_weights = cls_pretrain_weights_dict[self.model_name][0]
-        elif pretrain_weights is not None and osp.exists(pretrain_weights):
-            if osp.splitext(pretrain_weights)[-1] != '.pdparams':
-                logging.error(
-                    "Invalid pretrain weights. Please specify a '.pdparams' file.",
-                    exit=True)
+        if pretrain_weights is not None:
+            if not osp.exists(pretrain_weights):
+                if self.model_name not in cls_pretrain_weights_dict:
+                    logging.warning(
+                        "Path of `pretrain_weights` ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = None
+                elif pretrain_weights not in cls_pretrain_weights_dict[
+                        self.model_name]:
+                    logging.warning(
+                        "Path of `pretrain_weights` ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = cls_pretrain_weights_dict[
+                        self.model_name][0]
+                    logging.warning(
+                        "`pretrain_weights` is forcibly set to '{}'. "
+                        "If you don't want to use pretrained weights, "
+                        "set `pretrain_weights` to None.".format(
+                            pretrain_weights))
+            else:
+                if osp.splitext(pretrain_weights)[-1] != '.pdparams':
+                    logging.error(
+                        "Invalid pretrained weights. Please specify a .pdparams file.",
+                        exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
-        is_backbone_weights = False  # pretrain_weights == 'IMAGENET'  # TODO: this is backbone
+        is_backbone_weights = False
         self.net_initialize(
             pretrain_weights=pretrain_weights,
             save_dir=pretrained_dir,

@@ -380,8 +387,8 @@ class BaseClassifier(BaseModel):
         Returns:
             If `return_details` is False, return collections.OrderedDict with
             key-value pairs:
-            {"top1": `acc of top1`,
-            "top5": `acc of top5`}.
+            {"top1": acc of top1,
+            "top5": acc of top5}.
         """
         self._check_transforms(eval_dataset.transforms, 'eval')

@@ -274,7 +274,7 @@ class BaseDetector(BaseModel):
                 exit=True)
         if pretrain_weights is not None and resume_checkpoint is not None:
             logging.error(
-                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                "`pretrain_weights` and `resume_checkpoint` cannot be set simultaneously.",
                 exit=True)
         if train_dataset.__class__.__name__ == 'VOCDetDataset':
             train_dataset.data_fields = {

@@ -323,22 +323,28 @@ class BaseDetector(BaseModel):
             self.optimizer = optimizer

         # Initiate weights
-        if pretrain_weights is not None and not osp.exists(pretrain_weights):
-            if pretrain_weights not in det_pretrain_weights_dict['_'.join(
-                    [self.model_name, self.backbone_name])]:
-                logging.warning(
-                    "Path of pretrain_weights('{}') does not exist!".format(
-                        pretrain_weights))
-                pretrain_weights = det_pretrain_weights_dict['_'.join(
-                    [self.model_name, self.backbone_name])][0]
-                logging.warning("Pretrain_weights is forcibly set to '{}'. "
-                                "If you don't want to use pretrain weights, "
-                                "set pretrain_weights to be None.".format(
-                                    pretrain_weights))
-        elif pretrain_weights is not None and osp.exists(pretrain_weights):
-            if osp.splitext(pretrain_weights)[-1] != '.pdparams':
-                logging.error(
-                    "Invalid pretrain weights. Please specify a '.pdparams' file.",
-                    exit=True)
+        if pretrain_weights is not None:
+            if not osp.exists(pretrain_weights):
+                key = '_'.join([self.model_name, self.backbone_name])
+                if key not in det_pretrain_weights_dict:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = None
+                elif pretrain_weights not in det_pretrain_weights_dict[key]:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = det_pretrain_weights_dict[key][0]
+                    logging.warning(
+                        "`pretrain_weights` is forcibly set to '{}'. "
+                        "If you don't want to use pretrained weights, "
+                        "please set `pretrain_weights` to None.".format(
+                            pretrain_weights))
+            else:
+                if osp.splitext(pretrain_weights)[-1] != '.pdparams':
+                    logging.error(
+                        "Invalid pretrained weights. Please specify a .pdparams file.",
+                        exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
         self.net_initialize(

@@ -477,7 +483,7 @@ class BaseDetector(BaseModel):
         Returns:
             If `return_details` is False, return collections.OrderedDict with key-value pairs:
-            {"bbox_mmap":`mean average precision (0.50, 11point)`}.
+            {"bbox_mmap": mean average precision (0.50, 11point)}.
         """
         if metric is None:

@@ -31,6 +31,7 @@ from paddlers.models import res_losses
 from paddlers.transforms import Resize, decode_image
 from paddlers.transforms.functions import calc_hr_shape
 from paddlers.utils import get_single_card_bs
+from paddlers.utils.checkpoint import res_pretrain_weights_dict
 from .base import BaseModel
 from .utils.res_adapters import GANAdapter, OptimizerAdapter
 from .utils.infer_nets import InferResNet

@@ -234,7 +235,7 @@ class BaseRestorer(BaseModel):
                 exit=True)
         if pretrain_weights is not None and resume_checkpoint is not None:
             logging.error(
-                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                "`pretrain_weights` and `resume_checkpoint` cannot be set simultaneously.",
                 exit=True)
         if self.losses is None:

@@ -256,13 +257,29 @@ class BaseRestorer(BaseModel):
         else:
             self.optimizer = optimizer

-        if pretrain_weights is not None and not osp.exists(pretrain_weights):
-            logging.warning("Path of pretrain_weights('{}') does not exist!".
-                            format(pretrain_weights))
-        elif pretrain_weights is not None and osp.exists(pretrain_weights):
-            if osp.splitext(pretrain_weights)[-1] != '.pdparams':
-                logging.error(
-                    "Invalid pretrain weights. Please specify a '.pdparams' file.",
-                    exit=True)
+        if pretrain_weights is not None:
+            if not osp.exists(pretrain_weights):
+                if self.model_name not in res_pretrain_weights_dict:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = None
+                elif pretrain_weights not in res_pretrain_weights_dict[
+                        self.model_name]:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = res_pretrain_weights_dict[
+                        self.model_name][0]
+                    logging.warning(
+                        "`pretrain_weights` is forcibly set to '{}'. "
+                        "If you don't want to use pretrained weights, "
+                        "please set `pretrain_weights` to None.".format(
+                            pretrain_weights))
+            else:
+                if osp.splitext(pretrain_weights)[-1] != '.pdparams':
+                    logging.error(
+                        "Invalid pretrained weights. Please specify a .pdparams file.",
+                        exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
         is_backbone_weights = pretrain_weights == 'IMAGENET'

@@ -365,8 +382,8 @@ class BaseRestorer(BaseModel):
         Returns:
             If `return_details` is False, return collections.OrderedDict with
             key-value pairs:
-            {"psnr": `peak signal-to-noise ratio`,
-            "ssim": `structural similarity`}.
+            {"psnr": peak signal-to-noise ratio,
+            "ssim": structural similarity}.
         """

@@ -268,7 +268,7 @@ class BaseSegmenter(BaseModel):
                 exit=True)
         if pretrain_weights is not None and resume_checkpoint is not None:
             logging.error(
-                "pretrain_weights and resume_checkpoint cannot be set simultaneously.",
+                "`pretrain_weights` and `resume_checkpoint` cannot be set simultaneously.",
                 exit=True)
         self.labels = train_dataset.labels
         if self.losses is None:

@@ -282,22 +282,29 @@ class BaseSegmenter(BaseModel):
         else:
             self.optimizer = optimizer

-        if pretrain_weights is not None and not osp.exists(pretrain_weights):
-            if pretrain_weights not in seg_pretrain_weights_dict[
-                    self.model_name]:
-                logging.warning(
-                    "Path of pretrain_weights('{}') does not exist!".format(
-                        pretrain_weights))
-                logging.warning("Pretrain_weights is forcibly set to '{}'. "
-                                "If don't want to use pretrain weights, "
-                                "set pretrain_weights to be None.".format(
-                                    seg_pretrain_weights_dict[self.model_name][
-                                        0]))
-                pretrain_weights = seg_pretrain_weights_dict[self.model_name][0]
-        elif pretrain_weights is not None and osp.exists(pretrain_weights):
-            if osp.splitext(pretrain_weights)[-1] != '.pdparams':
-                logging.error(
-                    "Invalid pretrain weights. Please specify a '.pdparams' file.",
-                    exit=True)
+        if pretrain_weights is not None:
+            if not osp.exists(pretrain_weights):
+                if self.model_name not in seg_pretrain_weights_dict:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = None
+                elif pretrain_weights not in seg_pretrain_weights_dict[
+                        self.model_name]:
+                    logging.warning(
+                        "Path of pretrained weights ('{}') does not exist!".
+                        format(pretrain_weights))
+                    pretrain_weights = seg_pretrain_weights_dict[
+                        self.model_name][0]
+                    logging.warning(
+                        "`pretrain_weights` is forcibly set to '{}'. "
+                        "If you don't want to use pretrained weights, "
+                        "please set `pretrain_weights` to None.".format(
+                            pretrain_weights))
+            else:
+                if osp.splitext(pretrain_weights)[-1] != '.pdparams':
+                    logging.error(
+                        "Invalid pretrained weights. Please specify a .pdparams file.",
+                        exit=True)
         pretrained_dir = osp.join(save_dir, 'pretrain')
         is_backbone_weights = pretrain_weights == 'IMAGENET'

@@ -399,12 +406,12 @@ class BaseSegmenter(BaseModel):
         Returns:
             collections.OrderedDict with key-value pairs:
-            {"miou": `mean intersection over union`,
-            "category_iou": `category-wise mean intersection over union`,
-            "oacc": `overall accuracy`,
-            "category_acc": `category-wise accuracy`,
-            "kappa": ` kappa coefficient`,
-            "category_F1-score": `F1 score`}.
+            {"miou": mean intersection over union,
+            "category_iou": category-wise mean intersection over union,
+            "oacc": overall accuracy,
+            "category_acc": category-wise accuracy,
+            "kappa": kappa coefficient,
+            "category_F1-score": F1 score}.
         """

@@ -980,6 +987,7 @@ class BiSeNetV2(BaseSegmenter):
 class FarSeg(BaseSegmenter):
     def __init__(self,
+                 in_channels=3,
                  num_classes=2,
                  use_mixed_loss=False,
                  losses=None,

@@ -989,4 +997,5 @@ class FarSeg(BaseSegmenter):
             num_classes=num_classes,
             use_mixed_loss=use_mixed_loss,
             losses=losses,
+            in_channels=in_channels,
             **params)

@@ -21,20 +21,14 @@ import paddle
 from . import logging
 from .download import download_and_decompress

+cd_pretrain_weights_dict = {}
+
 cls_pretrain_weights_dict = {
     'ResNet50_vd': ['IMAGENET'],
     'MobileNetV3_small_x1_0': ['IMAGENET'],
     'HRNet_W18_C': ['IMAGENET'],
 }

-seg_pretrain_weights_dict = {
-    'UNet': ['CITYSCAPES'],
-    'DeepLabV3P': ['CITYSCAPES', 'PascalVOC', 'IMAGENET'],
-    'FastSCNN': ['CITYSCAPES'],
-    'HRNet': ['CITYSCAPES', 'PascalVOC'],
-    'BiSeNetV2': ['CITYSCAPES']
-}
-
 det_pretrain_weights_dict = {
     'PicoDet_ESNet_s': ['COCO', 'IMAGENET'],
     'PicoDet_ESNet_m': ['COCO', 'IMAGENET'],

@@ -74,6 +68,16 @@ det_pretrain_weights_dict = {
     'MaskRCNN_ResNet101_vd_fpn': ['COCO', 'IMAGENET']
 }

+res_pretrain_weights_dict = {}
+
+seg_pretrain_weights_dict = {
+    'UNet': ['CITYSCAPES'],
+    'DeepLabV3P': ['CITYSCAPES', 'PascalVOC', 'IMAGENET'],
+    'FastSCNN': ['CITYSCAPES'],
+    'HRNet': ['CITYSCAPES', 'PascalVOC'],
+    'BiSeNetV2': ['CITYSCAPES']
+}
+
 cityscapes_weights = {
     'UNet_CITYSCAPES':
     'https://bj.bcebos.com/paddleseg/dygraph/cityscapes/unet_cityscapes_1024x512_160k/model.pdparams',

@@ -44,6 +44,7 @@

| Object detection | PP-YOLOv2 | Supported | - | - | - |
| Object detection | YOLOv3 | Supported | - | - | - |
| Image segmentation | DeepLab V3+ | Supported | - | - | - |
| Image segmentation | FarSeg | Supported | - | - | - |
| Image segmentation | UNet | Supported | - | - | - |

## 3 Overview of the test tools

@ -0,0 +1,11 @@
# Configurations of FarSeg with RSSeg dataset
_base_: ../_base_/rsseg.yaml
save_dir: ./test_tipc/output/seg/farseg/
model: !Node
type: FarSeg
args:
in_channels: 10
num_classes: 5

@ -0,0 +1,53 @@
===========================train_params===========================
model_name:seg:farseg
python:python
gpu_list:0|0,1
use_gpu:null|null
--precision:null
--num_epochs:lite_train_lite_infer=3|lite_train_whole_infer=3|whole_train_whole_infer=20
--save_dir:adaptive
--train_batch_size:lite_train_lite_infer=4|lite_train_whole_infer=4|whole_train_whole_infer=4
--model_path:null
--config:lite_train_lite_infer=./test_tipc/configs/seg/farseg/farseg_rsseg.yaml|lite_train_whole_infer=./test_tipc/configs/seg/farseg/farseg_rsseg.yaml|whole_train_whole_infer=./test_tipc/configs/seg/farseg/farseg_rsseg.yaml
train_model_name:best_model
null:null
##
trainer:norm
norm_train:test_tipc/run_task.py train seg
pact_train:null
fpgm_train:null
distill_train:null
null:null
null:null
##
===========================eval_params===========================
eval:null
null:null
##
===========================export_params===========================
--save_dir:adaptive
--model_dir:adaptive
--fixed_input_shape:[-1,10,512,512]
norm_export:deploy/export/export_model.py
quant_export:null
fpgm_export:null
distill_export:null
export1:null
export2:null
===========================infer_params===========================
infer_model:null
infer_export:null
infer_quant:False
inference:test_tipc/infer.py
--device:cpu|gpu
--enable_mkldnn:True
--cpu_threads:6
--batch_size:1
--use_trt:False
--precision:fp32
--model_dir:null
--config:null
--save_log_path:null
--benchmark:True
--model_name:farseg
null:null

@@ -31,6 +31,7 @@ The main program of the Linux GPU/CPU basic training and inference test is `test_train_inference_pytho

| Object detection | PP-YOLOv2 | Trains normally | Trains normally | mAP=59.37% |
| Object detection | YOLOv3 | Trains normally | Trains normally | mAP=47.33% |
| Image segmentation | DeepLab V3+ | Trains normally | Trains normally | mIoU=56.05% |
| Image segmentation | FarSeg | Trains normally | Trains normally | mIoU=49.58% |
| Image segmentation | UNet | Trains normally | Trains normally | mIoU=55.50% |

*Note: the reference prediction accuracy is the accuracy reported by single-GPU training in whole_train_whole_infer mode.*

@@ -61,6 +62,7 @@ The main program of the Linux GPU/CPU basic training and inference test is `test_train_inference_pytho

| Object detection | PP-YOLOv2 | Supported | Supported | 1 |
| Object detection | YOLOv3 | Supported | Supported | 1 |
| Image segmentation | DeepLab V3+ | Supported | Supported | 1 |
| Image segmentation | FarSeg | Supported | Supported | 1 |
| Image segmentation | UNet | Supported | Supported | 1 |

## 2 Test workflow

@@ -105,7 +105,7 @@ class TestPredictor(CommonTest):
                 dict_[key], expected_dict[key], rtol=1.e-4, atol=1.e-6)


-# @TestPredictor.add_tests
+@TestPredictor.add_tests
 class TestCDPredictor(TestPredictor):
     MODULE = pdrs.tasks.change_detector
     TRAINER_NAME_TO_EXPORT_OPTS = {

@@ -177,7 +177,7 @@ class TestCDPredictor(TestPredictor):
         self.assertEqual(len(out_multi_array_t), num_inputs)


-# @TestPredictor.add_tests
+@TestPredictor.add_tests
 class TestClasPredictor(TestPredictor):
     MODULE = pdrs.tasks.classifier
     TRAINER_NAME_TO_EXPORT_OPTS = {

@@ -242,7 +242,7 @@ class TestClasPredictor(TestPredictor):
         self.check_dict_equal(out_multi_array_p, out_multi_array_t)


-# @TestPredictor.add_tests
+@TestPredictor.add_tests
 class TestDetPredictor(TestPredictor):
     MODULE = pdrs.tasks.object_detector
     TRAINER_NAME_TO_EXPORT_OPTS = {

@@ -355,7 +355,7 @@ class TestResPredictor(TestPredictor):
         self.assertEqual(len(out_multi_array_t), num_inputs)


-# @TestPredictor.add_tests
+@TestPredictor.add_tests
 class TestSegPredictor(TestPredictor):
     MODULE = pdrs.tasks.segmenter
     TRAINER_NAME_TO_EXPORT_OPTS = {

@@ -21,7 +21,7 @@ __all__ = [
     'TestBITModel', 'TestCDNetModel', 'TestChangeStarModel', 'TestDSAMNetModel',
     'TestDSIFNModel', 'TestFCEarlyFusionModel', 'TestFCSiamConcModel',
     'TestFCSiamDiffModel', 'TestSNUNetModel', 'TestSTANetModel',
-    'TestChangeFormerModel'
+    'TestChangeFormerModel', 'TestFCCDNModel'
 ]

@@ -32,6 +32,9 @@ class TestCDModel(TestModel):
         self.assertIsInstance(output, list)
         self.check_output_equal(len(output), len(target))
         for o, t in zip(output, target):
+            if isinstance(o, list):
+                self.check_output(o, t)
+            else:
                 o = o.numpy()
                 self.check_output_equal(o.shape, t.shape)

@@ -225,3 +228,27 @@ class TestChangeFormerModel(TestCDModel):
             dict(**base_spec, decoder_softmax=True),
             dict(**base_spec, embed_dim=56)
         ]  # yapf: disable
+
+
+class TestFCCDNModel(TestCDModel):
+    MODEL_CLASS = paddlers.rs_models.cd.FCCDN
+
+    def set_specs(self):
+        self.specs = [
+            dict(in_channels=3, num_classes=2),
+            dict(in_channels=8, num_classes=2),
+            dict(in_channels=3, num_classes=8),
+            dict(in_channels=3, num_classes=2, _phase='eval', _stop_grad=True)
+        ]  # yapf: disable
+
+    def set_targets(self):
+        b = self.DEFAULT_BATCH_SIZE
+        h = self.DEFAULT_HW[0] // 2
+        w = self.DEFAULT_HW[1] // 2
+        tar_c2 = [
+            self.get_zeros_array(2), [self.get_zeros_array(1, b, h, w)] * 2
+        ]
+        self.targets = [
+            tar_c2, tar_c2, [self.get_zeros_array(8), tar_c2[1]],
+            [self.get_zeros_array(2)]
+        ]

@@ -25,6 +25,9 @@ class TestSegModel(TestModel):
         self.assertIsInstance(output, list)
         self.check_output_equal(len(output), len(target))
         for o, t in zip(output, target):
+            if isinstance(o, list):
+                self.check_output(o, t)
+            else:
                 o = o.numpy()
                 self.check_output_equal(o.shape, t.shape)

@@ -50,7 +53,8 @@ class TestFarSegModel(TestSegModel):
     def set_specs(self):
         self.specs = [
-            dict(), dict(num_classes=20), dict(encoder_pretrained=False)
+            dict(), dict(num_classes=20), dict(pretrained_encoder=False),
+            dict(in_channels=10)
         ]

     def set_targets(self):

@@ -107,7 +107,7 @@ def crop_patches(crop_size,
     if max_workers < 0:
         raise ValueError("`max_workers` must be a non-negative integer!")

-    if subset is None:
+    if subsets is None:
         subsets = ('', )

     if max_workers == 0:

@@ -27,6 +27,7 @@

|object_detection/ppyolov2.py | Object detection | PP-YOLOv2 |
|object_detection/yolov3.py | Object detection | YOLOv3 |
|semantic_segmentation/deeplabv3p.py | Image segmentation | DeepLab V3+ |
|semantic_segmentation/farseg.py | Image segmentation | FarSeg |
|semantic_segmentation/unet.py | Image segmentation | UNet |

## Environment setup

@@ -71,7 +71,7 @@ eval_dataset = pdrs.datasets.SegDataset(
 # For the currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/intro/model_zoo.md
 # For the model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmenter.py
 model = pdrs.tasks.seg.DeepLabV3P(
-    input_channel=NUM_BANDS,
+    in_channels=NUM_BANDS,
     num_classes=len(train_dataset.labels),
     backbone='ResNet50_vd')

@@ -0,0 +1,94 @@

#!/usr/bin/env python

# Example training script for the image segmentation model FarSeg
# Before running this script, please make sure PaddleRS is correctly installed

import paddlers as pdrs
from paddlers import transforms as T

# Dataset directory
DATA_DIR = './data/rsseg/'
# Path of the training set `file_list`
TRAIN_FILE_LIST_PATH = './data/rsseg/train.txt'
# Path of the validation set `file_list`
EVAL_FILE_LIST_PATH = './data/rsseg/val.txt'
# Path of the dataset class-information file
LABEL_LIST_PATH = './data/rsseg/labels.txt'
# Experiment directory, where the output model weights and results are saved
EXP_DIR = './output/farseg/'

# Download and decompress the multispectral land-cover classification dataset
pdrs.utils.download_and_decompress(
    'https://paddlers.bj.bcebos.com/datasets/rsseg.zip', path='./data/')

# Define the data transforms used for training and validation (data augmentation, preprocessing, etc.)
# Compose combines multiple transforms; the transforms it contains are executed serially in order
# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/data.md
train_transforms = T.Compose([
    # Decode the image
    T.DecodeImg(),
    # Select the first three bands
    T.SelectBand([1, 2, 3]),
    # Resize the image to 512x512
    T.Resize(target_size=512),
    # Apply random horizontal flipping with a probability of 50%
    T.RandomHorizontalFlip(prob=0.5),
    # Normalize the data to [-1, 1]
    T.Normalize(
        mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    T.ArrangeSegmenter('train')
])

eval_transforms = T.Compose([
    T.DecodeImg(),
    # Validation should select the same bands as training
    T.SelectBand([1, 2, 3]),
    T.Resize(target_size=512),
    # Validation must use the same normalization as training
    T.Normalize(
        mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    T.ReloadMask(),
    T.ArrangeSegmenter('eval')
])

# Build the training and validation datasets
train_dataset = pdrs.datasets.SegDataset(
    data_dir=DATA_DIR,
    file_list=TRAIN_FILE_LIST_PATH,
    label_list=LABEL_LIST_PATH,
    transforms=train_transforms,
    num_workers=0,
    shuffle=True)

eval_dataset = pdrs.datasets.SegDataset(
    data_dir=DATA_DIR,
    file_list=EVAL_FILE_LIST_PATH,
    label_list=LABEL_LIST_PATH,
    transforms=eval_transforms,
    num_workers=0,
    shuffle=False)

# Build the FarSeg model
# For the currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/intro/model_zoo.md
# For the model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmenter.py
model = pdrs.tasks.seg.FarSeg(num_classes=len(train_dataset.labels))

# Run model training
model.train(
    num_epochs=10,
    train_dataset=train_dataset,
    train_batch_size=4,
    eval_dataset=eval_dataset,
    save_interval_epochs=5,
    # Log every this many iterations
    log_interval_steps=4,
    save_dir=EXP_DIR,
    pretrain_weights=None,
    # Initial learning rate
    learning_rate=0.001,
    # Whether to use the early stopping policy, terminating training early when accuracy stops improving
    early_stop=False,
    # Whether to enable VisualDL logging
    use_vdl=True,
    # Resume training from a given checkpoint
    resume_checkpoint=None)

@@ -71,7 +71,7 @@ eval_dataset = pdrs.datasets.SegDataset(
 # For the currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/intro/model_zoo.md
 # For the model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmenter.py
 model = pdrs.tasks.seg.UNet(
-    input_channel=NUM_BANDS, num_classes=len(train_dataset.labels))
+    in_channels=NUM_BANDS, num_classes=len(train_dataset.labels))

 # Run model training
 model.train(
