diff --git a/.gitignore b/.gitignore index a27f58b83df73bdceadd0ec0a74a478a766f7d65..cf4a486f0b35923293f43c9ad81d34e6c7054c11 100644 --- a/.gitignore +++ b/.gitignore @@ -29,4 +29,6 @@ dist/ *.gz *.meta *.ipynb -*.gif \ No newline at end of file +*.gif + +.markdownlint.json \ No newline at end of file diff --git a/README.md b/README.md index 70e84b9495237f4851080b49af06a52586ae5617..3c7e04031069a02592862de66adb8662607dce65 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,7 @@ # DeepSparkHub -DeepSparkHub甄选上百个应用算法和模型,覆盖AI和通用计算各领域,支持主流市场智能计算场景,包括智慧城市、数字个人、医疗、教育、通信、能源等多个领域。 +DeepSparkHub甄选上百个应用算法和模型,覆盖AI和通用计算各领域,支持主流市场智能计算场景,包括智慧城市、数字个人、医疗、教 +育、通信、能源等多个领域。 ## 模型列表 @@ -8,482 +9,455 @@ DeepSparkHub甄选上百个应用算法和模型,覆盖AI和通用计算各领 #### Classification -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[ACmix](cv/classification/acmix/pytorch/README.md) | PyTorch | ImageNet -[ACNet](cv/classification/acnet/pytorch/README.md) | PyTorch | ImageNet -[AlexNet](cv/classification/alexnet/pytorch/README.md) | PyTorch | ImageNet -[AlexNet](cv/classification/alexnet/tensorflow/README.md) | TensorFlow | ImageNet -[BYOL](cv/classification/byol/pytorch/README.md) | PyTorch | ImageNet -[CBAM](cv/classification/cbam/pytorch/README.md) | PyTorch | ImageNet -[ConvNext](cv/classification/convnext/pytorch/README.md) | PyTorch | ImageNet -[CspDarknet53](cv/classification/cspdarknet53/pytorch/README.md) | PyTorch | ImageNet -[DenseNet](cv/classification/densenet/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[DenseNet](cv/classification/densenet/pytorch/README.md) | PyTorch | ImageNet -[DPN92](cv/classification/dpn92/pytorch/README.md) | PyTorch | ImageNet -[DPN107](cv/classification/dpn107/pytorch/README.md) | PyTorch | ImageNet -[ECA-MobileNetV2](cv/classification/eca_mobilenet_v2/pytorch/README.md) | PyTorch | ImageNet -[ECA-ResNet152](cv/classification/eca_resnet152/pytorch/README.md) | PyTorch | ImageNet -[EfficientNetB0](cv/classification/efficientnet_b0/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[EfficientNetB4](cv/classification/efficientnet_b4/pytorch/README.md) | PyTorch | ImageNet -[FasterNet](cv/classification/fasternet/pytorch/README.md) | PyTorch | ImageNet -[GoogLeNet](cv/classification/googlenet/pytorch/README.md) | PyTorch | ImageNet -[GoogLeNet](cv/classification/googlenet/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[InceptionV3](cv/classification/inceptionv3/mindspore/README.md) | MindSpore | ImageNet -[InceptionV3](cv/classification/inceptionv3/pytorch/README.md) | PyTorch | ImageNet -[InceptionV3](cv/classification/inceptionv3/tensorflow/README.md) | TensorFlow | ImageNet -[InceptionV4](cv/classification/inceptionv4/pytorch/README.md) | PyTorch | ImageNet -[InternImage](cv/classification/internimage/pytorch/README.md) | PyTorch | ImageNet -[LeNet](cv/classification/lenet/pytorch/README.md) | PyTorch | ImageNet -[MobileNetV2](cv/classification/mobilenetv2/pytorch/README.md) | PyTorch | ImageNet -[MobileNetV3](cv/classification/mobilenetv3/mindspore/README.md) | MindSpore | ImageNet -[MobileNetV3](cv/classification/mobilenetv3/pytorch/README.md) | PyTorch | ImageNet -[MobileNetV3](cv/classification/mobilenetv3/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[MobileNetV3_Large1.0](cv/classification/mobilenetv3_large_x1_0/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[MobileOne](cv/classification/mobileone/pytorch/README.md) | PyTorch | ImageNet -[MoCoV2](cv/classification/mocov2/pytorch/README.md) | PyTorch | ImageNet -[PP-LCNet](cv/classification/pp-lcnet/paddlepaddle/README.md) | 
PaddlePaddle | ImageNet -[RepMLP](cv/classification/repmlp/pytorch/README.md) | PyTorch | ImageNet -[RepVGG](cv/classification/repvgg/pytorch/README.md) | PyTorch | ImageNet -[RepVGG](cv/classification/repvgg/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[RepViT](cv/classification/repvit/pytorch/README.md) | PyTorch | ImageNet -[Res2Net50_14w_8s](cv/classification/Res2Net50_14w_8s/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[ResNeSt14](cv/classification/resnest14/pytorch/README.md) | PyTorch | ImageNet -[ResNeSt50](cv/classification/resnest50/pytorch/README.md) | PyTorch | ImageNet -[ResNeSt50](cv/classification/resnest50/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[ResNeSt101](cv/classification/resnest101/pytorch/README.md) | PyTorch | ImageNet -[ResNeSt269](cv/classification/resnest269/pytorch/README.md) | PyTorch | ImageNet -[ResNet18](cv/classification/resnet18/pytorch/README.md) | PyTorch | ImageNet -[ResNet50](cv/classification/resnet50/pytorch/README.md) | PyTorch | ImageNet -[ResNet50](cv/classification/resnet50/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[ResNet50](cv/classification/resnet50/tensorflow/README.md) | TensorFlow | ImageNet -[ResNet101](cv/classification/resnet101/pytorch/README.md) | PyTorch | ImageNet -[ResNet152](cv/classification/resnet152/pytorch/README.md) | PyTorch | ImageNet -[ResNeXt50_32x4d](cv/classification/resnext50_32x4d/mindspore/README.md) | MindSpore | ImageNet -[ResNeXt50_32x4d](cv/classification/resnext50_32x4d/pytorch/README.md) | PyTorch | ImageNet -[ResNeXt101_32x8d](cv/classification/resnext101_32x8d/pytorch/README.md) | PyTorch | ImageNet -[SE_ResNet50_vd](cv/classification/se_resnet50_vd/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[SEResNeXt](cv/classification/seresnext/pytorch/README.md) | PyTorch | ImageNet -[ShuffleNetV2](cv/classification/shufflenetv2/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[ShuffleNetV2](cv/classification/shufflenetv2/pytorch/README.md) | PyTorch | ImageNet -[SqueezeNet](cv/classification/squeezenet/pytorch/README.md) | PyTorch | ImageNet -[Swin Transformer](cv/classification/swin_transformer/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[Swin Transformer](cv/classification/swin_transformer/pytorch/README.md) | PyTorch | ImageNet -[VGG16](cv/classification/vgg/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[VGG16](cv/classification/vgg/pytorch/README.md) | PyTorch | ImageNet -[VGG16](cv/classification/vgg/tensorflow/README.md) | TensorFlow | ImageNet -[Wave-MLP](cv/classification/wavemlp/pytorch/README.md) | PyTorch | ImageNet -[Wide_ResNet101_2](cv/classification/wide_resnet101_2/pytorch/README.md) | PyTorch | ImageNet -[Xception](cv/classification/xception/paddlepaddle/README.md) | PaddlePaddle | ImageNet -[Xception](cv/classification/xception/pytorch/README.md) | PyTorch | ImageNet +| Model | Framework | Dataset | +|-----------------------------------------------------------------------------------------|--------------|----------| +| [ACmix](cv/classification/acmix/pytorch/README.md) | PyTorch | ImageNet | +| [ACNet](cv/classification/acnet/pytorch/README.md) | PyTorch | ImageNet | +| [AlexNet](cv/classification/alexnet/pytorch/README.md) | PyTorch | ImageNet | +| [AlexNet](cv/classification/alexnet/tensorflow/README.md) | TensorFlow | ImageNet | +| [BYOL](cv/classification/byol/pytorch/README.md) | PyTorch | ImageNet | +| [CBAM](cv/classification/cbam/pytorch/README.md) | PyTorch | ImageNet | +| [ConvNext](cv/classification/convnext/pytorch/README.md) | PyTorch | 
ImageNet | +| [CspDarknet53](cv/classification/cspdarknet53/pytorch/README.md) | PyTorch | ImageNet | +| [DenseNet](cv/classification/densenet/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [DenseNet](cv/classification/densenet/pytorch/README.md) | PyTorch | ImageNet | +| [DPN92](cv/classification/dpn92/pytorch/README.md) | PyTorch | ImageNet | +| [DPN107](cv/classification/dpn107/pytorch/README.md) | PyTorch | ImageNet | +| [ECA-MobileNetV2](cv/classification/eca_mobilenet_v2/pytorch/README.md) | PyTorch | ImageNet | +| [ECA-ResNet152](cv/classification/eca_resnet152/pytorch/README.md) | PyTorch | ImageNet | +| [EfficientNetB0](cv/classification/efficientnet_b0/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [EfficientNetB4](cv/classification/efficientnet_b4/pytorch/README.md) | PyTorch | ImageNet | +| [FasterNet](cv/classification/fasternet/pytorch/README.md) | PyTorch | ImageNet | +| [GoogLeNet](cv/classification/googlenet/pytorch/README.md) | PyTorch | ImageNet | +| [GoogLeNet](cv/classification/googlenet/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [InceptionV3](cv/classification/inceptionv3/mindspore/README.md) | MindSpore | ImageNet | +| [InceptionV3](cv/classification/inceptionv3/pytorch/README.md) | PyTorch | ImageNet | +| [InceptionV3](cv/classification/inceptionv3/tensorflow/README.md) | TensorFlow | ImageNet | +| [InceptionV4](cv/classification/inceptionv4/pytorch/README.md) | PyTorch | ImageNet | +| [InternImage](cv/classification/internimage/pytorch/README.md) | PyTorch | ImageNet | +| [LeNet](cv/classification/lenet/pytorch/README.md) | PyTorch | ImageNet | +| [MobileNetV2](cv/classification/mobilenetv2/pytorch/README.md) | PyTorch | ImageNet | +| [MobileNetV3](cv/classification/mobilenetv3/mindspore/README.md) | MindSpore | ImageNet | +| [MobileNetV3](cv/classification/mobilenetv3/pytorch/README.md) | PyTorch | ImageNet | +| [MobileNetV3](cv/classification/mobilenetv3/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [MobileNetV3_Large1.0](cv/classification/mobilenetv3_large_x1_0/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [MobileOne](cv/classification/mobileone/pytorch/README.md) | PyTorch | ImageNet | +| [MoCoV2](cv/classification/mocov2/pytorch/README.md) | PyTorch | ImageNet | +| [PP-LCNet](cv/classification/pp-lcnet/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [RepMLP](cv/classification/repmlp/pytorch/README.md) | PyTorch | ImageNet | +| [RepVGG](cv/classification/repvgg/pytorch/README.md) | PyTorch | ImageNet | +| [RepVGG](cv/classification/repvgg/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [RepViT](cv/classification/repvit/pytorch/README.md) | PyTorch | ImageNet | +| [Res2Net50_14w_8s](cv/classification/Res2Net50_14w_8s/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [ResNeSt14](cv/classification/resnest14/pytorch/README.md) | PyTorch | ImageNet | +| [ResNeSt50](cv/classification/resnest50/pytorch/README.md) | PyTorch | ImageNet | +| [ResNeSt50](cv/classification/resnest50/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [ResNeSt101](cv/classification/resnest101/pytorch/README.md) | PyTorch | ImageNet | +| [ResNeSt269](cv/classification/resnest269/pytorch/README.md) | PyTorch | ImageNet | +| [ResNet18](cv/classification/resnet18/pytorch/README.md) | PyTorch | ImageNet | +| [ResNet50](cv/classification/resnet50/pytorch/README.md) | PyTorch | ImageNet | +| [ResNet50](cv/classification/resnet50/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| 
[ResNet50](cv/classification/resnet50/tensorflow/README.md) | TensorFlow | ImageNet | +| [ResNet101](cv/classification/resnet101/pytorch/README.md) | PyTorch | ImageNet | +| [ResNet152](cv/classification/resnet152/pytorch/README.md) | PyTorch | ImageNet | +| [ResNeXt50_32x4d](cv/classification/resnext50_32x4d/mindspore/README.md) | MindSpore | ImageNet | +| [ResNeXt50_32x4d](cv/classification/resnext50_32x4d/pytorch/README.md) | PyTorch | ImageNet | +| [ResNeXt101_32x8d](cv/classification/resnext101_32x8d/pytorch/README.md) | PyTorch | ImageNet | +| [SE_ResNet50_vd](cv/classification/se_resnet50_vd/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [SEResNeXt](cv/classification/seresnext/pytorch/README.md) | PyTorch | ImageNet | +| [ShuffleNetV2](cv/classification/shufflenetv2/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [ShuffleNetV2](cv/classification/shufflenetv2/pytorch/README.md) | PyTorch | ImageNet | +| [SqueezeNet](cv/classification/squeezenet/pytorch/README.md) | PyTorch | ImageNet | +| [Swin Transformer](cv/classification/swin_transformer/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [Swin Transformer](cv/classification/swin_transformer/pytorch/README.md) | PyTorch | ImageNet | +| [VGG16](cv/classification/vgg/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [VGG16](cv/classification/vgg/pytorch/README.md) | PyTorch | ImageNet | +| [VGG16](cv/classification/vgg/tensorflow/README.md) | TensorFlow | ImageNet | +| [Wave-MLP](cv/classification/wavemlp/pytorch/README.md) | PyTorch | ImageNet | +| [Wide_ResNet101_2](cv/classification/wide_resnet101_2/pytorch/README.md) | PyTorch | ImageNet | +| [Xception](cv/classification/xception/paddlepaddle/README.md) | PaddlePaddle | ImageNet | +| [Xception](cv/classification/xception/pytorch/README.md) | PyTorch | ImageNet | #### Face Detection -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[RetinaFace](cv/face/retinaface/pytorch/README.md) | PyTorch | WiderFace +| Model | Framework | Dataset | +|----------------------------------------------------|-----------|-----------| +| [RetinaFace](cv/face/retinaface/pytorch/README.md) | PyTorch | WiderFace | #### Face Recognition -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[ArcFace](cv/face/arcface/pytorch/README.md) | PyTorch | CASIA-WebFaces&LFW -[BlazeFace](cv/face/blazeface/paddlepaddle/README.md) | PaddlePaddle | WIDER-FACE -[CosFace](cv/face/cosface/pytorch/README.md) | PyTorch | CASIA-WebFaces&LFW -[FaceNet](cv/face/facenet/pytorch/README.md) | PyTorch | CASIA-WebFaces&LFW -[FaceNet](cv/face/facenet/tensorflow/README.md) | TensorFlow | CASIA-WebFaces&LFW +| Model | Framework | Dataset | +|-------------------------------------------------------|--------------|--------------------| +| [ArcFace](cv/face/arcface/pytorch/README.md) | PyTorch | CASIA-WebFaces&LFW | +| [BlazeFace](cv/face/blazeface/paddlepaddle/README.md) | PaddlePaddle | WIDER-FACE | +| [CosFace](cv/face/cosface/pytorch/README.md) | PyTorch | CASIA-WebFaces&LFW | +| [FaceNet](cv/face/facenet/pytorch/README.md) | PyTorch | CASIA-WebFaces&LFW | +| [FaceNet](cv/face/facenet/tensorflow/README.md) | TensorFlow | CASIA-WebFaces&LFW | #### Instance Segmentation -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[SOLO](cv/instance_segmentation/SOLO/pytorch/README.md) | PyTorch | COCO -[SOLOv2](cv/detection/solov2/paddlepaddle/README.md) | PaddlePaddle | COCO -[SOLOv2](cv/instance_segmentation/solov2/pytorch/README.md) | PyTorch | COCO -[YOLACT++](cv/instance_segmentation/yolact/pytorch/README.md) | PyTorch | COCO +| Model | 
Framework | Dataset| +|---------------------------------------------------------------|--------------|--------| +| [SOLO](cv/instance_segmentation/SOLO/pytorch/README.md) | PyTorch | COCO | +| [SOLOv2](cv/detection/solov2/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [SOLOv2](cv/instance_segmentation/solov2/pytorch/README.md) | PyTorch | COCO | +| [YOLACT++](cv/instance_segmentation/yolact/pytorch/README.md) | PyTorch | COCO | #### Image Generation -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[DCGAN](cv/image_generation/dcgan/mindspore/README.md) | MindSpore | ImageNet -[Pix2Pix](cv/image_generation/Pix2pix/paddlepaddle/README.md) | PaddlePaddle | facades +| Model | Framework | Dataset | +|---------------------------------------------------------------|--------------|----------| +| [DCGAN](cv/image_generation/dcgan/mindspore/README.md) | MindSpore | ImageNet | +| [Pix2Pix](cv/image_generation/Pix2pix/paddlepaddle/README.md) | PaddlePaddle | facades | #### Knowledge Distillation -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[CWD](cv/distiller/CWD/pytorch/README.md) | PyTorch | Cityscapes -[RKD](cv/distiller/RKD/pytorch/README.md) | PyTorch | CUB-200-2011 -[WSLD](cv/distiller/WSLD/pytorch/README.md) | PyTorch | ImageNet +| Model | Framework | Dataset | +|---------------------------------------------|-----------|--------------| +| [CWD](cv/distiller/CWD/pytorch/README.md) | PyTorch | Cityscapes | +| [RKD](cv/distiller/RKD/pytorch/README.md) | PyTorch | CUB-200-2011 | +| [WSLD](cv/distiller/WSLD/pytorch/README.md) | PyTorch | ImageNet | #### Network Pruning -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[Network Slimming](cv/Pruning/Network-Slimming/pytorch/README.md) | PyTorch | CIFAR-10/100 +| Model | Framework | Dataset | +|-------------------------------------------------------------------|-----------|--------------| +| [Network Slimming](cv/Pruning/Network-Slimming/pytorch/README.md) | PyTorch | CIFAR-10/100 | #### Object Detection -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[ATSS](cv/detection/atss_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO -[AutoAssign](cv/detection/autoassign/pytorch/README.md) | PyTorch | COCO -[Cascade R-CNN](cv/detection/cascade_rcnn_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO -[CenterMask2](cv/detection/centermask2/pytorch/README.md) | PyTorch | COCO -[CenterNet](cv/detection/centernet/pytorch/README.md) | PyTorch | COCO -[CenterNet](cv/detection/centernet/paddlepaddle/README.md) | PaddlePaddle | COCO -[Co-DETR](cv/detection/co-detr/pytorch/README.md) | PyTorch | COCO -[CornerNet](cv/detection/cornernet_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO -[DCNV2](cv/detection/dcnv2_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO -[DeepSORT](cv/tracking/deep_sort/pytorch/README.md) | PyTorch | Market-1501 -[DETR](cv/detection/detr/paddlepaddle/README.md) | PaddlePaddle | COCO -[Faster R-CNN](cv/detection/fasterrcnn/pytorch/README.md) | PyTorch | COCO -[FCOS](cv/detection/fcos/paddlepaddle/README.md) | PaddlePaddle | COCO -[FCOS](cv/detection/fcos/pytorch/README.md) | PyTorch | COCO -[Mamba-YOLO](cv/detection/mamba_yolo/pytorch/README.md) | PyTorch | COCO -[Mask R-CNN](cv/detection/maskrcnn/pytorch/README.md) | PyTorch | COCO -[Mask R-CNN](cv/detection/maskrcnn/paddlepaddle/README.md) | PaddlePaddle | COCO -[OC_SORT](cv/detection/oc_sort/paddlepaddle/README.md) | PaddlePaddle | MOT17 -[Oriented RepPoints](cv/detection/oriented_reppoints/pytorch/README.md) | PyTorch | DOTA -[PP-PicoDet](cv/detection/picodet/paddlepaddle/README.md) | 
PaddlePaddle | COCO -[PP-YOLOE](cv/detection/pp-yoloe/paddlepaddle/README.md) | PaddlePaddle | COCO -[PP-YOLOE+](cv/detection/pp_yoloe+/paddlepaddle/README.md) | PaddlePaddle | COCO -[PVANet](cv/detection/pvanet/pytorch/README.md) | PyTorch | COCO -[RepPoints](cv/detection/reppoints_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO -[RetinaNet](cv/detection/retinanet/pytorch/README.md) | PyTorch | COCO -[RetinaNet](cv/detection/retinanet/paddlepaddle/README.md) | PaddlePaddle | COCO -[RT-DETR](cv/detection/rt-detr/pytorch/README.md) | PyTorch | COCO -[RTMDet](cv/detection/rtmdet/pytorch/README.md) | PyTorch | COCO -[SSD](cv/detection/ssd/pytorch/README.md) | PyTorch | COCO -[SSD](cv/detection/ssd/paddlepaddle/README.md) | PaddlePaddle | COCO -[SSD](cv/detection/ssd/tensorflow/README.md) | TensorFlow | VOC -[SSD](cv/detection/ssd/mindspore/README.md) | MindSpore | COCO -[YOLOF](cv/detection/yolof/pytorch/README.md) | PyTorch | COCO -[YOLOv3](cv/detection/yolov3/pytorch/README.md) | PyTorch | COCO -[YOLOv3](cv/detection/yolov3/paddlepaddle/README.md) | PaddlePaddle | COCO -[YOLOv3](cv/detection/yolov3/tensorflow/README.md) | TensorFlow | VOC -[YOLOv5](cv/detection/yolov5/paddlepaddle/README.md) | PaddlePaddle | COCO -[YOLOv5](cv/detection/yolov5/pytorch/README.md) | PyTorch | COCO -[YOLOv6](cv/detection/yolov6/pytorch/README.md) | PyTorch | COCO -[YOLOv7](cv/detection/yolov7/pytorch/README.md) | PyTorch | COCO -[YOLOv8](cv/detection/yolov8/pytorch/README.md) | PyTorch | COCO -[YOLOv9](cv/detection/yolov9/pytorch/README.md) | PyTorch | COCO -[YOLOv10](cv/detection/yolov10/pytorch/README.md) | PyTorch | COCO +| Model | Framework | Dataset| +|-------------------------------------------------------------------------|-----------------------|--------| +| [ATSS](cv/detection/atss_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO | +| [AutoAssign](cv/detection/autoassign/pytorch/README.md) | PyTorch | COCO | +| [Cascade R-CNN](cv/detection/cascade_rcnn_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO | +| [CenterMask2](cv/detection/centermask2/pytorch/README.md) | PyTorch | COCO | +| [CenterNet](cv/detection/centernet/pytorch/README.md) | PyTorch | COCO | +| [CenterNet](cv/detection/centernet/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [Co-DETR](cv/detection/co-detr/pytorch/README.md) | PyTorch | COCO | +| [CornerNet](cv/detection/cornernet_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO | +| [DCNV2](cv/detection/dcnv2_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO | +| [DETR](cv/detection/detr/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [Faster R-CNN](cv/detection/fasterrcnn/pytorch/README.md) | PyTorch | COCO | +| [FCOS](cv/detection/fcos/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [FCOS](cv/detection/fcos/pytorch/README.md) | PyTorch | COCO | +| [Mamba-YOLO](cv/detection/mamba_yolo/pytorch/README.md) | PyTorch | COCO | +| [Mask R-CNN](cv/detection/maskrcnn/pytorch/README.md) | PyTorch | COCO | +| [Mask R-CNN](cv/detection/maskrcnn/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [OC_SORT](cv/detection/oc_sort/paddlepaddle/README.md) | PaddlePaddle | MOT17 | +| [Oriented RepPoints](cv/detection/oriented_reppoints/pytorch/README.md) | PyTorch | DOTA | +| [PP-PicoDet](cv/detection/picodet/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [PP-YOLOE](cv/detection/pp-yoloe/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [PP-YOLOE+](cv/detection/pp_yoloe+/paddlepaddle/README.md) | PaddlePaddle | COCO | +| 
[PVANet](cv/detection/pvanet/pytorch/README.md) | PyTorch | COCO | +| [RepPoints](cv/detection/reppoints_mmdet/pytorch/README.md) | PyTorch (MMDetection) | COCO | +| [RetinaNet](cv/detection/retinanet/pytorch/README.md) | PyTorch | COCO | +| [RetinaNet](cv/detection/retinanet/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [RT-DETR](cv/detection/rt-detr/pytorch/README.md) | PyTorch | COCO | +| [RTMDet](cv/detection/rtmdet/pytorch/README.md) | PyTorch | COCO | +| [SSD](cv/detection/ssd/pytorch/README.md) | PyTorch | COCO | +| [SSD](cv/detection/ssd/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [SSD](cv/detection/ssd/tensorflow/README.md) | TensorFlow | VOC | +| [SSD](cv/detection/ssd/mindspore/README.md) | MindSpore | COCO | +| [YOLOF](cv/detection/yolof/pytorch/README.md) | PyTorch | COCO | +| [YOLOv3](cv/detection/yolov3/pytorch/README.md) | PyTorch | COCO | +| [YOLOv3](cv/detection/yolov3/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [YOLOv3](cv/detection/yolov3/tensorflow/README.md) | TensorFlow | VOC | +| [YOLOv5](cv/detection/yolov5/paddlepaddle/README.md) | PaddlePaddle | COCO | +| [YOLOv5](cv/detection/yolov5/pytorch/README.md) | PyTorch | COCO | +| [YOLOv6](cv/detection/yolov6/pytorch/README.md) | PyTorch | COCO | +| [YOLOv7](cv/detection/yolov7/pytorch/README.md) | PyTorch | COCO | +| [YOLOv8](cv/detection/yolov8/pytorch/README.md) | PyTorch | COCO | +| [YOLOv9](cv/detection/yolov9/pytorch/README.md) | PyTorch | COCO | +| [YOLOv10](cv/detection/yolov10/pytorch/README.md) | PyTorch | COCO | #### 3D Object Detection -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[BEVFormer](cv/3d_detection/BEVFormer/pytorch/README.md) | PyTorch | nuScenes&CAN bus -[CenterPoint](cv/3d_detection/centerpoint/pytorch/README.md) | PyTorch | nuScenes -[PAConv](cv/3d_detection/PAConv/pytorch/README.md) | PyTorch | S3DIS -[Part-A2-Anchor](cv/3d_detection/part_a2_anchor/pytorch/README.md) | PyTorch | KITTI -[Part-A2-Free](cv/3d_detection/part_a2_free/pytorch/README.md) | PyTorch | KITTI -[PointNet++](cv/3d_detection/pointnet2/pytorch/mmdetection3d/README.md) | PyTorch | S3DIS -[PointPillars](cv/3d_detection/pointpillars/pytorch/README.md) | PyTorch | KITTI -[PointRCNN](cv/3d_detection/pointrcnn/pytorch/README.md) | PyTorch | KITTI -[PointRCNN-IoU](cv/3d_detection/pointrcnn_iou/pytorch/README.md) | PyTorch | KITTI -[SECOND](cv/3d_detection/second/pytorch/README.md) | PyTorch | KITTI -[SECOND-IoU](cv/3d_detection/second_iou/pytorch/README.md) | PyTorch | KITTI +| Model | Framework | Dataset | +|-------------------------------------------------------------------------|-----------|------------------| +| [BEVFormer](cv/3d_detection/BEVFormer/pytorch/README.md) | PyTorch | nuScenes&CAN bus | +| [CenterPoint](cv/3d_detection/centerpoint/pytorch/README.md) | PyTorch | nuScenes | +| [PAConv](cv/3d_detection/PAConv/pytorch/README.md) | PyTorch | S3DIS | +| [Part-A2-Anchor](cv/3d_detection/part_a2_anchor/pytorch/README.md) | PyTorch | KITTI | +| [Part-A2-Free](cv/3d_detection/part_a2_free/pytorch/README.md) | PyTorch | KITTI | +| [PointNet++](cv/3d_detection/pointnet2/pytorch/mmdetection3d/README.md) | PyTorch | S3DIS | +| [PointPillars](cv/3d_detection/pointpillars/pytorch/README.md) | PyTorch | KITTI | +| [PointRCNN](cv/3d_detection/pointrcnn/pytorch/README.md) | PyTorch | KITTI | +| [PointRCNN-IoU](cv/3d_detection/pointrcnn_iou/pytorch/README.md) | PyTorch | KITTI | +| [SECOND](cv/3d_detection/second/pytorch/README.md) | PyTorch | KITTI | +| [SECOND-IoU](cv/3d_detection/second_iou/pytorch/README.md) | 
PyTorch | KITTI | + +#### 3D Reconstruction + +| Model | Framework | Dataset | +|----------------------------------------------------------|-----------|---------| +| [HashNeRF](cv/3d-reconstruction/hashnerf/pytorch/README.md) | PyTorch | fox | + +#### GNN (Graph Neural Network) + +| Model | Framework | Dataset | +|------------------------------------------------------|--------------|--------------------------| +| [GAT](cv/gnn/gat/paddlepaddle/README.md) | PaddlePaddle | CORA | +| [GCN](cv/gnn/GCN/mindspore/README.md) | MindSpore | CORA & Citeseer | +| [GCN](cv/gnn/GCN/paddlepaddle/README.md) | PaddlePaddle | CORA & PubMed & Citeseer | +| [GraphSAGE](cv/gnn/graphsage/paddlepaddle/README.md) | PaddlePaddle | Reddit | #### OCR -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[CRNN](cv/ocr/crnn/mindspore/README.md) | MindSpore | OCR_Recog -[CRNN](cv/ocr/crnn/paddlepaddle/README.md) | PaddlePaddle | LMDB -[DBNet](cv/ocr/dbnet/pytorch/README.md) | PyTorch | ICDAR2015 -[DBNet++](cv/ocr/dbnetpp/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 -[DBNet++](cv/ocr/dbnetpp/pytorch/README.md) | PyTorch | ICDAR2015 -[PP-OCR-DB](cv/ocr/pp-ocr-db/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 -[PP-OCR-EAST](cv/ocr/pp-ocr-east/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 -[PSE](cv/ocr/pse/paddlepaddle/README.md) | PaddlePaddle | OCR_Recog -[SAR](cv/ocr/sar/pytorch/README.md) | PyTorch | OCR_Recog -[SAST](cv/ocr/sast/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 -[SATRN](cv/ocr/satrn/pytorch/base/README.md) | PyTorch | OCR_Recog +| Model | Framework | Dataset | +|----------------------------------------------------------|--------------|-----------| +| [CRNN](cv/ocr/crnn/mindspore/README.md) | MindSpore | OCR_Recog | +| [CRNN](cv/ocr/crnn/paddlepaddle/README.md) | PaddlePaddle | LMDB | +| [DBNet](cv/ocr/dbnet/pytorch/README.md) | PyTorch | ICDAR2015 | +| [DBNet++](cv/ocr/dbnetpp/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 | +| [DBNet++](cv/ocr/dbnetpp/pytorch/README.md) | PyTorch | ICDAR2015 | +| [PP-OCR-DB](cv/ocr/pp-ocr-db/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 | +| [PP-OCR-EAST](cv/ocr/pp-ocr-east/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 | +| [PSE](cv/ocr/pse/paddlepaddle/README.md) | PaddlePaddle | OCR_Recog | +| [SAR](cv/ocr/sar/pytorch/README.md) | PyTorch | OCR_Recog | +| [SAST](cv/ocr/sast/paddlepaddle/README.md) | PaddlePaddle | ICDAR2015 | +| [SATRN](cv/ocr/satrn/pytorch/base/README.md) | PyTorch | OCR_Recog | #### Point Cloud -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[Point-BERT](cv/point_cloud/point-bert/pytorch/README.md) | PyTorch | ShapeNet55 & processed ModelNet +| Model | Framework | Dataset | +|-----------------------------------------------------------|-----------|---------------------------------| +| [Point-BERT](cv/point_cloud/point-bert/pytorch/README.md) | PyTorch | ShapeNet55 & processed ModelNet | #### Pose Estimation -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[AlphaPose](cv/pose/alphapose/pytorch/README.md) | PyTorch | COCO -[HRNet](cv/pose/hrnet/pytorch/README.md) | PyTorch | COCO -[HRNet-W32](cv/pose/hrnet/paddlepaddle/README.md) | PaddlePaddle | COCO -[OpenPose](cv/pose/openpose/mindspore/README.md) | MindSpore | COCO +| Model | Framework | Dataset | +|---------------------------------------------------|--------------|---------| +| [AlphaPose](cv/pose/alphapose/pytorch/README.md) | PyTorch | COCO | +| [HRNet](cv/pose/hrnet/pytorch/README.md) | PyTorch | COCO | +| [HRNet-W32](cv/pose/hrnet/paddlepaddle/README.md) | PaddlePaddle | COCO | 
+| [OpenPose](cv/pose/openpose/mindspore/README.md) | MindSpore | COCO | #### Self-Supervised Learning -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[MAE](cv/self_supervised_learning/MAE/pytorch/README.md) | PyTorch | ImageNet +| Model | Framework | Dataset | +|----------------------------------------------------------|-----------|----------| +| [MAE](cv/self_supervised_learning/MAE/pytorch/README.md) | PyTorch | ImageNet | #### Semantic Segmentation -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[3D-UNet](cv/semantic_segmentation/unet3d/pytorch/README.md) | PyTorch | kits19 -[APCNet](cv/semantic_segmentation/apcnet/pytorch/README.md) | PyTorch | Cityscapes -[Attention U-net](cv/semantic_segmentation/att_unet/pytorch/README.md) | PyTorch | Cityscapes -[BiSeNet](cv/semantic_segmentation/bisenet/pytorch/README.md) | PyTorch | COCO -[BiSeNetV2](cv/semantic_segmentation/bisenetv2/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[BiSeNetV2](cv/semantic_segmentation/bisenetv2/pytorch/README.md) | PyTorch | Cityscapes -[CGNet](cv/semantic_segmentation/cgnet/pytorch/README.md) | PyTorch | COCO -[ContextNet](cv/semantic_segmentation/contextnet/pytorch/README.md) | PyTorch | COCO -[DabNet](cv/semantic_segmentation/dabnet/pytorch/README.md) | PyTorch | COCO -[DANet](cv/semantic_segmentation/danet/pytorch/README.md) | PyTorch | COCO -[DDRnet](cv/semantic_segmentation/ddrnet/pytorch/README.md) | PyTorch | Cityscapes -[DeepLabV3](cv/semantic_segmentation/deeplabv3/pytorch/README.md) | PyTorch | COCO -[DeepLabV3](cv/semantic_segmentation/deeplabv3/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[DeepLabV3](cv/semantic_segmentation/deeplabv3/mindspore/README.md) | MindSpore | VOC -[DeepLabV3+](cv/semantic_segmentation/deeplabv3plus/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[DeepLabV3+](cv/semantic_segmentation/deeplabv3plus/tensorflow/README.md) | TensorFlow | Cityscapes -[DenseASPP](cv/semantic_segmentation/denseaspp/pytorch/README.md) | PyTorch | COCO -[DFANet](cv/semantic_segmentation/dfanet/pytorch/README.md) | PyTorch | COCO -[DNLNet](cv/semantic_segmentation/dnlnet/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[DUNet](cv/semantic_segmentation/dunet/pytorch/README.md) | PyTorch | COCO -[EncNet](cv/semantic_segmentation/encnet/pytorch/README.md) | PyTorch | COCO -[ENet](cv/semantic_segmentation/enet/pytorch/README.md) | PyTorch | COCO -[ERFNet](cv/semantic_segmentation/erfnet/pytorch/README.md) | PyTorch | COCO -[ESPNet](cv/semantic_segmentation/espnet/pytorch/README.md) | PyTorch | COCO -[FastFCN](cv/semantic_segmentation/fastfcn/paddlepaddle/README.md) | PyTorch | ADE20K -[FastSCNN](cv/semantic_segmentation/fastscnn/pytorch/README.md) | PyTorch | COCO -[FCN](cv/semantic_segmentation/fcn/pytorch/README.md) | PyTorch | COCO -[FPENet](cv/semantic_segmentation/fpenet/pytorch/README.md) | PyTorch | COCO -[GCNet](cv/semantic_segmentation/gcnet/pytorch/README.md) | PyTorch | Cityscapes -[HardNet](cv/semantic_segmentation/hardnet/pytorch/README.md) | PyTorch | COCO -[ICNet](cv/semantic_segmentation/icnet/pytorch/README.md) | PyTorch | COCO -[LedNet](cv/semantic_segmentation/lednet/pytorch/README.md) | PyTorch | COCO -[LinkNet](cv/semantic_segmentation/linknet/pytorch/README.md) | PyTorch | COCO -[Mask2Former](cv/semantic_segmentation/Mask2Former/pytorch/README.md) | PyTorch | Cityscapes -[MobileSeg](cv/semantic_segmentation/mobileseg/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[OCNet](cv/semantic_segmentation/ocnet/pytorch/README.md) | PyTorch | COCO 
-[OCRNet](cv/semantic_segmentation/ocrnet/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[OCRNet](cv/semantic_segmentation/ocrnet/pytorch/README.md) | PyTorch | Cityscapes -[PP-HumanSegV1](cv/semantic_segmentation/pp_humansegv1/paddlepaddle/README.md) | PaddlePaddle | PP-HumanSeg14K -[PP-HumanSegV2](cv/semantic_segmentation/pp_humansegv2/paddlepaddle/README.md) | PaddlePaddle | PP-HumanSeg14K -[PP-LiteSeg](cv/semantic_segmentation/pp_liteseg/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[PSANet](cv/semantic_segmentation/psanet/pytorch/README.md) | PyTorch | COCO -[RefineNet](cv/semantic_segmentation/refinenet/pytorch/README.md) | PyTorch | COCO -[SegNet](cv/semantic_segmentation/segnet/pytorch/README.md) | PyTorch | COCO -[STDC](cv/semantic_segmentation/stdc/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[STDC](cv/semantic_segmentation/stdc/pytorch/README.md) | PyTorch | Cityscapes -[UNet](cv/semantic_segmentation/unet/pytorch/README.md) | PyTorch | COCO -[UNet](cv/semantic_segmentation/unet/paddlepaddle/README.md) | PaddlePaddle | Cityscapes -[UNet++](cv/semantic_segmentation/unet++/pytorch/README.md) | PyTorch | DRIVE -[VNet](cv/semantic_segmentation/vnet/tensorflow/README.md) | TensorFlow | Hippocampus +| Model | Framework | Dataset | +|--------------------------------------------------------------------------------|--------------|----------------| +| [3D-UNet](cv/semantic_segmentation/unet3d/pytorch/README.md) | PyTorch | kits19 | +| [APCNet](cv/semantic_segmentation/apcnet/pytorch/README.md) | PyTorch | Cityscapes | +| [Attention U-net](cv/semantic_segmentation/att_unet/pytorch/README.md) | PyTorch | Cityscapes | +| [BiSeNet](cv/semantic_segmentation/bisenet/pytorch/README.md) | PyTorch | COCO | +| [BiSeNetV2](cv/semantic_segmentation/bisenetv2/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [BiSeNetV2](cv/semantic_segmentation/bisenetv2/pytorch/README.md) | PyTorch | Cityscapes | +| [CGNet](cv/semantic_segmentation/cgnet/pytorch/README.md) | PyTorch | COCO | +| [ContextNet](cv/semantic_segmentation/contextnet/pytorch/README.md) | PyTorch | COCO | +| [DabNet](cv/semantic_segmentation/dabnet/pytorch/README.md) | PyTorch | COCO | +| [DANet](cv/semantic_segmentation/danet/pytorch/README.md) | PyTorch | COCO | +| [DDRnet](cv/semantic_segmentation/ddrnet/pytorch/README.md) | PyTorch | Cityscapes | +| [DeepLabV3](cv/semantic_segmentation/deeplabv3/pytorch/README.md) | PyTorch | COCO | +| [DeepLabV3](cv/semantic_segmentation/deeplabv3/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [DeepLabV3](cv/semantic_segmentation/deeplabv3/mindspore/README.md) | MindSpore | VOC | +| [DeepLabV3+](cv/semantic_segmentation/deeplabv3plus/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [DeepLabV3+](cv/semantic_segmentation/deeplabv3plus/tensorflow/README.md) | TensorFlow | Cityscapes | +| [DenseASPP](cv/semantic_segmentation/denseaspp/pytorch/README.md) | PyTorch | COCO | +| [DFANet](cv/semantic_segmentation/dfanet/pytorch/README.md) | PyTorch | COCO | +| [DNLNet](cv/semantic_segmentation/dnlnet/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [DUNet](cv/semantic_segmentation/dunet/pytorch/README.md) | PyTorch | COCO | +| [EncNet](cv/semantic_segmentation/encnet/pytorch/README.md) | PyTorch | COCO | +| [ENet](cv/semantic_segmentation/enet/pytorch/README.md) | PyTorch | COCO | +| [ERFNet](cv/semantic_segmentation/erfnet/pytorch/README.md) | PyTorch | COCO | +| [ESPNet](cv/semantic_segmentation/espnet/pytorch/README.md) | PyTorch | COCO | +| 
[FastFCN](cv/semantic_segmentation/fastfcn/paddlepaddle/README.md) | PyTorch | ADE20K | +| [FastSCNN](cv/semantic_segmentation/fastscnn/pytorch/README.md) | PyTorch | COCO | +| [FCN](cv/semantic_segmentation/fcn/pytorch/README.md) | PyTorch | COCO | +| [FPENet](cv/semantic_segmentation/fpenet/pytorch/README.md) | PyTorch | COCO | +| [GCNet](cv/semantic_segmentation/gcnet/pytorch/README.md) | PyTorch | Cityscapes | +| [HardNet](cv/semantic_segmentation/hardnet/pytorch/README.md) | PyTorch | COCO | +| [ICNet](cv/semantic_segmentation/icnet/pytorch/README.md) | PyTorch | COCO | +| [LedNet](cv/semantic_segmentation/lednet/pytorch/README.md) | PyTorch | COCO | +| [LinkNet](cv/semantic_segmentation/linknet/pytorch/README.md) | PyTorch | COCO | +| [Mask2Former](cv/semantic_segmentation/Mask2Former/pytorch/README.md) | PyTorch | Cityscapes | +| [MobileSeg](cv/semantic_segmentation/mobileseg/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [OCNet](cv/semantic_segmentation/ocnet/pytorch/README.md) | PyTorch | COCO | +| [OCRNet](cv/semantic_segmentation/ocrnet/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [OCRNet](cv/semantic_segmentation/ocrnet/pytorch/README.md) | PyTorch | Cityscapes | +| [PP-HumanSegV1](cv/semantic_segmentation/pp_humansegv1/paddlepaddle/README.md) | PaddlePaddle | PP-HumanSeg14K | +| [PP-HumanSegV2](cv/semantic_segmentation/pp_humansegv2/paddlepaddle/README.md) | PaddlePaddle | PP-HumanSeg14K | +| [PP-LiteSeg](cv/semantic_segmentation/pp_liteseg/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [PSANet](cv/semantic_segmentation/psanet/pytorch/README.md) | PyTorch | COCO | +| [RefineNet](cv/semantic_segmentation/refinenet/pytorch/README.md) | PyTorch | COCO | +| [SegNet](cv/semantic_segmentation/segnet/pytorch/README.md) | PyTorch | COCO | +| [STDC](cv/semantic_segmentation/stdc/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [STDC](cv/semantic_segmentation/stdc/pytorch/README.md) | PyTorch | Cityscapes | +| [UNet](cv/semantic_segmentation/unet/pytorch/README.md) | PyTorch | COCO | +| [UNet](cv/semantic_segmentation/unet/paddlepaddle/README.md) | PaddlePaddle | Cityscapes | +| [UNet++](cv/semantic_segmentation/unet++/pytorch/README.md) | PyTorch | DRIVE | +| [VNet](cv/semantic_segmentation/vnet/tensorflow/README.md) | TensorFlow | Hippocampus | #### Super Resolution -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[basicVSR++](cv/super_resolution/basicVSR++/pytorch/README.md) | PyTorch | REDS -[basicVSR](cv/super_resolution/basicVSR/pytorch/README.md) | PyTorch | REDS -[ESRGAN](cv/super_resolution/esrgan/pytorch/README.md) | PyTorch | DIV2K -[LIIF](cv/super_resolution/liif/pytorch/README.md) | PyTorch | DIV2K -[RealBasicVSR](cv/super_resolution/real_basicVSR/pytorch/README.md) | PyTorch | REDS -[TTSR](cv/super_resolution/ttsr/pytorch/README.md) | PyTorch | CUFED -[TTVSR](cv/super_resolution/ttvsr/pytorch/README.md) | PyTorch | REDS +| Model | Framework | Dataset | +|---------------------------------------------------------------------|-----------|---------| +| [basicVSR++](cv/super_resolution/basicVSR++/pytorch/README.md) | PyTorch | REDS | +| [basicVSR](cv/super_resolution/basicVSR/pytorch/README.md) | PyTorch | REDS | +| [ESRGAN](cv/super_resolution/esrgan/pytorch/README.md) | PyTorch | DIV2K | +| [LIIF](cv/super_resolution/liif/pytorch/README.md) | PyTorch | DIV2K | +| [RealBasicVSR](cv/super_resolution/real_basicVSR/pytorch/README.md) | PyTorch | REDS | +| [TTSR](cv/super_resolution/ttsr/pytorch/README.md) | PyTorch | CUFED | +| 
[TTVSR](cv/super_resolution/ttvsr/pytorch/README.md) | PyTorch | REDS | -#### Tracking +#### Multi-Object Tracking -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[ByteTrack](cv/tracking/bytetrack/paddlepaddle/README.md) | PaddlePaddle | MOT17 -[FairMOT](cv/tracking/fairmot/pytorch/README.md) | PyTorch | MOT17 +| Model | Framework | Dataset | +|-----------------------------------------------------------|--------------|-------------| +| [ByteTrack](cv/tracking/bytetrack/paddlepaddle/README.md) | PaddlePaddle | MOT17 | +| [DeepSORT](cv/tracking/deep_sort/pytorch/README.md) | PyTorch | Market-1501 | +| [FairMOT](cv/tracking/fairmot/pytorch/README.md) | PyTorch | MOT17 | #### Traffic Forecast -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[Graph WaveNet](cv/traffic_forecast/graph_wavenet/pytorch/README.md) | PyTorch | METR-LA & PEMS-BAY - -### GNN - -#### Graph Attention - -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[GAT](gnn/graph_attention/gat/paddlepaddle/README.md) | PaddlePaddle | CORA - -#### Node Classification - -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[GraphSAGE](gnn/node_classification/graphsage/paddlepaddle/README.md) | PaddlePaddle | Reddit - -#### Text Classification - -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[GCN](gnn/text_classification/GCN/mindspore/README.md) | MindSpore | CORA & Citeseer -[GCN](gnn/text_classification/GCN/paddlepaddle/README.md) | PaddlePaddle | CORA & PubMed & Citeseer - -### HPC - -#### Molecular Dynamics - -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[Water/se_e2_a](hpc/molecular_dynamics/water_se_e2_a/tensorflow/README.md) | TensorFlow (DeePMD-kit) | data_water - -### Methodology - -#### Kolmogorov-Arnold Networks - -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[KAN](methodology/kolmogorov_arnold_networks/kan/pytorch/README.md) | PyTorch | - +| Model | Framework | Dataset | +|----------------------------------------------------------------------|-----------|--------------------| +| [Graph WaveNet](cv/traffic_forecast/graph_wavenet/pytorch/README.md) | PyTorch | METR-LA & PEMS-BAY | + +### LLM (Large Language Model) + +| Model | Framework | ToolBox | Dataset/Weight | +|-----------------------------------------------------------------------|-----------|--------------------|-----------------------| +| [Aquila2-34B](nlp/llm/aquila2-34b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus | +| [Baichuan2-7B](nlp/llm/baichuan2-7b/Baichuan2/README.md) | PyTorch | DeepSpeed | baichuan2-7b-base | +| [Bloom-7B1](nlp/llm/bloom-7b1/firefly/README.md) | PyTorch | Firefly | school_math_0.25M | +| [ChatGLM-6B](nlp/llm/chatglm-6b/deepspeed/README.md) | PyTorch | DeepSpeed | ADGEN & chatglm-6b | +| [ChatGLM2-6B SFT](nlp/llm/ChatGLM2-6b-sft/README.md) | PyTorch | DeepSpeed | ADGEN & chatglm2-6b | +| [ChatGLM3-6B](nlp/llm/chatglm3-6b/deepspeed/finetune_demo/README.md) | PyTorch | DeepSpeed | ADGEN & chatglm3-6b | +| [DeepSeekMoE 7B](nlp/llm/deepseek_moe_7b/colossalai/README.md) | PyTorch | ColossalAI | deepseek-moe-16b-base | +| [Llama-7B](nlp/llm/llama-7b/colossalai/README.md) | PyTorch | ColossalAI | llama-7b-hf | +| [Llama2-7B](nlp/llm/llama2-7b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus | +| [Llama2-7B RMF](nlp/llm/llama2-7b_reward_sft/deepspeed/README.md) | PyTorch | DeepSpeed | Dahoas/rm-static | +| [Llama2-7B RLHF](nlp/llm/llama2-7b_rlhf/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | llama2-7b&tiny-llama | +| [Llama2-7B SFT](nlp/llm/llama2-7b_sft/megatron-deepspeed/README.md) | PyTorch | 
Megatron-DeepSpeed | GPT Small-117M | +| [Llama2-13B](nlp/llm/llama2-13b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus | +| [Llama2-34B](nlp/llm/llama2-34b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus | +| [Llama3-8B](nlp/llm/llama3_8b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus | +| [Llama3-8B SFT](nlp/llm/llama3_8b/colossalai/README.md) | PyTorch | ColossalAI | school_math_0.25M | +| [Mamba-2](nlp/llm/mamba-2/megatron-lm/README.md) | PyTorch | Megatron-LM | GPT Small-117M | +| [Mixtral 8x7B](nlp/llm/mixtral/megatron-lm/README.md) | PyTorch | Megatron-LM | GPT Small-117M | +| [QWen-7B](nlp/llm/qwen-7b/firefly/README.md) | PyTorch | Firefly | qwen-7b | +| [QWen1.5-7B](nlp/llm/qwen1.5-7b/firefly/README.md) | PyTorch | Firefly | school_math | +| [QWen1.5-14B](nlp/llm/qwen1.5-14b/firefly/README.md) | PyTorch | Firefly | school_math | +| [Qwen2.5-7B SFT](nlp/llm/qwen2.5-7b/LLaMA-Factory/README.md) | PyTorch | LLaMA-Factory | qwen2.5-7b | ### Multimodal -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[BLIP](multimodal/BLIP/pytorch/README.md) | PyTorch | COCO -[CLIP](multimodal/Language-Image_Pre-Training/clip/pytorch/README.md) | PyTorch | CIFAR100 -[ControlNet](multimodal/diffusion/ControlNet/README.md) | PyTorch | Fill50K -[DDPM](multimodal/diffusion/ddpm/README.md) | PyTorch | CIFAR-10 -[LLaVA 1.5](multimodal/llava/pytorch/README.md) | PyTorch | LLaVA-Pretrain -[L-Verse](multimodal/Language-Image_Pre-Training/L-Verse/pytorch/README.md) | PyTorch | ImageNet -[Stable Diffusion 1.4](multimodal/diffusion/stable-diffusion/training/README.md) | PyTorch | pokemon-images -[Stable Diffusion 1.5](multimodal/diffusion/stable-diffusion/sd_1.5/README.md) | PyTorch | pokemon-images -[Stable Diffusion 2.1](multimodal/diffusion/stable-diffusion/sd_2.1/README.md) | PyTorch | pokemon-images -[Stable Diffusion 3](multimodal/diffusion/stable-diffusion/sd_3/README.md) | PyTorch | dog-example -[Stable Diffusion XL](multimodal/diffusion/stable-diffusion/sd_xl/README.md) | PyTorch | pokemon-images - -### NLP +| Model | Framework | Dataset | +|----------------------------------------------------------------------------------|-----------|----------------| +| [BLIP](multimodal/BLIP/pytorch/README.md) | PyTorch | COCO | +| [CLIP](multimodal/Language-Image_Pre-Training/clip/pytorch/README.md) | PyTorch | CIFAR100 | +| [ControlNet](multimodal/diffusion/ControlNet/README.md) | PyTorch | Fill50K | +| [DDPM](multimodal/diffusion/ddpm/README.md) | PyTorch | CIFAR-10 | +| [LLaVA 1.5](multimodal/llava/pytorch/README.md) | PyTorch | LLaVA-Pretrain | +| [L-Verse](multimodal/Language-Image_Pre-Training/L-Verse/pytorch/README.md) | PyTorch | ImageNet | +| [Stable Diffusion 1.4](multimodal/diffusion/stable-diffusion/training/README.md) | PyTorch | pokemon-images | +| [Stable Diffusion 1.5](multimodal/diffusion/stable-diffusion/sd_1.5/README.md) | PyTorch | pokemon-images | +| [Stable Diffusion 2.1](multimodal/diffusion/stable-diffusion/sd_2.1/README.md) | PyTorch | pokemon-images | +| [Stable Diffusion 3](multimodal/diffusion/stable-diffusion/sd_3/README.md) | PyTorch | dog-example | +| [Stable Diffusion XL](multimodal/diffusion/stable-diffusion/sd_xl/README.md) | PyTorch | pokemon-images | + +### NLP (Natural Language Processing) #### Cloze Test -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[GLM](nlp/cloze_test/glm/pytorch/GLMForMultiTokenCloze/README.md) | PyTorch | GLMForMultiTokenCloze +| Model | Framework | Dataset | 
+|-------------------------------------------------------------------|-----------|-----------------------| +| [GLM](nlp/cloze_test/glm/pytorch/GLMForMultiTokenCloze/README.md) | PyTorch | GLMForMultiTokenCloze | #### Dialogue Generation -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[CPM](nlp/dialogue_generation/cpm/pytorch/README.md) | PyTorch | STC +| Model | Framework | Dataset | +|------------------------------------------------------|-----------|---------| +| [CPM](nlp/dialogue_generation/cpm/pytorch/README.md) | PyTorch | STC | #### Language Modeling -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[BART](nlp/language_model/bart_fairseq/pytorch/README.md) | PyTorch (Fairseq) | RTE -[BERT NER](nlp/ner/bert/pytorch/README.md) | PyTorch | CoNLL-2003 -[BERT Pretraining](nlp/language_model/bert/pytorch/README.md) | PyTorch | MLCommon Wikipedia (2048_shards_uncompressed) -[BERT Pretraining](nlp/language_model/bert/paddlepaddle/README.md) | PaddlePaddle | MNLI -[BERT Pretraining](nlp/language_model/bert/tensorflow/base/README.md) | TensorFlow | MNLI -[BERT Pretraining](nlp/language_model/bert/mindspore/README.md) | MindSpore | SQuAD -[BERT Text Classification](nlp/text_classification/bert/pytorch/README.md) |PyTorch | GLUE -[BERT Text Summerization](nlp/text_summarisation/bert/pytorch/README.md) | PyTorch | cnn_dailymail -[BERT Question Answering](nlp/question_answering/bert/pytorch/README.md) | PyTorch | SQuAD -[GPT2-Medium-EN](nlp/llm/gpt2-medium-en/paddlepaddle/README.md) | PaddlePaddle | SST-2 -[RoBERTa](nlp/language_model/roberta_fairseq/pytorch/README.md) | PyTorch (Fairseq) | RTE -[XLNet](nlp/language_model/xlnet/paddlepaddle/README.md) | PaddlePaddle | SST-2 - -#### Large Language Model (LLM) - -模型名称 | 框架 | 工具箱 | 数据集/权重 -------- | --- | ----- | ----- -[Aquila2-34B](nlp/llm/aquila2-34b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus -[Baichuan2-7B](nlp/llm/baichuan2-7b/Baichuan2/README.md) | PyTorch | DeepSpeed | baichuan2-7b-base -[Bloom-7B1](nlp/llm/bloom-7b1/firefly/README.md) | PyTorch | Firefly | school_math_0.25M & bloom-7b1 -[ChatGLM-6B](nlp/llm/chatglm-6b/deepspeed/README.md) | PyTorch | DeepSpeed | ADGEN & chatglm-6b -[ChatGLM2-6B SFT](nlp/llm/ChatGLM2-6b-sft/README.md) | PyTorch | DeepSpeed | ADGEN & chatglm2-6b -[ChatGLM3-6B](nlp/llm/chatglm3-6b/deepspeed/finetune_demo/README.md) | PyTorch | DeepSpeed | ADGEN & chatglm3-6b -[DeepSeekMoE 7B](nlp/llm/deepseek_moe_7b/colossalai/README.md) | PyTorch | ColossalAI | deepseek-moe-16b-base -[Llama-7B](nlp/llm/llama-7b/colossalai/README.md) | PyTorch | Colossal-AI | llama-7b-hf -[Llama2-7B](nlp/llm/llama2-7b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus -[Llama2-7B Reward Model Finetuning](nlp/llm/llama2-7b_reward_sft/deepspeed/README.md) | PyTorch | DeepSpeed | Dahoas/rm-static -[Llama2-7B RLHF](nlp/llm/llama2-7b_rlhf/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | llama2-7b&tiny-llama -[Llama2-7B SFT](nlp/llm/llama2-7b_sft/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | GPT Small-117M -[Llama2-13B](nlp/llm/llama2-13b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus -[Llama2-34B](nlp/llm/llama2-34b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus -[Llama3-8B](nlp/llm/llama3_8b/megatron-deepspeed/README.md) | PyTorch | Megatron-DeepSpeed | Bookcorpus -[Llama3-8B SFT](nlp/llm/llama3_8b/colossalai/README.md) | PyTorch | ColossalAI | school_math_0.25M -[Mamba-2](nlp/llm/mamba-2/megatron-lm/README.md) | PyTorch 
| Megatron-LM | GPT Small-117M -[Mixtral 8x7B](nlp/llm/mixtral/megatron-lm/README.md) | PyTorch | Megatron-LM | GPT Small-117M -[QWen-7B](nlp/llm/qwen-7b/firefly/README.md) | PyTorch | Firefly | qwen-7b -[QWen1.5-7B](nlp/llm/qwen1.5-7b/firefly/README.md) | PyTorch | Firefly | school_math -[QWen1.5-14B](nlp/llm/qwen1.5-14b/firefly/README.md) | PyTorch | Firefly | school_math -[Qwen2.5-7B SFT](nlp/llm/qwen2.5-7b/LLaMA-Factory/README.md) | PyTorch | LLaMA-Factory | qwen2.5-7b +| Model | Framework | Dataset | +|----------------------------------------------------------------------------|-------------------|--------------------| +| [BART](nlp/language_model/bart_fairseq/pytorch/README.md) | PyTorch (Fairseq) | RTE | +| [BERT NER](nlp/ner/bert/pytorch/README.md) | PyTorch | CoNLL-2003 | +| [BERT Pretraining](nlp/language_model/bert/pytorch/README.md) | PyTorch | MLCommon Wikipedia | +| [BERT Pretraining](nlp/language_model/bert/paddlepaddle/README.md) | PaddlePaddle | MNLI | +| [BERT Pretraining](nlp/language_model/bert/tensorflow/base/README.md) | TensorFlow | MNLI | +| [BERT Pretraining](nlp/language_model/bert/mindspore/README.md) | MindSpore | SQuAD | +| [BERT Text Classification](nlp/text_classification/bert/pytorch/README.md) | PyTorch | GLUE | +| [BERT Text Summerization](nlp/text_summarisation/bert/pytorch/README.md) | PyTorch | cnn_dailymail | +| [BERT Question Answering](nlp/question_answering/bert/pytorch/README.md) | PyTorch | SQuAD | +| [GPT2-Medium-EN](nlp/llm/gpt2-medium-en/paddlepaddle/README.md) | PaddlePaddle | SST-2 | +| [RoBERTa](nlp/language_model/roberta_fairseq/pytorch/README.md) | PyTorch (Fairseq) | RTE | +| [XLNet](nlp/language_model/xlnet/paddlepaddle/README.md) | PaddlePaddle | SST-2 | #### Text Correction -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[Ernie](nlp/text_correction/ernie/paddlepaddle/README.md) | PaddlePaddle | corpus +| Model | Framework | Dataset| +|-----------------------------------------------------------|--------------|--------| +| [Ernie](nlp/text_correction/ernie/paddlepaddle/README.md) | PaddlePaddle | corpus | #### Translation -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[Convolutional](nlp/translation/convolutional_fairseq/pytorch/README.md) | PyTorch (Fairseq) | WMT14 -[T5](nlp/translation/t5/pytorch/README.md) | PyTorch | wmt14-en-de-pre-processed -[Transformer](nlp/translation/transformer/paddlepaddle/README.md) | PaddlePaddle | wmt14-en-de-pre-processed -[Transformer](nlp/translation/transformer_fairseq/pytorch/README.md) | PyTorch (Fairseq) | IWSLT14 - -### Recommendation - -#### Collaborative Filtering - -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[NCF](recommendation/collaborative_filtering/ncf/pytorch/README.md) | PyTorch | movielens - -#### Click Through Rate - -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[DLRM](recommendation/ctr/dlrm/pytorch/README.md) | PyTorch | Criteo_Terabyte -[DLRM](recommendation/ctr/dlrm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte -[FFM](recommendation/ctr/ffm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte -[DeepFM](recommendation/ctr/deepfm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte -[Wide&Deep](recommendation/ctr/wide_deep/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte -[xDeepFM](recommendation/ctr/xdeepfm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte +| Model | Framework | Dataset | +|--------------------------------------------------------------------------|-------------------|---------| +| 
[Convolutional](nlp/translation/convolutional_fairseq/pytorch/README.md) | PyTorch (Fairseq) | WMT14 | +| [T5](nlp/translation/t5/pytorch/README.md) | PyTorch | WMT14 | +| [Transformer](nlp/translation/transformer/paddlepaddle/README.md) | PaddlePaddle | WMT14 | +| [Transformer](nlp/translation/transformer_fairseq/pytorch/README.md) | PyTorch (Fairseq) | IWSLT14 | ### Reinforcement Learning -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[DQN](reinforcement_learning/q-learning-networks/dqn/paddlepaddle/README.md) | PaddlePaddle | CartPole-v0 +| Model | Framework | Dataset | +|------------------------------------------------------------------------------|--------------|-------------| +| [DQN](reinforcement_learning/q-learning-networks/dqn/paddlepaddle/README.md) | PaddlePaddle | CartPole-v0 | -### Speech +### Audio #### Speech Recognition -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[Conformer](speech/speech_recognition/conformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL -[Efficient Conformer v2](speech/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL -[PP-ASR-Conformer](speech/speech_recognition/conformer/paddlepaddle/README.md) | PaddlePaddle | AISHELL -[RNN-T](speech/speech_recognition/rnnt/pytorch/README.md) | PyTorch | LJSpeech -[Transformer](speech/speech_recognition/transformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL -[U2++ Conformer](speech/speech_recognition/u2++_conformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL -[Unified Conformer](speech/speech_recognition/unified_conformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL +| Model | Framework | Dataset | +|----------------------------------------------------------------------------------------------------|-----------------|----------| +| [Conformer](audio/speech_recognition/conformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL | +| [Efficient Conformer v2](audio/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL | +| [PP-ASR-Conformer](audio/speech_recognition/conformer/paddlepaddle/README.md) | PaddlePaddle | AISHELL | +| [RNN-T](audio/speech_recognition/rnnt/pytorch/README.md) | PyTorch | LJSpeech | +| [Transformer](audio/speech_recognition/transformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL | +| [U2++ Conformer](audio/speech_recognition/u2++_conformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL | +| [Unified Conformer](audio/speech_recognition/unified_conformer_wenet/pytorch/README.md) | PyTorch (WeNet) | AISHELL | #### Speech Synthesis -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[PP-TTS-FastSpeech2](speech/speech_synthesis/fastspeech2/paddlepaddle/README.md) | PaddlePaddle | CSMSC -[PP-TTS-HiFiGAN](speech/speech_synthesis/hifigan/paddlepaddle/README.md) | PaddlePaddle | CSMSC -[Tacotron2](speech/speech_synthesis/tacotron2/pytorch/README.md) | PyTorch | LJSpeech -[VQMIVC](speech/speech_synthesis/vqmivc/pytorch/README.md) | PyTorch | VCTK-Corpus -[WaveGlow](speech/speech_synthesis/waveglow/pytorch/README.md) | PyTorch | LJSpeech +| Model | Framework | Dataset | +|----------------------------------------------------------------------------------|--------------|-------------| +| [PP-TTS-FastSpeech2](audio/speech_synthesis/fastspeech2/paddlepaddle/README.md) | PaddlePaddle | CSMSC | +| [PP-TTS-HiFiGAN](audio/speech_synthesis/hifigan/paddlepaddle/README.md) | PaddlePaddle | CSMSC | +| [Tacotron2](audio/speech_synthesis/tacotron2/pytorch/README.md) | PyTorch | LJSpeech | +| 
[VQMIVC](audio/speech_synthesis/vqmivc/pytorch/README.md) | PyTorch | VCTK-Corpus | +| [WaveGlow](audio/speech_synthesis/waveglow/pytorch/README.md) | PyTorch | LJSpeech | + +### Others + +#### Kolmogorov-Arnold Networks + +| Model | Framework | Dataset | +|-----------------------------------------------------------------------|-----------|---------| +| [KAN](others/kolmogorov_arnold_networks/kan/pytorch/README.md) | PyTorch | - | -### 3D Reconstruction +#### Recommendation Systems -模型名称 | 框架 | 数据集 --------- | ------ | ---- -[HashNeRF](3d-reconstruction/hashnerf/pytorch/README.md) | PyTorch | fox +| Model | Framework | Dataset | +|------------------------------------------------------------------------------------|--------------|-----------------| +| [DeepFM](others/recommendation_systems/deepfm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte | +| [DLRM](others/recommendation_systems/dlrm/pytorch/README.md) | PyTorch | Criteo_Terabyte | +| [DLRM](others/recommendation_systems/dlrm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte | +| [FFM](others/recommendation_systems/ffm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte | +| [NCF](others/recommendation_systems/ncf/pytorch/README.md) | PyTorch | movielens | +| [Wide&Deep](others/recommendation_systems/wide_deep/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte | +| [xDeepFM](others/recommendation_systems/xdeepfm/paddlepaddle/README.md) | PaddlePaddle | Criteo_Terabyte | -------- @@ -497,7 +471,8 @@ DeepSparkHub甄选上百个应用算法和模型,覆盖AI和通用计算各领 ### 治理 -请参见 DeepSpark Code of Conduct on [Gitee](https://gitee.com/deep-spark/deepspark/blob/master/CODE_OF_CONDUCT.md) or on [GitHub](https://github.com/Deep-Spark/deepspark/blob/main/CODE_OF_CONDUCT.md)。 +请参见 DeepSpark Code of Conduct on [Gitee](https://gitee.com/deep-spark/deepspark/blob/master/CODE_OF_CONDUCT.md) or on +[GitHub](https://github.com/Deep-Spark/deepspark/blob/main/CODE_OF_CONDUCT.md)。 ### 交流 @@ -509,11 +484,13 @@ DeepSparkHub甄选上百个应用算法和模型,覆盖AI和通用计算各领 ### 免责声明 -DeepSparkHub仅提供公共数据集的下载和预处理脚本。这些数据集不属于DeepSparkHub,DeepSparkHub也不对其质量或维护负责。请确保您具有这些数据集的使用许可,基于这些数据集训练的模型仅可用于非商业研究和教育。 +DeepSparkHub仅提供公共数据集的下载和预处理脚本。这些数据集不属于DeepSparkHub,DeepSparkHub也不对其质量或维护负责。请确保 +您具有这些数据集的使用许可,基于这些数据集训练的模型仅可用于非商业研究和教育。 致数据集所有者: -如果不希望您的数据集公布在DeepSparkHub上或希望更新DeepSparkHub中属于您的数据集,请在Gitee或Github上提交issue,我们将按您的issue删除或更新。衷心感谢您对我们社区的支持和贡献。 +如果不希望您的数据集公布在DeepSparkHub上或希望更新DeepSparkHub中属于您的数据集,请在Gitee或Github上提交issue,我们将按您 +的issue删除或更新。衷心感谢您对我们社区的支持和贡献。 ## 许可证 diff --git a/speech/speech_recognition/README.md b/audio/speech_recognition/README.md similarity index 100% rename from speech/speech_recognition/README.md rename to audio/speech_recognition/README.md diff --git a/audio/speech_recognition/conformer/paddlepaddle/README.md b/audio/speech_recognition/conformer/paddlepaddle/README.md new file mode 100644 index 0000000000000000000000000000000000000000..6e06183736191dc133a17296038ddabe978dfd44 --- /dev/null +++ b/audio/speech_recognition/conformer/paddlepaddle/README.md @@ -0,0 +1,44 @@ +# Conformer + +## Model description + +Recently Transformer and Convolution neural network (CNN) based models have shown promising results in Automatic Speech +Recognition (ASR), outperforming Recurrent neural networks (RNNs). Transformer models are good at capturing +content-based global interactions, while CNNs exploit local features effectively. 
In this work, we achieve the best of +both worlds by studying how to combine convolution neural networks and transformers to model both local and global +dependencies of an audio sequence in a parameter-efficient way. To this regard, we propose the convolution-augmented +transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and +CNN based models achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves WER +of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/testother. We also +observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters. + +## Step 1:Installation + +```sh +git clone --recursive -b r1.4 https://github.com/PaddlePaddle/PaddleSpeech.git +cd PaddleSpeech +pip3 install . +``` + +## Step 2:Preparing datasets + +```sh +cd examples/aishell/asr1/ +bash run.sh --stage 0 --stop_stage 0 +``` + +"run.sh" will download and process the datasets, The download process may be slow, you can download the data_aishell.tgz +from [wenet](http://openslr.magicdatatech.com/resources/33/data_aishell.tgz) and put it in the +/path/to/PaddleSpeech/dataset/aishell/, then return to execute the above command. + +## Step 3:Training + +```sh +bash run.sh --stage 1 --stop_stage 3 +``` + +## Results + +| GPUs | IPS | CER | +|-------------|------|-----------------------| +| BI-V100 x 4 | 48.5 | 0.0495(checkpoint 81) | diff --git a/speech/speech_recognition/conformer_wenet/pytorch/README.md b/audio/speech_recognition/conformer_wenet/pytorch/README.md similarity index 81% rename from speech/speech_recognition/conformer_wenet/pytorch/README.md rename to audio/speech_recognition/conformer_wenet/pytorch/README.md index 08c4bba7ef683b624ba40939bce63762b910f26b..b05608f49439f4be4cd35a346440ceb104c14ae7 100755 --- a/speech/speech_recognition/conformer_wenet/pytorch/README.md +++ b/audio/speech_recognition/conformer_wenet/pytorch/README.md @@ -1,15 +1,15 @@ # Conformer ## Model description -The Conformer neural network is a hybrid model designed for tasks like speech recognition, -merging the strengths of convolutional neural networks (CNNs) and transformers. It employs -CNNs for local feature extraction and transformers to capture long-range dependencies in -data. This combination allows the Conformer to efficiently handle both local patterns and -global relationships, making it particularly effective for audio and speech tasks. + +The Conformer neural network is a hybrid model designed for tasks like speech recognition, merging the strengths of +convolutional neural networks (CNNs) and transformers. It employs CNNs for local feature extraction and transformers to +capture long-range dependencies in data. This combination allows the Conformer to efficiently handle both local patterns +and global relationships, making it particularly effective for audio and speech tasks. ## Step 1: Installation -```bash +```sh cd ../../../../toolbox/WeNet/ bash install_toolbox_wenet.sh ``` @@ -21,7 +21,7 @@ You could just run the whole script, which will download the dataset automatical **You need to modify the path of dataset in run.sh.** -```bash +```sh # Change to the scripts path cd wenet/examples/aishell/s0/ @@ -35,7 +35,7 @@ bash run.sh --stage -1 --stop-stage 6 Or you also run each stage one by one manually and check the result to understand the whole process. 
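+
+As a concrete follow-up to the note above about modifying the dataset path, here is a minimal sketch of pointing the
+recipe at a local AISHELL directory. It assumes the `data` variable used by the upstream WeNet aishell recipe; check
+the actual variable name in your copy of run.sh:
+
+```sh
+# Set the dataset root used by run.sh, i.e. the directory that holds (or will receive) data_aishell/
+sed -i 's#^data=.*#data=/path/to/aishell#' run.sh
+# equivalently, open run.sh and set: data=/path/to/aishell
+```
+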
-```bash +```sh # Download data bash run.sh --stage -1 --stop-stage -1 # Prepare Training data @@ -66,4 +66,5 @@ bash run.sh --stage 6 --stop-stage 6 | 3.72 | SDK V2.2,bs:32,8x,fp32 | 380 | 4.79@cer | 113\*8 | 0.82 | 21.5\*8 | 1 | ## Reference -https://github.com/wenet-e2e/wenet + + diff --git a/speech/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md b/audio/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md similarity index 83% rename from speech/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md rename to audio/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md index f2feed5903a558de0c2b15de41a87f41c0d7dda1..d35cb89b48a31d4eed06ed9859d236c750061c3e 100644 --- a/speech/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md +++ b/audio/speech_recognition/efficient_conformer_v2_wenet/pytorch/README.md @@ -1,14 +1,15 @@ # Efficient Conformer V2 ## Model description -EfficientFormerV2 mimics MobileNet with its convolutional structure, -offering transformers a series of designs and optimizations for mobile acceleration. + +EfficientFormerV2 mimics MobileNet with its convolutional structure, +offering transformers a series of designs and optimizations for mobile acceleration. The number of parameters and latency of the model are critical for resource-constrained hardware, so EfficientFormerV2 combines a fine-grained joint search strategy to propose an efficient network with low latency and size. ## Step 1: Installation -```bash +```sh cd ../../../../toolbox/WeNet/ git clone https://github.com/wenet-e2e/wenet.git cd wenet/ @@ -24,7 +25,7 @@ You could just run the whole script, which will download the dataset automatical **You need to modify the path of the dataset in run.sh.** -```bash +```sh # Change to the scripts path cd wenet/examples/aishell/s0/ @@ -46,7 +47,7 @@ bash run.sh --stage -1 --stop-stage 6 Or you also run each stage one by one manually and check the result to understand the whole process. 
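+
+The GPU list used for training is normally hard-coded near the top of run.sh (via `CUDA_VISIBLE_DEVICES` or a `gpus`
+variable, depending on the WeNet version), so it should be adjusted to your hardware before launching. A sketch:
+
+```sh
+# In wenet/examples/aishell/s0/run.sh, set the visible GPUs to match your machine, e.g.:
+#   export CUDA_VISIBLE_DEVICES="0,1,2,3"
+# then launch the full pipeline (or individual stages) as usual
+bash run.sh --stage -1 --stop-stage 6
+```
+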
-```bash +```sh # Download data bash run.sh --stage -1 --stop-stage -1 # Prepare Training data @@ -66,10 +67,11 @@ bash run.sh --stage 6 --stop-stage 6 ``` ## Results -| GPUs | QPS |WER(ctc_greedy_search) |WER(ctc_prefix_beam_search) | WER(attention) | WER(attention_rescoring) | -|------|-------|-----|-----|-----|-----|-----| -| BI-V100 x8 | 234 | 5.00% | 4.99% |4.89% | 4.58% | +| GPUs | QPS | WER(ctc_greedy_search) | WER(ctc_prefix_beam_search) | WER(attention) | WER(attention_rescoring) | +|------------|-----|------------------------|-----------------------------|----------------|--------------------------| +| BI-V100 x8 | 234 | 5.00% | 4.99% | 4.89% | 4.58% | ## Reference + - [WeNet](https://github.com/wenet-e2e/wenet) diff --git a/speech/speech_recognition/rnnt/pytorch/.gitignore b/audio/speech_recognition/rnnt/pytorch/.gitignore similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/.gitignore rename to audio/speech_recognition/rnnt/pytorch/.gitignore diff --git a/speech/speech_recognition/rnnt/pytorch/LICENSE b/audio/speech_recognition/rnnt/pytorch/LICENSE similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/LICENSE rename to audio/speech_recognition/rnnt/pytorch/LICENSE diff --git a/speech/speech_recognition/rnnt/pytorch/NOTICE b/audio/speech_recognition/rnnt/pytorch/NOTICE similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/NOTICE rename to audio/speech_recognition/rnnt/pytorch/NOTICE diff --git a/speech/speech_recognition/rnnt/pytorch/README.md b/audio/speech_recognition/rnnt/pytorch/README.md similarity index 41% rename from speech/speech_recognition/rnnt/pytorch/README.md rename to audio/speech_recognition/rnnt/pytorch/README.md index 3d3571ae084a7e415ad42b3b5606cb56151f9308..66d044631db3b41fcb087dc1533c6a0346e8b768 100644 --- a/speech/speech_recognition/rnnt/pytorch/README.md +++ b/audio/speech_recognition/rnnt/pytorch/README.md @@ -2,33 +2,51 @@ ## Model description -Many machine learning tasks can be expressed as the transformation---or \emph{transduction}---of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation since \emph{finding} the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that is in principle able to transform any input sequence into any finite, discrete output sequence. Experimental results for phoneme recognition are provided on the TIMIT speech corpus. +Many machine learning tasks can be expressed as the transformation---or \emph{transduction}---of input sequences into +output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to +name but a few. 
One of the key challenges in sequence transduction is learning to represent both the input and output +sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent +neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such +representations. However RNNs traditionally require a pre-defined alignment between the input and output sequences to +perform transduction. This is a severe limitation since \emph{finding} the alignment is the most difficult aspect of +many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. +This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that is in +principle able to transform any input sequence into any finite, discrete output sequence. Experimental results for +phoneme recognition are provided on the TIMIT speech corpus. ## Step 1: Installing packages + Install required libraries: -``` + +```sh bash install.sh ``` ## Step 2: Preparing datasets + download LibriSpeech [http://www.openslr.org/12](http://www.openslr.org/12) -``` + +```sh bash scripts/download_librispeech.sh ${DATA_ROOT_DIR} ``` + preprocess LibriSpeech -``` + +```sh bash scripts/preprocess_librispeech.sh ${DATA_ROOT_DIR} ``` ## Step 3: Training + ### Setup config yaml -```shell + +```sh sed -i "s#MODIFY_DATASET_DIR#${DATA_ROOT_DIR}/LibriSpeech#g" configs/baseline_v3-1023sp.yaml ``` ### Multiple GPUs on one machine -``` +```sh mkdir -p output/ bash scripts/train_rnnt_1x8.sh output/ ${DATA_ROOT_DIR}/LibriSpeech ``` @@ -36,18 +54,17 @@ bash scripts/train_rnnt_1x8.sh output/ ${DATA_ROOT_DIR}/LibriSpeech Following conditions were tested, you can run any of them below: | | FP32 | -| ----------- | ------------------- | -| single card | `train_rnnt_1x1.sh` | +|-------------|---------------------| +| single card | `train_rnnt_1x1.sh` | | 4 cards | `train_rnnt_1x4.sh` | | 8 cards | `train_rnnt_1x8.sh` | - ## Results on BI-V100 -| GPUs | FP16 | FPS | WER | -|------|-------|-----| ---------- | -| 1x8 | False | 20 | 0.058 | - +| GPUs | FP16 | FPS | WER | +|------|-------|-----|-------| +| 1x8 | False | 20 | 0.058 | ## Reference -https://github.com/mlcommons/training/tree/master/rnn_speech_recognition/pytorch + + diff --git a/cv/tracking/deep_sort/pytorch/__init__.py b/audio/speech_recognition/rnnt/pytorch/common/__init__.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/__init__.py rename to audio/speech_recognition/rnnt/pytorch/common/__init__.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/audio.py b/audio/speech_recognition/rnnt/pytorch/common/audio.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/audio.py rename to audio/speech_recognition/rnnt/pytorch/common/audio.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/__init__.py b/audio/speech_recognition/rnnt/pytorch/common/data/__init__.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/__init__.py rename to audio/speech_recognition/rnnt/pytorch/common/data/__init__.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/dali/__init__.py b/audio/speech_recognition/rnnt/pytorch/common/data/dali/__init__.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/dali/__init__.py rename to audio/speech_recognition/rnnt/pytorch/common/data/dali/__init__.py diff --git 
a/speech/speech_recognition/rnnt/pytorch/common/data/dali/data_loader.py b/audio/speech_recognition/rnnt/pytorch/common/data/dali/data_loader.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/dali/data_loader.py rename to audio/speech_recognition/rnnt/pytorch/common/data/dali/data_loader.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/dali/iterator.py b/audio/speech_recognition/rnnt/pytorch/common/data/dali/iterator.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/dali/iterator.py rename to audio/speech_recognition/rnnt/pytorch/common/data/dali/iterator.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/dali/pipeline.py b/audio/speech_recognition/rnnt/pytorch/common/data/dali/pipeline.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/dali/pipeline.py rename to audio/speech_recognition/rnnt/pytorch/common/data/dali/pipeline.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/dali/sampler.py b/audio/speech_recognition/rnnt/pytorch/common/data/dali/sampler.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/dali/sampler.py rename to audio/speech_recognition/rnnt/pytorch/common/data/dali/sampler.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/dataset.py b/audio/speech_recognition/rnnt/pytorch/common/data/dataset.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/dataset.py rename to audio/speech_recognition/rnnt/pytorch/common/data/dataset.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/features.py b/audio/speech_recognition/rnnt/pytorch/common/data/features.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/features.py rename to audio/speech_recognition/rnnt/pytorch/common/data/features.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/helpers.py b/audio/speech_recognition/rnnt/pytorch/common/data/helpers.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/helpers.py rename to audio/speech_recognition/rnnt/pytorch/common/data/helpers.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/data/text.py b/audio/speech_recognition/rnnt/pytorch/common/data/text.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/data/text.py rename to audio/speech_recognition/rnnt/pytorch/common/data/text.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/helpers.py b/audio/speech_recognition/rnnt/pytorch/common/helpers.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/helpers.py rename to audio/speech_recognition/rnnt/pytorch/common/helpers.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/metrics.py b/audio/speech_recognition/rnnt/pytorch/common/metrics.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/metrics.py rename to audio/speech_recognition/rnnt/pytorch/common/metrics.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/optimizers.py b/audio/speech_recognition/rnnt/pytorch/common/optimizers.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/optimizers.py rename to audio/speech_recognition/rnnt/pytorch/common/optimizers.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/rnn.py b/audio/speech_recognition/rnnt/pytorch/common/rnn.py similarity index 100% rename from 
speech/speech_recognition/rnnt/pytorch/common/rnn.py rename to audio/speech_recognition/rnnt/pytorch/common/rnn.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/sampler.py b/audio/speech_recognition/rnnt/pytorch/common/sampler.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/sampler.py rename to audio/speech_recognition/rnnt/pytorch/common/sampler.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/tb_dllogger.py b/audio/speech_recognition/rnnt/pytorch/common/tb_dllogger.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/tb_dllogger.py rename to audio/speech_recognition/rnnt/pytorch/common/tb_dllogger.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/text/LICENSE b/audio/speech_recognition/rnnt/pytorch/common/text/LICENSE similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/text/LICENSE rename to audio/speech_recognition/rnnt/pytorch/common/text/LICENSE diff --git a/speech/speech_recognition/rnnt/pytorch/common/text/__init__.py b/audio/speech_recognition/rnnt/pytorch/common/text/__init__.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/text/__init__.py rename to audio/speech_recognition/rnnt/pytorch/common/text/__init__.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/text/cleaners.py b/audio/speech_recognition/rnnt/pytorch/common/text/cleaners.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/text/cleaners.py rename to audio/speech_recognition/rnnt/pytorch/common/text/cleaners.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/text/numbers.py b/audio/speech_recognition/rnnt/pytorch/common/text/numbers.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/text/numbers.py rename to audio/speech_recognition/rnnt/pytorch/common/text/numbers.py diff --git a/speech/speech_recognition/rnnt/pytorch/common/text/symbols.py b/audio/speech_recognition/rnnt/pytorch/common/text/symbols.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/text/symbols.py rename to audio/speech_recognition/rnnt/pytorch/common/text/symbols.py diff --git a/speech/speech_recognition/rnnt/pytorch/configs/baseline_v3-1023sp.yaml b/audio/speech_recognition/rnnt/pytorch/configs/baseline_v3-1023sp.yaml similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/configs/baseline_v3-1023sp.yaml rename to audio/speech_recognition/rnnt/pytorch/configs/baseline_v3-1023sp.yaml diff --git a/speech/speech_recognition/rnnt/pytorch/eval_model.py b/audio/speech_recognition/rnnt/pytorch/eval_model.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/eval_model.py rename to audio/speech_recognition/rnnt/pytorch/eval_model.py diff --git a/speech/speech_recognition/rnnt/pytorch/inference.py b/audio/speech_recognition/rnnt/pytorch/inference.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/inference.py rename to audio/speech_recognition/rnnt/pytorch/inference.py diff --git a/speech/speech_recognition/rnnt/pytorch/install.sh b/audio/speech_recognition/rnnt/pytorch/install.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/install.sh rename to audio/speech_recognition/rnnt/pytorch/install.sh diff --git a/gnn/text_classification/GCN/mindspore/model_utils/__init__.py b/audio/speech_recognition/rnnt/pytorch/mlperf/__init__.py old mode 100755 new mode 100644 similarity index 100% rename from 
gnn/text_classification/GCN/mindspore/model_utils/__init__.py rename to audio/speech_recognition/rnnt/pytorch/mlperf/__init__.py diff --git a/speech/speech_recognition/rnnt/pytorch/mlperf/logging.py b/audio/speech_recognition/rnnt/pytorch/mlperf/logging.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/mlperf/logging.py rename to audio/speech_recognition/rnnt/pytorch/mlperf/logging.py diff --git a/speech/speech_recognition/rnnt/pytorch/requirements.txt b/audio/speech_recognition/rnnt/pytorch/requirements.txt similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/requirements.txt rename to audio/speech_recognition/rnnt/pytorch/requirements.txt diff --git a/speech/speech_recognition/rnnt/pytorch/rnnt/config.py b/audio/speech_recognition/rnnt/pytorch/rnnt/config.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/rnnt/config.py rename to audio/speech_recognition/rnnt/pytorch/rnnt/config.py diff --git a/speech/speech_recognition/rnnt/pytorch/rnnt/decoder.py b/audio/speech_recognition/rnnt/pytorch/rnnt/decoder.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/rnnt/decoder.py rename to audio/speech_recognition/rnnt/pytorch/rnnt/decoder.py diff --git a/speech/speech_recognition/rnnt/pytorch/rnnt/loss.py b/audio/speech_recognition/rnnt/pytorch/rnnt/loss.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/rnnt/loss.py rename to audio/speech_recognition/rnnt/pytorch/rnnt/loss.py diff --git a/speech/speech_recognition/rnnt/pytorch/rnnt/model.py b/audio/speech_recognition/rnnt/pytorch/rnnt/model.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/rnnt/model.py rename to audio/speech_recognition/rnnt/pytorch/rnnt/model.py diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/create_sentencepieces.sh b/audio/speech_recognition/rnnt/pytorch/scripts/create_sentencepieces.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/create_sentencepieces.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/create_sentencepieces.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/docker/Dockerfile b/audio/speech_recognition/rnnt/pytorch/scripts/docker/Dockerfile similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/docker/Dockerfile rename to audio/speech_recognition/rnnt/pytorch/scripts/docker/Dockerfile diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/docker/build.sh b/audio/speech_recognition/rnnt/pytorch/scripts/docker/build.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/docker/build.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/docker/build.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/docker/launch.sh b/audio/speech_recognition/rnnt/pytorch/scripts/docker/launch.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/docker/launch.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/docker/launch.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/download_librispeech.sh b/audio/speech_recognition/rnnt/pytorch/scripts/download_librispeech.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/download_librispeech.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/download_librispeech.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/inference.sh b/audio/speech_recognition/rnnt/pytorch/scripts/inference.sh similarity index 
100% rename from speech/speech_recognition/rnnt/pytorch/scripts/inference.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/inference.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/preprocess_librispeech.sh b/audio/speech_recognition/rnnt/pytorch/scripts/preprocess_librispeech.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/preprocess_librispeech.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/preprocess_librispeech.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x1.sh b/audio/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x1.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x1.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x1.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x4.sh b/audio/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x4.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x4.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x4.sh diff --git a/speech/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x8.sh b/audio/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x8.sh similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x8.sh rename to audio/speech_recognition/rnnt/pytorch/scripts/train_rnnt_1x8.sh diff --git a/speech/speech_recognition/rnnt/pytorch/train.py b/audio/speech_recognition/rnnt/pytorch/train.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/train.py rename to audio/speech_recognition/rnnt/pytorch/train.py diff --git a/gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/cora/__init__.py b/audio/speech_recognition/rnnt/pytorch/utils/__init__.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/cora/__init__.py rename to audio/speech_recognition/rnnt/pytorch/utils/__init__.py diff --git a/speech/speech_recognition/rnnt/pytorch/utils/convert_librispeech.py b/audio/speech_recognition/rnnt/pytorch/utils/convert_librispeech.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/utils/convert_librispeech.py rename to audio/speech_recognition/rnnt/pytorch/utils/convert_librispeech.py diff --git a/speech/speech_recognition/rnnt/pytorch/utils/download_librispeech.py b/audio/speech_recognition/rnnt/pytorch/utils/download_librispeech.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/utils/download_librispeech.py rename to audio/speech_recognition/rnnt/pytorch/utils/download_librispeech.py diff --git a/speech/speech_recognition/rnnt/pytorch/utils/download_utils.py b/audio/speech_recognition/rnnt/pytorch/utils/download_utils.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/utils/download_utils.py rename to audio/speech_recognition/rnnt/pytorch/utils/download_utils.py diff --git a/speech/speech_recognition/rnnt/pytorch/utils/inference_librispeech.csv b/audio/speech_recognition/rnnt/pytorch/utils/inference_librispeech.csv similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/utils/inference_librispeech.csv rename to audio/speech_recognition/rnnt/pytorch/utils/inference_librispeech.csv diff --git a/speech/speech_recognition/rnnt/pytorch/utils/librispeech.csv b/audio/speech_recognition/rnnt/pytorch/utils/librispeech.csv similarity index 100% rename from 
speech/speech_recognition/rnnt/pytorch/utils/librispeech.csv rename to audio/speech_recognition/rnnt/pytorch/utils/librispeech.csv diff --git a/speech/speech_recognition/rnnt/pytorch/utils/preprocessing_utils.py b/audio/speech_recognition/rnnt/pytorch/utils/preprocessing_utils.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/utils/preprocessing_utils.py rename to audio/speech_recognition/rnnt/pytorch/utils/preprocessing_utils.py diff --git a/speech/speech_recognition/transformer_wenet/pytorch/README.md b/audio/speech_recognition/transformer_wenet/pytorch/README.md similarity index 60% rename from speech/speech_recognition/transformer_wenet/pytorch/README.md rename to audio/speech_recognition/transformer_wenet/pytorch/README.md index dd30710f878d787781c1f59c153e4a4a039f4fea..62a77dede1e4364b09cd51f3dcae7515a98d194c 100755 --- a/speech/speech_recognition/transformer_wenet/pytorch/README.md +++ b/audio/speech_recognition/transformer_wenet/pytorch/README.md @@ -1,17 +1,16 @@ # Transformer ## Model description -The Transformer architecture, introduced in the paper "Attention Is All You Need" by -Vaswani et al. in 2017, revolutionized the field of deep learning. It relies on a -mechanism called self-attention to process input data in parallel (as opposed to -sequentially) and capture complex dependencies in data, regardless of their distance -in the sequence. Transformers have since become the foundation for state-of-the-art -models in various tasks, especially in natural language processing, such as the BERT -and GPT series. + +The Transformer architecture, introduced in the paper "Attention Is All You Need" by Vaswani et al. in 2017, +revolutionized the field of deep learning. It relies on a mechanism called self-attention to process input data in +parallel (as opposed to sequentially) and capture complex dependencies in data, regardless of their distance in the +sequence. Transformers have since become the foundation for state-of-the-art models in various tasks, especially in +natural language processing, such as the BERT and GPT series. ## Step 1: Installation -```bash +```sh cd ../../../../toolbox/WeNet/ bash install_toolbox_wenet.sh ``` @@ -23,7 +22,7 @@ You could just run the whole script, which will download the dataset automatical **You need to modify the path of the dataset in run.sh.** -```bash +```sh # Change to the scripts path cd wenet/examples/aishell/s0/ @@ -37,7 +36,7 @@ bash run.sh --stage -1 --stop-stage 6 Or you also run each stage one by one manually and check the result to understand the whole process. 
-```bash +```sh # Download data bash run.sh --stage -1 --stop-stage -1 # Prepare Training data @@ -58,9 +57,10 @@ bash run.sh --stage 6 --stop-stage 6 ## Results -| GPUs | FP16 | QPS | WER (ctc_greedy_search)| WER (ctc_prefix_beam_search) | WER (attention)| WER (attention_rescoring)| -|--- |--- |--- |--- |--- |--- |--- | -| BI-V100 x8 | False | 394| 5.78% | 5.78% | 5.59% | 5.17% | +| GPUs | FP16 | QPS | WER (ctc_greedy_search) | WER (ctc_prefix_beam_search) | WER (attention) | WER (attention_rescoring) | +|------------|-------|-----|-------------------------|------------------------------|-----------------|---------------------------| +| BI-V100 x8 | False | 394 | 5.78% | 5.78% | 5.59% | 5.17% | ## Reference + - [WeNet](https://github.com/wenet-e2e/wenet) diff --git a/speech/speech_recognition/u2++_conformer_wenet/pytorch/README.md b/audio/speech_recognition/u2++_conformer_wenet/pytorch/README.md similarity index 67% rename from speech/speech_recognition/u2++_conformer_wenet/pytorch/README.md rename to audio/speech_recognition/u2++_conformer_wenet/pytorch/README.md index 596532e83edac1c8853627967bb12c5d93620290..7e3d89dad2f0eaf02c1cb7ffe3cae7142cd4f389 100755 --- a/speech/speech_recognition/u2++_conformer_wenet/pytorch/README.md +++ b/audio/speech_recognition/u2++_conformer_wenet/pytorch/README.md @@ -1,14 +1,14 @@ # U2++ Conformer ## Model description -U2++, an enhanced version of U2 to further improve the accuracy. The core idea of U2++ is -to use the forward and the backward information of the labeling sequences at the same -time at training to learn richer information, and combine the forward and backward -prediction at decoding to give more accurate recognition results. + +U2++, an enhanced version of U2 to further improve the accuracy. The core idea of U2++ is to use the forward and the +backward information of the labeling sequences at the same time at training to learn richer information, and combine the +forward and backward prediction at decoding to give more accurate recognition results. ## Step 1: Installation -```bash +```sh cd ../../../../toolbox/WeNet/ bash install_toolbox_wenet.sh ``` @@ -20,7 +20,7 @@ You could just run the whole script, which will download the dataset automatical **You need to modify the path of the dataset in run.sh.** -```bash +```sh # Change to the scripts path cd wenet/examples/aishell/s0/ @@ -34,7 +34,7 @@ bash run.sh --stage -1 --stop-stage 6 Or you also run each stage one by one manually and check the result to understand the whole process. 
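+
+Because `--stage`/`--stop-stage` select a contiguous range of steps, stages that already finished can be skipped on
+later runs. For example, to re-run only the training step once data preparation is done (stage 4 is training in the
+upstream aishell/s0 recipe at the time of writing; confirm the numbering against the comments in your run.sh):
+
+```sh
+# Re-run training only, reusing the data prepared by the earlier stages
+bash run.sh --stage 4 --stop-stage 4
+```
+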
-```bash +```sh # Download data bash run.sh --stage -1 --stop-stage -1 # Prepare Training data @@ -55,10 +55,10 @@ bash run.sh --stage 6 --stop-stage 6 ## Results -| GPUs | FP16 | QPS |WER(ctc_greedy_search) |WER(ctc_prefix_beam_search) | WER(attention) | WER(attention_rescoring) | -|------|-------|-----|----- |----- |----- |----- | -| BI-V100 x8 | False | 272 | 5.21% | 5.21% |5.13% | 4.82% | - +| GPUs | FP16 | QPS | WER(ctc_greedy_search) | WER(ctc_prefix_beam_search) | WER(attention) | WER(attention_rescoring) | +|------------|-------|-----|------------------------|-----------------------------|----------------|--------------------------| +| BI-V100 x8 | False | 272 | 5.21% | 5.21% | 5.13% | 4.82% | ## Reference + - [WeNet](https://github.com/wenet-e2e/wenet) diff --git a/speech/speech_recognition/unified_conformer_wenet/pytorch/README.md b/audio/speech_recognition/unified_conformer_wenet/pytorch/README.md similarity index 67% rename from speech/speech_recognition/unified_conformer_wenet/pytorch/README.md rename to audio/speech_recognition/unified_conformer_wenet/pytorch/README.md index 4f9d81e270852505c1fc4d9890829adfe390db21..18b58ea0b277274acffb9dfa687083a7185cb883 100755 --- a/speech/speech_recognition/unified_conformer_wenet/pytorch/README.md +++ b/audio/speech_recognition/unified_conformer_wenet/pytorch/README.md @@ -1,15 +1,15 @@ # Unified Conformer ## Model description -Unified Conformer is an architecture that has become state-of-the-art in the field of -Automatic Speech Recognition (ASR). It is a variant of the Transformer architecture which -has achieved extraordinary performance in Natural Language Processing and Computer Vision -tasks thanks to its powerful self-attention mechanism¹. The Conformer architecture has + +Unified Conformer is an architecture that has become state-of-the-art in the field of Automatic Speech Recognition +(ASR). It is a variant of the Transformer architecture which has achieved extraordinary performance in Natural Language +Processing and Computer Vision tasks thanks to its powerful self-attention mechanism¹. The Conformer architecture has been modified from ASR to Automatic Speaker Verification (ASV) with very minor changes. ## Step 1: Installation -```bash +```sh cd ../../../../toolbox/WeNet/ bash install_toolbox_wenet.sh ``` @@ -21,7 +21,7 @@ You could just run the whole script, which will download the dataset automatical **You need to modify the path of the dataset in run.sh.** -```bash +```sh # Change to the scripts path cd wenet/examples/aishell/s0/ @@ -35,7 +35,7 @@ bash run.sh --stage -1 --stop-stage 6 Or you also run each stage one by one manually and check the result to understand the whole process. 
-```bash +```sh # Download data bash run.sh --stage -1 --stop-stage -1 # Prepare Training data @@ -56,9 +56,10 @@ bash run.sh --stage 6 --stop-stage 6 ## Results -| GPUs | FP16 | QPS | WER(ctc_greedy_search)| WER(ctc_prefix_beam_search) | WER(attention) | WER(attention_rescoring)| -|------|-------|-----| -----|-----|-----|-----| -| BI-V100 x8 | False | 257| 5.60%|5.60% |5.46% |4.98% | +| GPUs | FP16 | QPS | WER(ctc_greedy_search) | WER(ctc_prefix_beam_search) | WER(attention) | WER(attention_rescoring) | +|------------|-------|-----|------------------------|-----------------------------|----------------|--------------------------| +| BI-V100 x8 | False | 257 | 5.60% | 5.60% | 5.46% | 4.98% | ## Reference + - [WeNet](https://github.com/wenet-e2e/wenet) diff --git a/speech/speech_synthesis/README.md b/audio/speech_synthesis/README.md similarity index 100% rename from speech/speech_synthesis/README.md rename to audio/speech_synthesis/README.md diff --git a/speech/speech_synthesis/fastspeech2/paddlepaddle/README.md b/audio/speech_synthesis/fastspeech2/paddlepaddle/README.md similarity index 65% rename from speech/speech_synthesis/fastspeech2/paddlepaddle/README.md rename to audio/speech_synthesis/fastspeech2/paddlepaddle/README.md index 1a98c4003deaf3dbeee4519911d7311cbb4a78e8..a5b33637f3e459b4e7cc72b402397150f21270bc 100644 --- a/speech/speech_synthesis/fastspeech2/paddlepaddle/README.md +++ b/audio/speech_synthesis/fastspeech2/paddlepaddle/README.md @@ -2,11 +2,15 @@ ## Model description -Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. FastSpeech 2s is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. +Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than +previous autoregressive models with comparable quality. FastSpeech 2s is the first attempt to directly generate speech +waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) +FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) +FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. 
## Step 1:Installation -``` +```sh # Pip the requirements pip3 install -r requirements.txt @@ -15,7 +19,7 @@ git clone https://github.com/PaddlePaddle/PaddleSpeech.git cd PaddleSpeech/examples/csmsc/tts3 ``` -``` +```sh # Install sqlite3 wget https://sqlite.org/2019/sqlite-autoconf-3290000.tar.gz tar zxvf sqlite-autoconf-3290000.tar.gz @@ -33,7 +37,7 @@ make && make install cp /usr/bin/lib/python3.7/lib-dynload/_sqlite3.cpython-37m-x86_64-linux-gnu.so /usr/local/lib/python3.7/lib-dynload/_sqlite3.so ``` -``` +```sh # Update GCC lib wget http://ftp.gnu.org/gnu/gcc/gcc-8.3.0/gcc-8.3.0.tar.gz tar -zxvf gcc-8.3.0.tar.gz @@ -54,17 +58,20 @@ ln -s libstdc++.so.6.0.25 libstdc++.so.6 ## Step 2:Preparing datasets -#### Download and Extract +### Download and Extract -Download CSMSC(BZNSYP) from this [Website.](https://aistudio.baidu.com/datasetdetail/36741) and extract it to ./datasets. Then the dataset is in the directory ./datasets/BZNSYP. +Download CSMSC(BZNSYP) from this [Website.](https://aistudio.baidu.com/datasetdetail/36741) and extract it to +./datasets. Then the dataset is in the directory ./datasets/BZNSYP. -#### Get MFA Result and Extract +### Get MFA Result and Extract -We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for fastspeech2. You can download from here [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz). +We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for fastspeech2. You can +download from here +[baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz). Put the data directory structure like this: -``` +```sh tts3 ├── baker_alignment_tone ├── conf @@ -79,15 +86,15 @@ tts3 Change the rootdir of dataset in ./local/preprocess.sh to the dataset path. Like this: `--rootdir=./datasets/BZNSYP` -#### Data preprocessing +### Data preprocessing -``` +```sh PYTHONWARNINGS='ignore:semaphore_tracker:UserWarning' ./run.sh --stage 0 --stop-stage 0 ``` When it is done. A `dump` folder is created in the current directory. The structure of the dump folder is listed below. -``` +```sh dump ├── dev │ ├── norm @@ -107,7 +114,7 @@ dump ## Step 3:Training -#### Model Training +### Model Training You can choose use how many gpus for training by changing gups parameter in run.sh file and ngpu parameter in ./local/train.sh file. @@ -115,48 +122,51 @@ You can choose use how many gpus for training by changing gups parameter in run. PYTHONWARNINGS='ignore:semaphore_tracker:UserWarning' ./run.sh --stage 1 --stop-stage 1 ``` -#### Synthesizing +### Synthesizing -We use [parallel wavegan](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/voc1) as the neural vocoder. Download pretrained parallel wavegan model from [pwg_baker_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/pwgan/pwg_baker_ckpt_0.4.zip) and unzip it. +We use [parallel wavegan](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/voc1) as the neural +vocoder. Download pretrained parallel wavegan model from +[pwg_baker_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/pwgan/pwg_baker_ckpt_0.4.zip) and +unzip it. -``` +```sh unzip pwg_baker_ckpt_0.4.zip ``` Parallel WaveGAN checkpoint contains files listed below. 
-``` +```sh pwg_baker_ckpt_0.4 ├── pwg_default.yaml # default config used to train parallel wavegan ├── pwg_snapshot_iter_400000.pdz # model parameters of parallel wavegan └── pwg_stats.npy # statistics used to normalize spectrogram when training parallel wavegan ``` -Run synthesizing -Modify the parameter of `ckpt_name` in run.sh file to the weight name after training. -Add parameter `providers=['CUDAExecutionProvider'] `to the file `PaddleSpeech/paddlespeech/t2s/frontend/g2pw/onnx_api.py `at line 80. Like below: +Run synthesizing Modify the parameter of `ckpt_name` in run.sh file to the weight name after training. Add parameter +`providers=['CUDAExecutionProvider'] `to the file `PaddleSpeech/paddlespeech/t2s/frontend/g2pw/onnx_api.py `at line 80. +Like below: -``` +```sh self.session_g2pW = onnxruntime.InferenceSession( os.path.join(uncompress_path, 'g2pW.onnx'), sess_options=sess_options, providers=['CUDAExecutionProvider']) ``` -``` +```sh ./run.sh --stage 2 --stop-stage 3 ``` -#### Inferencing +### Inferencing -``` +```sh ./run.sh --stage 4 --stop-stage 4 ``` ## Results -| GPUS | avg_ips | l1 loss | duration loss | pitch loss | energy loss | loss | -| ----------- | ------- | ------- | ------------- | ---------- | ----------- | ----- | -| BI 100 × 8 | 71.19sequences/sec | 0.603 | 0.037 | 0.327 | 0.151 | 1.118 | +| GPUS | avg_ips | l1 loss | duration loss | pitch loss | energy loss | loss | +|------------|--------------------|---------|---------------|------------|-------------|-------| +| BI 100 × 8 | 71.19sequences/sec | 0.603 | 0.037 | 0.327 | 0.151 | 1.118 | ## Reference diff --git a/speech/speech_synthesis/fastspeech2/paddlepaddle/requirements.txt b/audio/speech_synthesis/fastspeech2/paddlepaddle/requirements.txt similarity index 100% rename from speech/speech_synthesis/fastspeech2/paddlepaddle/requirements.txt rename to audio/speech_synthesis/fastspeech2/paddlepaddle/requirements.txt diff --git a/speech/speech_synthesis/hifigan/paddlepaddle/README.md b/audio/speech_synthesis/hifigan/paddlepaddle/README.md similarity index 69% rename from speech/speech_synthesis/hifigan/paddlepaddle/README.md rename to audio/speech_synthesis/hifigan/paddlepaddle/README.md index 639b1d522695637684ec4fce9a30e8f6aa7cc640..cd8e2cff80878979d83f81e8e832c397ee385475 100644 --- a/speech/speech_synthesis/hifigan/paddlepaddle/README.md +++ b/audio/speech_synthesis/hifigan/paddlepaddle/README.md @@ -2,11 +2,13 @@ ## Model description -HiFiGAN is a commonly used vocoder in academia and industry in recent years, which can convert the frequency spectrum generated by acoustic models into high-quality audio. This vocoder uses generative adversarial networks as the basis for generating models. +HiFiGAN is a commonly used vocoder in academia and industry in recent years, which can convert the frequency spectrum +generated by acoustic models into high-quality audio. This vocoder uses generative adversarial networks as the basis for +generating models. ## Step 1:Installation -``` +```sh # Pip the requirements pip3 install requirements.txt @@ -18,17 +20,20 @@ cd PaddleSpeech/examples/csmsc/voc5 ## Step 2:Preparing datasets -#### Download and Extract +### Download and Extract -Download CSMSC(BZNSYP) from this [Website.](https://aistudio.baidu.com/datasetdetail/36741)and extract it to ./datasets. Then the dataset is in the directory ./datasets/BZNSYP. +Download CSMSC(BZNSYP) from this [Website.](https://aistudio.baidu.com/datasetdetail/36741)and extract it to ./datasets. 
+Then the dataset is in the directory ./datasets/BZNSYP. -#### Get MFA Result and Extract +### Get MFA Result and Extract -We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for fastspeech2. You can download from here [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz). +We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for fastspeech2. You can +download from here +[baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz). Put the data directory structure like this: -``` +```sh voc5 ├── baker_alignment_tone ├── conf @@ -43,15 +48,15 @@ voc5 Change the rootdir of dataset in ./local/preprocess.sh to the dataset path. Like this: `--rootdir=./datasets/BZNSYP` -#### Data preprocessing +### Data preprocessing -``` +```sh ./run.sh --stage 0 --stop-stage 0 ``` When it is done. A `dump` folder is created in the current directory. The structure of the dump folder is listed below. -``` +```sh dump ├── dev │ ├── norm @@ -67,27 +72,27 @@ dump ## Step 3:Training -#### Model Training +### Model Training You can choose use how many gpus for training by changing `gups` parameter in run.sh file and `ngpu` parameter in ./local/train.sh file. Modify `./local/train.sh `file to use python3 run. -``` +```sh sed -i 's/python /python3 /g' ./local/train.sh ``` Full training may cost much time, you can modify the `train_max_steps` parameter in ./conf/default.yaml file to reduce training time. But in order to get the weight file you should make the `train_max_steps` parameter bigger than `save_interval_steps ` parameter. -``` +```sh ./run.sh --stage 1 --stop-stage 1 ``` -#### Synthesizing +### Synthesizing Modify the parameter of `ckpt_name` in run.sh file to the weight name after training. -``` +```sh ./run.sh --stage 2 --stop-stage 2 ``` @@ -95,8 +100,8 @@ Modify the parameter of `ckpt_name` in run.sh file to the weight name after trai Main results after 1000 step train. 
-| GPUS | avg_ips | adversarial loss | feature matching loss | mel loss | generator loss | real loss | fake loss | discriminator loss | -| ------------ | ------------------- | ---------------- | --------------------- | -------- | -------------- | --------- | --------- | ------------------ | +| GPUS | avg_ips | adversarial loss | feature matching loss | mel loss | generator loss | real loss | fake loss | discriminator loss | +|-------------|---------------------|------------------|-----------------------|----------|----------------|-----------|-----------|--------------------| | BI V100 × 1 | 15.42 sequences/sec | 6.276 | 0.845 | 0.531 | 31.858 | 0.513 | 0.6289 | 1.142 | ## Reference diff --git a/speech/speech_synthesis/hifigan/paddlepaddle/requirements.txt b/audio/speech_synthesis/hifigan/paddlepaddle/requirements.txt similarity index 100% rename from speech/speech_synthesis/hifigan/paddlepaddle/requirements.txt rename to audio/speech_synthesis/hifigan/paddlepaddle/requirements.txt diff --git a/speech/speech_synthesis/tacotron2/pytorch/Dockerfile b/audio/speech_synthesis/tacotron2/pytorch/Dockerfile similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/Dockerfile rename to audio/speech_synthesis/tacotron2/pytorch/Dockerfile diff --git a/speech/speech_synthesis/tacotron2/pytorch/LICENSE b/audio/speech_synthesis/tacotron2/pytorch/LICENSE similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/LICENSE rename to audio/speech_synthesis/tacotron2/pytorch/LICENSE diff --git a/speech/speech_synthesis/tacotron2/pytorch/README.md b/audio/speech_synthesis/tacotron2/pytorch/README.md similarity index 43% rename from speech/speech_synthesis/tacotron2/pytorch/README.md rename to audio/speech_synthesis/tacotron2/pytorch/README.md index 9a0ef0994d695ddddcc73d0edb724cd5569ea92c..9bd8eeb05339b8b45e1df8bc83fe39dd8944476a 100644 --- a/speech/speech_synthesis/tacotron2/pytorch/README.md +++ b/audio/speech_synthesis/tacotron2/pytorch/README.md @@ -1,54 +1,65 @@ -# Tacotron2 +# Tacotron2 ## Model description -This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize timedomain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the input to WaveNet instead of linguistic, duration, and F_0 features. We further demonstrate that using a compact acoustic intermediate representation enables significant simplification of the WaveNet architecture. +This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is +composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale +spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize timedomain waveforms from those +spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally +recorded speech. 
To validate our design choices, we present ablation studies of key components of our system and +evaluate the impact of using mel spectrograms as the input to WaveNet instead of linguistic, duration, and F_0 features. +We further demonstrate that using a compact acoustic intermediate representation enables significant simplification of +the WaveNet architecture. ## Step 1: Installing packages -``` + +```sh pip3 install -r requirements.txt ``` ## Step 2: Preparing datasets + 1.Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/) in the current directory; - - wget -c https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2; - - tar -jxvf LJSpeech-1.1.tar.bz2; +- wget -c ; +- tar -jxvf LJSpeech-1.1.tar.bz2; ## Step 3: Training First, create a directory to save output and logs. -``` -$ mkdir outdir logdir +```sh +mkdir outdir logdir ``` ### On single GPU -``` -$ python3 train.py --output_directory=outdir --log_directory=logdir --target_val_loss=0.5 + +```sh +python3 train.py --output_directory=outdir --log_directory=logdir --target_val_loss=0.5 ``` ### Multiple GPUs on one machine -``` -$ python3 -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True --target_val_loss=0.5 -``` +```sh +python3 -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True --target_val_loss=0.5 +``` ### Multiple GPUs on one machine (AMP) -``` -$ python3 -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True --target_val_loss=0.5 + +```sh +python3 -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True --target_val_loss=0.5 ``` ## Results on BI-V100 | GPUs | FP16 | FPS | Score(MOS) | -|------| ---- |-----| ---------- | +|------|------|-----|------------| | 1x8 | True | 9.2 | 4.460 | | Convergence criteria | Configuration (x denotes number of GPUs) | Performance | Accuracy | Power(W) | Scalability | Memory utilization(G) | Stability | |----------------------|------------------------------------------|-------------|----------|------------|-------------|-------------------------|-----------| | score(MOS):4.460 | SDK V2.2,bs:128,8x,AMP | 77 | 4.46 | 128\*8 | 0.96 | 18.4\*8 | 1 | - ## Reference -https://github.com/NVIDIA/tacotron2 \ No newline at end of file + +- [tacotron2](https://github.com/NVIDIA/tacotron2) diff --git a/speech/speech_synthesis/tacotron2/pytorch/audio_processing.py b/audio/speech_synthesis/tacotron2/pytorch/audio_processing.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/audio_processing.py rename to audio/speech_synthesis/tacotron2/pytorch/audio_processing.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/data_utils.py b/audio/speech_synthesis/tacotron2/pytorch/data_utils.py similarity index 71% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/data_utils.py rename to audio/speech_synthesis/tacotron2/pytorch/data_utils.py index ed8723864d405de92f9f85ef304c4b3f07d27d97..c12c371beb6140838a41f81879fc4f242a8e64dc 100644 --- a/speech/speech_synthesis/waveglow/pytorch/tacotron2/data_utils.py +++ b/audio/speech_synthesis/tacotron2/pytorch/data_utils.py @@ -10,10 +10,11 @@ from text import text_to_sequence class TextMelLoader(torch.utils.data.Dataset): """ - 1) loads audio,text pairs - 2) normalizes text and converts them to sequences of one-hot vectors - 3) computes mel-spectrograms from audio files. 
+ 1) loads audio,text pairs + 2) normalizes text and converts them to sequences of one-hot vectors + 3) computes mel-spectrograms from audio files. """ + def __init__(self, audiopaths_and_text, hparams): self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) self.text_cleaners = hparams.text_cleaners @@ -21,9 +22,14 @@ class TextMelLoader(torch.utils.data.Dataset): self.sampling_rate = hparams.sampling_rate self.load_mel_from_disk = hparams.load_mel_from_disk self.stft = layers.TacotronSTFT( - hparams.filter_length, hparams.hop_length, hparams.win_length, - hparams.n_mel_channels, hparams.sampling_rate, hparams.mel_fmin, - hparams.mel_fmax) + hparams.filter_length, + hparams.hop_length, + hparams.win_length, + hparams.n_mel_channels, + hparams.sampling_rate, + hparams.mel_fmin, + hparams.mel_fmax, + ) random.seed(hparams.seed) random.shuffle(self.audiopaths_and_text) @@ -38,8 +44,11 @@ class TextMelLoader(torch.utils.data.Dataset): if not self.load_mel_from_disk: audio, sampling_rate = load_wav_to_torch(filename) if sampling_rate != self.stft.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.stft.sampling_rate)) + raise ValueError( + "{} {} SR doesn't match target {} SR".format( + sampling_rate, self.stft.sampling_rate + ) + ) audio_norm = audio / self.max_wav_value audio_norm = audio_norm.unsqueeze(0) audio_norm = torch.autograd.Variable(audio_norm, requires_grad=False) @@ -47,9 +56,11 @@ class TextMelLoader(torch.utils.data.Dataset): melspec = torch.squeeze(melspec, 0) else: melspec = torch.from_numpy(np.load(filename)) - assert melspec.size(0) == self.stft.n_mel_channels, ( - 'Mel dimension mismatch: given {}, expected {}'.format( - melspec.size(0), self.stft.n_mel_channels)) + assert ( + melspec.size(0) == self.stft.n_mel_channels + ), "Mel dimension mismatch: given {}, expected {}".format( + melspec.size(0), self.stft.n_mel_channels + ) return melspec @@ -64,9 +75,9 @@ class TextMelLoader(torch.utils.data.Dataset): return len(self.audiopaths_and_text) -class TextMelCollate(): - """ Zero-pads model inputs and targets based on number of frames per setep - """ +class TextMelCollate: + """Zero-pads model inputs and targets based on number of frames per step""" + def __init__(self, n_frames_per_step): self.n_frames_per_step = n_frames_per_step @@ -78,21 +89,23 @@ class TextMelCollate(): """ # Right zero-pad all one-hot text sequences to max input length input_lengths, ids_sorted_decreasing = torch.sort( - torch.LongTensor([len(x[0]) for x in batch]), - dim=0, descending=True) + torch.LongTensor([len(x[0]) for x in batch]), dim=0, descending=True + ) max_input_len = input_lengths[0] text_padded = torch.LongTensor(len(batch), max_input_len) text_padded.zero_() for i in range(len(ids_sorted_decreasing)): text = batch[ids_sorted_decreasing[i]][0] - text_padded[i, :text.size(0)] = text + text_padded[i, : text.size(0)] = text # Right zero-pad mel-spec num_mels = batch[0][1].size(0) max_target_len = max([x[1].size(1) for x in batch]) if max_target_len % self.n_frames_per_step != 0: - max_target_len += self.n_frames_per_step - max_target_len % self.n_frames_per_step + max_target_len += ( + self.n_frames_per_step - max_target_len % self.n_frames_per_step + ) assert max_target_len % self.n_frames_per_step == 0 # include mel padded and gate padded @@ -103,9 +116,8 @@ class TextMelCollate(): output_lengths = torch.LongTensor(len(batch)) for i in range(len(ids_sorted_decreasing)): mel = batch[ids_sorted_decreasing[i]][1] - 
mel_padded[i, :, :mel.size(1)] = mel - gate_padded[i, mel.size(1)-1:] = 1 + mel_padded[i, :, : mel.size(1)] = mel + gate_padded[i, mel.size(1) - 1 :] = 1 output_lengths[i] = mel.size(1) - return text_padded, input_lengths, mel_padded, gate_padded, \ - output_lengths + return text_padded, input_lengths, mel_padded, gate_padded, output_lengths diff --git a/speech/speech_synthesis/tacotron2/pytorch/demo.wav b/audio/speech_synthesis/tacotron2/pytorch/demo.wav similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/demo.wav rename to audio/speech_synthesis/tacotron2/pytorch/demo.wav diff --git a/speech/speech_synthesis/tacotron2/pytorch/distributed.py b/audio/speech_synthesis/tacotron2/pytorch/distributed.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/distributed.py rename to audio/speech_synthesis/tacotron2/pytorch/distributed.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_test_filelist.txt b/audio/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_test_filelist.txt similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_test_filelist.txt rename to audio/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_test_filelist.txt diff --git a/speech/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_train_filelist.txt b/audio/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_train_filelist.txt similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_train_filelist.txt rename to audio/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_train_filelist.txt diff --git a/speech/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_val_filelist.txt b/audio/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_val_filelist.txt similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_val_filelist.txt rename to audio/speech_synthesis/tacotron2/pytorch/filelists/ljs_audio_text_val_filelist.txt diff --git a/speech/speech_synthesis/tacotron2/pytorch/hparam.py b/audio/speech_synthesis/tacotron2/pytorch/hparam.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/hparam.py rename to audio/speech_synthesis/tacotron2/pytorch/hparam.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/hparams.py b/audio/speech_synthesis/tacotron2/pytorch/hparams.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/hparams.py rename to audio/speech_synthesis/tacotron2/pytorch/hparams.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/layers.py b/audio/speech_synthesis/tacotron2/pytorch/layers.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/layers.py rename to audio/speech_synthesis/tacotron2/pytorch/layers.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/logger.py b/audio/speech_synthesis/tacotron2/pytorch/logger.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/logger.py rename to audio/speech_synthesis/tacotron2/pytorch/logger.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/loss_function.py b/audio/speech_synthesis/tacotron2/pytorch/loss_function.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/loss_function.py rename to audio/speech_synthesis/tacotron2/pytorch/loss_function.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/loss_scaler.py 
b/audio/speech_synthesis/tacotron2/pytorch/loss_scaler.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/loss_scaler.py rename to audio/speech_synthesis/tacotron2/pytorch/loss_scaler.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/model.py b/audio/speech_synthesis/tacotron2/pytorch/model.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/model.py rename to audio/speech_synthesis/tacotron2/pytorch/model.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/multiproc.py b/audio/speech_synthesis/tacotron2/pytorch/multiproc.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/multiproc.py rename to audio/speech_synthesis/tacotron2/pytorch/multiproc.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/plotting_utils.py b/audio/speech_synthesis/tacotron2/pytorch/plotting_utils.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/plotting_utils.py rename to audio/speech_synthesis/tacotron2/pytorch/plotting_utils.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/requirements.txt b/audio/speech_synthesis/tacotron2/pytorch/requirements.txt similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/requirements.txt rename to audio/speech_synthesis/tacotron2/pytorch/requirements.txt diff --git a/speech/speech_synthesis/tacotron2/pytorch/requirements_aarch64.txt b/audio/speech_synthesis/tacotron2/pytorch/requirements_aarch64.txt similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/requirements_aarch64.txt rename to audio/speech_synthesis/tacotron2/pytorch/requirements_aarch64.txt diff --git a/speech/speech_synthesis/tacotron2/pytorch/stft.py b/audio/speech_synthesis/tacotron2/pytorch/stft.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/stft.py rename to audio/speech_synthesis/tacotron2/pytorch/stft.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/tensorboard.png b/audio/speech_synthesis/tacotron2/pytorch/tensorboard.png similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/tensorboard.png rename to audio/speech_synthesis/tacotron2/pytorch/tensorboard.png diff --git a/speech/speech_synthesis/tacotron2/pytorch/text/LICENSE b/audio/speech_synthesis/tacotron2/pytorch/text/LICENSE similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/text/LICENSE rename to audio/speech_synthesis/tacotron2/pytorch/text/LICENSE diff --git a/speech/speech_synthesis/tacotron2/pytorch/text/__init__.py b/audio/speech_synthesis/tacotron2/pytorch/text/__init__.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/text/__init__.py rename to audio/speech_synthesis/tacotron2/pytorch/text/__init__.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/text/cleaners.py b/audio/speech_synthesis/tacotron2/pytorch/text/cleaners.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/text/cleaners.py rename to audio/speech_synthesis/tacotron2/pytorch/text/cleaners.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/text/cmudict.py b/audio/speech_synthesis/tacotron2/pytorch/text/cmudict.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/text/cmudict.py rename to audio/speech_synthesis/tacotron2/pytorch/text/cmudict.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/text/numbers.py b/audio/speech_synthesis/tacotron2/pytorch/text/numbers.py similarity index 100% rename from 
speech/speech_synthesis/tacotron2/pytorch/text/numbers.py rename to audio/speech_synthesis/tacotron2/pytorch/text/numbers.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/text/symbols.py b/audio/speech_synthesis/tacotron2/pytorch/text/symbols.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/text/symbols.py rename to audio/speech_synthesis/tacotron2/pytorch/text/symbols.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/train.py b/audio/speech_synthesis/tacotron2/pytorch/train.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/train.py rename to audio/speech_synthesis/tacotron2/pytorch/train.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/utils.py b/audio/speech_synthesis/tacotron2/pytorch/utils.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/utils.py rename to audio/speech_synthesis/tacotron2/pytorch/utils.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/LICENSE b/audio/speech_synthesis/tacotron2/pytorch/waveglow/LICENSE similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/LICENSE rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/LICENSE diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/README.md b/audio/speech_synthesis/tacotron2/pytorch/waveglow/README.md similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/README.md rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/README.md diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/config.json b/audio/speech_synthesis/tacotron2/pytorch/waveglow/config.json similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/config.json rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/config.json diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/convert_model.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/convert_model.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/convert_model.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/convert_model.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/denoiser.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/denoiser.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/denoiser.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/denoiser.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/distributed.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/distributed.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/distributed.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/distributed.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/glow.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/glow.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/glow.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/glow.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/glow_old.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/glow_old.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/glow_old.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/glow_old.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/inference.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/inference.py similarity 
index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/inference.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/inference.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/mel2samp.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/mel2samp.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/mel2samp.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/mel2samp.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/requirements.txt b/audio/speech_synthesis/tacotron2/pytorch/waveglow/requirements.txt similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/requirements.txt rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/requirements.txt diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/train.py b/audio/speech_synthesis/tacotron2/pytorch/waveglow/train.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/train.py rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/train.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/waveglow/waveglow_logo.png b/audio/speech_synthesis/tacotron2/pytorch/waveglow/waveglow_logo.png similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/waveglow/waveglow_logo.png rename to audio/speech_synthesis/tacotron2/pytorch/waveglow/waveglow_logo.png diff --git a/speech/speech_synthesis/vqmivc/pytorch/Dataset/README.md b/audio/speech_synthesis/vqmivc/pytorch/Dataset/README.md similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/Dataset/README.md rename to audio/speech_synthesis/vqmivc/pytorch/Dataset/README.md diff --git a/speech/speech_synthesis/vqmivc/pytorch/LICENSE b/audio/speech_synthesis/vqmivc/pytorch/LICENSE similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/LICENSE rename to audio/speech_synthesis/vqmivc/pytorch/LICENSE diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/.gitignore b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/.gitignore similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/.gitignore rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/.gitignore diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/LICENSE b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/LICENSE similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/LICENSE rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/LICENSE diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/README.md b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/README.md similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/README.md rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/README.md diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/README.md b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/README.md similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/README.md rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/README.md diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/cmd.sh rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/arctic/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/hifigan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/hifigan.v1.yaml similarity index 100% rename from 
speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/hifigan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/hifigan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/multi_band_melgan.v2.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/multi_band_melgan.v2.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/multi_band_melgan.v2.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/multi_band_melgan.v2.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/style_melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/style_melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/style_melgan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/conf/style_melgan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/run.sh diff --git 
a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/csmsc/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.long.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.long.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.long.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.long.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/train_speakers.txt b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/train_speakers.txt similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/train_speakers.txt rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/conf/train_speakers.txt diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/run.sh rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jnas/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/utils rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsss/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/hifigan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/hifigan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/hifigan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/hifigan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/multi_band_melgan.v2.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/multi_band_melgan.v2.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/multi_band_melgan.v2.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/multi_band_melgan.v2.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/style_melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/style_melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/style_melgan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/conf/style_melgan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/path.sh 
b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/jsut/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/hifigan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/hifigan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/hifigan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/hifigan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.long.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.long.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.long.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.long.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/style_melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/style_melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/style_melgan.v1.yaml rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/conf/style_melgan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/libritts/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v2.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v2.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v2.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/full_band_melgan.v2.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/hifigan.v1.yaml 
b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/hifigan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/hifigan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/hifigan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.long.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.long.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.long.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.long.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.long.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.long.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.long.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.long.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan.v3.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.long.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.long.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.long.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.long.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/melgan_large.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v1.yaml diff --git 
a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v2.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v2.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v2.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/multi_band_melgan.v2.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.long.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.long.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.long.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.long.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.no_limit.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.no_limit.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.no_limit.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.no_limit.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v3.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v3.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v3.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/parallel_wavegan.v3.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/style_melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/style_melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/style_melgan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/conf/style_melgan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_download.sh similarity index 100% rename from 
speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/ljspeech/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_prep.sh rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/speech_commands/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/path.sh rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_multi_spk/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/run.sh rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/template_single_spk/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/hifigan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/hifigan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/hifigan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/hifigan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/multi_band_melgan.v2.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/multi_band_melgan.v2.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/multi_band_melgan.v2.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/multi_band_melgan.v2.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.long.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.long.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.long.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.long.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/parallel_wavegan.v1.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/style_melgan.v1.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/style_melgan.v1.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/style_melgan.v1.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/conf/style_melgan.v1.yaml diff --git 
a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_prep.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/vctk/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/cmd.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/cmd.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/cmd.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/cmd.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/hifigan.v1.debug.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/hifigan.v1.debug.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/hifigan.v1.debug.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/hifigan.v1.debug.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v1.debug.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v1.debug.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v1.debug.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v1.debug.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v3.debug.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v3.debug.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v3.debug.yaml rename to 
audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/melgan.v3.debug.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/multi_band_melgan.v1.debug.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/multi_band_melgan.v1.debug.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/multi_band_melgan.v1.debug.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/multi_band_melgan.v1.debug.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.diff_fs.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.diff_fs.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.diff_fs.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.diff_fs.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.npy.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.npy.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.npy.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.npy.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/parallel_wavegan.v1.debug.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/slurm.conf b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/slurm.conf similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/slurm.conf rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/slurm.conf diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/style_melgan.v1.debug.yaml b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/style_melgan.v1.debug.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/style_melgan.v1.debug.yaml rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/conf/style_melgan.v1.debug.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_download.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_download.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_download.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_download.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_prep.sh 
b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_prep.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_prep.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/local/data_prep.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/path.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/path.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/path.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/path.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/run.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/run.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/run.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/run.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/utils b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/utils similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/utils rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/egs/yesno/voc1/utils diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/__init__.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/__init__.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/compute_statistics.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/compute_statistics.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/compute_statistics.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/compute_statistics.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/decode.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/decode.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/decode.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/decode.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/normalize.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/normalize.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/normalize.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/normalize.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/preprocess.py 
b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/preprocess.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/preprocess.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/preprocess.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/train.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/train.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/train.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/train.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/audio_mel_dataset.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/audio_mel_dataset.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/audio_mel_dataset.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/audio_mel_dataset.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/scp_dataset.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/scp_dataset.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/scp_dataset.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/datasets/scp_dataset.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/launch.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/launch.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/launch.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/launch.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/causal_conv.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/causal_conv.py similarity index 100% rename from 
speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/causal_conv.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/causal_conv.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/pqmf.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/pqmf.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/pqmf.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/pqmf.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_block.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_block.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_block.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_block.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_stack.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_stack.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_stack.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/residual_stack.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tade_res_block.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tade_res_block.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tade_res_block.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tade_res_block.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tf_layers.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tf_layers.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tf_layers.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/tf_layers.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/upsample.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/upsample.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/upsample.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/layers/upsample.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/adversarial_loss.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/adversarial_loss.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/adversarial_loss.py 
rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/adversarial_loss.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/feat_match_loss.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/feat_match_loss.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/feat_match_loss.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/feat_match_loss.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/mel_loss.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/mel_loss.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/mel_loss.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/mel_loss.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/stft_loss.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/stft_loss.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/stft_loss.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/losses/stft_loss.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/hifigan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/hifigan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/hifigan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/hifigan.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/melgan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/melgan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/melgan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/melgan.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/parallel_wavegan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/parallel_wavegan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/parallel_wavegan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/parallel_wavegan.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/style_melgan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/style_melgan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/style_melgan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/style_melgan.py diff --git 
a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/tf_models.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/tf_models.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/tf_models.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/models/tf_models.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/radam.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/radam.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/radam.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/optimizers/radam.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/__init__.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/__init__.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/__init__.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/utils.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/utils.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/utils.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/utils/utils.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/pyrightconfig.json b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/pyrightconfig.json similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/pyrightconfig.json rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/pyrightconfig.json diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.cfg b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.cfg similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.cfg rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.cfg diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/setup.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_hifigan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_hifigan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_hifigan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_hifigan.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_layers.py 
b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_layers.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_layers.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_layers.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_mel_loss.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_mel_loss.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_mel_loss.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_mel_loss.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_melgan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_melgan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_melgan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_melgan.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_parallel_wavegan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_parallel_wavegan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_parallel_wavegan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_parallel_wavegan.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_style_melgan.py b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_style_melgan.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_style_melgan.py rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/test/test_style_melgan.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/tools/Makefile b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/tools/Makefile similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/tools/Makefile rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/tools/Makefile diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/combine_data.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/combine_data.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/combine_data.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/combine_data.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/download_from_google_drive.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/download_from_google_drive.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/download_from_google_drive.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/download_from_google_drive.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/make_subset_data.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/make_subset_data.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/make_subset_data.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/make_subset_data.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/parse_options.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/parse_options.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/parse_options.sh 
rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/parse_options.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/queue.pl b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/queue.pl similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/queue.pl rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/queue.pl diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/run.pl b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/run.pl similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/run.pl rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/run.pl diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/slurm.pl b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/slurm.pl similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/slurm.pl rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/slurm.pl diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_data.sh b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_data.sh similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_data.sh rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_data.sh diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_scp.pl b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_scp.pl similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_scp.pl rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/split_scp.pl diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/ssh.pl b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/ssh.pl similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/ssh.pl rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/ssh.pl diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/stdout.pl b/audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/stdout.pl similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/stdout.pl rename to audio/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/utils/stdout.pl diff --git a/speech/speech_synthesis/vqmivc/pytorch/README.md b/audio/speech_synthesis/vqmivc/pytorch/README.md similarity index 31% rename from speech/speech_synthesis/vqmivc/pytorch/README.md rename to audio/speech_synthesis/vqmivc/pytorch/README.md index 58940f2a9faf2f6059a9aa05e06ac9a45be43eea..bac78a9df0147ccc0d185a5546527d629c1f9149 100644 --- a/speech/speech_synthesis/vqmivc/pytorch/README.md +++ b/audio/speech_synthesis/vqmivc/pytorch/README.md @@ -1,13 +1,22 @@ -# vqmivc +# VQMIVC ## Model description -One-shot voice conversion (VC), which performs conversion across arbitrary speakers with only a single target-speaker utterance for reference, can be effectively achieved by speech representation disentanglement. Existing work generally ignores the correlation between different speech representations during training, which causes leakage of content information into the speaker representation and thus degrades VC performance. 
To alleviate this issue, we employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training, to achieve proper disentanglement of content, speaker and pitch representations, by reducing their inter-dependencies in an unsupervised manner. Experimental results reflect the superiority of the proposed method in learning effective disentangled speech representations for retaining source linguistic content and intonation variations, while capturing target speaker characteristics. In doing so, the proposed approach achieves higher speech naturalness and speaker similarity than current state-of-the-art one-shot VC systems. Our code, pre-trained models and demo are available at https://github.com/Wendison/VQMIVC. - +One-shot voice conversion (VC), which performs conversion across arbitrary speakers with only a single target-speaker +utterance for reference, can be effectively achieved by speech representation disentanglement. Existing work generally +ignores the correlation between different speech representations during training, which causes leakage of content +information into the speaker representation and thus degrades VC performance. To alleviate this issue, we employ vector +quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training, +to achieve proper disentanglement of content, speaker and pitch representations, by reducing their inter-dependencies in +an unsupervised manner. Experimental results reflect the superiority of the proposed method in learning effective +disentangled speech representations for retaining source linguistic content and intonation variations, while capturing +target speaker characteristics. In doing so, the proposed approach achieves higher speech naturalness and speaker +similarity than current state-of-the-art one-shot VC systems. Our code, pre-trained models and demo are available at +<https://github.com/Wendison/VQMIVC>. ## Step 1: Preparing datasets -```shell +```sh mkdir -p /home/data/vqmivc/ cd /home/data/vqmivc/ wget https://datashare.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip unzip VCTK-Corpus-0.92.zip @@ -16,7 +25,7 @@ ## Step 2: Preprocess -```shell +```sh cd ${DEEPSPARKHUB_ROOT}/speech/speech_synthesis/vqmivc/pytorch/ pip3 install -r requirements_bi.txt ln -s /home/data/vqmivc . @@ -28,22 +37,24 @@ ln -s vqmivc/data .
* Training with mutual information minimization (MIM): -```shell +```sh export HYDRA_FULL_ERROR=1 python3 train.py use_CSMI=True use_CPMI=True use_PSMI=True ``` * Training without MIM: -```shell +```sh python3 train.py use_CSMI=False use_CPMI=False use_PSMI=False ``` ## Results on BI-V100 -| Card Type | recon loss | cps loss | vq loss | perpexlity | lld cs loss | mi cs loss | lld ps loss | mi ps loss | lld cp loss | mi cp loss |used time(s)| -| -------- | -----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | -| BI |0.635|1.062 |0.453 |401.693 |110.958|2.653E-4|0.052|0.001|219.895|0.021|4.315| +| Card Type | recon loss | cps loss | vq loss | perplexity | lld cs loss | mi cs loss | lld ps loss | mi ps loss | lld cp loss | mi cp loss | used time(s) | +|-----------|------------|----------|---------|------------|-------------|------------|-------------|------------|-------------|------------|--------------| +| BI | 0.635 | 1.062 | 0.453 | 401.693 | 110.958 | 2.653E-4 | 0.052 | 0.001 | 219.895 | 0.021 | 4.315 | ## Reference -https://github.com/Wendison/VQMIVC + +- [VQMIVC](https://github.com/Wendison/VQMIVC) diff --git a/speech/speech_synthesis/vqmivc/pytorch/cog.yaml b/audio/speech_synthesis/vqmivc/pytorch/cog.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/cog.yaml rename to audio/speech_synthesis/vqmivc/pytorch/cog.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/config/convert.yaml b/audio/speech_synthesis/vqmivc/pytorch/config/convert.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/config/convert.yaml rename to audio/speech_synthesis/vqmivc/pytorch/config/convert.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/config/model/default.yaml b/audio/speech_synthesis/vqmivc/pytorch/config/model/default.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/config/model/default.yaml rename to audio/speech_synthesis/vqmivc/pytorch/config/model/default.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/config/train.yaml b/audio/speech_synthesis/vqmivc/pytorch/config/train.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/config/train.yaml rename to audio/speech_synthesis/vqmivc/pytorch/config/train.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/config/training/cpc.yaml b/audio/speech_synthesis/vqmivc/pytorch/config/training/cpc.yaml similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/config/training/cpc.yaml rename to audio/speech_synthesis/vqmivc/pytorch/config/training/cpc.yaml diff --git a/speech/speech_synthesis/vqmivc/pytorch/convert.py b/audio/speech_synthesis/vqmivc/pytorch/convert.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/convert.py rename to audio/speech_synthesis/vqmivc/pytorch/convert.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/convert_example.py b/audio/speech_synthesis/vqmivc/pytorch/convert_example.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/convert_example.py rename to audio/speech_synthesis/vqmivc/pytorch/convert_example.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/dataset.py b/audio/speech_synthesis/vqmivc/pytorch/dataset.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/dataset.py rename to audio/speech_synthesis/vqmivc/pytorch/dataset.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/mel_stats/stats.npy
b/audio/speech_synthesis/vqmivc/pytorch/mel_stats/stats.npy similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/mel_stats/stats.npy rename to audio/speech_synthesis/vqmivc/pytorch/mel_stats/stats.npy diff --git a/speech/speech_synthesis/vqmivc/pytorch/mi_estimators.py b/audio/speech_synthesis/vqmivc/pytorch/mi_estimators.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/mi_estimators.py rename to audio/speech_synthesis/vqmivc/pytorch/mi_estimators.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/model_decoder.py b/audio/speech_synthesis/vqmivc/pytorch/model_decoder.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/model_decoder.py rename to audio/speech_synthesis/vqmivc/pytorch/model_decoder.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/model_encoder.py b/audio/speech_synthesis/vqmivc/pytorch/model_encoder.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/model_encoder.py rename to audio/speech_synthesis/vqmivc/pytorch/model_encoder.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/predict.py b/audio/speech_synthesis/vqmivc/pytorch/predict.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/predict.py rename to audio/speech_synthesis/vqmivc/pytorch/predict.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/preprocess.py b/audio/speech_synthesis/vqmivc/pytorch/preprocess.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/preprocess.py rename to audio/speech_synthesis/vqmivc/pytorch/preprocess.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/requirements.txt b/audio/speech_synthesis/vqmivc/pytorch/requirements.txt similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/requirements.txt rename to audio/speech_synthesis/vqmivc/pytorch/requirements.txt diff --git a/speech/speech_synthesis/vqmivc/pytorch/requirements_bi.txt b/audio/speech_synthesis/vqmivc/pytorch/requirements_bi.txt similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/requirements_bi.txt rename to audio/speech_synthesis/vqmivc/pytorch/requirements_bi.txt diff --git a/speech/speech_synthesis/vqmivc/pytorch/scheduler.py b/audio/speech_synthesis/vqmivc/pytorch/scheduler.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/scheduler.py rename to audio/speech_synthesis/vqmivc/pytorch/scheduler.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/spectrogram.py b/audio/speech_synthesis/vqmivc/pytorch/spectrogram.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/spectrogram.py rename to audio/speech_synthesis/vqmivc/pytorch/spectrogram.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/test_wavs/p225_038.wav b/audio/speech_synthesis/vqmivc/pytorch/test_wavs/p225_038.wav similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/test_wavs/p225_038.wav rename to audio/speech_synthesis/vqmivc/pytorch/test_wavs/p225_038.wav diff --git a/speech/speech_synthesis/vqmivc/pytorch/test_wavs/p334_047.wav b/audio/speech_synthesis/vqmivc/pytorch/test_wavs/p334_047.wav similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/test_wavs/p334_047.wav rename to audio/speech_synthesis/vqmivc/pytorch/test_wavs/p334_047.wav diff --git a/speech/speech_synthesis/vqmivc/pytorch/testing_speakers.txt b/audio/speech_synthesis/vqmivc/pytorch/testing_speakers.txt similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/testing_speakers.txt rename to 
audio/speech_synthesis/vqmivc/pytorch/testing_speakers.txt diff --git a/speech/speech_synthesis/vqmivc/pytorch/train.py b/audio/speech_synthesis/vqmivc/pytorch/train.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/train.py rename to audio/speech_synthesis/vqmivc/pytorch/train.py diff --git a/speech/speech_synthesis/vqmivc/pytorch/vocoder/README.md b/audio/speech_synthesis/vqmivc/pytorch/vocoder/README.md similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/vocoder/README.md rename to audio/speech_synthesis/vqmivc/pytorch/vocoder/README.md diff --git a/speech/speech_synthesis/waveglow/pytorch/LICENSE b/audio/speech_synthesis/waveglow/pytorch/LICENSE similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/LICENSE rename to audio/speech_synthesis/waveglow/pytorch/LICENSE diff --git a/speech/speech_synthesis/waveglow/pytorch/README.md b/audio/speech_synthesis/waveglow/pytorch/README.md similarity index 31% rename from speech/speech_synthesis/waveglow/pytorch/README.md rename to audio/speech_synthesis/waveglow/pytorch/README.md index 479b99415ee31fbda16aa12c69e26f8b514c399d..e50f14f36375d727cdef9c4f2300af07770f3f84 100644 --- a/speech/speech_synthesis/waveglow/pytorch/README.md +++ b/audio/speech_synthesis/waveglow/pytorch/README.md @@ -2,47 +2,48 @@ ## Model description -In our recent paper, we propose WaveGlow: a flow-based network capable of generating high quality speech from mel-spectrograms. WaveGlow combines insights from Glow and WaveNet in order to provide fast, efficient and high-quality audio synthesis, without the need for auto-regression. WaveGlow is implemented using only a single network, trained using only a single cost function: maximizing the likelihood of the training data, which makes the training procedure simple and stable. - +In our recent paper, we propose WaveGlow: a flow-based network capable of generating high quality speech from +mel-spectrograms. WaveGlow combines insights from Glow and WaveNet in order to provide fast, efficient and high-quality +audio synthesis, without the need for auto-regression. WaveGlow is implemented using only a single network, trained +using only a single cost function: maximizing the likelihood of the training data, which makes the training procedure +simple and stable. ## Step 1: Installing packages -```shell +```sh pip3 install -r requirements.txt ``` ## Step 2: Preparing datasets -Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/) in the current directory; - - wget -c https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2; - - tar -jxvf LJSpeech-1.1.tar.bz2 + +Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/) in the current directory. + +- wget -c <https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2>; +- tar -jxvf LJSpeech-1.1.tar.bz2 ## Step 3: Training ### On single GPU -``` -$ python3 train.py -c config.json +```sh +python3 train.py -c config.json ``` ### Multiple GPUs on one machine > Warning: DDP & AMP(mixed precision training set `"fp16_run": true` on `config.json`) -``` -$ python3 distributed.py -c config.json -``` - -## Results on BI-V100 (Performace) -### FP32 -| Card Type | Single Card | 8 Cards | -| -------- |-------------:|:--------:| -| BI | 196.845 | 1440.233 | +```sh +python3 distributed.py -c config.json +``` -### AMP -| Card Type | Single Card | 8 Cards | -| -------- |------------:|:--------:| -| BI | 351.040 | 2400.745 | +## Results +| Card Type | Prec.
| Single Card | 8 Cards | +|-----------|-------|------------:|:--------:| +| BI | FP32 | 196.845 | 1440.233 | +| BI | AMP | 351.040 | 2400.745 | ## Reference -https://github.com/NVIDIA/waveglow + +- [waveglow](https://github.com/NVIDIA/waveglow) diff --git a/speech/speech_synthesis/waveglow/pytorch/config.json b/audio/speech_synthesis/waveglow/pytorch/config.json similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/config.json rename to audio/speech_synthesis/waveglow/pytorch/config.json diff --git a/speech/speech_synthesis/waveglow/pytorch/convert_model.py b/audio/speech_synthesis/waveglow/pytorch/convert_model.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/convert_model.py rename to audio/speech_synthesis/waveglow/pytorch/convert_model.py diff --git a/speech/speech_synthesis/waveglow/pytorch/denoiser.py b/audio/speech_synthesis/waveglow/pytorch/denoiser.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/denoiser.py rename to audio/speech_synthesis/waveglow/pytorch/denoiser.py diff --git a/speech/speech_synthesis/waveglow/pytorch/distributed.py b/audio/speech_synthesis/waveglow/pytorch/distributed.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/distributed.py rename to audio/speech_synthesis/waveglow/pytorch/distributed.py diff --git a/speech/speech_synthesis/waveglow/pytorch/glow.py b/audio/speech_synthesis/waveglow/pytorch/glow.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/glow.py rename to audio/speech_synthesis/waveglow/pytorch/glow.py diff --git a/speech/speech_synthesis/waveglow/pytorch/glow_old.py b/audio/speech_synthesis/waveglow/pytorch/glow_old.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/glow_old.py rename to audio/speech_synthesis/waveglow/pytorch/glow_old.py diff --git a/speech/speech_synthesis/waveglow/pytorch/inference.py b/audio/speech_synthesis/waveglow/pytorch/inference.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/inference.py rename to audio/speech_synthesis/waveglow/pytorch/inference.py diff --git a/speech/speech_synthesis/waveglow/pytorch/mel2samp.py b/audio/speech_synthesis/waveglow/pytorch/mel2samp.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/mel2samp.py rename to audio/speech_synthesis/waveglow/pytorch/mel2samp.py diff --git a/speech/speech_synthesis/waveglow/pytorch/requirements.txt b/audio/speech_synthesis/waveglow/pytorch/requirements.txt similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/requirements.txt rename to audio/speech_synthesis/waveglow/pytorch/requirements.txt diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/Dockerfile b/audio/speech_synthesis/waveglow/pytorch/tacotron2/Dockerfile similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/Dockerfile rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/Dockerfile diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/LICENSE b/audio/speech_synthesis/waveglow/pytorch/tacotron2/LICENSE similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/LICENSE rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/LICENSE diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/README.md b/audio/speech_synthesis/waveglow/pytorch/tacotron2/README.md similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/README.md rename to 
audio/speech_synthesis/waveglow/pytorch/tacotron2/README.md diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/audio_processing.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/audio_processing.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/audio_processing.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/audio_processing.py diff --git a/speech/speech_synthesis/tacotron2/pytorch/data_utils.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/data_utils.py similarity index 100% rename from speech/speech_synthesis/tacotron2/pytorch/data_utils.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/data_utils.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/demo.wav b/audio/speech_synthesis/waveglow/pytorch/tacotron2/demo.wav similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/demo.wav rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/demo.wav diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/distributed.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/distributed.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/distributed.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/distributed.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_test_filelist.txt b/audio/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_test_filelist.txt similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_test_filelist.txt rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_test_filelist.txt diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_train_filelist.txt b/audio/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_train_filelist.txt similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_train_filelist.txt rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_train_filelist.txt diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_val_filelist.txt b/audio/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_val_filelist.txt similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_val_filelist.txt rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/filelists/ljs_audio_text_val_filelist.txt diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/hparams.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/hparams.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/hparams.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/hparams.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/layers.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/layers.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/layers.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/layers.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/logger.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/logger.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/logger.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/logger.py diff --git 
a/speech/speech_synthesis/waveglow/pytorch/tacotron2/loss_function.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/loss_function.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/loss_function.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/loss_function.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/loss_scaler.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/loss_scaler.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/loss_scaler.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/loss_scaler.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/model.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/model.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/model.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/model.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/multiproc.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/multiproc.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/multiproc.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/multiproc.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/plotting_utils.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/plotting_utils.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/plotting_utils.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/plotting_utils.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/requirements.txt b/audio/speech_synthesis/waveglow/pytorch/tacotron2/requirements.txt similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/requirements.txt rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/requirements.txt diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/stft.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/stft.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/stft.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/stft.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/tensorboard.png b/audio/speech_synthesis/waveglow/pytorch/tacotron2/tensorboard.png similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/tensorboard.png rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/tensorboard.png diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/text/LICENSE b/audio/speech_synthesis/waveglow/pytorch/tacotron2/text/LICENSE similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/text/LICENSE rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/text/LICENSE diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/text/__init__.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/text/__init__.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/text/__init__.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/text/__init__.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/text/cleaners.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/text/cleaners.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/text/cleaners.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/text/cleaners.py diff --git 
a/speech/speech_synthesis/waveglow/pytorch/tacotron2/text/cmudict.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/text/cmudict.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/text/cmudict.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/text/cmudict.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/text/numbers.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/text/numbers.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/text/numbers.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/text/numbers.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/text/symbols.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/text/symbols.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/text/symbols.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/text/symbols.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/train.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/train.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/train.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/train.py diff --git a/speech/speech_synthesis/waveglow/pytorch/tacotron2/utils.py b/audio/speech_synthesis/waveglow/pytorch/tacotron2/utils.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/tacotron2/utils.py rename to audio/speech_synthesis/waveglow/pytorch/tacotron2/utils.py diff --git a/speech/speech_synthesis/waveglow/pytorch/test_files.txt b/audio/speech_synthesis/waveglow/pytorch/test_files.txt similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/test_files.txt rename to audio/speech_synthesis/waveglow/pytorch/test_files.txt diff --git a/speech/speech_synthesis/waveglow/pytorch/train.py b/audio/speech_synthesis/waveglow/pytorch/train.py similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/train.py rename to audio/speech_synthesis/waveglow/pytorch/train.py diff --git a/speech/speech_synthesis/waveglow/pytorch/train_files.txt b/audio/speech_synthesis/waveglow/pytorch/train_files.txt similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/train_files.txt rename to audio/speech_synthesis/waveglow/pytorch/train_files.txt diff --git a/speech/speech_synthesis/waveglow/pytorch/waveglow_logo.png b/audio/speech_synthesis/waveglow/pytorch/waveglow_logo.png similarity index 100% rename from speech/speech_synthesis/waveglow/pytorch/waveglow_logo.png rename to audio/speech_synthesis/waveglow/pytorch/waveglow_logo.png diff --git a/3d-reconstruction/hashnerf/pytorch/LICENSE b/cv/3d-reconstruction/hashnerf/pytorch/LICENSE similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/LICENSE rename to cv/3d-reconstruction/hashnerf/pytorch/LICENSE diff --git a/3d-reconstruction/hashnerf/pytorch/README.md b/cv/3d-reconstruction/hashnerf/pytorch/README.md similarity index 73% rename from 3d-reconstruction/hashnerf/pytorch/README.md rename to cv/3d-reconstruction/hashnerf/pytorch/README.md index 1b088c17177f6e5c1cc15184cde7dbe5ae169971..3cdc741931212b8319aedaed69a97b9a097a07f2 100644 --- a/3d-reconstruction/hashnerf/pytorch/README.md +++ b/cv/3d-reconstruction/hashnerf/pytorch/README.md @@ -1,23 +1,29 @@ # HashNeRF ## Model description -A PyTorch implementation (Hash) of the NeRF part (grid encoder, density grid ray sampler) in instant-ngp, as described in Instant Neural Graphics 
Primitives with a Multiresolution Hash Encoding. + +A PyTorch implementation (Hash) of the NeRF part (grid encoder, density grid ray sampler) in instant-ngp, as described +in Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ## Step 1: Installation -```bash + +```sh pip3 install -r requirements.txt ``` ## Step 2: Preparing datasets -We use the same data format as instant-ngp, [fox](https://github.com/NVlabs/instant-ngp/tree/master/data/nerf/fox) and blender dataset [nerf_synthetic](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1).Please download and put them under `./data`. +We use the same data format as instant-ngp, [fox](https://github.com/NVlabs/instant-ngp/tree/master/data/nerf/fox) and +blender dataset [nerf_synthetic](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1). Please +download and put them under `./data`. For custom dataset, you should: -1. take a video / many photos from different views + +1. take a video / many photos from different views 2. put the video under a path like ./data/custom/video.mp4 or the images under ./data/custom/images/*.jpg. 3. call the preprocess code: (should install ffmpeg and colmap first! refer to the file for more options) -```bash +```sh python3 scripts/colmap2nerf.py --video ./data/custom/video.mp4 --run_colmap # if use video python3 scripts/colmap2nerf.py --images ./data/custom/images/ --run_colmap # if use images ``` @@ -28,16 +34,18 @@ python3 scripts/colmap2nerf.py --images ./data/custom/images/ --run_colmap # if First time running will take some time to compile the CUDA extensions. -```bash +```sh # train with fox dataset python3 main_nerf.py data/fox --workspace trial_nerf -O -# data/fox is dataset path; --workspace means output path; -O means --fp16 --cuda_ray --preload, which usually gives the best results balanced on speed & performance. + +# data/fox is dataset path; --workspace means output path; +# -O means --fp16 --cuda_ray --preload, which usually gives the best results balanced on speed & performance. # test mode python3 main_nerf.py data/fox --workspace trial_nerf -O --test ``` -```bash +```sh # train with the blender dataset, you should add `--bound 1.0 --scale 0.8 --dt_gamma 0` # --bound means the scene is assumed to be inside box[-bound, bound] # --scale adjusts the camera location to make sure it falls inside the above bounding box.
@@ -45,15 +53,12 @@ python3 main_nerf.py data/fox --workspace trial_nerf -O --test python3 main_nerf.py data/nerf_synthetic/lego --workspace trial_nerf -O --bound 1.0 --scale 0.8 --dt_gamma 0 ``` -```bash +```sh # train with custom dataset(you'll need to tune the scale & bound if necessary): python3 main_nerf.py data/custom_data --workspace trial_nerf -O ``` -## Results on BI-V100 - -@@ -65,90 +60,4 @@ python3 main_nerf.py data/nerf_synthetic/lego --workspace trial_nerf -O - +## Results | Convergence criteria | Configuration (x denotes number of GPUs) | Performance | Accuracy | Power(W) | Scalability | Memory utilization(G) | Stability | |----------------------|------------------------------------------|-------------|----------|------------|-------------|-------------------------|-----------| @@ -62,4 +67,4 @@ python3 main_nerf.py data/custom_data --workspace trial_nerf -O ## Reference - [torch-ngp](https://github.com/ashawkey/torch-ngp) -- [DearPyGui](https://github.com/hoffstadt/DearPyGui) \ No newline at end of file +- [DearPyGui](https://github.com/hoffstadt/DearPyGui) diff --git a/3d-reconstruction/hashnerf/pytorch/activation.py b/cv/3d-reconstruction/hashnerf/pytorch/activation.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/activation.py rename to cv/3d-reconstruction/hashnerf/pytorch/activation.py diff --git a/3d-reconstruction/hashnerf/pytorch/assets/bg_model.jpg b/cv/3d-reconstruction/hashnerf/pytorch/assets/bg_model.jpg similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/assets/bg_model.jpg rename to cv/3d-reconstruction/hashnerf/pytorch/assets/bg_model.jpg diff --git a/3d-reconstruction/hashnerf/pytorch/assets/ccnerf.jpg b/cv/3d-reconstruction/hashnerf/pytorch/assets/ccnerf.jpg similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/assets/ccnerf.jpg rename to cv/3d-reconstruction/hashnerf/pytorch/assets/ccnerf.jpg diff --git a/3d-reconstruction/hashnerf/pytorch/assets/fox.jpg b/cv/3d-reconstruction/hashnerf/pytorch/assets/fox.jpg similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/assets/fox.jpg rename to cv/3d-reconstruction/hashnerf/pytorch/assets/fox.jpg diff --git a/3d-reconstruction/hashnerf/pytorch/assets/gallery.md b/cv/3d-reconstruction/hashnerf/pytorch/assets/gallery.md similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/assets/gallery.md rename to cv/3d-reconstruction/hashnerf/pytorch/assets/gallery.md diff --git a/3d-reconstruction/hashnerf/pytorch/assets/llff.jpg b/cv/3d-reconstruction/hashnerf/pytorch/assets/llff.jpg similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/assets/llff.jpg rename to cv/3d-reconstruction/hashnerf/pytorch/assets/llff.jpg diff --git a/3d-reconstruction/hashnerf/pytorch/assets/truck.jpg b/cv/3d-reconstruction/hashnerf/pytorch/assets/truck.jpg similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/assets/truck.jpg rename to cv/3d-reconstruction/hashnerf/pytorch/assets/truck.jpg diff --git a/3d-reconstruction/hashnerf/pytorch/assets/update_logs.md b/cv/3d-reconstruction/hashnerf/pytorch/assets/update_logs.md similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/assets/update_logs.md rename to cv/3d-reconstruction/hashnerf/pytorch/assets/update_logs.md diff --git a/3d-reconstruction/hashnerf/pytorch/dnerf/gui.py b/cv/3d-reconstruction/hashnerf/pytorch/dnerf/gui.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/dnerf/gui.py rename to cv/3d-reconstruction/hashnerf/pytorch/dnerf/gui.py diff 
--git a/3d-reconstruction/hashnerf/pytorch/dnerf/network.py b/cv/3d-reconstruction/hashnerf/pytorch/dnerf/network.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/dnerf/network.py rename to cv/3d-reconstruction/hashnerf/pytorch/dnerf/network.py diff --git a/3d-reconstruction/hashnerf/pytorch/dnerf/network_basis.py b/cv/3d-reconstruction/hashnerf/pytorch/dnerf/network_basis.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/dnerf/network_basis.py rename to cv/3d-reconstruction/hashnerf/pytorch/dnerf/network_basis.py diff --git a/3d-reconstruction/hashnerf/pytorch/dnerf/network_hyper.py b/cv/3d-reconstruction/hashnerf/pytorch/dnerf/network_hyper.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/dnerf/network_hyper.py rename to cv/3d-reconstruction/hashnerf/pytorch/dnerf/network_hyper.py diff --git a/3d-reconstruction/hashnerf/pytorch/dnerf/provider.py b/cv/3d-reconstruction/hashnerf/pytorch/dnerf/provider.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/dnerf/provider.py rename to cv/3d-reconstruction/hashnerf/pytorch/dnerf/provider.py diff --git a/3d-reconstruction/hashnerf/pytorch/dnerf/renderer.py b/cv/3d-reconstruction/hashnerf/pytorch/dnerf/renderer.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/dnerf/renderer.py rename to cv/3d-reconstruction/hashnerf/pytorch/dnerf/renderer.py diff --git a/3d-reconstruction/hashnerf/pytorch/dnerf/utils.py b/cv/3d-reconstruction/hashnerf/pytorch/dnerf/utils.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/dnerf/utils.py rename to cv/3d-reconstruction/hashnerf/pytorch/dnerf/utils.py diff --git a/3d-reconstruction/hashnerf/pytorch/encoding.py b/cv/3d-reconstruction/hashnerf/pytorch/encoding.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/encoding.py rename to cv/3d-reconstruction/hashnerf/pytorch/encoding.py diff --git a/3d-reconstruction/hashnerf/pytorch/environment.yml b/cv/3d-reconstruction/hashnerf/pytorch/environment.yml similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/environment.yml rename to cv/3d-reconstruction/hashnerf/pytorch/environment.yml diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/__init__.py b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/__init__.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/__init__.py rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/__init__.py diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/backend.py b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/backend.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/backend.py rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/backend.py diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/ffmlp.py b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/ffmlp.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/ffmlp.py rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/ffmlp.py diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/setup.py b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/setup.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/setup.py rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/setup.py diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/src/bindings.cpp b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/bindings.cpp similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/src/bindings.cpp rename to 
cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/bindings.cpp diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/src/cutlass_matmul.h b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/cutlass_matmul.h similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/src/cutlass_matmul.h rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/cutlass_matmul.h diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.cu b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.cu similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.cu rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.cu diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.h b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.h similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.h rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/ffmlp.h diff --git a/3d-reconstruction/hashnerf/pytorch/ffmlp/src/utils.h b/cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/utils.h similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/ffmlp/src/utils.h rename to cv/3d-reconstruction/hashnerf/pytorch/ffmlp/src/utils.h diff --git a/3d-reconstruction/hashnerf/pytorch/freqencoder/__init__.py b/cv/3d-reconstruction/hashnerf/pytorch/freqencoder/__init__.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/freqencoder/__init__.py rename to cv/3d-reconstruction/hashnerf/pytorch/freqencoder/__init__.py diff --git a/3d-reconstruction/hashnerf/pytorch/freqencoder/backend.py b/cv/3d-reconstruction/hashnerf/pytorch/freqencoder/backend.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/freqencoder/backend.py rename to cv/3d-reconstruction/hashnerf/pytorch/freqencoder/backend.py diff --git a/3d-reconstruction/hashnerf/pytorch/freqencoder/freq.py b/cv/3d-reconstruction/hashnerf/pytorch/freqencoder/freq.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/freqencoder/freq.py rename to cv/3d-reconstruction/hashnerf/pytorch/freqencoder/freq.py diff --git a/3d-reconstruction/hashnerf/pytorch/freqencoder/setup.py b/cv/3d-reconstruction/hashnerf/pytorch/freqencoder/setup.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/freqencoder/setup.py rename to cv/3d-reconstruction/hashnerf/pytorch/freqencoder/setup.py diff --git a/3d-reconstruction/hashnerf/pytorch/freqencoder/src/bindings.cpp b/cv/3d-reconstruction/hashnerf/pytorch/freqencoder/src/bindings.cpp similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/freqencoder/src/bindings.cpp rename to cv/3d-reconstruction/hashnerf/pytorch/freqencoder/src/bindings.cpp diff --git a/3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.cu b/cv/3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.cu similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.cu rename to cv/3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.cu diff --git a/3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.h b/cv/3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.h similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.h rename to cv/3d-reconstruction/hashnerf/pytorch/freqencoder/src/freqencoder.h diff --git a/3d-reconstruction/hashnerf/pytorch/gridencoder/__init__.py b/cv/3d-reconstruction/hashnerf/pytorch/gridencoder/__init__.py similarity index 100% rename from 
3d-reconstruction/hashnerf/pytorch/gridencoder/__init__.py rename to cv/3d-reconstruction/hashnerf/pytorch/gridencoder/__init__.py diff --git a/3d-reconstruction/hashnerf/pytorch/gridencoder/backend.py b/cv/3d-reconstruction/hashnerf/pytorch/gridencoder/backend.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/gridencoder/backend.py rename to cv/3d-reconstruction/hashnerf/pytorch/gridencoder/backend.py diff --git a/3d-reconstruction/hashnerf/pytorch/gridencoder/grid.py b/cv/3d-reconstruction/hashnerf/pytorch/gridencoder/grid.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/gridencoder/grid.py rename to cv/3d-reconstruction/hashnerf/pytorch/gridencoder/grid.py diff --git a/3d-reconstruction/hashnerf/pytorch/gridencoder/setup.py b/cv/3d-reconstruction/hashnerf/pytorch/gridencoder/setup.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/gridencoder/setup.py rename to cv/3d-reconstruction/hashnerf/pytorch/gridencoder/setup.py diff --git a/3d-reconstruction/hashnerf/pytorch/gridencoder/src/bindings.cpp b/cv/3d-reconstruction/hashnerf/pytorch/gridencoder/src/bindings.cpp similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/gridencoder/src/bindings.cpp rename to cv/3d-reconstruction/hashnerf/pytorch/gridencoder/src/bindings.cpp diff --git a/3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.cu b/cv/3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.cu similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.cu rename to cv/3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.cu diff --git a/3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.h b/cv/3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.h similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.h rename to cv/3d-reconstruction/hashnerf/pytorch/gridencoder/src/gridencoder.h diff --git a/3d-reconstruction/hashnerf/pytorch/loss.py b/cv/3d-reconstruction/hashnerf/pytorch/loss.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/loss.py rename to cv/3d-reconstruction/hashnerf/pytorch/loss.py diff --git a/3d-reconstruction/hashnerf/pytorch/main_CCNeRF.py b/cv/3d-reconstruction/hashnerf/pytorch/main_CCNeRF.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/main_CCNeRF.py rename to cv/3d-reconstruction/hashnerf/pytorch/main_CCNeRF.py diff --git a/3d-reconstruction/hashnerf/pytorch/main_dnerf.py b/cv/3d-reconstruction/hashnerf/pytorch/main_dnerf.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/main_dnerf.py rename to cv/3d-reconstruction/hashnerf/pytorch/main_dnerf.py diff --git a/3d-reconstruction/hashnerf/pytorch/main_nerf.py b/cv/3d-reconstruction/hashnerf/pytorch/main_nerf.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/main_nerf.py rename to cv/3d-reconstruction/hashnerf/pytorch/main_nerf.py diff --git a/3d-reconstruction/hashnerf/pytorch/main_sdf.py b/cv/3d-reconstruction/hashnerf/pytorch/main_sdf.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/main_sdf.py rename to cv/3d-reconstruction/hashnerf/pytorch/main_sdf.py diff --git a/3d-reconstruction/hashnerf/pytorch/main_tensoRF.py b/cv/3d-reconstruction/hashnerf/pytorch/main_tensoRF.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/main_tensoRF.py rename to cv/3d-reconstruction/hashnerf/pytorch/main_tensoRF.py diff --git 
a/3d-reconstruction/hashnerf/pytorch/nerf/clip_utils.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/clip_utils.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/clip_utils.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/clip_utils.py diff --git a/3d-reconstruction/hashnerf/pytorch/nerf/gui.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/gui.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/gui.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/gui.py diff --git a/3d-reconstruction/hashnerf/pytorch/nerf/network.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/network.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/network.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/network.py diff --git a/3d-reconstruction/hashnerf/pytorch/nerf/network_ff.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/network_ff.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/network_ff.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/network_ff.py diff --git a/3d-reconstruction/hashnerf/pytorch/nerf/network_tcnn.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/network_tcnn.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/network_tcnn.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/network_tcnn.py diff --git a/3d-reconstruction/hashnerf/pytorch/nerf/provider.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/provider.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/provider.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/provider.py diff --git a/3d-reconstruction/hashnerf/pytorch/nerf/renderer.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/renderer.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/renderer.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/renderer.py diff --git a/3d-reconstruction/hashnerf/pytorch/nerf/utils.py b/cv/3d-reconstruction/hashnerf/pytorch/nerf/utils.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/nerf/utils.py rename to cv/3d-reconstruction/hashnerf/pytorch/nerf/utils.py diff --git a/3d-reconstruction/hashnerf/pytorch/raymarching/__init__.py b/cv/3d-reconstruction/hashnerf/pytorch/raymarching/__init__.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/raymarching/__init__.py rename to cv/3d-reconstruction/hashnerf/pytorch/raymarching/__init__.py diff --git a/3d-reconstruction/hashnerf/pytorch/raymarching/backend.py b/cv/3d-reconstruction/hashnerf/pytorch/raymarching/backend.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/raymarching/backend.py rename to cv/3d-reconstruction/hashnerf/pytorch/raymarching/backend.py diff --git a/3d-reconstruction/hashnerf/pytorch/raymarching/raymarching.py b/cv/3d-reconstruction/hashnerf/pytorch/raymarching/raymarching.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/raymarching/raymarching.py rename to cv/3d-reconstruction/hashnerf/pytorch/raymarching/raymarching.py diff --git a/3d-reconstruction/hashnerf/pytorch/raymarching/setup.py b/cv/3d-reconstruction/hashnerf/pytorch/raymarching/setup.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/raymarching/setup.py rename to cv/3d-reconstruction/hashnerf/pytorch/raymarching/setup.py diff --git a/3d-reconstruction/hashnerf/pytorch/raymarching/src/bindings.cpp b/cv/3d-reconstruction/hashnerf/pytorch/raymarching/src/bindings.cpp similarity index 100% rename from 
3d-reconstruction/hashnerf/pytorch/raymarching/src/bindings.cpp rename to cv/3d-reconstruction/hashnerf/pytorch/raymarching/src/bindings.cpp diff --git a/3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.cu b/cv/3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.cu similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.cu rename to cv/3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.cu diff --git a/3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.h b/cv/3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.h similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.h rename to cv/3d-reconstruction/hashnerf/pytorch/raymarching/src/raymarching.h diff --git a/3d-reconstruction/hashnerf/pytorch/requirements.txt b/cv/3d-reconstruction/hashnerf/pytorch/requirements.txt similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/requirements.txt rename to cv/3d-reconstruction/hashnerf/pytorch/requirements.txt diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/colmap2nerf.py b/cv/3d-reconstruction/hashnerf/pytorch/scripts/colmap2nerf.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/colmap2nerf.py rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/colmap2nerf.py diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/hyper2nerf.py b/cv/3d-reconstruction/hashnerf/pytorch/scripts/hyper2nerf.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/hyper2nerf.py rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/hyper2nerf.py diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/install_ext.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/install_ext.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/install_ext.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/install_ext.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/llff2nerf.py b/cv/3d-reconstruction/hashnerf/pytorch/scripts/llff2nerf.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/llff2nerf.py rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/llff2nerf.py diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_ccnerf.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_ccnerf.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_ccnerf.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_ccnerf.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_dnerf.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_dnerf.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_dnerf.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_dnerf.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf_clip.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf_clip.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf_clip.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_nerf_clip.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_tensoRF.sh 
b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_tensoRF.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_gui_tensoRF.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_gui_tensoRF.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_nerf.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_nerf.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_nerf.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_nerf.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_sdf.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_sdf.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_sdf.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_sdf.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/run_tensoRF.sh b/cv/3d-reconstruction/hashnerf/pytorch/scripts/run_tensoRF.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/run_tensoRF.sh rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/run_tensoRF.sh diff --git a/3d-reconstruction/hashnerf/pytorch/scripts/tanks2nerf.py b/cv/3d-reconstruction/hashnerf/pytorch/scripts/tanks2nerf.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/scripts/tanks2nerf.py rename to cv/3d-reconstruction/hashnerf/pytorch/scripts/tanks2nerf.py diff --git a/3d-reconstruction/hashnerf/pytorch/sdf/netowrk.py b/cv/3d-reconstruction/hashnerf/pytorch/sdf/netowrk.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/sdf/netowrk.py rename to cv/3d-reconstruction/hashnerf/pytorch/sdf/netowrk.py diff --git a/3d-reconstruction/hashnerf/pytorch/sdf/netowrk_ff.py b/cv/3d-reconstruction/hashnerf/pytorch/sdf/netowrk_ff.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/sdf/netowrk_ff.py rename to cv/3d-reconstruction/hashnerf/pytorch/sdf/netowrk_ff.py diff --git a/3d-reconstruction/hashnerf/pytorch/sdf/network_tcnn.py b/cv/3d-reconstruction/hashnerf/pytorch/sdf/network_tcnn.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/sdf/network_tcnn.py rename to cv/3d-reconstruction/hashnerf/pytorch/sdf/network_tcnn.py diff --git a/3d-reconstruction/hashnerf/pytorch/sdf/provider.py b/cv/3d-reconstruction/hashnerf/pytorch/sdf/provider.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/sdf/provider.py rename to cv/3d-reconstruction/hashnerf/pytorch/sdf/provider.py diff --git a/3d-reconstruction/hashnerf/pytorch/sdf/utils.py b/cv/3d-reconstruction/hashnerf/pytorch/sdf/utils.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/sdf/utils.py rename to cv/3d-reconstruction/hashnerf/pytorch/sdf/utils.py diff --git a/3d-reconstruction/hashnerf/pytorch/shencoder/__init__.py b/cv/3d-reconstruction/hashnerf/pytorch/shencoder/__init__.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/shencoder/__init__.py rename to cv/3d-reconstruction/hashnerf/pytorch/shencoder/__init__.py diff --git a/3d-reconstruction/hashnerf/pytorch/shencoder/backend.py b/cv/3d-reconstruction/hashnerf/pytorch/shencoder/backend.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/shencoder/backend.py rename to cv/3d-reconstruction/hashnerf/pytorch/shencoder/backend.py diff --git a/3d-reconstruction/hashnerf/pytorch/shencoder/setup.py b/cv/3d-reconstruction/hashnerf/pytorch/shencoder/setup.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/shencoder/setup.py rename to 
cv/3d-reconstruction/hashnerf/pytorch/shencoder/setup.py diff --git a/3d-reconstruction/hashnerf/pytorch/shencoder/sphere_harmonics.py b/cv/3d-reconstruction/hashnerf/pytorch/shencoder/sphere_harmonics.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/shencoder/sphere_harmonics.py rename to cv/3d-reconstruction/hashnerf/pytorch/shencoder/sphere_harmonics.py diff --git a/3d-reconstruction/hashnerf/pytorch/shencoder/src/bindings.cpp b/cv/3d-reconstruction/hashnerf/pytorch/shencoder/src/bindings.cpp similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/shencoder/src/bindings.cpp rename to cv/3d-reconstruction/hashnerf/pytorch/shencoder/src/bindings.cpp diff --git a/3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.cu b/cv/3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.cu similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.cu rename to cv/3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.cu diff --git a/3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.h b/cv/3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.h similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.h rename to cv/3d-reconstruction/hashnerf/pytorch/shencoder/src/shencoder.h diff --git a/3d-reconstruction/hashnerf/pytorch/tensoRF/network.py b/cv/3d-reconstruction/hashnerf/pytorch/tensoRF/network.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/tensoRF/network.py rename to cv/3d-reconstruction/hashnerf/pytorch/tensoRF/network.py diff --git a/3d-reconstruction/hashnerf/pytorch/tensoRF/network_cc.py b/cv/3d-reconstruction/hashnerf/pytorch/tensoRF/network_cc.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/tensoRF/network_cc.py rename to cv/3d-reconstruction/hashnerf/pytorch/tensoRF/network_cc.py diff --git a/3d-reconstruction/hashnerf/pytorch/tensoRF/network_cp.py b/cv/3d-reconstruction/hashnerf/pytorch/tensoRF/network_cp.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/tensoRF/network_cp.py rename to cv/3d-reconstruction/hashnerf/pytorch/tensoRF/network_cp.py diff --git a/3d-reconstruction/hashnerf/pytorch/tensoRF/utils.py b/cv/3d-reconstruction/hashnerf/pytorch/tensoRF/utils.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/tensoRF/utils.py rename to cv/3d-reconstruction/hashnerf/pytorch/tensoRF/utils.py diff --git a/3d-reconstruction/hashnerf/pytorch/testing/test_ffmlp.py b/cv/3d-reconstruction/hashnerf/pytorch/testing/test_ffmlp.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/testing/test_ffmlp.py rename to cv/3d-reconstruction/hashnerf/pytorch/testing/test_ffmlp.py diff --git a/3d-reconstruction/hashnerf/pytorch/testing/test_hashencoder.py b/cv/3d-reconstruction/hashnerf/pytorch/testing/test_hashencoder.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/testing/test_hashencoder.py rename to cv/3d-reconstruction/hashnerf/pytorch/testing/test_hashencoder.py diff --git a/3d-reconstruction/hashnerf/pytorch/testing/test_hashgrid_grad.py b/cv/3d-reconstruction/hashnerf/pytorch/testing/test_hashgrid_grad.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/testing/test_hashgrid_grad.py rename to cv/3d-reconstruction/hashnerf/pytorch/testing/test_hashgrid_grad.py diff --git a/3d-reconstruction/hashnerf/pytorch/testing/test_raymarching.py b/cv/3d-reconstruction/hashnerf/pytorch/testing/test_raymarching.py similarity index 
100% rename from 3d-reconstruction/hashnerf/pytorch/testing/test_raymarching.py rename to cv/3d-reconstruction/hashnerf/pytorch/testing/test_raymarching.py diff --git a/3d-reconstruction/hashnerf/pytorch/testing/test_shencoder.py b/cv/3d-reconstruction/hashnerf/pytorch/testing/test_shencoder.py similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/testing/test_shencoder.py rename to cv/3d-reconstruction/hashnerf/pytorch/testing/test_shencoder.py diff --git a/3d-reconstruction/hashnerf/pytorch/train.sh b/cv/3d-reconstruction/hashnerf/pytorch/train.sh similarity index 100% rename from 3d-reconstruction/hashnerf/pytorch/train.sh rename to cv/3d-reconstruction/hashnerf/pytorch/train.sh diff --git a/gnn/graph_attention/gat/paddlepaddle/README.md b/cv/gnn/gat/paddlepaddle/README.md similarity index 65% rename from gnn/graph_attention/gat/paddlepaddle/README.md rename to cv/gnn/gat/paddlepaddle/README.md index 5aa9a604d4036f71f9faa164a6997d1f7a29ad45..6caf3844a95b29b5c11ecfe40f44363a01e306dc 100644 --- a/gnn/graph_attention/gat/paddlepaddle/README.md +++ b/cv/gnn/gat/paddlepaddle/README.md @@ -1,6 +1,9 @@ # GAT (Graph Attention Networks) -[Graph Attention Networks \(GAT\)](https://arxiv.org/abs/1710.10903) is a novel architectures that operate on graph-structured data, which leverages masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. Based on PGL, we reproduce GAT algorithms and reach the same level of indicators as the paper in citation network benchmarks. +[Graph Attention Networks \(GAT\)](https://arxiv.org/abs/1710.10903) is a novel architectures that operate on +graph-structured data, which leverages masked self-attentional layers to address the shortcomings of prior methods based +on graph convolutions or their approximations. Based on PGL, we reproduce GAT algorithms and reach the same level of +indicators as the paper in citation network benchmarks. ## Step 1: Installation diff --git a/gnn/text_classification/GCN/mindspore/README.md b/cv/gnn/gcn/mindspore/README.md similarity index 42% rename from gnn/text_classification/GCN/mindspore/README.md rename to cv/gnn/gcn/mindspore/README.md index 8e66455df4ef32000684b8fcd172bae29a649ea2..a15ece1f980be9b7b3caf91572b8048ea98adb99 100755 --- a/gnn/text_classification/GCN/mindspore/README.md +++ b/cv/gnn/gcn/mindspore/README.md @@ -1,35 +1,46 @@ # GCN ## Model description -GCN(Graph Convolutional Networks) was proposed in 2016 and designed to do semi-supervised learning on graph-structured data. A scalable approach based on an efficient variant of convolutional neural networks which operate directly on graphs was presented. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. -[Paper](https://arxiv.org/abs/1609.02907): Thomas N. Kipf, Max Welling. 2016. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR 2016. +GCN(Graph Convolutional Networks) was proposed in 2016 and designed to do semi-supervised learning on graph-structured +data. A scalable approach based on an efficient variant of convolutional neural networks which operate directly on +graphs was presented. The model scales linearly in the number of graph edges and learns hidden layer representations +that encode both local graph structure and features of nodes. + +[Paper](https://arxiv.org/abs/1609.02907): Thomas N. Kipf, Max Welling. 2016. 
Semi-Supervised Classification with Graph +Convolutional Networks. In ICLR 2016. ## Step 1: Installing -``` + +```sh pip3 install -r requirements.txt pip3 install easydict ``` ## Step 2: Prepare Datasets -Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below. +Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant +domain/network architecture. In the following sections, we will introduce how to run the scripts using the related +dataset below. | Dataset | Type | Nodes | Edges | Classes | Features | Label rate | -| ------- | ---------------: |-----: | ----: | ------: |--------: | ---------: | -| Cora | Citation network | 2708 | 5429 | 7 | 1433 | 0.052 | -| Citeseer| Citation network | 3327 | 4732 | 6 | 3703 | 0.036 | +|----------|------------------|-------|-------|---------|----------|------------| +| Cora | Citation network | 2708 | 5429 | 7 | 1433 | 0.052 | +| Citeseer | Citation network | 3327 | 4732 | 6 | 3703 | 0.036 | ## Step 3: Training -``` + +```sh cd scripts bash train_gcn_1p.sh ``` + ## Evaluation -```bash +```sh cd .. -python3 eval.py --data_dir=scripts/data_mr/cora --device_target="GPU" --model_ckpt scripts/train/ckpt/ckpt_gcn-200_1.ckpt &> eval.log & +python3 eval.py --data_dir=scripts/data_mr/cora --device_target="GPU" \ + --model_ckpt scripts/train/ckpt/ckpt_gcn-200_1.ckpt &> eval.log & ``` ## Evaluation result diff --git a/gnn/text_classification/GCN/mindspore/ascend310_infer/CMakeLists.txt b/cv/gnn/gcn/mindspore/ascend310_infer/CMakeLists.txt similarity index 100% rename from gnn/text_classification/GCN/mindspore/ascend310_infer/CMakeLists.txt rename to cv/gnn/gcn/mindspore/ascend310_infer/CMakeLists.txt diff --git a/gnn/text_classification/GCN/mindspore/ascend310_infer/build.sh b/cv/gnn/gcn/mindspore/ascend310_infer/build.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/ascend310_infer/build.sh rename to cv/gnn/gcn/mindspore/ascend310_infer/build.sh diff --git a/gnn/text_classification/GCN/mindspore/ascend310_infer/inc/utils.h b/cv/gnn/gcn/mindspore/ascend310_infer/inc/utils.h similarity index 100% rename from gnn/text_classification/GCN/mindspore/ascend310_infer/inc/utils.h rename to cv/gnn/gcn/mindspore/ascend310_infer/inc/utils.h diff --git a/gnn/text_classification/GCN/mindspore/ascend310_infer/src/main.cc b/cv/gnn/gcn/mindspore/ascend310_infer/src/main.cc similarity index 100% rename from gnn/text_classification/GCN/mindspore/ascend310_infer/src/main.cc rename to cv/gnn/gcn/mindspore/ascend310_infer/src/main.cc diff --git a/gnn/text_classification/GCN/mindspore/ascend310_infer/src/utils.cc b/cv/gnn/gcn/mindspore/ascend310_infer/src/utils.cc similarity index 100% rename from gnn/text_classification/GCN/mindspore/ascend310_infer/src/utils.cc rename to cv/gnn/gcn/mindspore/ascend310_infer/src/utils.cc diff --git a/gnn/text_classification/GCN/mindspore/default_config.yaml b/cv/gnn/gcn/mindspore/default_config.yaml similarity index 100% rename from gnn/text_classification/GCN/mindspore/default_config.yaml rename to cv/gnn/gcn/mindspore/default_config.yaml diff --git a/gnn/text_classification/GCN/mindspore/eval.py b/cv/gnn/gcn/mindspore/eval.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/eval.py rename to cv/gnn/gcn/mindspore/eval.py diff --git 
a/gnn/text_classification/GCN/mindspore/export.py b/cv/gnn/gcn/mindspore/export.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/export.py rename to cv/gnn/gcn/mindspore/export.py diff --git a/gnn/text_classification/GCN/mindspore/mindspore_hub_conf.py b/cv/gnn/gcn/mindspore/mindspore_hub_conf.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/mindspore_hub_conf.py rename to cv/gnn/gcn/mindspore/mindspore_hub_conf.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/data/__init__.py b/cv/gnn/gcn/mindspore/model_utils/__init__.py old mode 100644 new mode 100755 similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/data/__init__.py rename to cv/gnn/gcn/mindspore/model_utils/__init__.py diff --git a/gnn/text_classification/GCN/mindspore/model_utils/config.py b/cv/gnn/gcn/mindspore/model_utils/config.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/model_utils/config.py rename to cv/gnn/gcn/mindspore/model_utils/config.py diff --git a/gnn/text_classification/GCN/mindspore/model_utils/device_adapter.py b/cv/gnn/gcn/mindspore/model_utils/device_adapter.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/model_utils/device_adapter.py rename to cv/gnn/gcn/mindspore/model_utils/device_adapter.py diff --git a/gnn/text_classification/GCN/mindspore/model_utils/local_adapter.py b/cv/gnn/gcn/mindspore/model_utils/local_adapter.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/model_utils/local_adapter.py rename to cv/gnn/gcn/mindspore/model_utils/local_adapter.py diff --git a/gnn/text_classification/GCN/mindspore/model_utils/moxing_adapter.py b/cv/gnn/gcn/mindspore/model_utils/moxing_adapter.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/model_utils/moxing_adapter.py rename to cv/gnn/gcn/mindspore/model_utils/moxing_adapter.py diff --git a/gnn/text_classification/GCN/mindspore/postprocess.py b/cv/gnn/gcn/mindspore/postprocess.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/postprocess.py rename to cv/gnn/gcn/mindspore/postprocess.py diff --git a/gnn/text_classification/GCN/mindspore/preprocess.py b/cv/gnn/gcn/mindspore/preprocess.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/preprocess.py rename to cv/gnn/gcn/mindspore/preprocess.py diff --git a/gnn/text_classification/GCN/mindspore/requirements.txt b/cv/gnn/gcn/mindspore/requirements.txt similarity index 100% rename from gnn/text_classification/GCN/mindspore/requirements.txt rename to cv/gnn/gcn/mindspore/requirements.txt diff --git a/gnn/text_classification/GCN/mindspore/scripts/run_distribute_train_gpu.sh b/cv/gnn/gcn/mindspore/scripts/run_distribute_train_gpu.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/scripts/run_distribute_train_gpu.sh rename to cv/gnn/gcn/mindspore/scripts/run_distribute_train_gpu.sh diff --git a/gnn/text_classification/GCN/mindspore/scripts/run_eval_gpu.sh b/cv/gnn/gcn/mindspore/scripts/run_eval_gpu.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/scripts/run_eval_gpu.sh rename to cv/gnn/gcn/mindspore/scripts/run_eval_gpu.sh diff --git a/gnn/text_classification/GCN/mindspore/scripts/run_infer_310.sh b/cv/gnn/gcn/mindspore/scripts/run_infer_310.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/scripts/run_infer_310.sh rename to cv/gnn/gcn/mindspore/scripts/run_infer_310.sh diff --git 
a/gnn/text_classification/GCN/mindspore/scripts/run_process_data.sh b/cv/gnn/gcn/mindspore/scripts/run_process_data.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/scripts/run_process_data.sh rename to cv/gnn/gcn/mindspore/scripts/run_process_data.sh diff --git a/gnn/text_classification/GCN/mindspore/scripts/run_train.sh b/cv/gnn/gcn/mindspore/scripts/run_train.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/scripts/run_train.sh rename to cv/gnn/gcn/mindspore/scripts/run_train.sh diff --git a/gnn/text_classification/GCN/mindspore/scripts/run_train_gpu.sh b/cv/gnn/gcn/mindspore/scripts/run_train_gpu.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/scripts/run_train_gpu.sh rename to cv/gnn/gcn/mindspore/scripts/run_train_gpu.sh diff --git a/gnn/text_classification/GCN/mindspore/scripts/train_gcn_1p.sh b/cv/gnn/gcn/mindspore/scripts/train_gcn_1p.sh similarity index 100% rename from gnn/text_classification/GCN/mindspore/scripts/train_gcn_1p.sh rename to cv/gnn/gcn/mindspore/scripts/train_gcn_1p.sh diff --git a/gnn/text_classification/GCN/mindspore/src/config.py b/cv/gnn/gcn/mindspore/src/config.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/src/config.py rename to cv/gnn/gcn/mindspore/src/config.py diff --git a/gnn/text_classification/GCN/mindspore/src/dataset.py b/cv/gnn/gcn/mindspore/src/dataset.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/src/dataset.py rename to cv/gnn/gcn/mindspore/src/dataset.py diff --git a/gnn/text_classification/GCN/mindspore/src/eval_callback.py b/cv/gnn/gcn/mindspore/src/eval_callback.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/src/eval_callback.py rename to cv/gnn/gcn/mindspore/src/eval_callback.py diff --git a/gnn/text_classification/GCN/mindspore/src/gcn.py b/cv/gnn/gcn/mindspore/src/gcn.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/src/gcn.py rename to cv/gnn/gcn/mindspore/src/gcn.py diff --git a/gnn/text_classification/GCN/mindspore/src/metrics.py b/cv/gnn/gcn/mindspore/src/metrics.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/src/metrics.py rename to cv/gnn/gcn/mindspore/src/metrics.py diff --git a/gnn/text_classification/GCN/mindspore/train.py b/cv/gnn/gcn/mindspore/train.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/train.py rename to cv/gnn/gcn/mindspore/train.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/nn/modules/__init__.py b/cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/cora/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/nn/modules/__init__.py rename to cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/cora/__init__.py diff --git a/gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/cora/mr_api.py b/cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/cora/mr_api.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/cora/mr_api.py rename to cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/cora/mr_api.py diff --git a/gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/graph_map_schema.py b/cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/graph_map_schema.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/graph_map_schema.py rename to cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/graph_map_schema.py diff --git 
a/gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/writer.py b/cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/writer.py similarity index 100% rename from gnn/text_classification/GCN/mindspore/utils/graph_to_mindrecord/writer.py rename to cv/gnn/gcn/mindspore/utils/graph_to_mindrecord/writer.py diff --git a/gnn/text_classification/GCN/paddlepaddle/README.md b/cv/gnn/gcn/paddlepaddle/README.md similarity index 50% rename from gnn/text_classification/GCN/paddlepaddle/README.md rename to cv/gnn/gcn/paddlepaddle/README.md index 256dc6b70bef027a4168a7978334f81a00f9b09d..a36c0901cb95ccc1948e5482aa16f159c6c695cb 100644 --- a/gnn/text_classification/GCN/paddlepaddle/README.md +++ b/cv/gnn/gcn/paddlepaddle/README.md @@ -2,18 +2,22 @@ ## Model description -GCN(Graph Convolutional Networks) was proposed in 2016 and designed to do semi-supervised learning on graph-structured data. A scalable approach based on an efficient variant of convolutional neural networks which operate directly on graphs was presented. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. +GCN(Graph Convolutional Networks) was proposed in 2016 and designed to do semi-supervised learning on graph-structured +data. A scalable approach based on an efficient variant of convolutional neural networks which operate directly on +graphs was presented. The model scales linearly in the number of graph edges and learns hidden layer representations +that encode both local graph structure and features of nodes. -[Paper](https://gitee.com/link?target=https%3A%2F%2Farxiv.org%2Fabs%2F1609.02907): Thomas N. Kipf, Max Welling. 2016. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR 2016. +[Paper](https://gitee.com/link?target=https%3A%2F%2Farxiv.org%2Fabs%2F1609.02907): Thomas N. Kipf, Max Welling. 2016. +Semi-Supervised Classification with Graph Convolutional Networks. In ICLR 2016. ## Step 1:Installation -``` +```sh # Clone PGL repository git clone https://github.com/PaddlePaddle/PGL.git ``` -``` +```sh # Pip the requirements pip3 install pgl pip3 install urllib3==1.23 @@ -28,7 +32,7 @@ The datasets contain three citation networks: CORA, PUBMED, CITESEER. 
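To make the propagation rule concrete before training, here is a minimal NumPy sketch of a single GCN layer on a toy three-node citation graph. It only illustrates the formula described above; it is not the PGL code that `train.py` runs, and the array sizes are arbitrary.

```python
# Toy illustration of one GCN propagation step:
# H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency of a 3-paper citation graph
H = np.random.rand(3, 8)                 # input node features
W = np.random.rand(8, 4)                 # learnable layer weights

A_hat = A + np.eye(3)                                   # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # symmetric normalization
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
print(H_next.shape)  # (3, 4): a new 4-dimensional embedding per node
```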
## Step 3:Training -``` +```sh cd PGL/examples/gcn/ # Run on CPU @@ -36,16 +40,15 @@ python3 train.py --dataset cora # Run on GPU CUDA_VISIBLE_DEVICES=0 python3 train.py --dataset cora - ``` ## Results -| GPUS | Datasets | speed | Accurary | -| --------- | -------- | -------- | -------- | -| BI V100×1 | CORA | 0.0064 | 80.3% | -| BI V100×1 | PUBMED | 0.0076 | 79.0% | -| BI V100×1 | CITESEER | 0.0085 | 70.6% | +| GPUS | Datasets | speed | Accurary | +|-----------|----------|--------|----------| +| BI V100×1 | CORA | 0.0064 | 80.3% | +| BI V100×1 | PUBMED | 0.0076 | 79.0% | +| BI V100×1 | CITESEER | 0.0085 | 70.6% | ## Reference diff --git a/gnn/node_classification/graphsage/paddlepaddle/README.md b/cv/gnn/graphsage/paddlepaddle/README.md similarity index 39% rename from gnn/node_classification/graphsage/paddlepaddle/README.md rename to cv/gnn/graphsage/paddlepaddle/README.md index d38de46cac5e7160610f23277dace1fa4c1f2716..ff1851654ef4f79da7d49ee84b54d652438f6e5a 100644 --- a/gnn/node_classification/graphsage/paddlepaddle/README.md +++ b/cv/gnn/graphsage/paddlepaddle/README.md @@ -1,32 +1,38 @@ # GraphSAGE (Inductive Representation Learning on Large Graphs) -[GraphSAGE](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf) is a general inductive framework that leverages node feature -information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, GraphSAGE learns a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Based on PGL, we reproduce GraphSAGE algorithm and reach the same level of indicators as the paper in Reddit Dataset. Besides, this is an example of subgraph sampling and training in PGL. +[GraphSAGE](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf) is a general inductive framework that +leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen +data. Instead of training individual embeddings for each node, GraphSAGE learns a function that generates embeddings by +sampling and aggregating features from a node’s local neighborhood. Based on PGL, we reproduce GraphSAGE algorithm and +reach the same level of indicators as the paper in Reddit Dataset. Besides, this is an example of subgraph sampling and +training in PGL. ## Step 1: Installation -```bash +```sh git clone -b 2.2.5 https://github.com/PaddlePaddle/PGL pip3 install scikit-learn pip3 install pgl==2.2.5 ``` ## Step 2: Preparing datasets -The reddit dataset should be downloaded from the following links and placed in the directory ```pgl.data```. The details for Reddit Dataset can be found [here](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf). -- reddit.npz https://drive.google.com/open?id=19SphVl_Oe8SJ1r87Hr5a6znx3nJu1F2J -- reddit_adj.npz: https://drive.google.com/open?id=174vb0Ws7Vxk_QTUtxqTgDHSQ4El4qDHt +The reddit dataset should be downloaded from the following links and placed in the directory ```pgl.data```. The details +for Reddit Dataset can be found [here](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf). 
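Once the two archives named in the list below are in place, a quick check that they open cleanly can save a failed training run. This is an optional, hypothetical snippet; the array keys inside each archive are not documented here, so it only lists them.

```python
# Sanity-check the downloaded Reddit archives before linking them into pgl/data.
import numpy as np

for path in ("reddit.npz", "reddit_adj.npz"):
    with np.load(path) as archive:
        print(path, "contains arrays:", archive.files)
```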
-```bash +- reddit.npz: <https://drive.google.com/open?id=19SphVl_Oe8SJ1r87Hr5a6znx3nJu1F2J> +- reddit_adj.npz: <https://drive.google.com/open?id=174vb0Ws7Vxk_QTUtxqTgDHSQ4El4qDHt> + +```sh # Make soft link to reddit dataset path ln -s /path/to/reddit/ /usr/local/lib/python3.7/site-packages/pgl/data/ ``` ## Step 3: Training -T train a GraphSAGE model on Reddit Dataset, you can just run +To train a GraphSAGE model on the Reddit dataset, you can just run: -```bash +```sh cd PGL/examples/graphsage/cpu_sample_version CUDA_VISIBLE_DEVICES=0 python3 train.py --epoch 10 --normalize --symmetry @@ -34,9 +40,9 @@ CUDA_VISIBLE_DEVICES=0 python3 train.py --epoch 10 --normalize --symmetry ## Results -| GPUs | Accuracy | FPS | -| --- | --- | --- | -| BI-V100 x1 | 0.9072 | 47.54 s/it | +| GPUs | Accuracy | FPS | +|------------|----------|------------| +| BI-V100 x1 | 0.9072 | 47.54 s/it | ## Reference diff --git a/cv/tracking/README.md b/cv/multi_object_tracking/README.md similarity index 100% rename from cv/tracking/README.md rename to cv/multi_object_tracking/README.md diff --git a/cv/tracking/bytetrack/paddlepaddle/README.md b/cv/multi_object_tracking/bytetrack/paddlepaddle/README.md similarity index 100% rename from cv/tracking/bytetrack/paddlepaddle/README.md rename to cv/multi_object_tracking/bytetrack/paddlepaddle/README.md diff --git a/cv/tracking/deep_sort/pytorch/.gitignore b/cv/multi_object_tracking/deep_sort/pytorch/.gitignore similarity index 100% rename from cv/tracking/deep_sort/pytorch/.gitignore rename to cv/multi_object_tracking/deep_sort/pytorch/.gitignore diff --git a/cv/tracking/deep_sort/pytorch/LICENSE b/cv/multi_object_tracking/deep_sort/pytorch/LICENSE similarity index 100% rename from cv/tracking/deep_sort/pytorch/LICENSE rename to cv/multi_object_tracking/deep_sort/pytorch/LICENSE diff --git a/cv/tracking/deep_sort/pytorch/README.md b/cv/multi_object_tracking/deep_sort/pytorch/README.md similarity index 63% rename from cv/tracking/deep_sort/pytorch/README.md rename to cv/multi_object_tracking/deep_sort/pytorch/README.md index 5a263a014b16440286b120f5f17cf56322ddada4..0a8517920cdbbeff156043ab3ff22b88e119cd85 100644 --- a/cv/tracking/deep_sort/pytorch/README.md +++ b/cv/multi_object_tracking/deep_sort/pytorch/README.md @@ -1,16 +1,24 @@ # DeepSORT ## Model description -This is an implement of MOT tracking algorithm deep sort. Deep sort is basicly the same with sort but added a CNN model to extract features in image of human part bounded by a detector. This CNN model is indeed a RE-ID model and the detector used in [PAPER](https://arxiv.org/abs/1703.07402) is FasterRCNN , and the original source code is [HERE](https://github.com/nwojke/deep_sort). -However in original code, the CNN model is implemented with tensorflow, which I'm not familier with. SO I re-implemented the CNN feature extraction model with PyTorch, and changed the CNN model a little bit. Also, I use **YOLOv3** to generate bboxes instead of FasterRCNN. + +This is an implementation of the MOT tracking algorithm Deep SORT. Deep SORT is basically the same as SORT, +but it adds a CNN model to extract features from the image patches of people cropped by a detector. This CNN +model is in fact a RE-ID model, and the detector used in the [PAPER](https://arxiv.org/abs/1703.07402) is +Faster R-CNN; the original source code is [HERE](https://github.com/nwojke/deep_sort). However, the original +code implements the CNN model with TensorFlow, which I'm not familiar with, so I re-implemented the CNN +feature extraction model with PyTorch and changed the CNN model a little bit. Also, I use **YOLOv3** to +generate bboxes instead of Faster R-CNN. We just need to train the RE-ID model!
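To show what the trained RE-ID embeddings are for, the sketch below compares track and detection appearance vectors with a cosine distance, which is how Deep SORT-style association is usually done. It is a conceptual example with made-up shapes, not the matching code shipped in this repository.

```python
# Conceptual sketch: appearance matching with cosine distance between RE-ID embeddings.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine distance between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

track_feats = np.random.rand(3, 128)   # embeddings of existing tracks
det_feats = np.random.rand(5, 128)     # embeddings of new detections
cost = cosine_distance(track_feats, det_feats)
# Index of the closest detection per track; a real tracker solves an assignment problem.
print(cost.argmin(axis=1))
```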
## Preparing datasets -Download the [Market-1501](https://zheng-lab.cecs.anu.edu.au/Project/project_reid.html) + +Download the [Market-1501](https://zheng-lab.cecs.anu.edu.au/Project/project_reid.html) The original data set structure is as follows: -``` + +```sh Market-1501-v15.09.15 ├── bounding_box_test ├── bounding_box_train @@ -22,16 +30,19 @@ Market-1501-v15.09.15 We need to generate train and test datasets. -``` +```sh python3 create_train_test_datasets.py --origin_datasets_path origin_datasets_path --datasets_path process_datasets_path ``` + We need to generate query and gallery datasets for evaluate. -``` + +```sh python3 create_query_gallery_datasets.py --origin_datasets_path origin_datasets_path --datasets_path process_datasets_path ``` After the datasets is processed, the datasets structure is as follows: -``` + +```sh data ├── train ├── test @@ -40,19 +51,24 @@ data ``` ## Training + The original model used in paper is in original_model.py, and its parameter here [original_ckpt.t7](https://drive.google.com/drive/folders/1xhG0kRH1EX5B9_Iz8gQJb7UNnn_riXi6). -``` +```sh python3 train.py --data-dir your path ``` ## Evaluate your model -``` + +```sh python3 test.py --data-dir your path python3 evaluate.py ``` + ## Results + Acc top1:0.980 -## Quick Start All Processes -Please refer to https://github.com/ZQPei/deep_sort_pytorch +## Reference + +Please refer to <https://github.com/ZQPei/deep_sort_pytorch> diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/utils/__init__.py b/cv/multi_object_tracking/deep_sort/pytorch/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/utils/__init__.py rename to cv/multi_object_tracking/deep_sort/pytorch/__init__.py diff --git a/cv/tracking/deep_sort/pytorch/checkpoint/.gitkeep b/cv/multi_object_tracking/deep_sort/pytorch/checkpoint/.gitkeep similarity index 100% rename from cv/tracking/deep_sort/pytorch/checkpoint/.gitkeep rename to cv/multi_object_tracking/deep_sort/pytorch/checkpoint/.gitkeep diff --git a/cv/tracking/deep_sort/pytorch/create_query_gallery_datasets.py b/cv/multi_object_tracking/deep_sort/pytorch/create_query_gallery_datasets.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/create_query_gallery_datasets.py rename to cv/multi_object_tracking/deep_sort/pytorch/create_query_gallery_datasets.py diff --git a/cv/tracking/deep_sort/pytorch/create_train_test_datasets.py b/cv/multi_object_tracking/deep_sort/pytorch/create_train_test_datasets.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/create_train_test_datasets.py rename to cv/multi_object_tracking/deep_sort/pytorch/create_train_test_datasets.py diff --git a/cv/tracking/deep_sort/pytorch/evaluate.py b/cv/multi_object_tracking/deep_sort/pytorch/evaluate.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/evaluate.py rename to cv/multi_object_tracking/deep_sort/pytorch/evaluate.py diff --git a/cv/tracking/deep_sort/pytorch/feature_extractor.py b/cv/multi_object_tracking/deep_sort/pytorch/feature_extractor.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/feature_extractor.py rename to cv/multi_object_tracking/deep_sort/pytorch/feature_extractor.py diff --git a/cv/tracking/deep_sort/pytorch/model.py b/cv/multi_object_tracking/deep_sort/pytorch/model.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/model.py rename to cv/multi_object_tracking/deep_sort/pytorch/model.py diff --git a/cv/tracking/deep_sort/pytorch/original_model.py b/cv/multi_object_tracking/deep_sort/pytorch/original_model.py similarity index 100% rename
from cv/tracking/deep_sort/pytorch/original_model.py rename to cv/multi_object_tracking/deep_sort/pytorch/original_model.py diff --git a/cv/tracking/deep_sort/pytorch/test.py b/cv/multi_object_tracking/deep_sort/pytorch/test.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/test.py rename to cv/multi_object_tracking/deep_sort/pytorch/test.py diff --git a/cv/tracking/deep_sort/pytorch/train.py b/cv/multi_object_tracking/deep_sort/pytorch/train.py similarity index 100% rename from cv/tracking/deep_sort/pytorch/train.py rename to cv/multi_object_tracking/deep_sort/pytorch/train.py diff --git a/cv/tracking/fairmot/pytorch/.gitignore b/cv/multi_object_tracking/fairmot/pytorch/.gitignore similarity index 100% rename from cv/tracking/fairmot/pytorch/.gitignore rename to cv/multi_object_tracking/fairmot/pytorch/.gitignore diff --git a/cv/tracking/fairmot/pytorch/LICENSE b/cv/multi_object_tracking/fairmot/pytorch/LICENSE similarity index 100% rename from cv/tracking/fairmot/pytorch/LICENSE rename to cv/multi_object_tracking/fairmot/pytorch/LICENSE diff --git a/cv/tracking/fairmot/pytorch/README.md b/cv/multi_object_tracking/fairmot/pytorch/README.md similarity index 100% rename from cv/tracking/fairmot/pytorch/README.md rename to cv/multi_object_tracking/fairmot/pytorch/README.md diff --git a/cv/tracking/fairmot/pytorch/assets/pipeline.png b/cv/multi_object_tracking/fairmot/pytorch/assets/pipeline.png similarity index 100% rename from cv/tracking/fairmot/pytorch/assets/pipeline.png rename to cv/multi_object_tracking/fairmot/pytorch/assets/pipeline.png diff --git a/cv/tracking/fairmot/pytorch/experiments/all_yolov5s.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/all_yolov5s.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/all_yolov5s.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/all_yolov5s.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/crowdhuman_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/crowdhuman_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/crowdhuman_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/crowdhuman_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mix_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mix_ft_ch_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_ft_ch_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_ft_ch_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_ft_ch_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_hrnet18.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_hrnet18.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_hrnet18.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_hrnet18.sh diff --git 
a/cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_res34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_res34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_res34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_res34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_res34fpn.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_res34fpn.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_res34fpn.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_res34fpn.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_res50.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_res50.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_res50.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_res50.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_yolov5s.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_yolov5s.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mix_mot17_half_yolov5s.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mix_mot17_half_yolov5s.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mot15_ft_mix_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mot15_ft_mix_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mot15_ft_mix_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mot15_ft_mix_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mot17_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mot17_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mot17_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mot17_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mot17_half_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mot17_half_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mot17_half_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mot17_half_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mot17_half_yolov5s.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mot17_half_yolov5s.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mot17_half_yolov5s.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mot17_half_yolov5s.sh diff --git a/cv/tracking/fairmot/pytorch/experiments/mot20_ft_mix_dla34.sh b/cv/multi_object_tracking/fairmot/pytorch/experiments/mot20_ft_mix_dla34.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/experiments/mot20_ft_mix_dla34.sh rename to cv/multi_object_tracking/fairmot/pytorch/experiments/mot20_ft_mix_dla34.sh diff --git a/cv/tracking/fairmot/pytorch/requirements.txt b/cv/multi_object_tracking/fairmot/pytorch/requirements.txt similarity index 100% rename from cv/tracking/fairmot/pytorch/requirements.txt rename to cv/multi_object_tracking/fairmot/pytorch/requirements.txt diff --git a/cv/tracking/fairmot/pytorch/src/_init_paths.py b/cv/multi_object_tracking/fairmot/pytorch/src/_init_paths.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/_init_paths.py rename to cv/multi_object_tracking/fairmot/pytorch/src/_init_paths.py diff --git 
a/cv/tracking/fairmot/pytorch/src/demo.py b/cv/multi_object_tracking/fairmot/pytorch/src/demo.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/demo.py rename to cv/multi_object_tracking/fairmot/pytorch/src/demo.py diff --git a/cv/tracking/fairmot/pytorch/src/detect.py b/cv/multi_object_tracking/fairmot/pytorch/src/detect.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/detect.py rename to cv/multi_object_tracking/fairmot/pytorch/src/detect.py diff --git a/cv/tracking/fairmot/pytorch/src/detection_demo.py b/cv/multi_object_tracking/fairmot/pytorch/src/detection_demo.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/detection_demo.py rename to cv/multi_object_tracking/fairmot/pytorch/src/detection_demo.py diff --git a/cv/tracking/fairmot/pytorch/src/gen_data_path.py b/cv/multi_object_tracking/fairmot/pytorch/src/gen_data_path.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/gen_data_path.py rename to cv/multi_object_tracking/fairmot/pytorch/src/gen_data_path.py diff --git a/cv/tracking/fairmot/pytorch/src/gen_labels_15.py b/cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_15.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/gen_labels_15.py rename to cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_15.py diff --git a/cv/tracking/fairmot/pytorch/src/gen_labels_16.py b/cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_16.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/gen_labels_16.py rename to cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_16.py diff --git a/cv/tracking/fairmot/pytorch/src/gen_labels_17.py b/cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_17.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/gen_labels_17.py rename to cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_17.py diff --git a/cv/tracking/fairmot/pytorch/src/gen_labels_20.py b/cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_20.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/gen_labels_20.py rename to cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_20.py diff --git a/cv/tracking/fairmot/pytorch/src/gen_labels_crowd_det.py b/cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_crowd_det.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/gen_labels_crowd_det.py rename to cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_crowd_det.py diff --git a/cv/tracking/fairmot/pytorch/src/gen_labels_crowd_id.py b/cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_crowd_id.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/gen_labels_crowd_id.py rename to cv/multi_object_tracking/fairmot/pytorch/src/gen_labels_crowd_id.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/crowdhuman.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/crowdhuman.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/crowdhuman.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/crowdhuman.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/data.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/data.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/data.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/data.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/data_all.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/data_all.json similarity index 100% rename from 
cv/tracking/fairmot/pytorch/src/lib/cfg/data_all.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/data_all.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/data_half.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/data_half.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/data_half.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/data_half.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/mot15.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot15.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/mot15.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot15.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/mot16.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot16.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/mot16.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot16.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/mot17.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot17.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/mot17.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot17.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/mot17_half.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot17_half.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/mot17_half.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot17_half.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/cfg/mot20.json b/cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot20.json similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/cfg/mot20.json rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/cfg/mot20.json diff --git a/cv/tracking/fairmot/pytorch/src/lib/datasets/dataset/jde.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/datasets/dataset/jde.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/datasets/dataset/jde.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/datasets/dataset/jde.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/datasets/dataset/jde_yolov5.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/datasets/dataset/jde_yolov5.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/datasets/dataset/jde_yolov5.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/datasets/dataset/jde_yolov5.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/datasets/dataset_factory.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/datasets/dataset_factory.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/datasets/dataset_factory.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/datasets/dataset_factory.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/logger.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/logger.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/logger.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/logger.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/common.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/common.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/common.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/common.py diff --git 
a/cv/tracking/fairmot/pytorch/src/lib/models/data_parallel.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/data_parallel.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/data_parallel.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/data_parallel.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/decode.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/decode.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/decode.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/decode.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/losses.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/losses.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/losses.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/losses.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/model.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/model.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/model.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/model.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/config/__init__.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/__init__.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/config/__init__.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/__init__.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/config/default.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/default.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/config/default.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/default.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w18.yaml b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w18.yaml similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w18.yaml rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w18.yaml diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w32.yaml b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w32.yaml similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w32.yaml rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/hrnet_w32.yaml diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/config/yolov5s.yaml b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/yolov5s.yaml similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/config/yolov5s.yaml rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/config/yolov5s.yaml diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/dlav0.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/dlav0.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/dlav0.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/dlav0.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_conv.py 
b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_conv.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_conv.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_conv.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_dcn.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_dcn.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_dcn.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/pose_dla_dcn.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/pose_hrnet.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/pose_hrnet.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/pose_hrnet.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/pose_hrnet.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/resnet_dcn.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/resnet_dcn.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/resnet_dcn.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/resnet_dcn.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/resnet_fpn_dcn.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/resnet_fpn_dcn.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/resnet_fpn_dcn.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/resnet_fpn_dcn.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/networks/vision_dcn_v2.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/vision_dcn_v2.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/networks/vision_dcn_v2.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/networks/vision_dcn_v2.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/scatter_gather.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/scatter_gather.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/scatter_gather.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/scatter_gather.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/utils.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/utils.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/utils.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/utils.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/models/yolo.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/models/yolo.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/models/yolo.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/models/yolo.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/opts.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/opts.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/opts.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/opts.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracker/basetrack.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracker/basetrack.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracker/basetrack.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracker/basetrack.py diff 
--git a/cv/tracking/fairmot/pytorch/src/lib/tracker/matching.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracker/matching.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracker/matching.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracker/matching.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracker/multitracker.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracker/multitracker.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracker/multitracker.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracker/multitracker.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/evaluation.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/evaluation.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/evaluation.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/evaluation.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/io.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/io.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/io.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/io.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/kalman_filter.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/kalman_filter.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/kalman_filter.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/kalman_filter.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/log.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/log.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/log.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/log.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/nms.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/nms.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/nms.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/nms.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/parse_config.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/parse_config.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/parse_config.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/parse_config.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/timer.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/timer.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/timer.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/timer.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/utils.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/utils.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/utils.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/utils.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/tracking_utils/visualization.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/visualization.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/tracking_utils/visualization.py rename 
to cv/multi_object_tracking/fairmot/pytorch/src/lib/tracking_utils/visualization.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/trains/base_trainer.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/trains/base_trainer.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/trains/base_trainer.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/trains/base_trainer.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/trains/mot.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/trains/mot.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/trains/mot.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/trains/mot.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/trains/train_factory.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/trains/train_factory.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/trains/train_factory.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/trains/train_factory.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/utils/image.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/utils/image.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/utils/image.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/utils/image.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/utils/post_process.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/utils/post_process.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/utils/post_process.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/utils/post_process.py diff --git a/cv/tracking/fairmot/pytorch/src/lib/utils/utils.py b/cv/multi_object_tracking/fairmot/pytorch/src/lib/utils/utils.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/lib/utils/utils.py rename to cv/multi_object_tracking/fairmot/pytorch/src/lib/utils/utils.py diff --git a/cv/tracking/fairmot/pytorch/src/test_det.py b/cv/multi_object_tracking/fairmot/pytorch/src/test_det.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/test_det.py rename to cv/multi_object_tracking/fairmot/pytorch/src/test_det.py diff --git a/cv/tracking/fairmot/pytorch/src/test_emb.py b/cv/multi_object_tracking/fairmot/pytorch/src/test_emb.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/test_emb.py rename to cv/multi_object_tracking/fairmot/pytorch/src/test_emb.py diff --git a/cv/tracking/fairmot/pytorch/src/track.py b/cv/multi_object_tracking/fairmot/pytorch/src/track.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/track.py rename to cv/multi_object_tracking/fairmot/pytorch/src/track.py diff --git a/cv/tracking/fairmot/pytorch/src/track_half.py b/cv/multi_object_tracking/fairmot/pytorch/src/track_half.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/track_half.py rename to cv/multi_object_tracking/fairmot/pytorch/src/track_half.py diff --git a/cv/tracking/fairmot/pytorch/src/train.py b/cv/multi_object_tracking/fairmot/pytorch/src/train.py similarity index 100% rename from cv/tracking/fairmot/pytorch/src/train.py rename to cv/multi_object_tracking/fairmot/pytorch/src/train.py diff --git a/cv/tracking/fairmot/pytorch/train_dla34_mot17.sh b/cv/multi_object_tracking/fairmot/pytorch/train_dla34_mot17.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/train_dla34_mot17.sh rename to cv/multi_object_tracking/fairmot/pytorch/train_dla34_mot17.sh diff --git a/cv/tracking/fairmot/pytorch/train_hrnet18_mot17.sh 
b/cv/multi_object_tracking/fairmot/pytorch/train_hrnet18_mot17.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/train_hrnet18_mot17.sh rename to cv/multi_object_tracking/fairmot/pytorch/train_hrnet18_mot17.sh diff --git a/cv/tracking/fairmot/pytorch/train_hrnet32_mot17.sh b/cv/multi_object_tracking/fairmot/pytorch/train_hrnet32_mot17.sh similarity index 100% rename from cv/tracking/fairmot/pytorch/train_hrnet32_mot17.sh rename to cv/multi_object_tracking/fairmot/pytorch/train_hrnet32_mot17.sh diff --git a/cv/traffic_forecast/graph_wavenet/pytorch/README.md b/cv/traffic_forecast/graph_wavenet/pytorch/README.md index 0e5547fa4a680778aaa4e674edec1c3b32ab8a3e..a2fa7e0397d1b0da8a2281ff3c2c5de7c0f5bb15 100644 --- a/cv/traffic_forecast/graph_wavenet/pytorch/README.md +++ b/cv/traffic_forecast/graph_wavenet/pytorch/README.md @@ -1,12 +1,16 @@ -# Graph WaveNet for Deep Spatial-Temporal Graph Modeling +# Graph WaveNet ## Model description -Spatial-temporal graph modeling is an important task to analyze the spatial relations and temporal trends of components in a system. Existing approaches mostly capture the spatial dependency on a fixed graph structure, assuming that the underlying relation between entities is pre-determined. However, the explicit graph structure (relation) does not necessarily reflect the true dependency and genuine relation may be missing due to the incomplete connections in the data. Furthermore, existing methods are ineffective to capture the temporal trends as the RNNs or CNNs employed in these methods cannot capture long-range temporal sequences. To overcome these limitations, we propose in this paper a novel graph neural network architecture, Graph WaveNet, for spatial-temporal graph modeling. By developing a novel adaptive dependency matrix and learn it through node embedding, our model can precisely capture the hidden spatial dependency in the data. With a stacked dilated 1D convolution component whose receptive field grows exponentially as the number of layers increases, Graph WaveNet is able to handle very long sequences. These two components are integrated seamlessly in a unified framework and the whole framework is learned in an end-to-end manner. Experimental results on two public traffic network datasets, METR-LA and PEMS-BAY, demonstrate the superior performance of our algorithm. - -

- -

+Graph WaveNet is a graph neural network designed for spatial-temporal graph +modeling. It captures both spatial dependencies and long-range temporal patterns +using two key innovations: an adaptive dependency matrix for spatial +relationships and stacked dilated 1D convolutions for long-term temporal +dependencies. These components are integrated into a unified, end-to-end +framework. Graph WaveNet effectively handles complex, large-scale datasets, +excelling in tasks like traffic prediction and energy forecasting. Experimental +results on datasets like METR-LA and PEMS-BAY show its superior performance +compared to existing models in terms of accuracy and efficiency. ## Step 1: Installing packages @@ -14,12 +18,11 @@ Spatial-temporal graph modeling is an important task to analyze the spatial rela pip3 install -r requirements.txt ``` - ## Step 2: Preparing datasets -### Step 2.1: Download METR-LA and PEMS-BAY data from [Google Drive](https://drive.google.com/open?id=10FOTa6HXPqX8Pf5WRoRwcFnW9BrNZEIX) or [Baidu Yun](https://pan.baidu.com/s/14Yy9isAIZYdU__OYEQGa_g) links provided by [DCRNN](https://github.com/liyaguang/DCRNN). +### Step 2.1: Download METR-LA and PEMS-BAY data from [Google Drive](https://drive.google.com/open?id=10FOTa6HXPqX8Pf5WRoRwcFnW9BrNZEIX) or [Baidu Yun](https://pan.baidu.com/s/14Yy9isAIZYdU__OYEQGa_g) links provided by [DCRNN](https://github.com/liyaguang/DCRNN) -### Step 2.2: Process raw data +### Step 2.2: Process raw data ```shell # Create data and garage directories @@ -52,4 +55,3 @@ python3 generate_training_data.py --output_dir=data/PEMS-BAY --traffic_df_filena ```shell python3 train.py --gcn_bool --adjtype doubletransition --addaptadj --randomadj ``` - diff --git a/multimodal/BLIP/pytorch/README.md b/multimodal/BLIP/pytorch/README.md index 4e57721112ca07ff59cbf9906a400587b93dd563..d69d7452d45ae3fcd80bd5dd0191b57f81edc8c7 100755 --- a/multimodal/BLIP/pytorch/README.md +++ b/multimodal/BLIP/pytorch/README.md @@ -1,13 +1,17 @@ # BLIP -> [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://proceedings.mlr.press/v162/li22n/li22n.pdf) -## Abstract +## Description -Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. +Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing +pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance +improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, +which is a suboptimal source of supervision. BLIP, a new VLP framework which transfers flexibly to both vision-language +understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a +captioner generates synthetic captions and a filter removes the noisy ones. 
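To make the caption-bootstrapping idea described above concrete, here is a minimal, hypothetical sketch; it is not the BLIP repository's API. `generate_caption` stands in for the captioner and `match_score` for the image-text filter.

```python
# Hypothetical sketch of BLIP-style caption bootstrapping (CapFilt); not the repo's code.
# `generate_caption` and `match_score` are placeholder callables for the captioner and filter.
def bootstrap_captions(web_pairs, generate_caption, match_score, threshold=0.5):
    """Return (image, caption) pairs that the filter accepts from noisy web data."""
    cleaned = []
    for image, web_caption in web_pairs:
        synthetic = generate_caption(image)               # captioner proposes a synthetic caption
        for caption in (web_caption, synthetic):
            if match_score(image, caption) >= threshold:  # filter drops noisy image-text pairs
                cleaned.append((image, caption))
    return cleaned
```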
## Step 1: Installation -```bash +```sh yum install mesa-libGL yum install -y java-1.8.0-openjdk git clone https://github.com/salesforce/BLIP.git @@ -16,11 +20,14 @@ pip3 install -r requirements.txt pip3 install ruamel_yaml pip3 install urllib3==1.26.6 ``` + ## Step 2: Preparing datasets + Visit the [COCO official website](https://cocodataset.org/#download), then select the COCO2014 dataset. The COCO2014 dataset path structure should look like: -```bash + +```sh coco2014 ├── annotations │ ├── captions_train2014.json @@ -48,9 +55,10 @@ coco2014 ## Step 3: Training ### 8 GPUs on one machine + Set 'image_root' in configs/caption_coco.yaml to '/path/to/coco2014/' -```bash +```sh rm -rf train_caption.py mv ../train_caption.py . python3 -m torch.distributed.run --nproc_per_node=8 train_caption.py @@ -59,15 +67,18 @@ python3 -m torch.distributed.run --nproc_per_node=8 train_caption.py ## Step 4: Evaluation Set 'pretrained' in configs/caption_coco.yaml to 'output/Caption_coco/checkpoint_best.pth' -```bash + +```sh python3 -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate ``` ## Results -| GPUs | Bleu score | Training performance| -| ----------| -----------------------------------------------------------|------------------| -| BI V100×8 | Bleu_1: 0.797, Bleu_2: 0.644,Bleu_3: 0.503, ,Bleu_4: 0.388| 1.9790 s / it | +| GPUs | Bleu score | Training performance | +|-----------|------------------------------------------------------------|----------------------| +| BI V100×8 | Bleu_1: 0.797, Bleu_2: 0.644, Bleu_3: 0.503, Bleu_4: 0.388 | 1.9790 s / it | ## Reference - -- [BLIP](https://github.com/salesforce/BLIP) \ No newline at end of file + +- [BLIP](https://github.com/salesforce/BLIP) +- [BLIP paper](https://proceedings.mlr.press/v162/li22n/li22n.pdf) diff --git a/multimodal/Language-Image_Pre-Training/L-Verse/pytorch/README.md b/multimodal/Language-Image_Pre-Training/L-Verse/pytorch/README.md index 60689c62180e40cc961e5eb069d408156f8baa0f..b64f00b8647ac12668ad47548868b25b207ff1eb 100644 --- a/multimodal/Language-Image_Pre-Training/L-Verse/pytorch/README.md +++ b/multimodal/Language-Image_Pre-Training/L-Verse/pytorch/README.md @@ -2,17 +2,30 @@ ## Model description -Far beyond learning long-range interactions of natural language, transformers are becoming the de-facto standard for many vision tasks with their power and scalability. Especially with cross-modal tasks between image and text, vector quantized variational autoencoders (VQ-VAEs) are widely used to make a raw RGB image into a sequence of feature vectors. To better leverage the correlation between image and text, we propose L-Verse, a novel architecture consisting of feature-augmented variational autoencoder (AugVAE) and bidirectional auto-regressive transformer (BiART) for image-to-text and text-to-image generation. Our AugVAE shows the state-of-the-art reconstruction performance on ImageNet1K validation set, along with the robustness to unseen images in the wild. Unlike other models, BiART can distinguish between image (or text) as a conditional reference and a generation target. L-Verse can be directly used for image-to-text or text-to-image generation without any finetuning or extra object detection framework. In quantitative and qualitative experiments, L-Verse shows impressive results against previous methods in both image-to-text and text-to-image generation on MS-COCO Captions.
We furthermore assess the scalability of L-Verse architecture on Conceptual Captions and present the initial result of bidirectional vision-language representation learning on general domain. +Far beyond learning long-range interactions of natural language, transformers are becoming the de-facto standard for +many vision tasks with their power and scalability. Especially with cross-modal tasks between image and text, vector +quantized variational autoencoders (VQ-VAEs) are widely used to make a raw RGB image into a sequence of feature vectors. +To better leverage the correlation between image and text, we propose L-Verse, a novel architecture consisting of +feature-augmented variational autoencoder (AugVAE) and bidirectional auto-regressive transformer (BiART) for +image-to-text and text-to-image generation. Our AugVAE shows the state-of-the-art reconstruction performance on +ImageNet1K validation set, along with the robustness to unseen images in the wild. Unlike other models, BiART can +distinguish between image (or text) as a conditional reference and a generation target. L-Verse can be directly used for +image-to-text or text-to-image generation without any finetuning or extra object detection framework. In quantitative +and qualitative experiments, L-Verse shows impressive results against previous methods in both image-to-text and +text-to-image generation on MS-COCO Captions. We furthermore assess the scalability of L-Verse architecture on +Conceptual Captions and present the initial result of bidirectional vision-language representation learning on general +domain. ## Step 1: Installing packages -``` +```bash pip3 install pudb "pytorch-lightning==1.5" einops regex ftfy cython webdataset==0.2.20 pillow wandb scikit-learn tensorboard ``` ## Step 2: Preparing datasets -Sign up and login in [ImageNet official website](https://www.image-net.org/index.php), then choose 'Download' to download the whole ImageNet dataset. Specify `/path/to/imagenet` to your ImageNet path in later training process. +Sign up and login in [ImageNet official website](https://www.image-net.org/index.php), then choose 'Download' to +download the whole ImageNet dataset. Specify `/path/to/imagenet` to your ImageNet path in later training process. The ImageNet dataset path structure should look like: @@ -41,4 +54,5 @@ python3 train_vae.py --config ./configs/imagenet_augvae_ml.yaml --train_dir /pat ``` ## Reference -https://github.com/tgisaturday/L-Verse \ No newline at end of file + +- [L-Verse](https://github.com/tgisaturday/L-Verse) \ No newline at end of file diff --git a/multimodal/Language-Image_Pre-Training/clip/pytorch/README.md b/multimodal/Language-Image_Pre-Training/clip/pytorch/README.md index 0dd4b6ef64ed41cc1d1fbb47e71235992f278c76..38c0518c97dc506bfb0d6d743f65844a6bd70e35 100644 --- a/multimodal/Language-Image_Pre-Training/clip/pytorch/README.md +++ b/multimodal/Language-Image_Pre-Training/clip/pytorch/README.md @@ -2,12 +2,15 @@ ## Model description -Contrastive Language-Image Pre-training (CLIP), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. 
- +Contrastive Language-Image Pre-training (CLIP), consisting of a simplified version of ConVIRT trained from scratch, is +an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image +encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time +the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target +dataset’s classes. ## Step 1: Installing packages -```shell +```sh cd multimodal/Language-Image_Pre-Training/clip/pytorch pip3 install ftfy regex tqdm ``` @@ -18,56 +21,33 @@ Download CIFAR100 ## Step 3: Training -### Zero-shot task - - - -#### On single GPU +### Zero-shot task on single GPU -``` +```sh +# Top-5: CLIP runs the zero-shot prediction task, taking images from the CIFAR100 test set +# and computing the top-5 accuracy of the labels matched between images and text python3 clip/zero_shot_prediction_top5.py -``` - - -``` +# Top-1: CLIP runs the zero-shot prediction task, taking images from the CIFAR100 test set +# and computing the top-1 accuracy of the labels matched between images and text python3 clip/zero_shot_prediction_top1.py ``` -### Linear-probe evaluation task by using scikit-learn - - - -#### On single GPU +### Linear-probe evaluation task by using scikit-learn on single GPU -``` +```sh +# Fit a logistic regression on the image features with scikit-learn python3 clip/Linear_probe_evaluation.py ``` -## Results on BI-V100 - -* zero-shot-prediction-top5 - -| metric | BI-V100 | -| ----------- | ------- | -| accuracy(%) | 86.74 | - - -* zero-shot-prediction-top1 - - -| metric | BI-V100 | -| ----------- | ------- | -| accuracy(%) | 61.71 | - - - -* linear-probe-evaluation - -| metric | BI-V100 | -| ----------- | ------- | -| accuracy(%) | 80.01 | +## Results +| Model | GPUs | Type | accuracy(%) | +|-------|---------|---------------------------|-------------| +| CLIP | BI-V100 | zero-shot-prediction-top5 | 86.74 | +| CLIP | BI-V100 | zero-shot-prediction-top1 | 61.71 | +| CLIP | BI-V100 | linear-probe-evaluation | 80.01 | ## Reference -https://github.com/openai/CLIP \ No newline at end of file + +- [CLIP](https://github.com/openai/CLIP) diff --git a/multimodal/diffusion/ControlNet/README.md b/multimodal/diffusion/ControlNet/README.md index 624c8d88019c9e049d37ce958afd26418b3e80c5..658ba82597ca20bd083d4e7224d9b62d90cedbb0 100644 --- a/multimodal/diffusion/ControlNet/README.md +++ b/multimodal/diffusion/ControlNet/README.md @@ -1,8 +1,10 @@ # ControlNet -## Model description +## Description -ControlNet is a neural network structure to control diffusion models by adding extra conditions, a game changer for AI Image generation. It brings unprecedented levels of control to Stable Diffusion. The revolutionary thing about ControlNet is its solution to the problem of spatial consistency. +ControlNet is a neural network structure to control diffusion models by adding extra conditions, a game changer for AI +image generation. It brings unprecedented levels of control to Stable Diffusion. The revolutionary thing about +ControlNet is its solution to the problem of spatial consistency. This is simple: we want to control SD to fill a circle with colors, and the prompt contains some description of our target. @@ -14,20 +16,26 @@ But it does not know the meaning of that "Control Image (Source Image)". Our tar - Install -```bash +```sh pip3 install open_clip_torch transformers einops omegaconf pip3 install pytorch-lightning==1.9.5 pip3 install urllib3==1.26 yum install -y mesa-libGL ``` + - Build the Stable Diffusion to control -You need to decide which Stable Diffusion Model you want to control.
In this example, we will just use standard SD1.5. You can download it from the [official page of Stability](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main). You want the file ["v1-5-pruned.ckpt"](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main). (Or ["v2-1_512-ema-pruned.ckpt"](https://huggingface.co/stabilityai/stable-diffusion-2-1-base/tree/main) if you are using SD2.) +You need to decide which Stable Diffusion Model you want to control. In this example, we will just use standard SD1.5. +You can download it from the [official page of +Stability](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main). You want the file +["v1-5-pruned.ckpt"](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main). (Or +["v2-1_512-ema-pruned.ckpt"](https://huggingface.co/stabilityai/stable-diffusion-2-1-base/tree/main) if you are using +SD2.) -```bash +```sh # We provide a simple script for you to achieve this easily. - -#If your SD filename is "./models/v1-5-pruned.ckpt" and you want the script to save the processed model (SD+ControlNet) at location "./models/control_sd15_ini.ckpt", you can just run: +# If your SD filename is "./models/v1-5-pruned.ckpt" and you want the script to save the processed model (SD+ControlNet) +# at location "./models/control_sd15_ini.ckpt", you can just run: python3 tool_add_control.py ./models/v1-5-pruned.ckpt ./models/control_sd15_ini.ckpt @@ -38,9 +46,10 @@ python3 tool_add_control_sd21.py ./models/v2-1_512-ema-pruned.ckpt ./models/cont ## Step 2: Preparing datasets -Just download the Fill50K dataset from [our huggingface page](https://huggingface.co/lllyasviel/ControlNet) (training/fill50k.zip, the file is only 200M!). Make sure that the data is decompressed as +Just download the Fill50K dataset from [our huggingface page](https://huggingface.co/lllyasviel/ControlNet) +(training/fill50k.zip, the file is only 200M!). Make sure that the data is decompressed as: -```bash +```sh training/ └── fill50k ├── source @@ -51,11 +60,12 @@ In the folder "fill50k/source", you will have 50k images of circle lines. In the folder "fill50k/target", you will have 50k images of filled circles. -In the "fill50k/prompt.json", you will have their filenames and prompts. Each prompt is like "a balabala color circle in some other color background." +In the "fill50k/prompt.json", you will have their filenames and prompts. Each prompt is like "a balabala color circle in +some other color background." ## Step 3: Training -```bash +```sh # One GPU python3 tutorial_train.py @@ -66,9 +76,9 @@ python3 tutorial_train_dist.py ## Results -GPUs | FPS ----- | --- -BI-V100 x8 | 5.02 s/it +| GPUs | FPS | +|------------|-----------| +| BI-V100 x8 | 5.02 s/it | Go to `./image_log/train/` to check results of images. diff --git a/multimodal/diffusion/ddpm/README.md b/multimodal/diffusion/ddpm/README.md index 0387b7f387f5143afad25ce73dffcc7590de6cfb..a7160f034ab22a966dc7c36102ed23935309bf75 100644 --- a/multimodal/diffusion/ddpm/README.md +++ b/multimodal/diffusion/ddpm/README.md @@ -2,25 +2,21 @@ ## Model description -Unofficial PyTorch implementation of Denoising Diffusion Probabilistic Models. This implementation follows the most of details in official TensorFlow implementation. +Unofficial PyTorch implementation of Denoising Diffusion Probabilistic Models. This implementation follows the most of details in official TensorFlow implementation. 
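Before the installation and training steps, the DDPM objective itself can be summarized in a short, self-contained sketch. This is illustrative only and not code from this repository; it assumes an epsilon-prediction network `model(x_t, t)` on 4D image batches and the standard linear beta schedule.

```python
# Illustrative DDPM training loss, assuming `model(x_t, t)` predicts the added noise.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of alphas

def ddpm_loss(model, x0):
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alpha_bar.to(x0.device)[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # closed-form q(x_t | x_0)
    return F.mse_loss(model(x_t, t), noise)                  # regress the injected noise
```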
## Step 1: Installation - -```bash +```sh pip3 install -U pip setuptools pip3 install -r requirements.txt pip3 install protobuf==3.20.3 yum install -y mesa-libGL pip3 install urllib3==1.26.6 - ``` - ## Step 2: Preparing datasets - -```bash +```sh mkdir -p stats && cd stats ``` @@ -30,15 +26,14 @@ Download precalculated statistic for dataset: the dataset structure should look like: -``` +```sh stats └── cifar10.train.npz ``` ## Step 3: Training - -```bash +```sh cd .. # 8 GPUs @@ -57,7 +52,7 @@ python3 main.py --train \ ## Step 4: Evaluate -``` +```sh # 8 GPUs export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 @@ -76,14 +71,14 @@ python3 main.py \ --eval ``` - ## Results -| GPUs | FPS | -| ------ | -------- | +| GPUs | FPS | +|------------|-----------| | BI-V100 x8 | 1.65 it/s | ![image](images/cifar10_samples.png) ## Reference - -- [DDPM](https://github.com/w86763777/pytorch-ddpm/tree/master) + +- [pytorch-ddpm](https://github.com/w86763777/pytorch-ddpm/tree/master) diff --git a/multimodal/diffusion/stable-diffusion/sd_1.5/README.md b/multimodal/diffusion/stable-diffusion/sd_1.5/README.md index 18283f38c24bd623e13a1adc8d8d783ad198cb08..fd94b132c92fae4cc19cbbaee966e43f49f49552 100644 --- a/multimodal/diffusion/stable-diffusion/sd_1.5/README.md +++ b/multimodal/diffusion/stable-diffusion/sd_1.5/README.md @@ -2,19 +2,23 @@ ## Model description -Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. +Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text +input. ## Step 1: Preparation -You just need to run the script, and it will automatically download the required data and weights. Or you can manually download the weights and data locally. +You just need to run the script, and it will automatically download the required data and weights. Or you can manually +download the weights and data locally. ### Weights -Download the runwayml/stable-diffusion-v1-5 from [huggingface page](https://huggingface.co/runwayml/stable-diffusion-v1-5). +Download the runwayml/stable-diffusion-v1-5 from [huggingface +page](https://huggingface.co/runwayml/stable-diffusion-v1-5). ### Datasets -dataset: download the lambdalabs/pokemon-blip-captions from [huggingface page](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). +dataset: download the lambdalabs/pokemon-blip-captions from [huggingface +page](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). ## Step 2: Installation diff --git a/multimodal/diffusion/stable-diffusion/sd_2.1/README.md b/multimodal/diffusion/stable-diffusion/sd_2.1/README.md index 09403c958300a32a357f16df751c29a61534b26c..e857499b58762c8b3bccf80e163997c21500f0ff 100644 --- a/multimodal/diffusion/stable-diffusion/sd_2.1/README.md +++ b/multimodal/diffusion/stable-diffusion/sd_2.1/README.md @@ -2,19 +2,23 @@ ## Model description -Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. +Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text +input. ## Step 1: Preparation -You just need to run the script, and it will automatically download the required data and weights. Or you can manually download the weights and data locally. +You just need to run the script, and it will automatically download the required data and weights. Or you can manually +download the weights and data locally.
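If you prefer to fetch everything up front, one possible way to do the manual download mentioned above is with `huggingface_hub`. This is an illustrative sketch, not part of this repository's scripts; the repo IDs are the ones listed in the Weights and Datasets subsections that follow, and the local paths are arbitrary placeholders.

```python
# Optional pre-download sketch using huggingface_hub (assumes network access to the Hub).
from huggingface_hub import snapshot_download

# Weights for SD 2.1-base and the pokemon-blip-captions dataset named below.
snapshot_download("stabilityai/stable-diffusion-2-1-base",
                  local_dir="./stable-diffusion-2-1-base")
snapshot_download("lambdalabs/pokemon-blip-captions", repo_type="dataset",
                  local_dir="./pokemon-blip-captions")
```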
### Weights -Download the stabilityai/stable-diffusion-2-1-base from [huggingface page](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). +Download the stabilityai/stable-diffusion-2-1-base from [huggingface +page](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). ### Datasets -Download the lambdalabs/pokemon-blip-captions from [huggingface page](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). +Download the lambdalabs/pokemon-blip-captions from [huggingface +page](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). ## Step 2: Installation @@ -45,12 +49,6 @@ bash run_sd_2.1_single.sh bash run_sd_2.1_multi.sh ``` -## Results - -| Model | GPUs | ips_per_device | ips_per_gpu | -| ------ | ------- | -------------- | ----------- | -| SD 2.1 | BI-V150 | | | - ## Reference - [diffusers](https://github.com/huggingface/diffusers) diff --git a/multimodal/diffusion/stable-diffusion/sd_3/README.md b/multimodal/diffusion/stable-diffusion/sd_3/README.md index f8ce5c820375aa9b566babaf4d83d4deb0bb8925..5d958039490e70cd5486597bd65747f1a9785e15 100644 --- a/multimodal/diffusion/stable-diffusion/sd_3/README.md +++ b/multimodal/diffusion/stable-diffusion/sd_3/README.md @@ -2,19 +2,23 @@ ## Model description -Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. +Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text +input. ## Step 1: Preparation -You just need to run the script, and it will automatically download the required data and weights. Or you can manually download the weights and data locally. +You just need to run the script, and it will automatically download the required data and weights. Or you can manually +download the weights and data locally. ### Weights -Download the stabilityai/stable-diffusion-3-medium-diffusers from [huggingface page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers). +Download the stabilityai/stable-diffusion-3-medium-diffusers from [huggingface +page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers). ### Datasets -dataset: download the diffusers/dog-example from [huggingface page](https://huggingface.co/datasets/diffusers/dog-example). +dataset: download the diffusers/dog-example from [huggingface +page](https://huggingface.co/datasets/diffusers/dog-example). ## Step 2: Installation diff --git a/multimodal/diffusion/stable-diffusion/sd_xl/README.md b/multimodal/diffusion/stable-diffusion/sd_xl/README.md index 9298d73a7c196afa678ffe9ad38ea2083822be80..a340e18126762f8b4a38a64b257c7415da1aa84a 100644 --- a/multimodal/diffusion/stable-diffusion/sd_xl/README.md +++ b/multimodal/diffusion/stable-diffusion/sd_xl/README.md @@ -2,25 +2,30 @@ ## Model description -Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. +Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text +input. ## Step 1: Preparation -You just need to run the script, and it will automatically download the required data and weights. Or you can manually download the weights and data locally. +You just need to run the script, and it will automatically download the required data and weights. Or you can manually +download the weights and data locally. 
### Weights -Download the stabilityai/stable-diffusion-xl-base-1.0 from [huggingface page](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). +Download the stabilityai/stable-diffusion-xl-base-1.0 from [huggingface +page](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). -Download the madebyollin/sdxl-vae-fp16-fix from [huggingface page](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix). +Download the madebyollin/sdxl-vae-fp16-fix from [huggingface +page](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix). ### Datasets -dataset: download the lambdalabs/pokemon-blip-captions from [huggingface page](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). +dataset: download the lambdalabs/pokemon-blip-captions from [huggingface +page](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). ## Step 2: Installation -```bash +```sh pip3 install http://files.deepspark.org.cn:880/deepspark/add-ons/diffusers-0.27.0-py3-none-any.whl pip3 install http://files.deepspark.org.cn:880/deepspark/add-ons/transformers-4.38.1-py3-none-any.whl pip3 install huggingface-hub==0.25.2 @@ -32,25 +37,19 @@ pip3 install pillow --upgrade If you have downloaded the weights and dataset, please export the environment variables like below. -```bash +```sh export MODEL_PATH=/path/to/sd_weights export DATASET_PATH=/path/to/data export VAE_PATH=/path/to/vae_weights ``` -```bash +```sh # Go to diffusers path cd ${PROJ_ROOT}/multimodal/diffusion/stable-diffusion/diffusers bash run_sd_xl.sh ``` -## Results - -| Model | GPUs | ips_per_device | ips_per_gpu | -| ----- | ------- | -------------- | ----------- | -| SD XL | BI-V150 | | | - ## Reference - [diffusers](https://github.com/huggingface/diffusers) diff --git a/multimodal/diffusion/stable-diffusion/training/README.md b/multimodal/diffusion/stable-diffusion/training/README.md index 2ae3fb504a8eef92214747e583128d436676f188..3f295d14826a67e208bd949c91d678052a70e8a4 100755 --- a/multimodal/diffusion/stable-diffusion/training/README.md +++ b/multimodal/diffusion/stable-diffusion/training/README.md @@ -1,7 +1,11 @@ # Stable Diffusion ## Model description -Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI, LAION and RunwayML. It's trained on 512x512 images from a subset of the LAION-5B database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 4GB VRAM. See the model card for more information. + +Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, +Stability AI, LAION and RunwayML. It's trained on 512x512 images from a subset of the LAION-5B database. This model uses +a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, +the model is relatively lightweight and runs on a GPU with at least 4GB VRAM. See the model card for more information. 
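For orientation before the training recipe below, a minimal text-to-image call with the `diffusers` library looks like the following sketch. It is illustrative only, assumes the runwayml/stable-diffusion-v1-5 weights and a CUDA device are available, and is separate from this repository's training scripts.

```python
# Illustrative inference sketch with diffusers; assumes SD 1.5 weights and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # fp16 keeps VRAM usage low
image = pipe("A pokemon with green eyes and red legs").images[0]
image.save("sample.png")
```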
## Prepare @@ -34,19 +38,20 @@ pip3 install -r requirements.txt ### Step 1: Initialize the training environment with accelerate ```bash -accelerate config # 这里可以选择单卡或者多卡训练 - # 这里建议只选择多卡或者单卡,其他优化选项例如:torch dynamo,deepspeed等均不建议使用 +# You can choose single-GPU or multi-GPU training here +# Only the single-GPU or multi-GPU options are recommended; other optimizations such as torch dynamo and deepspeed are not recommended +accelerate config ``` -**Single GPU example** +#### Single GPU example ![image](IMG/single.png) -**Multi GPU example** +#### Multi GPU example ![image](IMG/multi.png) -if you want to train with "fp32"(mixed precision),you can choose "no" +If you want to train with "fp32" (i.e., without mixed precision), you can choose "no". ### Step 2: Start training @@ -80,6 +85,6 @@ python3 test.py prompt: A pokemon with green eyes and red legs -result: +## Results ![image](IMG/result.png) diff --git a/multimodal/llava/pytorch/README.md b/multimodal/llava/pytorch/README.md index a649fa50905aa2c6ed78a616ec7260896e2a12ae..a31921aa8f19ba5140fff721d9c4957f6463899d 100644 --- a/multimodal/llava/pytorch/README.md +++ b/multimodal/llava/pytorch/README.md @@ -55,4 +55,4 @@ bash train.sh ## Reference -- [LLaVA](https://github.com/haotian-liu/LLaVA) \ No newline at end of file +- [LLaVA](https://github.com/haotian-liu/LLaVA) diff --git a/nlp/llm/ChatGLM2-6b-sft/README.md b/nlp/llm/ChatGLM2-6b-sft/README.md index e08bcc2935e39f0939bc22527d65b68ba585eea0..bccc53df4b126bf3d8e9eb183acc8fd3ec614c96 100644 --- a/nlp/llm/ChatGLM2-6b-sft/README.md +++ b/nlp/llm/ChatGLM2-6b-sft/README.md @@ -1,8 +1,10 @@ -# ChatGLM2-6B SFT +# ChatGLM2-6B SFT (DeepSpeed) ## Model description -This warehouse realizes the fine-tuning of ChatGLM2-6B model based on P-Tuning v2. P-Tuning v2 can reduce the number of parameters that need to be fine-tuned to 0.1% of the original, and then run with a minimum of 7GB of video memory through model quantization, Gradient Checkpoint and other methods +This repository implements fine-tuning of the ChatGLM2-6B model based on P-Tuning v2. P-Tuning v2 can reduce the number of +parameters that need to be fine-tuned to 0.1% of the original, and can then run with as little as 7 GB of video memory +through model quantization, gradient checkpointing and other methods. ## Step 1: Installation @@ -11,7 +13,9 @@ cd ptuning/ pip3 install -r requirements.txt ``` -Downloading a model from Hugging Face Hub requires first [installing Git LFS](https://docs.github.com/zh/repositories/working-with-files/managing-large-files/installing-git-large-file-storage) and then running +Downloading a model from Hugging Face Hub requires first [installing Git +LFS](https://docs.github.com/zh/repositories/working-with-files/managing-large-files/installing-git-large-file-storage) +and then running ```bash mkdir -p data @@ -34,8 +38,6 @@ bash train_ptuning_v2.sh bash evaluate_ptuning_v2.sh ``` -## Results - ## Reference -[ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning) +- [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning) diff --git a/nlp/llm/aquila2-34b/megatron-deepspeed/README.md b/nlp/llm/aquila2-34b/megatron-deepspeed/README.md index 9abf93698736f8cf23e82702b5e25df449508a7c..31c75d141b104b3645fb261dbda44bfd865c11aa 100644 --- a/nlp/llm/aquila2-34b/megatron-deepspeed/README.md +++ b/nlp/llm/aquila2-34b/megatron-deepspeed/README.md @@ -9,7 +9,7 @@ Aquila 2 is a large language model released by Beijing Zhiyuan Artificial Intell 1. Configure the same running environment on each node and make sure the docker container names are the same 2. Set up passwordless SSH connections between the docker containers: -```bash +```sh # a. 
Generate the secret key on master node: ssh-keygen @@ -19,7 +19,7 @@ ssh-copy-id -i ~/.ssh/id_rsa.pub ${host_name} ## {host_name} can be a specified ## Step 2: Installation on all nodes -```bash +```sh # Clone cd /toolbox/Megatron-DeepSpeed @@ -30,7 +30,7 @@ pip3 install urllib3==1.23 ## Step 3: Preparing datasets on all nodes -```bash +```sh cd dataset mkdir BookCorpusDataset && cd BookCorpusDataset wget https://the-eye.eu/public/AI/pile_neox/data/BookCorpusDataset_text_document.bin @@ -39,39 +39,36 @@ wget https://the-eye.eu/public/AI/pile_neox/data/BookCorpusDataset_text_document ## Step 4: Training by executing the following command on master node -```bash +```sh cd examples/aquila/ ``` -1. Modify run_meg_aquila2_34b_node4.sh according to your machine: for example, HOST_NAME, ADDR_ARRAY, CONTAINER_NAME, NCCL_SOCKET_IFNAME +1. Modify run_meg_aquila2_34b_node4.sh according to your machine: for example, HOST_NAME, ADDR_ARRAY, CONTAINER_NAME, + NCCL_SOCKET_IFNAME 2. executing run_meg_aquila2_34b_node4.sh -```bash +```sh bash run_meg_aquila2_34b_node4.sh ``` -a. If there is an error: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/ru,you can execute the following command on all nodes: +a. If there is an error: Got permission denied while trying to connect to the Docker daemon socket at +unix:///var/ru,you can execute the following command on all nodes: -```bash +```sh usermod -aG docker ${user_name} systemctl restart docker chmod 666 /var/run/docker.sock ``` -b. If an error occurs that the dataset file does not exist,You can copy the dataset file to other nodes by executing the following command: +b. If an error occurs that the dataset file does not exist,You can copy the dataset file to other nodes by executing the +following command: -```bash +```sh scp -r ../../dataset/gpt_small_117M/gpt_small_117M_text_document ${user_name}@${host_name}:path/to/megatron-deepspeed/dataset/gpt_small_117M/gpt_small_117M_text_document ``` -## Results - -| GPUs | Model | Training speed | -| :-----: | :-----------------------------: | :------------: | -| BI-V150 | Aquila2-34B (Megatron-DeepSpeed) | | - ## Reference - [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) diff --git a/nlp/llm/baichuan2-7b/Baichuan2/README.md b/nlp/llm/baichuan2-7b/Baichuan2/README.md index 0f73e472ff0e3b03564c25d4780e9f19ab2e13e2..636b9e7ecca5c23cce303a4b7c1d29fd9412cf9e 100644 --- a/nlp/llm/baichuan2-7b/Baichuan2/README.md +++ b/nlp/llm/baichuan2-7b/Baichuan2/README.md @@ -1,14 +1,18 @@ -# Baichuan2-7B +# Baichuan2-7B (DeepSpeed) ## Model description -Baichuan 2 is the new generation of open-source large language models launched by Baichuan Intelligent Technology. It was trained on a high-quality corpus with 2.6 trillion tokens. Baichuan 2 achieved the best performance of its size on multiple authoritative Chinese, English, and multi-language general and domain-specific benchmarks. All versions are fully open to academic research. Developers only need to apply via email and obtain official commercial permission to use it for free commercially. +Baichuan 2 is the new generation of open-source large language models launched by Baichuan Intelligent Technology. It +was trained on a high-quality corpus with 2.6 trillion tokens. Baichuan 2 achieved the best performance of its size on +multiple authoritative Chinese, English, and multi-language general and domain-specific benchmarks. All versions are +fully open to academic research. 
Developers only need to apply via email and obtain official commercial permission to +use it for free commercially. ## Step 1: Preparing datasets Load model weight, and fix configuration_baichuan.py -```bash +```sh pip install modelscope cd fine-tune/ python3 ./get_Baichuan2_model.py @@ -23,7 +27,7 @@ wget -c --no-check-certificate https://raw.githubusercontent.com/baichuan-inc/Ba ## Step 2: Installation -```bash +```sh cd ../ pip install -r requirements.txt @@ -35,16 +39,16 @@ pip install -r requirements.txt Fine-tuning -```bash +```sh bash ./run_sft.sh ``` ## Results | GPUs | Epochs | train_samples_per_second | -|------------|--------|-----| -| BI-V150 x8 | 1 | 10.674 | +|------------|--------|--------------------------| +| BI-V150 x8 | 1 | 10.674 | ## Reference -- [Baichuan2] +- [Baichuan2](https://github.com/baichuan-inc/Baichuan2) diff --git a/nlp/llm/bloom-7b1/firefly/README.md b/nlp/llm/bloom-7b1/firefly/README.md index d662c6ef175d94566b0d6d2c669d5c558298344e..73174fcd06c8602f122f9a807f2b63a6e6f4437d 100755 --- a/nlp/llm/bloom-7b1/firefly/README.md +++ b/nlp/llm/bloom-7b1/firefly/README.md @@ -1,12 +1,15 @@ -# Bloom-7B1 +# Bloom-7B1 (Firefly) ## Model description -BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks. +BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text +data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 +programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to +perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks. ## Step 1: Installation -```bash +```sh # install firefly pushd /toolbox/firefly pip3 install -r requirements.txt @@ -16,19 +19,19 @@ popd ## Step 2: Preparing datasets and checkpoints -```bash +```sh mkdir -p data && cd data # you can download dataset from huggingface, website here: https://huggingface.co/datasets/BelleGroup/school_math_0.25M ``` -```bash +```sh mkdir -p checkpoint && cd checkpoint # you can download weights from hugginface, website here: https://huggingface.co/bigscience/bloom-7b1 ``` ## Step 3: Training -```bash +```sh # how to train bash train.sh {num_gpus} {config_file} {train_type} @@ -41,10 +44,10 @@ bash train.sh 1 configs/bloom-sft-qlora.json qlora ## Results -| No. | model | peft | num_gpus |train_samples_per_second | train_steps_per_second | -| ---- | --------- | ----------- | ------------------ | ---------------------- | -----------------------| -| 1 | bloom-7B1 | QLoRA | 1 | 2.041 | 0.128 | -| 2 | bloom-7B1 | Full sft | 16 | 4.587 | 0.072 | +| No. 
| model | peft | num_gpus | train_samples_per_second | train_steps_per_second | +|-----|-----------|----------|----------|--------------------------|------------------------| +| 1 | bloom-7B1 | QLoRA | 1 | 2.041 | 0.128 | +| 2 | bloom-7B1 | Full sft | 16 | 4.587 | 0.072 | ## Reference diff --git a/nlp/llm/chatglm-6b/deepspeed/README.md b/nlp/llm/chatglm-6b/deepspeed/README.md index a8ce2bfe211d43cca8334605e2b82aae5ebd7b2c..ba93757f4e9d88d5f6ca335b52c5c1265faec2e4 100644 --- a/nlp/llm/chatglm-6b/deepspeed/README.md +++ b/nlp/llm/chatglm-6b/deepspeed/README.md @@ -1,12 +1,19 @@ -# DeepSpeed ChatGLM-6B +# ChatGLM-6B (DeepSpeed) ## Model description -ChatGLM-6B is an open bilingual language model based on [General Language Model (GLM)](https://github.com/THUDM/GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). -ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue. The model is trained for about 1T tokens of Chinese and English corpus, supplemented by supervised fine-tuning, feedback bootstrap, and reinforcement learning wit human feedback. With only about 6.2 billion parameters, the model is able to generate answers that are in line with human preference. +ChatGLM-6B is an open bilingual language model based on [General Language Model (GLM)](https://github.com/THUDM/GLM) +framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade +graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). + +ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue. The model is trained for about 1T +tokens of Chinese and English corpus, supplemented by supervised fine-tuning, feedback bootstrap, and reinforcement +learning wit human feedback. With only about 6.2 billion parameters, the model is able to generate answers that are in +line with human preference. ## Step 1: Installation -```shell + +```sh # Install requirements pip3 install -r requirements.txt @@ -30,8 +37,10 @@ cd ../ && rm -rf Python-3.7.9* ``` ### Install DeepSpeed + ChatGLM-6B model is using DeepSpeed toolbox. Before you run this model, you need to install DeepSpeed first. -```shell + +```sh pushd ../../../../toolbox/DeepSpeed/v0.9.2/ bash install_toolbox_deepspeed.sh popd @@ -39,11 +48,16 @@ popd ## Step 2: Preparing datasets -ADGEN is a large-scale dataset for advertisement text generation proposed by researchers from Hong Kong University of Science and Technology in 2018. -Go to [Google Drive](https://drive.google.com/file/d/13_vf0xRTQsyneRKdD1bZIr93vBGOczrk/view?usp=sharing) or [Tsinghua Cloud](https://cloud.tsinghua.edu.cn/f/b3f119a008264b1cabd1/?dl=1), download the processed ADGEN dataset, and decompress AdvertiseGen directory. +ADGEN is a large-scale dataset for advertisement text generation proposed by researchers from Hong Kong University of +Science and Technology in 2018. Go to [Google +Drive](https://drive.google.com/file/d/13_vf0xRTQsyneRKdD1bZIr93vBGOczrk/view?usp=sharing) or [Tsinghua +Cloud](https://cloud.tsinghua.edu.cn/f/b3f119a008264b1cabd1/?dl=1), download the processed ADGEN dataset, and decompress +AdvertiseGen directory. 
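+For example, the decompression step might look like the following (the archive name is an assumption; use whatever file
+name the download actually produces):
+
+```sh
+# Unpack the dataset and check that the AdvertiseGen directory appears
+tar -xzvf AdvertiseGen.tar.gz
+ls AdvertiseGen
+```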
+ +If you want to load the model locally, you can download the model implementation ( `13GB` ) from [Hugging Face +Hub](https://huggingface.co/THUDM/chatglm-6b) -If you want to load the model locally, you can download the model implementation ( `13GB` ) from [Hugging Face Hub](https://huggingface.co/THUDM/chatglm-6b) -```shell +```sh # Install lfs yum install -y rh-git218-git-lfs.x86_64 source /opt/rh/rh-git218/enable @@ -54,16 +68,20 @@ git clone https://huggingface.co/THUDM/chatglm-6b ``` ## Step 3: Training + If you load the model locally, you can change `THUDM/chatglm-6b` in `ds_train_finetune.sh` to your local model path. -```shell +```sh cd ptuning/ bash ds_train_finetune.sh ``` + ## Results -| GPUs | Toolbox | Model | Training speed | -|:-----------:|:---------:|:----------:|:----------------:| -| BI-V100 x8 | DeepSpeed | ChatGLM-6B |0.995 samples/sec | + +| GPUs | Toolbox | Model | Training speed | +|------------|-----------|------------|-------------------| +| BI-V100 x8 | DeepSpeed | ChatGLM-6B | 0.995 samples/sec | ## Reference -[THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) + +- [THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) diff --git a/nlp/llm/chatglm3-6b/deepspeed/finetune_demo/README.md b/nlp/llm/chatglm3-6b/deepspeed/finetune_demo/README.md index 38d2d06f5eda97381172b7fdebc23f05ebad49de..9f2e4468a59f322a8fd46e96c90d4100ea0380d3 100644 --- a/nlp/llm/chatglm3-6b/deepspeed/finetune_demo/README.md +++ b/nlp/llm/chatglm3-6b/deepspeed/finetune_demo/README.md @@ -1,19 +1,21 @@ -# ChatGLM3-6B +# ChatGLM3-6B (DeepSpeed) ## Model description -ChatGLM3 is a generation of pre-trained dialogue models jointly released by Zhipu AI and Tsinghua KEG. ChatGLM3-6B is the open-source model in the ChatGLM3 series, maintaining many excellent features of the first two generations such as smooth dialogue and low deployment threshold. +ChatGLM3 is a generation of pre-trained dialogue models jointly released by Zhipu AI and Tsinghua KEG. ChatGLM3-6B is +the open-source model in the ChatGLM3 series, maintaining many excellent features of the first two generations such as +smooth dialogue and low deployment threshold. ## Step 1: Installation -```bash +```sh cd finetune_demo pip3 install -r requirements.txt ``` ## Step 2: Preparing datasets and checkpoints -```bash +```sh # Get AdvertiseGen.tar.gz mkdir -p data @@ -25,7 +27,7 @@ popd python3 process_data.py ``` -```bash +```sh # Get chatglm3-6b from https://modelscope.cn/models/ZhipuAI/chatglm3-6b or huggingface. mkdir -p checkpoint @@ -36,7 +38,7 @@ popd ## Step 3: Training -```bash +```sh bash run.sh {config_file} {num_gpus} # 1 GPU @@ -52,7 +54,7 @@ bash run.sh configs/sft.yaml 16 ## Results | GPUs | model | peft | num_gpus | train_samples_per_second | -| ------- | ---------- | ---------- | -------- | ------------------------ | +|---------|------------|------------|----------|--------------------------| | BI-V150 | ChatGLM-6B | Lora | 1 | 2.11 | | BI-V150 | ChatGLM-6B | ptuning_v2 | 1 | 8.889 | | BI-V150 | ChatGLM-6B | Lora | 16 | 32.639 | diff --git a/nlp/llm/deepseek_moe_7b/colossalai/README.md b/nlp/llm/deepseek_moe_7b/colossalai/README.md index 0c08cfea36f55a4b76b0036bc20813c8c2f5b170..63e0c36e7a67324537d273c9cb51bb45134812c4 100644 --- a/nlp/llm/deepseek_moe_7b/colossalai/README.md +++ b/nlp/llm/deepseek_moe_7b/colossalai/README.md @@ -4,11 +4,13 @@ DeepSeekMoE 7B is a variant of the 16B model. -DeepSeekMoE 16B is a Mixture-of-Experts (MoE) language model with 16.4B parameters. 
It employs an innovative MoE architecture, which involves two principal strategies: fine-grained expert segmentation and shared experts isolation. +DeepSeekMoE 16B is a Mixture-of-Experts (MoE) language model with 16.4B parameters. It employs an innovative MoE +architecture, which involves two principal strategies: fine-grained expert segmentation and shared experts isolation. ## Step 1: Install -Firstly, you should ensure that ColossalAI is installed in the environment. Generally, ColossalAI is installed by default. +Firstly, you should ensure that ColossalAI is installed in the environment. Generally, ColossalAI is installed by +default. ```sh git clone -b v0.4.4 https://github.com/hpcaitech/ColossalAI.git --depth=1 @@ -19,10 +21,11 @@ pip3 install . ## Step 2: Prepare model and config -Get "deepseek-moe-16b-base" models and config file from huggingface or other place, and mv it to "ColossalAI/examples/language/deepseek/deepseek-ai/deepseek-moe-16b-base". -One recommended link: "". +Get "deepseek-moe-16b-base" models and config file from huggingface or other place, and mv it to +"ColossalAI/examples/language/deepseek/deepseek-ai/deepseek-moe-16b-base". One recommended link: +"". -```bash +```sh cd ColossalAI/examples/language/deepseek mkdir -p deepseek-ai mv /deepseek-moe-16b-base deepseek-ai/ @@ -30,16 +33,16 @@ mv /deepseek-moe-16b-base deepseek-ai/ ## Step 3: Training -```bash +```sh cd ColossalAI/examples/language/deepseek colossalai run --nproc_per_node 16 benchmark.py -c 7b -g -b 16 --tp 1 --pp 4 --num_steps 50 ``` ## Results -| Model | Training speed | -|--------------------|--------------------| -| deepseek-moe-7b | 6.85 samples/sec | +| Model | Training speed | +|-----------------|------------------| +| deepseek-moe-7b | 6.85 samples/sec | ## Reference diff --git a/nlp/llm/llama-7b/colossalai/README.md b/nlp/llm/llama-7b/colossalai/README.md index 26fcc5f98d09c8fc3ef8cee3c5dade9500f546bf..40f05354a3fcc013b0c832ba1cb883cbde6a99ce 100644 --- a/nlp/llm/llama-7b/colossalai/README.md +++ b/nlp/llm/llama-7b/colossalai/README.md @@ -1,36 +1,47 @@ -# Colossal-AI LLaMA-7B +# LLaMA-7B (ColossalAI) ## Model description -LLaMA-7B is part of a collection of foundation language models called **LLaMA**, which range from 7B to 65B parameters. LLaMA models are trained on trillions of tokens from publicly available datasets, and achieve state-of-the-art performance on various natural language understanding tasks. LLaMA-7B is the smallest model in the LLaMA family, but it still has impressive capabilities. It can generate fluent and coherent text, answer -questions, complete sentences, and more. + +LLaMA-7B is part of a collection of foundation language models called **LLaMA**, which range from 7B to 65B parameters. +LLaMA models are trained on trillions of tokens from publicly available datasets, and achieve state-of-the-art +performance on various natural language understanding tasks. LLaMA-7B is the smallest model in the LLaMA family, but it +still has impressive capabilities. It can generate fluent and coherent text, answer questions, complete sentences, and +more. ColossalChat is the project to implement LLM with RLHF, powered by the Colossal-AI project. -Coati stands for ColossalAI Talking Intelligence. It is the name for the module implemented in this project and is also the name of the large language model developed by the ColossalChat project. +Coati stands for ColossalAI Talking Intelligence. 
It is the name for the module implemented in this project and is also +the name of the large language model developed by the ColossalChat project. ## Step 1: Installation ### Install ColossalAI + LLaMA-7B model is using ColossalAI toolbox. Before you run this model, you need to setup ColossalAI first. -```shell +```sh cd ../../../../toolbox/ColossalAI/v0.3.0/ bash install_toolbox_colossalai.sh ``` ### Install coati -```shell + +```sh cd ColossalAI/applications/Chat pip3 install . ``` + ## Step 2: Preparing datasets You can download dataset and pretrained mode from below link. -- instinwild_en.json: [BaiduPan](https://pan.baidu.com/s/1f22_1dcWr-ZwErOo8OwbzQ?pwd=x3s9), [GoogleDrive](https://drive.google.com/file/d/1qOfrl0RIWgH2_b1rYCEVxjHF3u3Dwqay/view) + +- instinwild_en.json: [BaiduPan](https://pan.baidu.com/s/1f22_1dcWr-ZwErOo8OwbzQ?pwd=x3s9), + [GoogleDrive](https://drive.google.com/file/d/1qOfrl0RIWgH2_b1rYCEVxjHF3u3Dwqay/view) - [llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) ## Step 3: Training -```shell + +```sh # multi node torchrun --nnodes=$NODE_NUMS --node_rank=$NODE_RANK --master_addr=$MASTER_ADDR --nproc_per_node=8 --master_port $MASTER_PORT examples/train_sft.py \ --pretrain /path/to/llama-7b-hf \ @@ -45,14 +56,18 @@ torchrun --nnodes=$NODE_NUMS --node_rank=$NODE_RANK --master_addr=$MASTER_ADDR - --max_datasets_size 512 \ --max_epochs 1 ``` + If the torchrun command cannot be found,you can execute: -```shell + +```sh ln -s /usr/local/corex-3.1.0/lib64/python3/dist-packages/bin/torchrun /usr/local/bin/ ``` + ## Results -| Model | Training speed | -|-------------|-----------------| -| LLaMA-7B | 0.9 samples/sec | + +| Model | Training speed | +|----------|-----------------| +| LLaMA-7B | 0.9 samples/sec | ## Reference diff --git a/nlp/llm/llama2-13b/megatron-deepspeed/README.md b/nlp/llm/llama2-13b/megatron-deepspeed/README.md index 8819c34c9ff382c2fa3a94265d331d2a2c68a2d9..4377d629f7dc3ce8e1b56feb489f83ba10d2a2a3 100644 --- a/nlp/llm/llama2-13b/megatron-deepspeed/README.md +++ b/nlp/llm/llama2-13b/megatron-deepspeed/README.md @@ -2,14 +2,16 @@ ## Model description -Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for understanding and generating longer texts. +Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, +the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for +understanding and generating longer texts. ## Step1: Configure 2-node environment 1. Configure the same runing environment on each node and make sure the docker container names are the same 2. Set ssh non-encryption connection on docker container: -```bash +```sh # a. 
Generate the secret key on master node: ssh-keygen @@ -19,7 +21,7 @@ ssh-copy-id -i ~/.ssh/id_rsa.pub ${host_name} ## {host_name} can be a specified ## Step 2: Installation on all nodes -```bash +```sh # Clone cd /toolbox/Megatron-DeepSpeed @@ -30,7 +32,7 @@ pip3 install urllib3==1.23 ## Step 3: Preparing datasets on all nodes -```bash +```sh cd dataset mkdir BookCorpusDataset && cd BookCorpusDataset wget https://the-eye.eu/public/AI/pile_neox/data/BookCorpusDataset_text_document.bin @@ -39,38 +41,41 @@ wget https://the-eye.eu/public/AI/pile_neox/data/BookCorpusDataset_text_document ## Step 4: Training by executing the following command on master node -```bash +```sh cd examples/llama2/ ``` -1. Modify run_meg_llama2_13b_node2.sh according to your machine: for example, HOST_NAME, ADDR_ARRAY, CONTAINER_NAME, NCCL_SOCKET_IFNAME +1. Modify run_meg_llama2_13b_node2.sh according to your machine: for example, HOST_NAME, ADDR_ARRAY, CONTAINER_NAME, + NCCL_SOCKET_IFNAME 2. executing run_meg_llama2_13b_node2.sh -```bash +```sh bash run_meg_llama2_13b_node2.sh ``` -a. If there is an error: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/ru,you can execute the following command on all nodes: +a. If there is an error: Got permission denied while trying to connect to the Docker daemon socket at +unix:///var/ru,you can execute the following command on all nodes: -```bash +```sh usermod -aG docker ${user_name} systemctl restart docker chmod 666 /var/run/docker.sock ``` -b. If an error occurs that the dataset file does not exist,You can copy the dataset file to other nodes by executing the following command: +b. If an error occurs that the dataset file does not exist,You can copy the dataset file to other nodes by executing the +following command: -```bash +```sh scp -r ../../dataset/gpt_small_117M/gpt_small_117M_text_document ${user_name}@${host_name}:path/to/megatron-deepspeed/dataset/gpt_small_117M/gpt_small_117M_text_document ``` ## Results -| GPUs | Nodes | Model | GBS | DP | TP | PP | Training speed | -| :-----: | ----- | :-----------------------------: | --- | --- | --- | --- | :------------: | -| BI-V150 | 2 | Llama2-13B (Megatron-DeepSpeed) | 32 | 1 | 4 | 8 | 439.94 | +| GPUs | Nodes | Model | GBS | DP | TP | PP | Training speed | +|---------|-------|------------|-----|----|----|----|----------------| +| BI-V150 | 2 | Llama2-13B | 32 | 1 | 4 | 8 | 439.94 | ## Reference diff --git a/nlp/llm/llama2-34b/megatron-deepspeed/README.md b/nlp/llm/llama2-34b/megatron-deepspeed/README.md index 911aeb04b59966af7e118f29873817186e32ddfc..1eee722433888cf31782d975dd21cc9535d9df52 100644 --- a/nlp/llm/llama2-34b/megatron-deepspeed/README.md +++ b/nlp/llm/llama2-34b/megatron-deepspeed/README.md @@ -2,14 +2,16 @@ ## Model description -Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for understanding and generating longer texts. +Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, +the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for +understanding and generating longer texts. ## Step1: Configure 4-node environment 1. Configure the same runing environment on each node and make sure the docker container names are the same 2. 
Set ssh non-encryption connection on docker container: -```bash +```sh # a. Generate the secret key on master node: ssh-keygen @@ -19,7 +21,7 @@ ssh-copy-id -i ~/.ssh/id_rsa.pub ${host_name} ## {host_name} can be a specified ## Step 2: Installation on all nodes -```bash +```sh # Clone cd /toolbox/Megatron-DeepSpeed @@ -30,7 +32,7 @@ pip3 install urllib3==1.23 ## Step 3: Preparing datasets on all nodes -```bash +```sh cd dataset mkdir BookCorpusDataset && cd BookCorpusDataset wget https://the-eye.eu/public/AI/pile_neox/data/BookCorpusDataset_text_document.bin @@ -39,37 +41,40 @@ wget https://the-eye.eu/public/AI/pile_neox/data/BookCorpusDataset_text_document ## Step 4: Training by executing the following command on master node -```bash +```sh cd examples/llama2/ ``` -1. Modify run_meg_llama2_34b_node4.sh according to your machine: for example, HOST_NAME, ADDR_ARRAY, CONTAINER_NAME, NCCL_SOCKET_IFNAME +1. Modify run_meg_llama2_34b_node4.sh according to your machine: for example, HOST_NAME, ADDR_ARRAY, CONTAINER_NAME, + NCCL_SOCKET_IFNAME 2. executing run_meg_llama2_34b_node4.sh -```bash +```sh bash run_meg_llama2_34b_node4.sh ``` -a. If there is an error: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/ru,you can execute the following command on all nodes: +a. If there is an error: Got permission denied while trying to connect to the Docker daemon socket at +unix:///var/ru,you can execute the following command on all nodes: -```bash +```sh usermod -aG docker ${user_name} systemctl restart docker chmod 666 /var/run/docker.sock ``` -b. If an error occurs that the dataset file does not exist,You can copy the dataset file to other nodes by executing the following command: +b. If an error occurs that the dataset file does not exist,You can copy the dataset file to other nodes by executing the +following command: -```bash +```sh scp -r ../../dataset/gpt_small_117M/gpt_small_117M_text_document ${user_name}@${host_name}:path/to/megatron-deepspeed/dataset/gpt_small_117M/gpt_small_117M_text_document ``` ## Results -| GPUs | Model | Training speed | -| :-----: | :-----------------------------: | :------------: | +| GPUs | Model | Training speed | +|---------|---------------------------------|----------------| | BI-V150 | Llama2-34B (Megatron-DeepSpeed) | | ## Reference diff --git a/nlp/llm/llama2-7b/megatron-deepspeed/README.md b/nlp/llm/llama2-7b/megatron-deepspeed/README.md index 3a6a865196815dfbe277a9a69ca5373d102d2806..716db18e3f31b6db40fa594d9583f469f33e2bec 100644 --- a/nlp/llm/llama2-7b/megatron-deepspeed/README.md +++ b/nlp/llm/llama2-7b/megatron-deepspeed/README.md @@ -2,7 +2,9 @@ ## Model description -Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for understanding and generating longer texts. +Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, +the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for +understanding and generating longer texts. 
## Step 1: Installation @@ -35,9 +37,9 @@ ln -s /usr/local/corex-3.1.0/lib64/python3/dist-packages/bin/torchrun /usr/local ## Results -| GPUs | Toolbox | Model | Training speed | -|:-----------:|:---------:|:----------:|:----------------:| -| BI-V100 x8 | Megatron-DeepSpeed | LLaMA2-7B |1.263 samples/sec | +| GPUs | Toolbox | Model | Training speed | +|------------|--------------------|-----------|-------------------| +| BI-V100 x8 | Megatron-DeepSpeed | LLaMA2-7B | 1.263 samples/sec | ## Reference diff --git a/nlp/llm/llama2-7b_reward_sft/deepspeed/README.md b/nlp/llm/llama2-7b_reward_sft/deepspeed/README.md index e1f1d66ed97419753e00cd9ad2c120a21f734b81..e15aa5132bdd76c0171cf9e3c8e65d1ad01c6e3b 100644 --- a/nlp/llm/llama2-7b_reward_sft/deepspeed/README.md +++ b/nlp/llm/llama2-7b_reward_sft/deepspeed/README.md @@ -1,11 +1,14 @@ -# DeepSpeed Llama-2-7B Reward Model Finetuning +# Llama-2-7B Reward Model Finetuning (DeepSpeed) ## Model description -LLaMA2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, the training corpus of LLaMA2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for understanding and generating longer texts. + +LLaMA2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, +the training corpus of LLaMA2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for +understanding and generating longer texts. ## Step 1: Installation -```bash +```sh cd deepsparkhub/nlp/llm/llama2-7b_reward_sft/deepspeed pip install -r requirements.txt pip install -e . @@ -13,9 +16,7 @@ pip install -e . ## Step 2: Preparing datasets -Prepare datasets and pretrained model weight - -```bash +```sh # Install lfs wget https://packagecloud.io/github/git-lfs/packages/el/7/git-lfs-2.13.2-1.el7.x86_64.rpm/download -O lfs.rpm rpm -ivh lfs.rpm @@ -35,18 +36,17 @@ mv Llama-2-7b-hf/ datasets/ ## Step 3: Training -Fine-tuning - -```bash +```sh cd training/step2_reward_model_finetuning/training_scripts/llama2/ bash ./run_llama2_7b.sh ``` ## Results -| GPUs | Epochs | FPS | ACC | -|------------|--------|-----|------| -| BI-V100 x8 | 1 | AvgSamplesPerSec: 1.948 | 0.6821 | +| GPUs | Epochs | FPS | ACC | +|------------|--------|-------------------------|--------| +| BI-V100 x8 | 1 | AvgSamplesPerSec: 1.948 | 0.6821 | ## Reference -- [DeepSpeed-Chat] (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat) + +- [DeepSpeedExamples](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat) diff --git a/nlp/llm/llama2-7b_rlhf/megatron-deepspeed/README.md b/nlp/llm/llama2-7b_rlhf/megatron-deepspeed/README.md index 54e62c3646fcf3ad67b3bb3486d9f60c07c06a07..643ed14599cf91d747d241c753943085fc3f9c7a 100644 --- a/nlp/llm/llama2-7b_rlhf/megatron-deepspeed/README.md +++ b/nlp/llm/llama2-7b_rlhf/megatron-deepspeed/README.md @@ -1,12 +1,14 @@ -# Llama2-7B RLHF +# Llama2-7B RLHF (Megatron-DeepSpeed) -In this example, we use [Llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b) and [Tiny-llama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b) to do RLHF training. You can get them in huggingface through links provided. +In this example, we use [Llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b) and +[Tiny-llama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b) to do RLHF training. You +can get them from Hugging Face through the links provided. **Note: it is best to fine-tune these two models first and then run the RLHF training below, so that you get a good training result.** ## Step 1: Install -``` +```sh bash build_megatron-deepspeed.sh && bash install_megatron-deepspeed.sh ``` @@ -14,21 +16,28 @@ bash build_megatron-deepspeed.sh && bash install_megatron-deepspeed.sh Download the dataset and convert it. -``` +```sh cd dataset && bash download_and_convert_dataset.sh ``` ## Step 3: Checkpoint -Download checkpoints as above and put them to proper path (llama2_7b -> checkpoints/llama2-7b, tinyllama_1.1b -> checkpoints/TinyLlama-1.1B), then convert checkpoints. +Download the checkpoints as above and put them in the proper paths (llama2_7b -> checkpoints/llama2-7b, tinyllama_1.1b -> +checkpoints/TinyLlama-1.1B), then convert the checkpoints. -``` +```sh cd checkpoints && bash convert_hf_2_meg.sh ``` ## Step 4: Train -``` +```sh cd examples/llama2 bash run_llama2_7b_rlhf_node1.sh ``` + +## Reference + +- [Llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b) +- [Tiny-llama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b) +- [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) diff --git a/nlp/llm/llama2-7b_sft/megatron-deepspeed/README.md b/nlp/llm/llama2-7b_sft/megatron-deepspeed/README.md index 17ec826ea85bd8ee39a6a19394fe0f158425cf4c..081112d030b372e132ef37fe505b1e8bebcf75ad 100644 --- a/nlp/llm/llama2-7b_sft/megatron-deepspeed/README.md +++ b/nlp/llm/llama2-7b_sft/megatron-deepspeed/README.md @@ -1,12 +1,14 @@ -# Megatron-DeepSpeed Llama-2-7B SFT +# Llama-2-7B SFT (Megatron-DeepSpeed) ## Model description -Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for understanding and generating longer texts. +Llama 2 is a large language model released by Meta in 2023, with parameters ranging from 7B to 70B. Compared to LLaMA, +the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for +understanding and generating longer texts. ## Step 1: Installation -```bash +```sh # Install sqlite3 wget https://sqlite.org/2019/sqlite-autoconf-3290000.tar.gz tar zxvf sqlite-autoconf-3290000.tar.gz @@ -31,38 +33,41 @@ bash build_megatron-deepspeed.sh && bash install_megatron-deepspeed.sh ## Step 2: Preparing datasets -```bash +```sh cd dataset/ bash download_and_convert_dataset.sh ``` ## Step 3: Download and convert HF weight -You can download huggingface llama2-7b pretrained model from [here](https://huggingface.co/meta-llama/Llama-2-7b), and use below script to convert it. +You can download the pretrained llama2-7b model from Hugging Face [here](https://huggingface.co/meta-llama/Llama-2-7b), and +use the script below to convert it.
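+If you have not fetched the weights yet, one possible way to do it is with Git LFS (this is only a sketch: it assumes
+git-lfs is installed and that your Hugging Face account has been granted access to the gated meta-llama repository):
+
+```sh
+# Authenticate first if the repository requires it
+huggingface-cli login
+# Make sure Git LFS is active, then clone so the large weight files are pulled as well
+git lfs install
+git clone https://huggingface.co/meta-llama/Llama-2-7b
+```
+
+Adjust the destination directory to wherever convert_hf_2_meg.sh expects the Hugging Face weights to be.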
-```bash +```sh cd checkpoints bash convert_hf_2_meg.sh ``` ## Step 4: Training -```bash +```sh cd examples/llama2 bash run_meg_llama2_7b_sft.sh ``` If the torchrun command cannot be found,you can execute: -``` +```sh ln -s /usr/local/corex-3.1.0/lib64/python3/dist-packages/bin/torchrun /usr/local/bin/ ``` ## Results -| GPUs | Toolbox | Model | Training speed | -|:-----------:|:---------:|:----------:|:----------------:| -| BI-V100 x8 | Megatron-DeepSpeed | LLaMA2-7B SFT|1.146 samples/sec | + +| GPUs | Toolbox | Model | Training speed | +|------------|--------------------|---------------|-------------------| +| BI-V100 x8 | Megatron-DeepSpeed | LLaMA2-7B SFT | 1.146 samples/sec | ## Reference + - [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) diff --git a/nlp/llm/llama3_8b/colossalai/README.md b/nlp/llm/llama3_8b/colossalai/README.md index 16f04c034426b6ac6fac5bcc78652741a89f5508..a260a6ca986825b7e47f96162c4b9f46ab65fbd2 100644 --- a/nlp/llm/llama3_8b/colossalai/README.md +++ b/nlp/llm/llama3_8b/colossalai/README.md @@ -2,7 +2,9 @@ ## Model description -The Llama 3 Herd of models natively supports multilinguality, coding, reasoning, and tool usage. Our largest model is dense Transformer with 405B parameters, processing information in a context window of up to 128K tokens, Llama 3 8B is the smallest model of Llama 3 Herd of models. +The Llama 3 Herd of models natively supports multilinguality, coding, reasoning, and tool usage. Our largest model is +dense Transformer with 405B parameters, processing information in a context window of up to 128K tokens, Llama 3 8B is +the smallest model of Llama 3 Herd of models. ## Step 1: Preparing checkpoints @@ -19,7 +21,8 @@ cp tokenizer.model /home/model_zoos/Meta-Llama-3-8B ## Step 2: Installation and preparing datasets -You should ensure that the corresponding version of ColossalAI has been installed in the iluvatar environment. Then install applications as follows: +You should ensure that the corresponding version of ColossalAI has been installed in the iluvatar environment. Then +install applications as follows: ```sh git clone -b v0.4.4 https://github.com/hpcaitech/ColossalAI.git --depth=1 @@ -43,10 +46,10 @@ bash run_llama3_8b_sft_3d.sh ## Results -| model | peft | num_gpus |train_samples_per_second | -| --------- | ----------- | ------------------ | ---------------------- | -| llama3-8b | Full sft | 16 | 1.53 | +| model | peft | num_gpus | train_samples_per_second | +|-----------|----------|----------|--------------------------| +| Llama3-8B | Full sft | 16 | 1.53 | ## Reference -- [ColossalAI (tag:v0.4.4)](https://github.com/hpcaitech/ColossalAI/tree/main/applications/Colossal-LLaMA) +- [ColossalAI](https://github.com/hpcaitech/ColossalAI/tree/v0.4.4/applications/Colossal-LLaMA) diff --git a/nlp/llm/llama3_8b/megatron-deepspeed/README.md b/nlp/llm/llama3_8b/megatron-deepspeed/README.md index 17a2301f50912c65b38bca12d41026cb5a5e61c0..509090154e7483aefc22ce4cf93064b2349e3b68 100644 --- a/nlp/llm/llama3_8b/megatron-deepspeed/README.md +++ b/nlp/llm/llama3_8b/megatron-deepspeed/README.md @@ -2,11 +2,14 @@ ## Model description -Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. Llama 3 uses a tokenizer with a vocabulary of 128K tokens, and was trained on on sequences of 8,192 tokens. Grouped-Query Attention (GQA) is used for all models to improve inference efficiency. 
The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. +Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. Llama 3 uses a tokenizer +with a vocabulary of 128K tokens, and was trained on on sequences of 8,192 tokens. Grouped-Query Attention (GQA) is used +for all models to improve inference efficiency. The tuned versions use supervised fine-tuning (SFT) and reinforcement +learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. ## Step 1: Installation -```bash +```sh # Clone cd /toolbox/Megatron-DeepSpeed @@ -17,7 +20,7 @@ pip3 install urllib3==1.23 ## Step 2: Preparing datasets -```bash +```sh pushd dataset # get gpt_small_117M_llama3.tar wget http://files.deepspark.org.cn:880/deepspark/data/datasets/gpt_small_117M_llama3.tar @@ -28,18 +31,12 @@ popd ## Step 3: Training -```bash +```sh export NCCL_SOCKET_IFNAME="eth0" cd examples/llama3 bash run_te_llama3_8b_node1.sh ``` -## Results - -| GPUs | Model | Training speed | -| :-----: | :----------------------------: | :------------: | -| BI-V150 | Llama3-8B (Megatron-DeepSpeed) | | - ## Reference - [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) diff --git a/nlp/llm/mamba-2/megatron-lm/README.md b/nlp/llm/mamba-2/megatron-lm/README.md index 76b28465085897424a5a0aca3bf666ee1484c105..a25acbc6b307612d0b3ff4a4362785000685df52 100644 --- a/nlp/llm/mamba-2/megatron-lm/README.md +++ b/nlp/llm/mamba-2/megatron-lm/README.md @@ -2,7 +2,10 @@ ## Model description -Mamba-2 is a cutting-edge state space model (SSM) architecture designed as a highly efficient alternative to traditional Transformer-based large language models (LLMs). It is the second version of the Mamba model and builds on the strengths of its predecessor by offering faster inference, improved scalability for long sequences, and lower computational complexity. +Mamba-2 is a cutting-edge state space model (SSM) architecture designed as a highly efficient alternative to traditional +Transformer-based large language models (LLMs). It is the second version of the Mamba model and builds on the strengths +of its predecessor by offering faster inference, improved scalability for long sequences, and lower computational +complexity. ## Step 1: Installation @@ -29,7 +32,7 @@ bash download_and_convert_dataset.sh ## Step 3: Training -```bash +```sh cd examples/mamba bash train.sh ``` diff --git a/nlp/llm/mixtral/megatron-lm/README.md b/nlp/llm/mixtral/megatron-lm/README.md index 6963e4e5bc5225c5d545355b378b6d9558e82259..23bd4064005c198e155dc03f6a4bc0c6f69e7824 100644 --- a/nlp/llm/mixtral/megatron-lm/README.md +++ b/nlp/llm/mixtral/megatron-lm/README.md @@ -2,7 +2,9 @@ ## Model description -The Mixtral model is a Mixture of Experts (MoE)-based large language model developed by Mistral AI, an innovative company focusing on open-source AI models. Mixtral is designed to achieve high performance while maintaining computational efficiency, making it an excellent choice for real-world applications. +The Mixtral model is a Mixture of Experts (MoE)-based large language model developed by Mistral AI, an innovative +company focusing on open-source AI models. Mixtral is designed to achieve high performance while maintaining +computational efficiency, making it an excellent choice for real-world applications. 
## Step 1: Installation @@ -29,7 +31,7 @@ bash download_and_convert_dataset.sh ## Step 3: Training -```bash +```sh cd examples/mixtral bash train_mixtral_8x7b_distributed.sh ``` diff --git a/nlp/llm/qwen-7b/firefly/README.md b/nlp/llm/qwen-7b/firefly/README.md index a84d62afd626f4ef69111d51a0b5f4fc10f3a15b..b1a2a5902679fb12e1ff8d868e80ab7b6c1a974d 100644 --- a/nlp/llm/qwen-7b/firefly/README.md +++ b/nlp/llm/qwen-7b/firefly/README.md @@ -1,12 +1,15 @@ -# Qwen-7B +# Qwen-7B (Firefly) ## Model description -Qwen-7B is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. +Qwen-7B is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba +Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web +texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI +assistant, which is trained with alignment techniques. ## Step 1: Installation -```bash +```sh # install firefly pushd /toolbox/firefly pip3 install -r requirements.txt @@ -16,7 +19,7 @@ popd ## Step 2: Preparing datasets and checkpoints -```bash +```sh pip install modelscope python3 ./get_Qwen-7B.py mkdir -p /home/model_zoo/nlp @@ -25,7 +28,7 @@ mv /root/.cache/modelscope/hub/qwen/Qwen-7B /home/model_zoo/nlp ## Step 3: Training -```bash +```sh # how to train # train with sft full @@ -40,11 +43,11 @@ bash train.sh 1 configs/qwen-7b-sft-ptuning_v2.json ptuning_v2 ## Results -| No. | model | peft | num_gpus |train_samples_per_second | -| ---- | --------- | ----------- | ------------------ | ---------------------- | -| 1 | qwn-7B | Full sft | 16 | 12.430 | -| 2 | qwn-7B | LoRA | 1 | 3.409 | -| 3 | qwn-7B | Ptuning_V2 | 1 | 4.827 | +| No. | model | peft | num_gpus | train_samples_per_second | +|-----|--------|------------|----------|--------------------------| +| 1 | qwn-7B | Full sft | 16 | 12.430 | +| 2 | qwn-7B | LoRA | 1 | 3.409 | +| 3 | qwn-7B | Ptuning_V2 | 1 | 4.827 | ## Reference diff --git a/nlp/llm/qwen1.5-14b/firefly/README.md b/nlp/llm/qwen1.5-14b/firefly/README.md index 4383db17e6add615f8fd48a4c397da0a197f8ab6..753870d5108a1f9f68361478cc3bfe97e372d476 100644 --- a/nlp/llm/qwen1.5-14b/firefly/README.md +++ b/nlp/llm/qwen1.5-14b/firefly/README.md @@ -1,12 +1,16 @@ -# Qwen1.5-14B +# Qwen1.5-14B (Firefly) ## Model description -Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;Significant performance improvement in Chat models;Multilingual support of both base and chat models;Stable support of 32K context length for models of all sizes;No need of trust_remote_code. +Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of +data. 
Compared with the previously released Qwen, the improvements include: 8 model sizes (0.5B, 1.8B, 4B, 7B, 14B, 32B, and +72B dense models, plus an MoE model of 14B with 2.7B activated); significant performance improvement in chat models; +multilingual support for both base and chat models; stable support of 32K context length for models of all sizes; and no +need for trust_remote_code. ## Step 1: Installation -```bash +```sh # install firefly pushd /toolbox/firefly pip3 install -r requirements.txt @@ -16,7 +20,7 @@ popd ## Step 2: Preparing datasets and checkpoints -```bash +```sh mkdir -p /home/datasets/nlp git clone -b school_math_0.25M git@gitee.com:sanghui-ilu/datasets.git mv datasets/school_math_0.25M.jsonl /home/datasets/nlp @@ -30,14 +34,14 @@ mv /root/.cache/modelscope/hub/qwen/Qwen1___5-14B /home/model_zoo/nlp ## Step 3: Training -```bash +```sh bash train.sh 16 configs/qwen-14b-sft-full.json full ``` ## Results | No. | model | peft | num_gpus | train_samples_per_second | -| --- | ----------- | -------- | -------- | ------------------------ | +|-----|-------------|----------|----------|--------------------------| | 1 | Qwen1.5-14B | Full sft | 16 | 2.099 | ## Reference diff --git a/nlp/llm/qwen1.5-7b/firefly/README.md b/nlp/llm/qwen1.5-7b/firefly/README.md index cdfe7ab1a13cec9630a73ab73dea44b8058ab0cb..7e9b0e526cf7a5e925592ff7e40ff5d02c22f0eb 100644 --- a/nlp/llm/qwen1.5-7b/firefly/README.md +++ b/nlp/llm/qwen1.5-7b/firefly/README.md @@ -1,12 +1,16 @@ -# Qwen1.5-7B +# Qwen1.5-7B (Firefly) ## Model description -Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;Significant performance improvement in Chat models;Multilingual support of both base and chat models;Stable support of 32K context length for models of all sizes;No need of trust_remote_code. +Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of +data. Compared with the previously released Qwen, the improvements include: 8 model sizes (0.5B, 1.8B, 4B, 7B, 14B, 32B, +and 72B dense models, plus an MoE model of 14B with 2.7B activated); significant performance improvement in chat models; +multilingual support for both base and chat models; stable support of 32K context length for models of all sizes; and no +need for trust_remote_code. ## Step 1: Installation -```bash +```sh # install firefly pushd /toolbox/firefly pip3 install -r requirements.txt @@ -16,7 +20,7 @@ popd ## Step 2: Preparing datasets and checkpoints -```bash +```sh mkdir -p /home/datasets/nlp git clone -b school_math_0.25M git@gitee.com:sanghui-ilu/datasets.git mv datasets/school_math_0.25M.jsonl /home/datasets/nlp @@ -30,14 +34,14 @@ mv /root/.cache/modelscope/hub/qwen/Qwen1___5-7B /home/model_zoo/nlp ## Step 3: Training -```bash +```sh bash train.sh 16 configs/qwen-7b-sft-full.json full ``` ## Results | No.
| model | peft | num_gpus | train_samples_per_second | -| --- | ---------- | -------- | -------- | ------------------------ | +|-----|------------|----------|----------|--------------------------| | 1 | Qwen1.5-7B | Full sft | 16 | 9.805 | ## Reference diff --git a/nlp/llm/qwen2.5-7b/LLaMA-Factory/README.md b/nlp/llm/qwen2.5-7b/LLaMA-Factory/README.md index 65c87312ce9fca5ed298f721b39bcb8b7af027cb..fd55e36f4c5dcc1680ac661b479b323e844764bf 100644 --- a/nlp/llm/qwen2.5-7b/LLaMA-Factory/README.md +++ b/nlp/llm/qwen2.5-7b/LLaMA-Factory/README.md @@ -2,23 +2,20 @@ ## Model description -Qwen2.5 is the latest series of Qwen large language models. Qwen2.5 brings the -following improvements upon Qwen2: - -- Significantly more knowledge and has greatly improved capabilities in coding -and mathematics, thanks to our specialized expert models in these domains. -- Significant improvements in instruction following, generating long texts (over -8K tokens), understanding structured data (e.g, tables), and generating -structured outputs especially JSON. More resilient to the diversity of system +Qwen2.5 is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: + +- Significantly more knowledge and has greatly improved capabilities in coding and mathematics, thanks to our +specialized expert models in these domains. +- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured +data (e.g, tables), and generating structured outputs especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots. - Long-context Support up to 128K tokens and can generate up to 8K tokens. -- Multilingual support for over 29 languages, including Chinese, English, -French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, -Vietnamese, Thai, Arabic, and more. +- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, +Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. ## Step 1: Installation -```bash +```sh git clone -b 6b62550af1acda93246e05b37061f2ad7db58e55 --depth 1 https://github.com/hiyouga/LLaMA-Factory.git cp qwen2_5-7b_full_sft.yaml LLaMA-Factory/examples/train_full/ cp qwen2_5-7b_lora_sft.yaml LLaMA-Factory/examples/train_lora/ @@ -29,7 +26,7 @@ pip3 install --no-deps -e . 
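+Before moving on, it can be worth checking that the two example configs copied above ended up where the training
+commands in Step 3 expect them (run this from the directory that contains the LLaMA-Factory checkout):
+
+```sh
+# Both files should be listed if the cp commands above succeeded
+ls LLaMA-Factory/examples/train_full/qwen2_5-7b_full_sft.yaml \
+   LLaMA-Factory/examples/train_lora/qwen2_5-7b_lora_sft.yaml
+```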
## Step 2: Preparing datasets -```bash +```sh # get qwen2.7-7b from https://huggingface.co/Qwen/Qwen2.5-7B and put it # in checkpoints/Qwen2.5-7B mkdir -p checkpoints @@ -39,21 +36,21 @@ mkdir -p checkpoints ### Full SFT -```bash +```sh llamafactory-cli train examples/train_full/qwen2_5-7b_full_sft.yaml ``` ### LoRA SFT -```bash +```sh llamafactory-cli train examples/train_lora/qwen2_5-7b_lora_sft.yaml ``` ## Results -| GPUs | Model | type |train_samples_per_second | -| :-----: | :----------------------------: | :------------: | :------------: | -| BI-V150 x 8 | Qwen2.5-7b| full | 1.889 | +| GPUs | Model | type | train_samples_per_second | +|-------------|------------|------|--------------------------| +| BI-V150 x 8 | Qwen2.5-7b | full | 1.889 | ## Reference diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/README.md b/others/kolmogorov_arnold_networks/kan/pytorch/README.md similarity index 31% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/README.md rename to others/kolmogorov_arnold_networks/kan/pytorch/README.md index ec6491e9e7a8102d464fef01aa5d01fbf3bd319e..52d88bded666d8778fcdf9c0b4d1f355348e2b8b 100644 --- a/methodology/kolmogorov_arnold_networks/kan/pytorch/README.md +++ b/others/kolmogorov_arnold_networks/kan/pytorch/README.md @@ -2,7 +2,11 @@ ## Model description -Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability. +Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong +mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on +Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs +have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of +both model accuracy and interpretability. 
## Run @@ -13,9 +17,9 @@ bash ./run_train.sh ## Result -| Model | Training speed | -|-------------|------------------| -| KAN | 6490 samples/sec | +| Model | Training speed | +|-------|------------------| +| KAN | 6490 samples/sec | ## Reference diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/__init__.py b/others/kolmogorov_arnold_networks/kan/pytorch/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/__init__.py rename to others/kolmogorov_arnold_networks/kan/pytorch/__init__.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/KANLayer-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/KANLayer-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/KANLayer-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/KANLayer-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/LBFGS-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/LBFGS-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/LBFGS-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/LBFGS-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MLP-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MLP-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MLP-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MLP-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MultKAN-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MultKAN-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MultKAN-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/MultKAN-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/Symbolic_KANLayer-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/Symbolic_KANLayer-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/Symbolic_KANLayer-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/Symbolic_KANLayer-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/__init__-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/__init__-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/__init__-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/__init__-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/compiler-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/compiler-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/compiler-checkpoint.py rename to 
others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/compiler-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/experiment-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/experiment-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/experiment-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/experiment-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/feynman-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/feynman-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/feynman-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/feynman-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/hypothesis-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/hypothesis-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/hypothesis-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/hypothesis-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/spline-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/spline-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/spline-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/spline-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/utils-checkpoint.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/utils-checkpoint.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/utils-checkpoint.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/.ipynb_checkpoints/utils-checkpoint.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/KANLayer.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/KANLayer.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/KANLayer.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/KANLayer.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/LBFGS.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/LBFGS.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/LBFGS.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/LBFGS.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/MLP.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/MLP.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/MLP.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/MLP.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/MultKAN.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/MultKAN.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/MultKAN.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/MultKAN.py diff --git 
a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/Symbolic_KANLayer.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/Symbolic_KANLayer.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/Symbolic_KANLayer.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/Symbolic_KANLayer.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/__init__.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/__init__.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/__init__.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/__init__.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/mult_symbol.png b/others/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/mult_symbol.png similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/mult_symbol.png rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/mult_symbol.png diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/sum_symbol.png b/others/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/sum_symbol.png similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/sum_symbol.png rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/assets/img/sum_symbol.png diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/compiler.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/compiler.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/compiler.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/compiler.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/experiment.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/experiment.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/experiment.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/experiment.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/experiments/experiment1.ipynb b/others/kolmogorov_arnold_networks/kan/pytorch/kan/experiments/experiment1.ipynb similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/experiments/experiment1.ipynb rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/experiments/experiment1.ipynb diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/feynman.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/feynman.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/feynman.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/feynman.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/hypothesis.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/hypothesis.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/hypothesis.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/hypothesis.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/spline.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/spline.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/spline.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/spline.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/kan/utils.py b/others/kolmogorov_arnold_networks/kan/pytorch/kan/utils.py similarity index 100% rename 
from methodology/kolmogorov_arnold_networks/kan/pytorch/kan/utils.py rename to others/kolmogorov_arnold_networks/kan/pytorch/kan/utils.py diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/requirements.txt b/others/kolmogorov_arnold_networks/kan/pytorch/requirements.txt similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/requirements.txt rename to others/kolmogorov_arnold_networks/kan/pytorch/requirements.txt diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/run_train.sh b/others/kolmogorov_arnold_networks/kan/pytorch/run_train.sh similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/run_train.sh rename to others/kolmogorov_arnold_networks/kan/pytorch/run_train.sh diff --git a/methodology/kolmogorov_arnold_networks/kan/pytorch/train_kan.py b/others/kolmogorov_arnold_networks/kan/pytorch/train_kan.py similarity index 100% rename from methodology/kolmogorov_arnold_networks/kan/pytorch/train_kan.py rename to others/kolmogorov_arnold_networks/kan/pytorch/train_kan.py diff --git a/recommendation/ctr/deepfm/paddlepaddle/README.md b/others/recommendation_systems/deepfm/paddlepaddle/README.md similarity index 45% rename from recommendation/ctr/deepfm/paddlepaddle/README.md rename to others/recommendation_systems/deepfm/paddlepaddle/README.md index 5e71bf7c2a4b6aa729dee8c0e859737129c8fdc7..da80ffe7d7ea151b9a8ce1d4e4c67c29a5fb8808 100644 --- a/recommendation/ctr/deepfm/paddlepaddle/README.md +++ b/others/recommendation_systems/deepfm/paddlepaddle/README.md @@ -1,8 +1,16 @@ # DeepFM +## Description + +DeepFM (Deep Factorization Machine) combines Factorization Machines (FM) and Deep Neural Networks (DNN) for +recommendation systems. FM captures low-order feature interactions, while DNN models high-order non-linear interactions. +The model is end-to-end trainable and excels in tasks like click-through rate (CTR) prediction and personalized +recommendations. By integrating both FM and DNN, DeepFM efficiently handles sparse data, offering better performance +compared to traditional methods, especially in large-scale applications such as advertising and product recommendations. 
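To make the FM + DNN split concrete, here is a minimal PyTorch-style sketch (not the PaddleRec model trained in the steps below); the field sizes, embedding width, and MLP widths are illustrative assumptions. The point it shows is that the FM term and the deep MLP share one set of feature embeddings, and their logits are summed before the sigmoid.

```python
# Minimal DeepFM sketch (assumed toy sizes), not the PaddleRec implementation.
import torch
import torch.nn as nn


class DeepFM(nn.Module):
    def __init__(self, field_dims, embed_dim=16, mlp_dims=(64, 32)):
        super().__init__()
        offsets = [0]
        for d in field_dims[:-1]:
            offsets.append(offsets[-1] + d)            # map field-local ids to global ids
        self.register_buffer("offsets", torch.tensor(offsets))
        num_features = sum(field_dims)
        self.first_order = nn.Embedding(num_features, 1)        # linear (first-order) term
        self.embedding = nn.Embedding(num_features, embed_dim)  # shared by FM and DNN
        layers, in_dim = [], len(field_dims) * embed_dim
        for d in mlp_dims:
            layers += [nn.Linear(in_dim, d), nn.ReLU()]
            in_dim = d
        layers.append(nn.Linear(in_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):                       # x: (batch, num_fields) category ids
        x = x + self.offsets
        emb = self.embedding(x)                 # (batch, num_fields, embed_dim)
        # FM second-order term: 0.5 * ((sum_i v_i)^2 - sum_i v_i^2), reduced over embed_dim
        fm = 0.5 * (emb.sum(dim=1) ** 2 - (emb ** 2).sum(dim=1)).sum(dim=1, keepdim=True)
        linear = self.first_order(x).sum(dim=1)             # (batch, 1)
        deep = self.mlp(emb.flatten(start_dim=1))           # high-order interactions
        return torch.sigmoid(linear + fm + deep)            # predicted CTR


model = DeepFM(field_dims=[10, 20, 30])                     # three toy categorical fields
ctr = model(torch.randint(0, 10, (4, 3)))                   # ids kept valid for every field
print(ctr.shape)                                            # torch.Size([4, 1])
```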
+ ## Step 1: Installation -```bash +```sh git clone -b release/2.3.0 https://github.com/PaddlePaddle/PaddleRec.git cd PaddleRec pip3 install -r requirements.txt @@ -10,16 +18,15 @@ pip3 install -r requirements.txt ## Step 2: Preparing datasets -```bash +```sh pushd datasets/criteo/ sh run.sh popd ``` - ## Step 3: Training -```bash +```sh cd models/rank/deepfm export FLAGS_cudnn_exhaustive_search=True export FLAGS_cudnn_batchnorm_spatial_persistent=True @@ -31,4 +38,5 @@ python3 -u ../../../tools/infer.py -m config_bigdata.yaml ``` ## Reference -- [PaddleRec](https://github.com/PaddlePaddle/PaddleRec.git) \ No newline at end of file + +- [PaddleRec](https://github.com/PaddlePaddle/PaddleRec.git) diff --git a/recommendation/ctr/dlrm/paddlepaddle/README.md b/others/recommendation_systems/dlrm/paddlepaddle/README.md similarity index 33% rename from recommendation/ctr/dlrm/paddlepaddle/README.md rename to others/recommendation_systems/dlrm/paddlepaddle/README.md index 47eb0d2e9cc067216f920f8bdf0338f79b0ed8a0..e96b9f33be95a6cc50b119d662e575459c1c6ddb 100644 --- a/recommendation/ctr/dlrm/paddlepaddle/README.md +++ b/others/recommendation_systems/dlrm/paddlepaddle/README.md @@ -1,11 +1,19 @@ # DLRM ## Model description -With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 frameworks. In addition, we design a specialized parallelization scheme utilizing model parallelism on the embedding tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design. + +With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for +tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks +due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a +state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 +frameworks. In addition, we design a specialized parallelization scheme utilizing model parallelism on the embedding +tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected +layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI +platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design. 
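The parallelization scheme mentioned above can be illustrated in a few lines: embedding tables are partitioned across ranks (model parallelism) while the dense layers are replicated and each rank processes its own slice of the batch (data parallelism). The single-process sketch below only simulates the exchange with tensor slicing; table sizes, world size, and batch size are made-up values, and a real run would use multiple processes and a collective all-to-all.

```python
# Single-process illustration of DLRM-style hybrid parallelism (assumed toy sizes).
import torch

world_size, batch, dim = 2, 8, 4
table_sizes = [100, 200, 300, 400]                     # four toy sparse features
tables = [torch.nn.Embedding(n, dim) for n in table_sizes]

# Model parallelism: each rank owns whole tables (round-robin assignment here).
owner = {t: t % world_size for t in range(len(tables))}
print("table -> owning rank:", owner)

# Data parallelism: the global batch of category ids is split across ranks.
ids = torch.stack([torch.randint(0, n, (batch,)) for n in table_sizes], dim=1)

# Phase 1: the table's owning rank looks up its tables for the *entire* global batch
# (done in one process here; owner[t] marks who would run it in a multi-process job).
lookups = {t: tables[t](ids[:, t]) for t in range(len(tables))}

# Phase 2 (an all-to-all in the real multi-process run): regroup the lookups so each
# rank ends up with every sparse feature, but only for its own batch slice, ready for
# the replicated interaction and top-MLP layers.
per_rank = []
for rank in range(world_size):
    lo, hi = rank * batch // world_size, (rank + 1) * batch // world_size
    per_rank.append(torch.stack([lookups[t][lo:hi] for t in sorted(lookups)], dim=1))

print([tuple(f.shape) for f in per_rank])              # [(4, 4, 4), (4, 4, 4)]
```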
## Step 1: Installation -```bash +```sh git clone -b master --recursive https://github.com/PaddlePaddle/PaddleRec.git cd PaddleRec git checkout eb869a15b7d858f9f3788d9b25af4f61a022f9c4 @@ -15,7 +23,7 @@ pip3 install -r requirements.txt ## Step 2: Preparing datasets -```bash +```sh pushd datasets/criteo sh run.sh popd @@ -23,8 +31,7 @@ popd ## Step 3: Training - -```bash +```sh cd models/rank/dlrm python3 -u ../../../tools/trainer.py -m config_bigdata.yaml python3 -u ../../../tools/infer.py -m config_bigdata.yaml @@ -33,9 +40,10 @@ python3 -u ../../../tools/infer.py -m config_bigdata.yaml ## Results -| GPUs | IPS | AUC | -|-------------|-----------|-------------| -| BI-V100 x1 | 300 | 0.802409 | +| GPUs | IPS | AUC | +|------------|-----|----------| +| BI-V100 x1 | 300 | 0.802409 | ## Reference -- [DLRM](https://github.com/PaddlePaddle/PaddleRec/tree/release/2.3.0/models/rank/dlrm) + +- [PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/release/2.3.0/models/rank/dlrm) diff --git a/recommendation/ctr/dlrm/pytorch/README.md b/others/recommendation_systems/dlrm/pytorch/README.md similarity index 57% rename from recommendation/ctr/dlrm/pytorch/README.md rename to others/recommendation_systems/dlrm/pytorch/README.md index 41d5345fb6b7383c5f227f7705f4b11482a64fbd..61afad618014714b12549e19d4f8dfa8b5d83c37 100644 --- a/recommendation/ctr/dlrm/pytorch/README.md +++ b/others/recommendation_systems/dlrm/pytorch/README.md @@ -2,11 +2,18 @@ ## Model description -With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 frameworks. In addition, we design a specialized parallelization scheme utilizing model parallelism on the embedding tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design. +With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for +tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks +due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a +state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 +frameworks. In addition, we design a specialized parallelization scheme utilizing model parallelism on the embedding +tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected +layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI +platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design. 
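For orientation, the sketch below shows the model structure that description refers to: a bottom MLP for dense features, embedding lookups for categorical features, explicit pairwise dot-product interactions, and a top MLP producing the CTR estimate. All sizes are toy assumptions, and it is not the dlrm/ package trained by the steps that follow.

```python
# Minimal DLRM forward pass (assumed toy feature sizes), for illustration only.
import torch
import torch.nn as nn


class TinyDLRM(nn.Module):
    def __init__(self, num_dense=13, table_sizes=(100, 200, 300), dim=8):
        super().__init__()
        self.embeddings = nn.ModuleList(nn.Embedding(n, dim) for n in table_sizes)
        self.bottom_mlp = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        num_feats = len(table_sizes) + 1                  # sparse features + dense vector
        num_pairs = num_feats * (num_feats - 1) // 2      # pairwise dot products kept once
        self.top_mlp = nn.Sequential(nn.Linear(dim + num_pairs, 16), nn.ReLU(),
                                     nn.Linear(16, 1))

    def forward(self, dense, sparse):                     # sparse: (batch, num_tables)
        d = self.bottom_mlp(dense)                        # (batch, dim)
        feats = [d] + [emb(sparse[:, i]) for i, emb in enumerate(self.embeddings)]
        z = torch.stack(feats, dim=1)                     # (batch, num_feats, dim)
        inter = torch.bmm(z, z.transpose(1, 2))           # all pairwise dot products
        i, j = torch.triu_indices(z.size(1), z.size(1), offset=1)
        pairs = inter[:, i, j]                            # upper triangle, each pair once
        return torch.sigmoid(self.top_mlp(torch.cat([d, pairs], dim=1)))


model = TinyDLRM()
dense = torch.rand(4, 13)
sparse = torch.randint(0, 100, (4, 3))                    # ids valid for all toy tables
print(model(dense, sparse).shape)                         # torch.Size([4, 1])
```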
## Step 1: Installing packages -```shell +```sh pip3 install -r requirements.txt && python3 ./setup.py install ``` @@ -14,7 +21,7 @@ pip3 install -r requirements.txt && python3 ./setup.py install Criteo_Terabyte consists of 23 days data, as it is very large, here only take 3 days data for an example. -```shell +```sh # Check gzip version gzip -V @@ -31,33 +38,34 @@ cd dlrm/data/ bash download_and_preprocess.sh ``` -After above steps, you can get files: terabyte_processed_test.bin, terabyte_processed_train.bin, terabyte_processed_val.bin in "/home/datasets/recommendation/Criteo_Terabyte/". +After above steps, you can get files: terabyte_processed_test.bin, terabyte_processed_train.bin, +terabyte_processed_val.bin in "/home/datasets/recommendation/Criteo_Terabyte/". ## Step 3: Training ### On single GPU -```shell +```sh python3 -u scripts/train.py --model_config dlrm/config/official_config.json --dataset /home/datasets/recommendation/Criteo_Terabyte --lr 0.1 --warmup_steps 2750 --decay_end_lr 0 --decay_steps 27772 --decay_start_step 49315 --batch_size 2048 --epochs 5 |& tee 1card.txt ``` ### Multiple GPUs on one machine -```shell +```sh python3 -u -m torch.distributed.launch --nproc_per_node=8 --use_env scripts/dist_train.py --model_config dlrm/config/official_config.json --dataset /home/datasets/recommendation/Criteo_Terabyte --lr 0.1 --warmup_steps 2750 --decay_end_lr 0 --decay_steps 27772 --decay_start_step 49315 --batch_size 2048 --epochs 5 |& tee 8cards.txt ``` ## Results on BI-V100 -| GPUs | FPS | AUC | -| ---- | ------ | ---- | +| GPUs | FPS | AUC | +|------|--------|------| | 1x1 | 196958 | N/A | | 1x8 | 346555 | 0.75 | | Convergence criteria | Configuration (x denotes number of GPUs) | Performance | Accuracy | Power(W) | Scalability | Memory utilization(G) | Stability | -| -------------------- | ---------------------------------------- | ----------- | -------- | ---------- | ----------- | ----------------------- | --------- | +|----------------------|------------------------------------------|-------------|----------|------------|-------------|-------------------------|-----------| | AUC:0.75 | SDK V2.2,bs:2048,8x,AMP | 793486 | 0.75 | 60\*8 | 0.97 | 3.7\*8 | 1 | - ## Reference -https://github.com/mlcommons/training_results_v0.7/tree/master/NVIDIA/benchmarks/dlrm/implementations/pytorch + +- [mlcommons](https://github.com/mlcommons/training_results_v0.7/tree/master/NVIDIA/benchmarks/dlrm/implementations/pytorch) diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__init__.py b/others/recommendation_systems/dlrm/pytorch/dlrm/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__init__.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/config/criteo_kaggle.json b/others/recommendation_systems/dlrm/pytorch/dlrm/config/criteo_kaggle.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/config/criteo_kaggle.json rename to others/recommendation_systems/dlrm/pytorch/dlrm/config/criteo_kaggle.json diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/config/criteo_kaggle_tiny.json b/others/recommendation_systems/dlrm/pytorch/dlrm/config/criteo_kaggle_tiny.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/config/criteo_kaggle_tiny.json rename to others/recommendation_systems/dlrm/pytorch/dlrm/config/criteo_kaggle_tiny.json diff --git 
a/recommendation/ctr/dlrm/pytorch/dlrm/config/mlperf_10m.limit.json b/others/recommendation_systems/dlrm/pytorch/dlrm/config/mlperf_10m.limit.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/config/mlperf_10m.limit.json rename to others/recommendation_systems/dlrm/pytorch/dlrm/config/mlperf_10m.limit.json diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/config/mlperf_40m.limit.json b/others/recommendation_systems/dlrm/pytorch/dlrm/config/mlperf_40m.limit.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/config/mlperf_40m.limit.json rename to others/recommendation_systems/dlrm/pytorch/dlrm/config/mlperf_40m.limit.json diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/config/official_config.json b/others/recommendation_systems/dlrm/pytorch/dlrm/config/official_config.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/config/official_config.json rename to others/recommendation_systems/dlrm/pytorch/dlrm/config/official_config.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/__init__.py b/others/recommendation_systems/dlrm/pytorch/dlrm/data/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/__init__.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/data/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/data/data_loader_terabyte.py b/others/recommendation_systems/dlrm/pytorch/dlrm/data/data_loader_terabyte.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/data/data_loader_terabyte.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/data/data_loader_terabyte.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/data/data_utils.py b/others/recommendation_systems/dlrm/pytorch/dlrm/data/data_utils.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/data/data_utils.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/data/data_utils.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/data/dataset.py b/others/recommendation_systems/dlrm/pytorch/dlrm/data/dataset.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/data/dataset.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/data/dataset.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/data/dlrm_data_pytorch.py b/others/recommendation_systems/dlrm/pytorch/dlrm/data/dlrm_data_pytorch.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/data/dlrm_data_pytorch.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/data/dlrm_data_pytorch.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/data/download_and_preprocess.sh b/others/recommendation_systems/dlrm/pytorch/dlrm/data/download_and_preprocess.sh similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/data/download_and_preprocess.sh rename to others/recommendation_systems/dlrm/pytorch/dlrm/data/download_and_preprocess.sh diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/deprecated_model.py b/others/recommendation_systems/dlrm/pytorch/dlrm/deprecated_model.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/deprecated_model.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/deprecated_model.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/dist_model.py b/others/recommendation_systems/dlrm/pytorch/dlrm/dist_model.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/dist_model.py rename to 
others/recommendation_systems/dlrm/pytorch/dlrm/dist_model.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/mlperf_logger.py b/others/recommendation_systems/dlrm/pytorch/dlrm/mlperf_logger.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/mlperf_logger.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/mlperf_logger.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/nn/__init__.py b/others/recommendation_systems/dlrm/pytorch/dlrm/nn/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/nn/__init__.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/nn/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/nn/functional.py b/others/recommendation_systems/dlrm/pytorch/dlrm/nn/functional.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/nn/functional.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/nn/functional.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__init__.py b/others/recommendation_systems/dlrm/pytorch/dlrm/nn/modules/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__init__.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/nn/modules/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/nn/modules/buckle_embedding.py b/others/recommendation_systems/dlrm/pytorch/dlrm/nn/modules/buckle_embedding.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/nn/modules/buckle_embedding.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/nn/modules/buckle_embedding.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/nn/modules/gather.py b/others/recommendation_systems/dlrm/pytorch/dlrm/nn/modules/gather.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/nn/modules/gather.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/nn/modules/gather.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/__init__.py b/others/recommendation_systems/dlrm/pytorch/dlrm/utils/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/__init__.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/utils/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/utils/distributed.py b/others/recommendation_systems/dlrm/pytorch/dlrm/utils/distributed.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/utils/distributed.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/utils/distributed.py diff --git a/recommendation/ctr/dlrm/pytorch/dlrm/utils/metrics.py b/others/recommendation_systems/dlrm/pytorch/dlrm/utils/metrics.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/dlrm/utils/metrics.py rename to others/recommendation_systems/dlrm/pytorch/dlrm/utils/metrics.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/CONTRIBUTING.md b/others/recommendation_systems/dlrm/pytorch/logging/CONTRIBUTING.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/CONTRIBUTING.md rename to others/recommendation_systems/dlrm/pytorch/logging/CONTRIBUTING.md diff --git a/recommendation/ctr/dlrm/pytorch/logging/LICENSE.md b/others/recommendation_systems/dlrm/pytorch/logging/LICENSE.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/LICENSE.md rename to others/recommendation_systems/dlrm/pytorch/logging/LICENSE.md diff --git 
a/recommendation/ctr/dlrm/pytorch/logging/MANIFEST.in b/others/recommendation_systems/dlrm/pytorch/logging/MANIFEST.in similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/MANIFEST.in rename to others/recommendation_systems/dlrm/pytorch/logging/MANIFEST.in diff --git a/recommendation/ctr/dlrm/pytorch/logging/README.md b/others/recommendation_systems/dlrm/pytorch/logging/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/README.md diff --git a/recommendation/ctr/dlrm/pytorch/logging/RELEASING.md b/others/recommendation_systems/dlrm/pytorch/logging/RELEASING.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/RELEASING.md rename to others/recommendation_systems/dlrm/pytorch/logging/RELEASING.md diff --git a/recommendation/ctr/dlrm/pytorch/logging/VERSION b/others/recommendation_systems/dlrm/pytorch/logging/VERSION similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/VERSION rename to others/recommendation_systems/dlrm/pytorch/logging/VERSION diff --git a/recommendation/ctr/dlrm/pytorch/logging/log_parsers/README.md b/others/recommendation_systems/dlrm/pytorch/logging/log_parsers/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/log_parsers/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/log_parsers/README.md diff --git a/recommendation/ctr/dlrm/pytorch/logging/log_parsers/parse_mlperf.py b/others/recommendation_systems/dlrm/pytorch/logging/log_parsers/parse_mlperf.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/log_parsers/parse_mlperf.py rename to others/recommendation_systems/dlrm/pytorch/logging/log_parsers/parse_mlperf.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/benchmark_meta.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/benchmark_meta.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/benchmark_meta.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/benchmark_meta.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/README.md b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/README.md diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__main__.py 
b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__main__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__main__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/__main__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_cosmoflow.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_cosmoflow.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_cosmoflow.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_cosmoflow.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_cosine_annealing.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_cosine_annealing.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_cosine_annealing.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_cosine_annealing.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_lamb.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_lamb.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_lamb.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_lamb.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_multistep.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_multistep.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_multistep.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_deepcam_multistep.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_oc20.yaml 
b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_oc20.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_oc20.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/closed_oc20.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_cosmoflow.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_cosmoflow.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_cosmoflow.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_cosmoflow.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_deepcam.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_deepcam.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_deepcam.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_deepcam.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_oc20.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_oc20.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_oc20.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/hpc_1.0.0/open_oc20.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_compliance.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_compliance.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_compliance.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_compliance.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/__init__.py similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_060.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_060.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_060.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_060.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_070.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_070.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_070.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_070.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_100.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_100.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_100.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_100.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_110.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_110.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_110.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_110.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_200.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_200.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_200.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/mlp_parser/ruleset_200.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/gnmt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/gnmt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/gnmt.yaml rename to 
others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/gnmt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/score.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/score.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/score.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/score.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/transformer.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/transformer.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/transformer.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.6.0/transformer.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_bert.yaml diff --git 
a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_gnmt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_gnmt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_gnmt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_gnmt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_lars.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_lars.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_lars.yaml rename to 
others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_lars.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_sgd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_sgd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_sgd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_resnet_sgd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_transformer.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_transformer.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_transformer.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/closed_transformer.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_dlrm.yaml rename 
to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_gnmt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_gnmt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_gnmt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_gnmt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_transformer.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_transformer.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_transformer.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0/open_transformer.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/bert.yaml rename to 
others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/gnmt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/gnmt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/gnmt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/gnmt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_lars.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_lars.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_lars.yaml rename to 
others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_lars.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_sgd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_sgd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_sgd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/resnet_sgd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/gnmt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/gnmt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/gnmt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/gnmt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/maskrcnn.yaml similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/transformer.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/transformer.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/transformer.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/summary/transformer.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/transformer.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/transformer.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/transformer.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_0.7.0_warn/transformer.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_bert.yaml diff --git 
a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_lars.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_lars.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_lars.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_lars.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_sgd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_sgd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_sgd.yaml rename to 
others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_resnet_sgd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_rnnt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_rnnt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_rnnt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_rnnt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_unet3d.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_unet3d.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_unet3d.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/closed_unet3d.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_dlrm.yaml rename to 
others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_rnnt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_rnnt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_rnnt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_rnnt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_unet3d.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_unet3d.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_unet3d.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.0.0/open_unet3d.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_bert.yaml rename to 
others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_lars.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_lars.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_lars.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_lars.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_sgd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_sgd.yaml similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_sgd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_resnet_sgd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_rnnt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_rnnt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_rnnt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_rnnt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_unet3d.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_unet3d.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_unet3d.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/closed_unet3d.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_dlrm.yaml similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_rnnt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_rnnt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_rnnt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_rnnt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_unet3d.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_unet3d.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_unet3d.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_1.1.0/open_unet3d.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_bert.yaml similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_dlrm.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_lars.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_lars.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_lars.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_lars.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_sgd.yaml 
b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_sgd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_sgd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_resnet_sgd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_rnnt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_rnnt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_rnnt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_rnnt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_unet3d.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_unet3d.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_unet3d.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/closed_unet3d.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_bert.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_bert.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_bert.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_bert.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_common.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_common.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_common.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_common.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_dlrm.yaml 
b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_dlrm.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_dlrm.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_dlrm.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_maskrcnn.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_maskrcnn.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_maskrcnn.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_maskrcnn.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_minigo.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_minigo.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_minigo.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_minigo.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_resnet.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_resnet.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_resnet.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_resnet.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_rnnt.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_rnnt.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_rnnt.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_rnnt.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_ssd.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_ssd.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_ssd.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_ssd.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_unet3d.yaml b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_unet3d.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_unet3d.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/compliance_checker/training_2.0.0/open_unet3d.yaml diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/README.md 
b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/README.md diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/constants.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/constants.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/constants.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/constants.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/examples/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/examples/__init__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/examples/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/examples/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/examples/dummy_example.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/examples/dummy_example.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/examples/dummy_example.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/examples/dummy_example.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/mllog.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/mllog.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/mllog.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/mllog.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/test_mllog.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/test_mllog.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/mllog/test_mllog.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/mllog/test_mllog.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/README.md b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/README.md diff --git a/speech/speech_recognition/rnnt/pytorch/common/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/__init__.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/common/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/__main__.py 
b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/__main__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/__main__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/__main__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/package_checker.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/package_checker.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/package_checker.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/package_checker.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/seed_checker.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/seed_checker.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/package_checker/seed_checker.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/package_checker/seed_checker.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/README.md b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/README.md diff --git a/speech/speech_recognition/rnnt/pytorch/mlperf/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__init__.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/mlperf/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__main__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__main__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__main__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/__main__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_cosmoflow.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_cosmoflow.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_cosmoflow.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_cosmoflow.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_deepcam.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_deepcam.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_deepcam.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_deepcam.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_oc20.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_oc20.json similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_oc20.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/hpc_1.0.0/rcps_oc20.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/rcp_checker.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/rcp_checker.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/rcp_checker.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/rcp_checker.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_bert.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_bert.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_bert.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_bert.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_dlrm.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_dlrm.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_dlrm.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_dlrm.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_maskrcnn.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_maskrcnn.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_maskrcnn.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_maskrcnn.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_resnet.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_resnet.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_resnet.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_resnet.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_rnnt.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_rnnt.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_rnnt.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_rnnt.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_ssd.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_ssd.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_ssd.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_ssd.json diff --git 
a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_unet3d.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_unet3d.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_unet3d.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.0.0/rcps_unet3d.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_bert.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_bert.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_bert.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_bert.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_dlrm.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_dlrm.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_dlrm.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_dlrm.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_maskrcnn.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_maskrcnn.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_maskrcnn.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_maskrcnn.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_resnet.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_resnet.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_resnet.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_resnet.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_rnnt.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_rnnt.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_rnnt.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_rnnt.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_ssd.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_ssd.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_ssd.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_ssd.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_unet3d.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_unet3d.json similarity index 100% rename 
from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_unet3d.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_1.1.0/rcps_unet3d.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_bert.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_bert.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_bert.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_bert.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_dlrm.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_dlrm.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_dlrm.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_dlrm.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_maskrcnn.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_maskrcnn.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_maskrcnn.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_maskrcnn.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_resnet.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_resnet.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_resnet.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_resnet.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_rnnt.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_rnnt.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_rnnt.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_rnnt.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_ssd.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_ssd.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_ssd.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_ssd.json diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_unet3d.json b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_unet3d.json similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_unet3d.json rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/rcp_checker/training_2.0.0/rcps_unet3d.json diff --git 
a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/README.md b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/README.md diff --git a/speech/speech_recognition/rnnt/pytorch/utils/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/__init__.py similarity index 100% rename from speech/speech_recognition/rnnt/pytorch/utils/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/__main__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/__main__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/__main__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/__main__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/repo_checker.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/repo_checker.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/repo_checker/repo_checker.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/repo_checker/repo_checker.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/README.md b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/README.md similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/README.md diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/bin/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__main__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__main__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__main__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/__main__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/result_summarizer.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/result_summarizer.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/result_summarizer/result_summarizer.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/result_summarizer/result_summarizer.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/README.md b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/README.md similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/README.md rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/README.md diff --git a/speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/__init__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__init__.py similarity index 100% rename from speech/speech_synthesis/vqmivc/pytorch/ParallelWaveGAN/parallel_wavegan/distributed/__init__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__init__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__main__.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__main__.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__main__.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/__main__.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/system_desc_checker.py b/others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/system_desc_checker.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/system_desc_checker.py rename to others/recommendation_systems/dlrm/pytorch/logging/mlperf_logging/system_desc_checker/system_desc_checker.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/requirements.txt b/others/recommendation_systems/dlrm/pytorch/logging/requirements.txt similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/requirements.txt rename to others/recommendation_systems/dlrm/pytorch/logging/requirements.txt diff --git a/recommendation/ctr/dlrm/pytorch/logging/scripts/pack_submission.sh b/others/recommendation_systems/dlrm/pytorch/logging/scripts/pack_submission.sh similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/scripts/pack_submission.sh rename to others/recommendation_systems/dlrm/pytorch/logging/scripts/pack_submission.sh diff --git a/recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v0.7_training.sh b/others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v0.7_training.sh similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v0.7_training.sh rename to others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v0.7_training.sh diff --git a/recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v1.0_hpc.sh b/others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v1.0_hpc.sh similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v1.0_hpc.sh rename to others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v1.0_hpc.sh diff --git a/recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v1.0_training.sh b/others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v1.0_training.sh similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v1.0_training.sh rename to others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v1.0_training.sh diff --git a/recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v1.1_training.sh b/others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v1.1_training.sh similarity index 100% rename from 
recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v1.1_training.sh rename to others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v1.1_training.sh diff --git a/recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v2.0_training.sh b/others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v2.0_training.sh similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/scripts/verify_for_v2.0_training.sh rename to others/recommendation_systems/dlrm/pytorch/logging/scripts/verify_for_v2.0_training.sh diff --git a/recommendation/ctr/dlrm/pytorch/logging/setup.py b/others/recommendation_systems/dlrm/pytorch/logging/setup.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/setup.py rename to others/recommendation_systems/dlrm/pytorch/logging/setup.py diff --git a/recommendation/ctr/dlrm/pytorch/logging/system_description/system_description.yaml b/others/recommendation_systems/dlrm/pytorch/logging/system_description/system_description.yaml similarity index 100% rename from recommendation/ctr/dlrm/pytorch/logging/system_description/system_description.yaml rename to others/recommendation_systems/dlrm/pytorch/logging/system_description/system_description.yaml diff --git a/recommendation/ctr/dlrm/pytorch/requirements.txt b/others/recommendation_systems/dlrm/pytorch/requirements.txt similarity index 100% rename from recommendation/ctr/dlrm/pytorch/requirements.txt rename to others/recommendation_systems/dlrm/pytorch/requirements.txt diff --git a/recommendation/ctr/dlrm/pytorch/scripts/dist_train.py b/others/recommendation_systems/dlrm/pytorch/scripts/dist_train.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/dist_train.py rename to others/recommendation_systems/dlrm/pytorch/scripts/dist_train.py diff --git a/recommendation/ctr/dlrm/pytorch/scripts/dot_based_interact/baseline.py b/others/recommendation_systems/dlrm/pytorch/scripts/dot_based_interact/baseline.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/dot_based_interact/baseline.py rename to others/recommendation_systems/dlrm/pytorch/scripts/dot_based_interact/baseline.py diff --git a/recommendation/ctr/dlrm/pytorch/scripts/dot_based_interact/custom_kernel.py b/others/recommendation_systems/dlrm/pytorch/scripts/dot_based_interact/custom_kernel.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/dot_based_interact/custom_kernel.py rename to others/recommendation_systems/dlrm/pytorch/scripts/dot_based_interact/custom_kernel.py diff --git a/recommendation/ctr/dlrm/pytorch/scripts/dot_based_interact/run.sh b/others/recommendation_systems/dlrm/pytorch/scripts/dot_based_interact/run.sh similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/dot_based_interact/run.sh rename to others/recommendation_systems/dlrm/pytorch/scripts/dot_based_interact/run.sh diff --git a/recommendation/ctr/dlrm/pytorch/scripts/dummy_top_model.py b/others/recommendation_systems/dlrm/pytorch/scripts/dummy_top_model.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/dummy_top_model.py rename to others/recommendation_systems/dlrm/pytorch/scripts/dummy_top_model.py diff --git a/recommendation/ctr/dlrm/pytorch/scripts/model_timer.py b/others/recommendation_systems/dlrm/pytorch/scripts/model_timer.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/model_timer.py rename to others/recommendation_systems/dlrm/pytorch/scripts/model_timer.py diff --git 
a/recommendation/ctr/dlrm/pytorch/scripts/split_data.py b/others/recommendation_systems/dlrm/pytorch/scripts/split_data.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/split_data.py rename to others/recommendation_systems/dlrm/pytorch/scripts/split_data.py diff --git a/recommendation/ctr/dlrm/pytorch/scripts/train.py b/others/recommendation_systems/dlrm/pytorch/scripts/train.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/train.py rename to others/recommendation_systems/dlrm/pytorch/scripts/train.py diff --git a/recommendation/ctr/dlrm/pytorch/scripts/utils.py b/others/recommendation_systems/dlrm/pytorch/scripts/utils.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/scripts/utils.py rename to others/recommendation_systems/dlrm/pytorch/scripts/utils.py diff --git a/recommendation/ctr/dlrm/pytorch/setup.py b/others/recommendation_systems/dlrm/pytorch/setup.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/setup.py rename to others/recommendation_systems/dlrm/pytorch/setup.py diff --git a/recommendation/ctr/dlrm/pytorch/src/dot_based_interact.cu b/others/recommendation_systems/dlrm/pytorch/src/dot_based_interact.cu similarity index 100% rename from recommendation/ctr/dlrm/pytorch/src/dot_based_interact.cu rename to others/recommendation_systems/dlrm/pytorch/src/dot_based_interact.cu diff --git a/recommendation/ctr/dlrm/pytorch/src/dot_based_interact_pytorch_types.cu b/others/recommendation_systems/dlrm/pytorch/src/dot_based_interact_pytorch_types.cu similarity index 100% rename from recommendation/ctr/dlrm/pytorch/src/dot_based_interact_pytorch_types.cu rename to others/recommendation_systems/dlrm/pytorch/src/dot_based_interact_pytorch_types.cu diff --git a/recommendation/ctr/dlrm/pytorch/src/gather_gpu.cu b/others/recommendation_systems/dlrm/pytorch/src/gather_gpu.cu similarity index 100% rename from recommendation/ctr/dlrm/pytorch/src/gather_gpu.cu rename to others/recommendation_systems/dlrm/pytorch/src/gather_gpu.cu diff --git a/recommendation/ctr/dlrm/pytorch/src/pytorch_ops.cpp b/others/recommendation_systems/dlrm/pytorch/src/pytorch_ops.cpp similarity index 100% rename from recommendation/ctr/dlrm/pytorch/src/pytorch_ops.cpp rename to others/recommendation_systems/dlrm/pytorch/src/pytorch_ops.cpp diff --git a/recommendation/ctr/dlrm/pytorch/tests/buckle_embedding_test.py b/others/recommendation_systems/dlrm/pytorch/tests/buckle_embedding_test.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/tests/buckle_embedding_test.py rename to others/recommendation_systems/dlrm/pytorch/tests/buckle_embedding_test.py diff --git a/recommendation/ctr/dlrm/pytorch/tests/dataset_test.py b/others/recommendation_systems/dlrm/pytorch/tests/dataset_test.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/tests/dataset_test.py rename to others/recommendation_systems/dlrm/pytorch/tests/dataset_test.py diff --git a/recommendation/ctr/dlrm/pytorch/tests/dist_model_test.py b/others/recommendation_systems/dlrm/pytorch/tests/dist_model_test.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/tests/dist_model_test.py rename to others/recommendation_systems/dlrm/pytorch/tests/dist_model_test.py diff --git a/recommendation/ctr/dlrm/pytorch/tests/dot_based_interact.py b/others/recommendation_systems/dlrm/pytorch/tests/dot_based_interact.py similarity index 100% rename from recommendation/ctr/dlrm/pytorch/tests/dot_based_interact.py rename to 
others/recommendation_systems/dlrm/pytorch/tests/dot_based_interact.py
diff --git a/recommendation/ctr/dlrm/pytorch/tests/metrics_test.py b/others/recommendation_systems/dlrm/pytorch/tests/metrics_test.py
similarity index 100%
rename from recommendation/ctr/dlrm/pytorch/tests/metrics_test.py
rename to others/recommendation_systems/dlrm/pytorch/tests/metrics_test.py
diff --git a/recommendation/ctr/dlrm/pytorch/tests/model_test.py b/others/recommendation_systems/dlrm/pytorch/tests/model_test.py
similarity index 100%
rename from recommendation/ctr/dlrm/pytorch/tests/model_test.py
rename to others/recommendation_systems/dlrm/pytorch/tests/model_test.py
diff --git a/recommendation/ctr/ffm/paddlepaddle/README.md b/others/recommendation_systems/ffm/paddlepaddle/README.md
similarity index 57%
rename from recommendation/ctr/ffm/paddlepaddle/README.md
rename to others/recommendation_systems/ffm/paddlepaddle/README.md
index 105672bd7056462bed17fbea4101d5cc993872a8..f35394057e626f580ae9fc8f47f57de1eb515273 100644
--- a/recommendation/ctr/ffm/paddlepaddle/README.md
+++ b/others/recommendation_systems/ffm/paddlepaddle/README.md
@@ -1,28 +1,30 @@
-# FFM 
+# FFM
## Model description
-FFM is further improved on the basis of FM, introducing the concept of category in the model, that is, field.
-The features of the same field are one-hot separately, so in FFM, each one-dimensional feature learns a hidden variable for each field of the other features, which is not only related to the feature, but also to the field.
-By introducing the concept of field, FFM attributes features of the same nature to the same field.
+
+FFM is further improved on the basis of FM, introducing the concept of category in the model, that is, field. The
+features of the same field are one-hot separately, so in FFM, each one-dimensional feature learns a hidden variable for
+each field of the other features, which is not only related to the feature, but also to the field. By introducing the
+concept of field, FFM attributes features of the same nature to the same field.
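The field-aware interaction described above can be written down in a few lines. The sketch below is purely illustrative and is not taken from PaddleRec or from this repository; the function name `ffm_score`, the toy data, and all array shapes are assumptions made for the example.

```python
import numpy as np

def ffm_score(x, fields, w0, w, v):
    """Field-aware FM score for one sample.

    x      : (n,) feature values (mostly 0/1 after one-hot encoding)
    fields : (n,) field index of each feature
    w0     : scalar bias
    w      : (n,) linear weights
    v      : (n, num_fields, k) latent vectors, one per (feature, field) pair
    """
    score = w0 + w @ x
    n = len(x)
    for i in range(n):
        if x[i] == 0.0:
            continue
        for j in range(i + 1, n):
            if x[j] == 0.0:
                continue
            # feature i uses the latent vector it keeps for j's field, and vice versa
            score += (v[i, fields[j]] @ v[j, fields[i]]) * x[i] * x[j]
    return score

# toy example: 4 features grouped into 2 fields, latent dimension k = 3
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0, 1.0])
fields = np.array([0, 0, 1, 1])
print(ffm_score(x, fields, 0.1, rng.normal(size=4), rng.normal(size=(4, 2, 3))))
```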
## Step 1: Installing
-```
+
+```sh
git clone -b release/2.3.0 https://github.com/PaddlePaddle/PaddleRec.git
```
-```
+```sh
cd PaddleRec
pip3 install -r requirements.txt
```
## Step 2: Training
-```
+```sh
# Download dataset
cd datasets/criteo/
sh run.sh
-
cd ../../models/rank/ffm
export FLAGS_cudnn_exhaustive_search=True
export FLAGS_cudnn_batchnorm_spatial_persistent=True
@@ -34,11 +36,11 @@ python3 -u ../../../tools/infer.py -m config_bigdata.yaml
```
## Result
-| GPU | AUC | IPS |
-|--- |--- |--- |
-| BI-V100 x1 | 0.792128| 714.42ins/s |
-
+| GPU | AUC | IPS |
+|------------|----------|-------------|
+| BI-V100 x1 | 0.792128 | 714.42ins/s |
## Reference
+
- [PaddleRec](https://github.com/PaddlePaddle/PaddleRec)
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/.gitignore b/others/recommendation_systems/ncf/pytorch/.gitignore
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/.gitignore
rename to others/recommendation_systems/ncf/pytorch/.gitignore
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/Dockerfile b/others/recommendation_systems/ncf/pytorch/Dockerfile
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/Dockerfile
rename to others/recommendation_systems/ncf/pytorch/Dockerfile
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/LICENSE.md b/others/recommendation_systems/ncf/pytorch/LICENSE.md
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/LICENSE.md
rename to others/recommendation_systems/ncf/pytorch/LICENSE.md
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/README.md b/others/recommendation_systems/ncf/pytorch/README.md
similarity index 48%
rename from recommendation/collaborative_filtering/ncf/pytorch/README.md
rename to others/recommendation_systems/ncf/pytorch/README.md
index 2edb294c5b24d464124c3904fb9e459ea611038e..6283a3e7e3614874048032894f44ca3e476be1ab 100644
--- a/recommendation/collaborative_filtering/ncf/pytorch/README.md
+++ b/others/recommendation_systems/ncf/pytorch/README.md
@@ -2,21 +2,24 @@
## Model description
-By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.
-
+By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a
+general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and
+generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to
+leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world
+datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical
+evidence shows that using deeper layers of neural networks offers better recommendation performance.
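As a rough illustration of the NCF/NeuMF idea described above (a GMF branch and an MLP branch fused into one score), here is a minimal PyTorch sketch. It is not the tuned model shipped in this directory; the class name `TinyNeuMF` and all layer sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TinyNeuMF(nn.Module):
    """Minimal NCF/NeuMF-style model: a GMF branch and an MLP branch, fused at the output."""

    def __init__(self, num_users, num_items, dim=16):
        super().__init__()
        self.user_gmf = nn.Embedding(num_users, dim)
        self.item_gmf = nn.Embedding(num_items, dim)
        self.user_mlp = nn.Embedding(num_users, dim)
        self.item_mlp = nn.Embedding(num_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
        )
        self.out = nn.Linear(2 * dim, 1)  # concat(GMF, MLP) -> interaction score

    def forward(self, users, items):
        gmf = self.user_gmf(users) * self.item_gmf(items)  # element-wise product branch
        mlp = self.mlp(torch.cat([self.user_mlp(users), self.item_mlp(items)], dim=-1))
        return torch.sigmoid(self.out(torch.cat([gmf, mlp], dim=-1))).squeeze(-1)

# toy usage with random ids
model = TinyNeuMF(num_users=100, num_items=200)
users = torch.randint(0, 100, (8,))
items = torch.randint(0, 200, (8,))
print(model(users, items).shape)  # torch.Size([8])
```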
## Step 1: Installing packages
-```shell
+```sh
pip3 install -r requirements.txt
```
-
## Step 2: Preparing datasets
Dataset is movielens
-```shell
+```sh
# Download dataset
mkdir -p data/
wget http://files.grouplens.org/datasets/movielens/ml-20m.zip -P data/
@@ -28,23 +31,23 @@ unzip data/ml-20m.zip -d data/
python3 convert.py --path ./data/ml-20m/ratings.csv --output ./data/ml-20m
```
-
## Step 3: Training
### Multiple GPUs on one machine
-```shell
+```sh
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
bash run_train_fp32.sh
```
### Multiple GPUs on one machine (AMP)
-```shell
+```sh
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# fp16 train
bash run_train_fp16.sh
```
## Reference
-https://github.com/mlcommons/training_results_v0.5/tree/master/v0.5.0/nvidia/submission/code/recommendation/pytorch
+
+- [mlcommons](https://github.com/mlcommons/training_results_v0.5/tree/master/v0.5.0/nvidia/submission/code/recommendation/pytorch)
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/bind_launch.py b/others/recommendation_systems/ncf/pytorch/bind_launch.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/bind_launch.py
rename to others/recommendation_systems/ncf/pytorch/bind_launch.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/convert.py b/others/recommendation_systems/ncf/pytorch/convert.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/convert.py
rename to others/recommendation_systems/ncf/pytorch/convert.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/dataset.py b/others/recommendation_systems/ncf/pytorch/dataset.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/dataset.py
rename to others/recommendation_systems/ncf/pytorch/dataset.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/download_dataset.sh b/others/recommendation_systems/ncf/pytorch/download_dataset.sh
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/download_dataset.sh
rename to others/recommendation_systems/ncf/pytorch/download_dataset.sh
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/fp_optimizers.py b/others/recommendation_systems/ncf/pytorch/fp_optimizers.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/fp_optimizers.py
rename to others/recommendation_systems/ncf/pytorch/fp_optimizers.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/load.py b/others/recommendation_systems/ncf/pytorch/load.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/load.py
rename to others/recommendation_systems/ncf/pytorch/load.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/ncf.py b/others/recommendation_systems/ncf/pytorch/ncf.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/ncf.py
rename to others/recommendation_systems/ncf/pytorch/ncf.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/ncf_16.py b/others/recommendation_systems/ncf/pytorch/ncf_16.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/ncf_16.py
rename to others/recommendation_systems/ncf/pytorch/ncf_16.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/ncf_32.py b/others/recommendation_systems/ncf/pytorch/ncf_32.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/ncf_32.py
rename to
others/recommendation_systems/ncf/pytorch/ncf_32.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/neumf.py b/others/recommendation_systems/ncf/pytorch/neumf.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/neumf.py
rename to others/recommendation_systems/ncf/pytorch/neumf.py
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/requirements.txt b/others/recommendation_systems/ncf/pytorch/requirements.txt
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/requirements.txt
rename to others/recommendation_systems/ncf/pytorch/requirements.txt
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/run.sub b/others/recommendation_systems/ncf/pytorch/run.sub
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/run.sub
rename to others/recommendation_systems/ncf/pytorch/run.sub
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/run_train_fp16.sh b/others/recommendation_systems/ncf/pytorch/run_train_fp16.sh
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/run_train_fp16.sh
rename to others/recommendation_systems/ncf/pytorch/run_train_fp16.sh
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/run_train_fp32.sh b/others/recommendation_systems/ncf/pytorch/run_train_fp32.sh
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/run_train_fp32.sh
rename to others/recommendation_systems/ncf/pytorch/run_train_fp32.sh
diff --git a/recommendation/collaborative_filtering/ncf/pytorch/utils.py b/others/recommendation_systems/ncf/pytorch/utils.py
similarity index 100%
rename from recommendation/collaborative_filtering/ncf/pytorch/utils.py
rename to others/recommendation_systems/ncf/pytorch/utils.py
diff --git a/recommendation/ctr/wide_deep/paddlepaddle/README.md b/others/recommendation_systems/wide_deep/paddlepaddle/README.md
similarity index 68%
rename from recommendation/ctr/wide_deep/paddlepaddle/README.md
rename to others/recommendation_systems/wide_deep/paddlepaddle/README.md
index 7576e1acde0aad4a068c787eb7e886016d8ebf28..18593c27ddf3cb036947d923f990a8a07e74b1b8 100644
--- a/recommendation/ctr/wide_deep/paddlepaddle/README.md
+++ b/others/recommendation_systems/wide_deep/paddlepaddle/README.md
@@ -2,11 +2,13 @@
## Model description
-"Wide & Deep Learning" is a machine learning model architecture that combines the strengths of both memorization and generalization. It was introduced by Google in the context of recommender systems, particularly for improving the performance of large-scale recommendation tasks.
+"Wide&Deep Learning" is a machine learning model architecture that combines the strengths of both memorization and
+generalization. It was introduced by Google in the context of recommender systems, particularly for improving the
+performance of large-scale recommendation tasks.
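For intuition, the memorization/generalization split described above can be sketched as a linear "wide" head added to an embedding-plus-MLP "deep" head. The sketch below is illustrative only and unrelated to the PaddleRec implementation; the class name `TinyWideDeep` and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TinyWideDeep(nn.Module):
    """Minimal Wide&Deep sketch: a linear 'wide' part for memorization
    plus an embedding + MLP 'deep' part for generalization."""

    def __init__(self, num_wide_features, num_sparse_ids, emb_dim=8, num_deep_fields=4):
        super().__init__()
        self.wide = nn.Linear(num_wide_features, 1)             # memorization over raw / crossed features
        self.embedding = nn.Embedding(num_sparse_ids, emb_dim)  # shared embedding table for sparse ids
        self.deep = nn.Sequential(
            nn.Linear(num_deep_fields * emb_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, wide_x, sparse_ids):
        deep_x = self.embedding(sparse_ids).flatten(start_dim=1)  # (batch, fields * emb_dim)
        logit = self.wide(wide_x) + self.deep(deep_x)             # joint additive wide + deep logit
        return torch.sigmoid(logit).squeeze(-1)

# toy usage with random inputs
model = TinyWideDeep(num_wide_features=10, num_sparse_ids=1000)
wide_x = torch.randn(4, 10)
sparse_ids = torch.randint(0, 1000, (4, 4))
print(model(wide_x, sparse_ids).shape)  # torch.Size([4])
```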
## Step 1: Installation
-```bash
+```sh
git clone -b release/2.3.0 https://github.com/PaddlePaddle/PaddleRec.git
cd PaddleRec
@@ -15,7 +17,7 @@ pip3 install -r requirements.txt
## Step 2: Preparing datasets
-```bash
+```sh
# Download dataset
pushd datasets/criteo/
sh run.sh
@@ -24,7 +26,7 @@ popd
## Step 3: Training
-```bash
+```sh
cd models/rank/wide_deep
export FLAGS_cudnn_exhaustive_search=True
export FLAGS_cudnn_batchnorm_spatial_persistent=True
@@ -38,4 +40,5 @@ python3 -u ../../../tools/infer.py -m config_bigdata.yaml
```
## Reference
+
- [PaddleRec](https://github.com/PaddlePaddle/PaddleRec)
diff --git a/recommendation/ctr/xdeepfm/paddlepaddle/README.md b/others/recommendation_systems/xdeepfm/paddlepaddle/README.md
similarity index 51%
rename from recommendation/ctr/xdeepfm/paddlepaddle/README.md
rename to others/recommendation_systems/xdeepfm/paddlepaddle/README.md
index c779e1068deb54c890a43b0f07c82e8d5ba8601e..690c285f3a50d35617292782547566b9d259020e 100644
--- a/recommendation/ctr/xdeepfm/paddlepaddle/README.md
+++ b/others/recommendation_systems/xdeepfm/paddlepaddle/README.md
@@ -1,11 +1,14 @@
-# xDeepFM 
+# xDeepFM
## Model description
-xDeepFM proposes a novel network named Compressed Interaction Network (CIN), which aims to learn high-order feature interactions explicitly. xDeepFM can automatically learn high-order feature interactions in both explicit and implicit fashions, which is of great significance to reducing manual feature engineering work.
+
+xDeepFM proposes a novel network named Compressed Interaction Network (CIN), which aims to learn high-order feature
+interactions explicitly. xDeepFM can automatically learn high-order feature interactions in both explicit and implicit
+fashions, which is of great significance to reducing manual feature engineering work.
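The CIN mentioned above builds explicit high-order interactions by repeatedly taking element-wise products between the current feature maps and the original field embeddings, then compressing them with learned weights. Below is a minimal, assumption-laden PyTorch sketch; the class name `TinyCIN`, the layer sizes, and the 1x1-convolution formulation of the compression step are choices made for the example, not PaddleRec's code.

```python
import torch
import torch.nn as nn

class TinyCIN(nn.Module):
    """Minimal sketch of a Compressed Interaction Network (CIN) as used in xDeepFM.

    Input x0 has shape (batch, m, d): m field embeddings of dimension d. Each layer
    forms all pairwise element-wise products between the previous layer's feature
    maps and x0, then 'compresses' them with one learned weight vector per output map."""

    def __init__(self, num_fields, layer_sizes=(8, 8)):
        super().__init__()
        self.convs = nn.ModuleList()
        prev = num_fields
        for h in layer_sizes:
            # 1x1 conv over the m * prev interaction channels acts as the CIN compression
            self.convs.append(nn.Conv1d(num_fields * prev, h, kernel_size=1))
            prev = h

    def forward(self, x0):
        batch, m, d = x0.shape
        xk, pooled = x0, []
        for conv in self.convs:
            # outer product over the field dimensions: (batch, prev, m, d) -> (batch, prev*m, d)
            z = (xk.unsqueeze(2) * x0.unsqueeze(1)).reshape(batch, -1, d)
            xk = torch.relu(conv(z))           # (batch, h, d)
            pooled.append(xk.sum(dim=-1))      # sum-pool each feature map over the embedding dim
        return torch.cat(pooled, dim=1)        # concatenated maps, fed to the output unit in xDeepFM

# toy usage: 5 fields, 16-dimensional embeddings
cin = TinyCIN(num_fields=5)
print(cin(torch.randn(4, 5, 16)).shape)  # torch.Size([4, 16])
```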
## Step 1: Installation
-```bash
+```sh
git clone -b release/2.3.0 https://github.com/PaddlePaddle/PaddleRec.git
cd PaddleRec
pip3 install -r requirements.txt
@@ -13,7 +16,7 @@ pip3 install -r requirements.txt
## Step 2: Preparing datasets
-```bash
+```sh
pushd datasets/criteo/
sh run.sh
popd
@@ -21,7 +24,7 @@ popd
## Step 3: Training
-```bash
+```sh
cd models/rank/xdeepfm
# Training
@@ -32,10 +35,11 @@ python3 -u ../../../tools/infer.py -m config_bigdata.yaml
```
## Results
-| GPUs | IPS | AUC |
-|-------------|-----------|-------------|
-| BI-V100 x1 | 6000 | 0.7955 |
+
+| GPUs | IPS | AUC |
+|------------|------|--------|
+| BI-V100 x1 | 6000 | 0.7955 |
## Reference
-- [xDeepFM](https://github.com/PaddlePaddle/PaddleRec/tree/master/models/rank/xdeepfm)
+- [PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/models/rank/xdeepfm)
diff --git a/recommendation/collaborative_filtering/README.md b/recommendation/collaborative_filtering/README.md
deleted file mode 100644
index 189096119670cf12e732dad5f0db03eec43719ee..0000000000000000000000000000000000000000
--- a/recommendation/collaborative_filtering/README.md
+++ /dev/null
@@ -1 +0,0 @@
-# Collaborative Filtering
diff --git a/recommendation/ctr/README.md b/recommendation/ctr/README.md
deleted file mode 100644
index dba43edf787c80bb9ad7145a45efec1b974dc232..0000000000000000000000000000000000000000
--- a/recommendation/ctr/README.md
+++ /dev/null
@@ -1 +0,0 @@
-# Click Through Rate
diff --git a/speech/speech_recognition/conformer/paddlepaddle/README.md b/speech/speech_recognition/conformer/paddlepaddle/README.md
deleted file mode 100644
index c08f0350dcf92e4186f95a9f6c11f401c5da9b3a..0000000000000000000000000000000000000000
--- a/speech/speech_recognition/conformer/paddlepaddle/README.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Conformer
-
-## Model description
-Recently Transformer and Convolution neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming Recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this regard, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and CNN based models achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/testother. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters.
-
-## Step 1:Installation
-```bash
-git clone --recursive -b r1.4 https://github.com/PaddlePaddle/PaddleSpeech.git
-cd PaddleSpeech
-pip3 install .
-```
-
-## Step 2:Preparing datasets
-```bash
-cd examples/aishell/asr1/
-bash run.sh --stage 0 --stop_stage 0
-```
-run.sh will download and process the datasets, The download process may be slow, you can download the data_aishell.tgz from [wenet](http://openslr.magicdatatech.com/resources/33/data_aishell.tgz) and put it in the /path/to/PaddleSpeech/dataset/aishell/, then return to execute the above command.
-
-## Step 3:Training
-```bash
-bash run.sh --stage 1 --stop_stage 3
-```
-
-## Results
-| GPUs | IPS | CER |
-| :---------: | :--: | :----: |
-| BI-V100 x 4 | 48.5 | 0.0495(checkpoint 81) |
-