From 457a36b11e67aeb2428d10f5196f2e6f9759c28c Mon Sep 17 00:00:00 2001
From: "jino.yang"
Date: Fri, 26 Apr 2024 11:37:59 +0800
Subject: [PATCH] Switch DCNv2 branch to pytorch_1.11 and update h5py to 3.10.0

link issue: [I9JUNE](https://gitee.com/deep-spark/deepsparkhub/issues/I9JUNE)

Signed-off-by: jino.yang
---
 cv/detection/centernet/pytorch/README.md |  8 +++++++-
 .../bart_fairseq/pytorch/README.md       | 20 +++++++++++--------
 .../bert/pytorch/requirements.txt        |  2 +-
 3 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/cv/detection/centernet/pytorch/README.md b/cv/detection/centernet/pytorch/README.md
index f02cd94be..581cdb58d 100644
--- a/cv/detection/centernet/pytorch/README.md
+++ b/cv/detection/centernet/pytorch/README.md
@@ -1,6 +1,7 @@
 # CenterNet
 
 ## Model description
+
 Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1% AP at 142 FPS, 37.4% AP at 52 FPS, and 45.1% AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.
 
 ## Step 1: Installing packages
@@ -9,7 +10,7 @@ Detection identifies objects as axis-aligned boxes in an image. Most successful
 pip3 install -r requirements.txt
 
 # Compile deformable convolutional(DCNv2)
 cd ./src/lib/models/networks/
-git clone -b pytorch_1.9 https://github.com/lbin/DCNv2.git
+git clone -b pytorch_1.11 https://github.com/lbin/DCNv2.git
 cd ./DCNv2/
 python3 setup.py build develop
@@ -18,6 +19,7 @@ python3 setup.py build develop
 ## Step 2: Preparing datasets
 
 ### Go back to the "pytorch/" directory
+
 ```bash
 cd ../../../../../
 ```
@@ -65,21 +67,25 @@ mv resnet18-5c106cde.pth /root/.cache/torch/hub/checkpoints/
 ## Step 3: Training
 
 ### Setup CUDA_VISIBLE_DEVICES variable
+
 ```bash
 export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
 ```
 
 ### On single GPU
+
 ```bash
 cd ./cv/detection/centernet/pytorch/src
 python3 main.py ctdet --batch_size 32 --master_batch 15 --lr 1.25e-4 --gpus 0
 ```
 
 ### Multiple GPUs on one machine
+
 ```bash
 cd ./cv/detection/centernet/pytorch/src
 bash run.sh
 ```
 
 ## Reference
+
 https://github.com/bubbliiiing/centernet-pytorch
diff --git a/nlp/language_model/bart_fairseq/pytorch/README.md b/nlp/language_model/bart_fairseq/pytorch/README.md
index 4086ab59a..e87f30d52 100644
--- a/nlp/language_model/bart_fairseq/pytorch/README.md
+++ b/nlp/language_model/bart_fairseq/pytorch/README.md
@@ -1,15 +1,17 @@
 # BART
 
 ## Model description
+
 BART is sequence-to-sequence model trained with denoising as pretraining
 objective. We show that this pretraining objective is more generic and
-show that we can match RoBERTa results on SQuAD and GLUE and gain 
-state-of-the-art results on summarization (XSum, CNN dataset), 
+show that we can match RoBERTa results on SQuAD and GLUE and gain
+state-of-the-art results on summarization (XSum, CNN dataset),
 long form generative question answering (ELI5) and dialog response
-genration (ConvAI2). 
+genration (ConvAI2).
 
 ## Step 1: Installation
-Bart model is using Fairseq toolbox. Before you run this model, 
+
+Bart model is using Fairseq toolbox. Before you run this model,
 you need to setup Fairseq first.
 
 ```bash
@@ -39,20 +41,22 @@ tar -xzvf bart.large.tar.gz
 ```
 
 ## Step 3: Training
+
 ```bash
 # Finetune on CLUE RTE task
 bash bart.sh
 
 # Inference on GLUE RTE task
-`
 python3 bart.py
 ```
 
 ## Results
-| GPUs | QPS | Train Epochs | Accuracy |
-|------|-----|--------------|------|
-| BI-v100 x8 | 113.18 | 10 | 83.8 |
+
+| GPUs       | QPS    | Train Epochs | Accuracy |
+| ------------ | -------- | -------------- | ---------- |
+| BI-v100 x8 | 113.18 | 10 | 83.8 |
 
 ## Reference
+
 - [Fairseq](https://github.com/facebookresearch/fairseq/tree/v0.10.2)
 
diff --git a/nlp/language_model/bert/pytorch/requirements.txt b/nlp/language_model/bert/pytorch/requirements.txt
index 2f2a21621..e15742c21 100644
--- a/nlp/language_model/bert/pytorch/requirements.txt
+++ b/nlp/language_model/bert/pytorch/requirements.txt
@@ -1,6 +1,6 @@
 # progress bars in model download and training scripts
 boto3==1.14.0
-h5py==2.10.0
+h5py==3.10.0
 html2text==2020.1.16
 ipdb==0.13.2
 nltk==3.5
-- 
Gitee
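
A note on the `h5py==3.10.0` bump in this patch: moving from h5py 2.x to 3.x crosses a known behavior change, where variable-length string data read from HDF5 files comes back as `bytes` in 3.x rather than `str` as in 2.x. As a minimal sketch (the helper name `as_text` is hypothetical, not part of the patched repository), downstream code can normalize both forms:

```python
def as_text(value):
    # h5py 3.x returns bytes for variable-length string reads;
    # h5py 2.x returned str. Accept either and return str.
    if isinstance(value, bytes):
        return value.decode("utf-8")
    return value

print(as_text(b"squad"))  # bytes, as read back under h5py 3.x -> squad
print(as_text("squad"))   # str, as read back under h5py 2.x  -> squad
```

h5py 3.x also exposes `Dataset.asstr()` for decoding strings at read time; the helper above is just a dependency-free variant for code paths that may see either version.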