chenzhou / bert_bilstm_crf_ner_pytorch
Model cannot be found after training
#I7LJ71 · Backlog
zjhe opened this issue on 2023-07-17 09:47
After training finishes, I can't find the model and don't know where the problem is. Also, training on this data is extremely fast and no errors are reported, so I don't know what's wrong. The log is as follows:

C:\Users\zjhe\PycharmProjects\study\bert_bilstm_crf_ner_pytorch-master\torch_ner\output\20230717085727
2023-07-17 08:57:27,447 - __main__ - INFO - available device: cpu,count_gpu: 0
2023-07-17 08:57:27,447 - __main__ - INFO - ====================== Start Data Pre-processing ======================
2023-07-17 08:57:27,447 - root - INFO - loading labels info from train file and dump in C:\Users\zjhe\PycharmProjects\study\bert_bilstm_crf_ner_pytorch-master\torch_ner\output\20230717085727
2023-07-17 08:57:27,559 - __main__ - INFO - loading labels successful! the size is 15, label is: I-LNAME,I-LABEL,B-DATE,B-LOC,B-ORG,B-LABEL,I-LOC,I-DATE,I-CW,B-LNAME,I-ORG,I-FNAME,B-CW,O,B-FNAME
2023-07-17 08:57:27,560 - __main__ - INFO - loading label2id and id2label dictionary successful!
Some weights of the model checkpoint at C:\Users\zjhe\PycharmProjects\study\bert_bilstm_crf_ner_pytorch-master\torch_ner\bert-base-chinese were not used when initializing BERT_BiLSTM_CRF: ['cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight']
- This IS expected if you are initializing BERT_BiLSTM_CRF from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BERT_BiLSTM_CRF from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BERT_BiLSTM_CRF were not initialized from the model checkpoint at C:\Users\zjhe\PycharmProjects\study\bert_bilstm_crf_ner_pytorch-master\torch_ner\bert-base-chinese and are newly initialized: ['hidden2tag.bias', 'birnn.weight_hh_l0_reverse', 'crf.end_transitions', 'crf.start_transitions', 'hidden2tag.weight', 'birnn.bias_hh_l0_reverse', 'birnn.bias_hh_l0', 'birnn.bias_ih_l0_reverse', 'crf.transitions', 'birnn.weight_ih_l0_reverse', 'birnn.bias_ih_l0', 'birnn.weight_ih_l0', 'birnn.weight_hh_l0']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2023-07-17 08:57:28,493 - __main__ - INFO - loading tokenizer、bert_config and bert_bilstm_crf model successful!
2023-07-17 08:57:28,493 - __main__ - INFO - starting load train data and data_loader...
file_path: C:\Users\zjhe\PycharmProjects\study\bert_bilstm_crf_ner_pytorch-master\torch_ner\data\train.txt
get_input_examples,len(lines): 957
['O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O B-CW I-CW O', '在 重 大 的 人 生 事 件 或 经 历 军 队 体 制 编 制 改 革 时 , 存 在 焦 虑 性 尤 为 明 显 , 特 别 是 面 临 家 庭 带 来 的 种 种 困 扰 , 它 很 容 易 影 响 年 轻 军 官 。']
convert examples: 0%| | 0/957 [00:00<?, ?it/s]
2023-07-17 08:57:28,634 - processor - INFO - ====================================================================== Example ======================================================================
2023-07-17 08:57:28,634 - processor - INFO - guid: 0
2023-07-17 08:57:28,634 - processor - INFO - tokens: ['[CLS]', '早', '在', '5', '月', '1', '5', '日', ',', '武', '汉', '警', '方', '在', '腾', '讯', '手', '机', '管', '家', '、', '腾', '讯', '电', '脑', '管', '家', '及', '腾', '讯', '守', '[SEP]']
2023-07-17 08:57:28,634 - processor - INFO - input_ids: 101 3193 1762 126 3299 122 126 3189 8024 3636 3727 6356 3175 1762 5596 6380 2797 3322 5052 2157 510 5596 6380 4510 5554 5052 2157 1350 5596 6380 2127 102
2023-07-17 08:57:28,634 - processor - INFO - token_type_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2023-07-17 08:57:28,634 - processor - INFO - attention_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2023-07-17 08:57:28,634 - processor - INFO - label_ids: 13 13 13 2 7 7 7 7 13 4 10 10 10 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 4 10 10 13
2023-07-17 08:57:28,641 - processor - INFO - ====================================================================== Example ======================================================================
2023-07-17 08:57:28,641 - processor - INFO - guid: 1
2023-07-17 08:57:28,642 - processor - INFO - tokens: ['[CLS]', '1', '1', '月', '7', '日', ',', '我', '们', '踏', '上', '了', '这', '片', '红', '土', '地', '寻', '访', '英', '雄', '[UNK]', '[UNK]', '百', '岁', '老', '兵', '陈', '训', '杨', ':', '[SEP]']
2023-07-17 08:57:28,642 - processor - INFO - input_ids: 101 122 122 3299 128 3189 8024 2769 812 6672 677 749 6821 4275 5273 1759 1765 2192 6393 5739 7413 100 100 4636 2259 5439 1070 7357 6378 3342 8038 102
2023-07-17 08:57:28,642 - processor - INFO - token_type_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2023-07-17 08:57:28,642 - processor - INFO - attention_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2023-07-17 08:57:28,642 - processor - INFO - label_ids: 13 2 7 7 7 7 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 5 1 1 1 14 9 0 13 13
2023-07-17 08:57:28,647 - processor - INFO - ====================================================================== Example ======================================================================
2023-07-17 08:57:28,647 - processor - INFO - guid: 2
2023-07-17 08:57:28,647 - processor - INFO - tokens: ['[CLS]', '在', '重', '大', '的', '人', '生', '事', '件', '或', '经', '历', '军', '队', '体', '制', '编', '制', '改', '革', '时', ',', '存', '在', '焦', '虑', '性', '尤', '为', '明', '显', '[SEP]']
2023-07-17 08:57:28,647 - processor - INFO - input_ids: 101 1762 7028 1920 4638 782 4495 752 816 2772 5307 1325 1092 7339 860 1169 5356 1169 3121 7484 3198 8024 2100 1762 4193 5991 2595 2215 711 3209 3227 102
2023-07-17 08:57:28,647 - processor - INFO - token_type_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2023-07-17 08:57:28,647 - processor - INFO - attention_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2023-07-17 08:57:28,647 - processor - INFO - label_ids: 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
convert examples: 100%|██████████| 957/957 [00:05<00:00, 183.91it/s]
2023-07-17 08:57:33,833 - __main__ - INFO - loading train data_set and data_loader successful!
2023-07-17 08:57:33,833 - __main__ - INFO - ====================== End Data Pre-processing ======================
C:\ProgramData\Anaconda3\envs\zjhe\lib\site-packages\transformers\optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
2023-07-17 08:57:33,838 - __main__ - INFO - loading AdamW optimizer、Warmup LinearSchedule and calculate optimizer parameter successful!
2023-07-17 08:57:33,838 - __main__ - INFO - ====================== Running training ======================
2023-07-17 08:57:33,838 - __main__ - INFO - Num Examples: 3, Num Batch Step: 1, Num Epochs: 2, Num scheduler steps:2
Epoch: 0%| | 0/2 [00:00<?, ?it/s]
2023-07-17 08:57:33,839 - __main__ - INFO - ########[Epoch: 0/2]########
DataLoader: 0%| | 0/1 [00:00<?, ?it/s]
2023-07-17 08:57:33,840 - __main__ - INFO - ####[Step: 0/1]####
DataLoader: 100%|██████████| 1/1 [00:01<00:00, 1.12s/it]
Epoch: 50%|█████ | 1/2 [00:01<00:01, 1.12s/it]
2023-07-17 08:57:34,962 - __main__ - INFO - ########[Epoch: 1/2]########
DataLoader: 0%| | 0/1 [00:00<?, ?it/s]
2023-07-17 08:57:34,964 - __main__ - INFO - ####[Step: 0/1]####
DataLoader: 100%|██████████| 1/1 [00:01<00:00, 1.31s/it]
Epoch: 100%|██████████| 2/2 [00:02<00:00, 1.21s/it]
2023-07-17 08:57:36,271 - __main__ - INFO - NER model training successful!!!
Process finished with exit code 0
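The "Example" dumps in the log show each character-tagged sentence being turned into fixed-width feature rows (input_ids, token_type_ids, attention_mask, label_ids). A minimal sketch of that conversion, assuming a character-to-id vocab and a label2id mapping; the function name and signature here are illustrative, not this repository's actual API:

```python
def convert_example(chars, labels, vocab, label2id, max_len=32):
    """Turn one character-tagged sentence into the fixed-width feature rows
    seen in the log: input_ids, token_type_ids, attention_mask, label_ids."""
    # Truncate so [CLS] and [SEP] still fit; this is why long sentences in
    # the dumps are cut off mid-phrase just before '[SEP]'.
    chars = chars[:max_len - 2]
    labels = labels[:max_len - 2]
    tokens = ["[CLS]"] + chars + ["[SEP]"]
    # Characters missing from the vocab map to [UNK] (id 100 in the dumps).
    input_ids = [vocab.get(t, vocab["[UNK]"]) for t in tokens]
    # The [CLS]/[SEP] positions carry the 'O' label id, matching the
    # label_ids rows above where 13 == 'O'.
    label_ids = [label2id["O"]] + [label2id[l] for l in labels] + [label2id["O"]]
    token_type_ids = [0] * len(input_ids)   # single-sentence input
    attention_mask = [1] * len(input_ids)   # no padding in this sketch
    return input_ids, token_type_ids, attention_mask, label_ids
```

In the real project this mapping is done by the Hugging Face tokenizer for bert-base-chinese; the sketch only makes the row structure explicit.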
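Two details in the log stand out: it ends with "NER model training successful!!!" without ever printing a checkpoint path, and it reports "Num Examples: 3" even though train.txt has 957 lines, which would explain why training took only seconds. A generic stdlib sketch for checking whether anything was actually written under the timestamped output directory; the extensions listed are common PyTorch/Transformers conventions, not this project's confirmed file naming:

```python
import os

def find_model_files(output_dir, exts=(".pt", ".ckpt", ".bin")):
    """Walk the output directory tree and list anything that looks like a
    saved model checkpoint, newest first."""
    hits = []
    for root, _dirs, files in os.walk(output_dir):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(root, name)
                hits.append((os.path.getmtime(path), path))
    # Sort by modification time so the most recent checkpoint comes first.
    return [path for _mtime, path in sorted(hits, reverse=True)]
```

If this returns nothing for the 20230717085727 directory, the training loop most likely never reached its save step, which would be consistent with only 3 examples and 1 batch per epoch.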
Comments (9)
Participants (1)
Language: Python
https://gitee.com/chenzhouwy/bert_bilstm_crf_ner_pytorch.git
git@gitee.com:chenzhouwy/bert_bilstm_crf_ner_pytorch.git