e61a793dd6  Adjust training parameters to train gpt2-large  (2022-07-16 20:20:34 -05:00)
d47afd47a3  Adjust training parameters  (2022-07-16 11:17:56 -05:00)
47407b9fb6  Use gpt2-large instead of distilgpt2  (2022-07-15 18:48:19 -05:00)
f08e5bfc5f  Reformat code with autopep8  (2022-02-22 17:55:04 -06:00)
8dab77d61b  Set output directory correctly  (2022-02-22 17:51:52 -06:00)
1c43115cd6  Split data into chunks and save model  (2022-02-22 17:43:29 -06:00)
d191b6204f  Move train.py to train_lstm.py and add new transformers training code  (2022-02-22 16:12:13 -06:00)
fb9b81284b  Optimize LSTM training  (2022-02-22 12:36:52 -06:00)
5f9292e242  Save entire model after training  (2022-02-21 16:47:59 -06:00)
6c7935489c  Fix args formatting for train.py  (2022-02-21 16:40:09 -06:00)
2f05004e4a  Get predictions to actually work  (2022-02-21 15:49:39 -06:00)
dc6c6e5aa6  Reformat parser so it looks nicer  (2022-02-21 15:35:55 -06:00)
b991260f59  Reformat using autopep8  (2022-02-21 15:33:17 -06:00)
a3d0f4911d  Specify types for train.py args  (2022-02-21 15:31:42 -06:00)
e7a178e2ca  Add args for model parameters  (2022-02-21 15:25:57 -06:00)
8b86ce3f65  More cleanup  (2022-02-21 15:20:00 -06:00)
9e84768780  Fix imports  (2022-02-21 14:57:40 -06:00)
3c234a0376  Copy old code to new train.py file  (2022-02-21 14:55:18 -06:00)