181d2f5793  2023-05-30 03:18:36 +00:00  Use WizardLM instead of Vicuna
951128cf7a  2023-05-29 20:16:31 +00:00  Update README
fd89bcb232  2023-05-29 20:15:59 +00:00  Hide stderr output
404bf34696  2023-05-29 20:03:51 +00:00  Use GPU acceleration for llama.py
2d077d12c0  2023-04-10 16:22:31 +00:00  Fix bug
9a4bcec9cc  2023-04-10 14:33:41 +00:00  Stop after 1024 tokens
3432b3be76  2023-04-10 04:22:47 +00:00  Don't generate infinitely
1e7d8be616  2023-04-10 04:03:37 +00:00  Finally!
285a08c7fa  2023-04-10 03:51:47 +00:00  Return 204 on favicon.ico request, pad prompt with "### Human:", "### Assistant:"
2c3240c676  2023-04-10 03:46:04 +00:00  Fix typo
8395e17380  2023-04-10 03:45:19 +00:00  Larger context
89e79c9ae8  2023-04-10 03:41:58 +00:00  Fix permissions
5b92e1661e  2023-04-10 03:40:19 +00:00  Add llama streaming script
6d7c618a19  2022-07-15 17:17:37 -05:00  Adjust parameters and ignore favicon.ico
d55f6163b6  2022-07-15 17:01:34 -05:00  Print debugging output
ede5a0a5a9  2022-07-15 13:46:14 -05:00  Load and run model
0b70a7303f  2022-07-15 13:40:03 -05:00  Create a simple Python Unix socket HTTP server
db146d8a7b  2022-07-15 13:00:14 -05:00  Initial commit (Anthony Wang)
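
Taken together, the log describes a small Python service that serves LLaMA completions over a Unix-socket HTTP server (0b70a7303f), returns 204 for favicon.ico requests, and pads prompts with "### Human:" / "### Assistant:" turn markers (285a08c7fa). A minimal sketch of that server skeleton, using only the standard library; the socket path and the echoed response body are assumptions, since the actual model call is not shown in the log:

    import os
    import socketserver
    from http.server import BaseHTTPRequestHandler

    SOCKET_PATH = "/tmp/llama.sock"  # hypothetical; the real path is not in the log

    def pad_prompt(user_text):
        # Vicuna/WizardLM-style turn markers, per commit 285a08c7fa
        return f"### Human: {user_text}\n### Assistant:"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/favicon.ico":
                self.send_response(204)  # No Content, per commit 285a08c7fa
                self.end_headers()
                return
            prompt = pad_prompt(self.path.lstrip("/"))
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            # A real implementation would stream model tokens here, stopping
            # after a token limit (commit 9a4bcec9cc); this stub echoes the
            # padded prompt instead.
            self.wfile.write(prompt.encode())

        def address_string(self):
            # Unix sockets have no peer host:port; skip the default lookup
            return "unix"

    class UnixHTTPServer(socketserver.UnixStreamServer):
        pass

    if __name__ == "__main__":
        if os.path.exists(SOCKET_PATH):
            os.unlink(SOCKET_PATH)
        with UnixHTTPServer(SOCKET_PATH, Handler) as server:
            server.serve_forever()

It can be exercised with, e.g., curl --unix-socket /tmp/llama.sock http://localhost/hello.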