Add Matrix support 🎉
parent 6513d3369e
commit 3bc9e434ba
@@ -5,9 +5,9 @@ Fediverse ebooks bot using neural networks
 ## Usage
 
-First, install Python dependencies using your distro's package manager or `pip`: [psycopg2](https://www.psycopg.org), [torch](https://pytorch.org/), [transformers](https://huggingface.co/docs/transformers/index), and [datasets](https://huggingface.co/docs/datasets/). Additionally, for Mastodon and Pleroma, install [Mastodon.py](https://mastodonpy.readthedocs.io/en/stable/), and for Misskey, install [Misskey.py](https://misskeypy.readthedocs.io/ja/latest/). If your database or platform isn't supported, don't worry! It's easy to add support for other platforms and databases, and contributions are welcome!
+First, install Python dependencies using your distro's package manager or `pip`: [psycopg2](https://www.psycopg.org), [torch](https://pytorch.org/), [transformers](https://huggingface.co/docs/transformers/index), and [datasets](https://huggingface.co/docs/datasets/). Additionally, for Mastodon and Pleroma, install [Mastodon.py](https://mastodonpy.readthedocs.io/en/stable/); for Misskey, install [Misskey.py](https://misskeypy.readthedocs.io/ja/latest/); and for Matrix, install [simplematrixbotlib](https://simple-matrix-bot-lib.readthedocs.io/en/latest/index.html). If your database or platform isn't supported, don't worry! It's easy to add support for other platforms and databases, and contributions are welcome!
 
-Now generate the training data from your fediverse server's database using `python data.py -d 'dbname=test user=postgres password=secret host=localhost port=5432'`. You can skip this step if you have collected training data from another source.
+Now generate the training data from your fediverse server's database using `python data.py -d 'dbname=test user=postgres password=secret host=localhost port=5432'`. Generating the training data from the database is not yet supported for Matrix. You can skip this step if you have collected training data from another source.
 
 Next, train the network with `python train.py`, which may take several hours. It's a lot faster when using a GPU. If you need advanced features when training, you can also train using [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py).
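The data-generation command above passes the whole libpq connection string as one quoted shell argument. A minimal sketch of how that flag might be wired up (illustrative only — the `-d` usage comes from the README, but the option name `--db` and the wiring are assumptions; the string would ultimately be handed to `psycopg2.connect()`):

```python
from argparse import ArgumentParser

# Hypothetical sketch of data.py's command line: the entire libpq connection
# string travels as a single -d argument, which is why the README quotes it.
parser = ArgumentParser()
parser.add_argument('-d', '--db', required=True,
                    help='libpq connection string, e.g. "dbname=test user=postgres"')

args = parser.parse_args(
    ['-d', 'dbname=test user=postgres password=secret host=localhost port=5432'])
print(args.db)
```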

bot.py (20 changed lines)
@@ -5,7 +5,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 parser = ArgumentParser()
-parser.add_argument('-b', '--backend', choices=['mastodon', 'misskey'], default='mastodon',
+parser.add_argument('-b', '--backend', choices=['mastodon', 'misskey', 'matrix'], default='mastodon',
                     help='fediverse server type')
 parser.add_argument('-i', '--instance', help='Mastodon instance hosting the bot')
 parser.add_argument('-t', '--token', help='Mastodon application access token')
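Run in isolation, the changed argument behaves like this (a self-contained sketch; only the `add_argument` call itself comes from bot.py):

```python
from argparse import ArgumentParser

# The --backend flag after this commit: 'matrix' joins the choices list,
# so argparse rejects any other value at parse time.
parser = ArgumentParser()
parser.add_argument('-b', '--backend', choices=['mastodon', 'misskey', 'matrix'],
                    default='mastodon', help='fediverse server type')

print(parser.parse_args([]).backend)                # mastodon (the default)
print(parser.parse_args(['-b', 'matrix']).backend)  # matrix
```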

@@ -81,6 +81,7 @@ print(output)
 post = output.split('\n')[0]
 if len(post) < 200:
     post = output.split('\n')[0] + '\n' + output.split('\n')[1]
+post = post[:500]
 
 # Post it!
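The new `post = post[:500]` line moves the 500-character cap ahead of the backend branches, so each backend receives an already-trimmed post. As a pure function the trimming logic reads roughly as follows (the name `trim_post` and the `len(lines) > 1` guard are my additions, not in bot.py):

```python
# Sketch of bot.py's post trimming after this commit. trim_post and max_len
# are illustrative names; the extra len(lines) > 1 guard avoids the IndexError
# the original code would hit on single-line model output.
def trim_post(output: str, max_len: int = 500) -> str:
    lines = output.split('\n')
    post = lines[0]
    if len(post) < 200 and len(lines) > 1:
        post = lines[0] + '\n' + lines[1]  # short first line: include a second
    return post[:max_len]

print(len(trim_post('x' * 600)))        # capped at 500
print(trim_post('hi\nthere\nignored'))  # first two lines only
```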

@@ -91,9 +92,22 @@ if args.backend == 'mastodon':
         access_token=args.token,
         api_base_url=args.instance
     )
-    mastodon.status_post(post[:500])
+    mastodon.status_post(post)
 
 elif args.backend == 'misskey':
     from Misskey import Misskey
 
     misskey = Misskey(args.instance, i=args.token)
-    misskey.notes_create(post[:500])
+    misskey.notes_create(post)
+
+elif args.backend == 'matrix':
+    import simplematrixbotlib as botlib
+
+    creds = botlib.Creds(args.instance, 'ebooks', args.token)
+    bot = botlib.Bot(creds)
+
+    @bot.listener.on_startup
+    async def room_joined(room_id):
+        await bot.api.send_text_message(room_id=room_id, message=post)
+
+    bot.run()
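The Matrix branch posts once per joined room via simplematrixbotlib's `on_startup` listener and then blocks in `bot.run()`. The decorator-registration pattern can be sketched with the stdlib alone (this mimics the shape only; the `Listener` class below is my stand-in, not the library's actual API):

```python
import asyncio

# Stdlib stand-in for the listener pattern used above: a decorator collects
# async callbacks, and the runner awaits each one per room at startup.
class Listener:
    def __init__(self):
        self._startup = []

    def on_startup(self, fn):
        self._startup.append(fn)
        return fn

    async def fire(self, room_id):
        for fn in self._startup:
            await fn(room_id)

listener = Listener()
sent = []

@listener.on_startup
async def room_joined(room_id):
    sent.append(room_id)  # a real bot would send the post to this room

asyncio.run(listener.fire('!room:example.org'))
print(sent)  # ['!room:example.org']
```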