Boosting matrix/synapse by using workers
Last edited: Fri, 13 Dec 2019 09:43:38 +0100
buckaroo@hub.netzgemeinde.eu
By using workers, synapse becomes much more efficient on multi-core machines, although it consumes more memory
I've been running a synapse homeserver for nearly a year and it works just fine, but there's one bottleneck: Python's single-threadedness. It only utilizes one core; the others are mostly idling (or used for the database). Recently I've learned that there's a solution for that (although currently considered experimental): workers. By splitting synapse into a collection of processes which perform different tasks, it's possible to utilize all the cores of the CPU - even running the workers on different machines, load-balancing the tasks, whatever. There are two catches, though:
- The setup becomes more complex. This is rather normal when you make a setup more scalable.
- Since every worker is basically a synapse instance dedicated to a different task, the setup will consume more memory.
There's (incomplete) documentation about this: https://github.com/matrix-org/synapse/blob/master/docs/workers.md . However, there are a few pitfalls to avoid, so I'll publish my configuration.
homeserver.yaml
As stated in the documentation, some changes need to be made to the main homeserver.yaml. You need to add two listeners:

# The TCP replication port
- port: 9092
  bind_addresses: ['::1', '127.0.0.1']
  type: replication
# The HTTP replication port
- port: 9093
  bind_addresses: ['::1', '127.0.0.1']
  type: http
  resources:
    - names: [replication]
And add some configuration items (if you want to have all workers running, that is) - each of these flags disables a function in the main process so that the corresponding worker can take it over:

# Workers
start_pushers: False
notify_appservices: False
send_federation: False
enable_media_repo: False
update_user_directory: False
use_presence: False
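After restarting the main process you can verify that the new replication listeners are up and bound to loopback only (a quick check, assuming the ports above):

ss -tlnp | grep -E ':(9092|9093)'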
Workers
I've put the workers' yamls in a subdirectory called workers:

homeserver.yaml
worker_app: synapse.app.homeserver
daemonize: true
This stub is what lets synctl -a manage the main process together with the workers.
appservice.yaml
worker_app: synapse.app.appservice
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/appservice.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
client_reader.yaml
worker_app: synapse.app.client_reader
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
  - type: http
    bind_addresses: ['::1', '127.0.0.1']
    port: 8086
    resources:
      - names:
          - client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/client_reader.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
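Once this worker is running, you can probe it directly, since /versions is one of the endpoints routed to it (see the proxy configuration below):

curl -s http://127.0.0.1:8086/_matrix/client/versions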
event_creator.yaml
worker_app: synapse.app.event_creator
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
  - type: http
    bind_addresses: ['::1', '127.0.0.1']
    port: 8089
    resources:
      - names:
          - client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/event_creator.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
federation_reader.yaml
worker_app: synapse.app.federation_reader
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
  - type: http
    bind_addresses: ['::1', '127.0.0.1']
    port: 8084
    resources:
      - names:
          - federation
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/federation_reader.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
federation_sender.yaml
worker_app: synapse.app.federation_sender
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/federation_sender.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
frontend_proxy.yaml
worker_app: synapse.app.frontend_proxy
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_main_http_uri: http://127.0.0.1:8008
worker_listeners:
  - type: http
    bind_addresses: ['::1', '127.0.0.1']
    port: 8088
    resources:
      - names:
          - client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/frontend_proxy.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
media_repository.yaml
worker_app: synapse.app.media_repository
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
  - type: http
    bind_addresses: ['::1', '127.0.0.1']
    port: 8085
    resources:
      - names: [media]
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/media_repository.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
pusher.yaml
worker_app: synapse.app.pusher
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/pusher.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
synchrotron.yaml
worker_app: synapse.app.synchrotron
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
  - type: http
    bind_addresses: ['::1', '127.0.0.1']
    port: 8083
    resources:
      - names:
          - client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/synchrotron.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
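A quick liveness probe for this worker (assuming the port above): hitting the sync endpoint without an access token should yield an M_MISSING_TOKEN error from the worker rather than a connection refusal.

curl -s http://127.0.0.1:8083/_matrix/client/r0/sync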
user_dir.yaml
worker_app: synapse.app.user_dir
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
  - type: http
    bind_addresses: ['::1', '127.0.0.1']
    port: 8087
    resources:
      - names:
          - client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/user_dir.pid
worker_log_config: /home/matrix/synapse/workers_log_config.yml
I'm using a shared log config for all the workers (workers_log_config.yml) in the main directory; basically it's just a copy of the main logging config:

version: 1
formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

filters:
  context:
    (): synapse.util.logcontext.LoggingContextFilter
    request: ""

handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: /home/matrix/synapse/workers.log
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
    encoding: utf8
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]

loggers:
  synapse:
    level: ERROR
  synapse.storage.SQL:
    # beware: increasing this to DEBUG will make synapse log sensitive
    # information such as access tokens.
    level: INFO

root:
  level: ERROR
  handlers: [file, console]
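Before starting everything, it's worth making sure the log config (and any worker yaml) actually parses; a quick check, assuming Python with PyYAML is available (which it is in a synapse environment):

python -c "import yaml; yaml.safe_load(open('workers_log_config.yml'))" && echo OK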
Reverse Proxy Settings
I'm using apache2 as a reverse proxy for synapse. Here are my ProxyPass/ProxyPassMatch configurations. I've put them in a single include file - Include /etc/apache2/vhosts.d/proxystatements.include - so I can pull them into both vhosts (443 and 8448) and both synapse ports benefit from the workers; see the vhost sketch after the include file below. I've merged - where possible - some of the regexps from the documentation into single ones:

proxystatements.include
####### Workers #################################
#synapse.app.synchrotron
ProxyPassMatch ^/_matrix/client/(v2_alpha|r0)/sync$ http://127.0.0.1:8083 nocanon
ProxyPassMatch ^/_matrix/client/(api/v1|v2_alpha|r0)/events$ http://127.0.0.1:8083 nocanon
ProxyPassMatch ^/_matrix/client/(api/v1|r0)/initialSync$ http://127.0.0.1:8083 nocanon
ProxyPassMatch ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$ http://127.0.0.1:8083 nocanon
#synapse.app.federation_reader
ProxyPassMatch ^/_matrix/federation/v1/(event/|state/|state_ids/|backfill/|get_missing_events/|publicRooms|query/|make_join/|make_leave/|send_join/|send_leave/|invite/|query_auth/|event_auth/|exchange_third_party_invite/|send/) http://127.0.0.1:8084 nocanon
ProxyPassMatch ^/_matrix/key/v2/query http://127.0.0.1:8084 nocanon
#synapse.app.media_repository
ProxyPass /_matrix/media/ http://127.0.0.1:8085/_matrix/media/ nocanon
#synapse.app.client_reader
ProxyPassMatch ^/_matrix/client/(api/v1|r0|unstable)/(publicRooms|rooms/(.*/joined_members|.*/context/.*|.*/members|.*/state)|login|account/3pid|keys/(query|changes)|voip/turnServer|pushrules/.*|register|.*/messages)$ http://127.0.0.1:8086 nocanon
ProxyPassMatch ^/_matrix/client/versions$ http://127.0.0.1:8086 nocanon
#synapse.app.user_dir
ProxyPassMatch ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$ http://127.0.0.1:8087 nocanon
#synapse.app.frontend_proxy
ProxyPassMatch ^/_matrix/client/(api/v1|r0|unstable)/(keys/upload|presence/[^/]+/status) http://127.0.0.1:8088 nocanon
#synapse.app.event_creator
ProxyPassMatch ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send http://127.0.0.1:8089 nocanon
ProxyPassMatch ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$ http://127.0.0.1:8089 nocanon
ProxyPassMatch ^/_matrix/client/(api/v1|r0|unstable)/(join|profile)/ http://127.0.0.1:8089 nocanon
# Everything else goes to the main process
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
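For completeness, this is roughly how the include sits in a vhost (a minimal sketch - ServerName and the TLS settings are placeholders, not my actual configuration):

<VirtualHost *:443>
    ServerName matrix.example.com
    # ... SSLEngine, certificates and the rest of the vhost ...
    Include /etc/apache2/vhosts.d/proxystatements.include
</VirtualHost>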
Things to consider
As the documentation states, you now have to start/restart/stop the whole setup by using synctl -a ./workers (start|stop|restart) instead of plain synctl.
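For example (run from the synapse directory, so synctl finds homeserver.yaml and picks up every yaml in ./workers):

synctl -a ./workers start     # start the main process and all workers
synctl -a ./workers restart   # restart everything after a config change
synctl -a ./workers stop      # stop everything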