
Commit aaa0cd3

remove async replication that is AF specific (#402)
1 parent 3819444 commit aaa0cd3

1 file changed

site/content/3.12/deploy/architecture/replication.md

Lines changed: 1 addition & 62 deletions
@@ -50,65 +50,4 @@ In addition to the replication factor, there is a **writeConcern** that
 specifies the minimum number of in-sync followers required for write operations.
 If you specify the `writeConcern` parameter with a value greater than `1`, the
 collection's leader shards are locked down for writing as soon as too few
-followers are available.
-
-## Asynchronous replication
-
-When using asynchronous replication, _Followers_ connect to a _Leader_ and apply
-all the events from the Leader's log in the same order locally. As a result, the
-_Followers_ end up with the same state of data as the _Leader_.
-
-_Followers_ are only eventually consistent with the _Leader_.
-
-Transactions are honored in replication, i.e. transactional write operations
-become visible on _Followers_ atomically.
-
-All write operations are logged to the Leader's _write-ahead log_. Therefore,
-asynchronous replication in ArangoDB cannot be used for write-scaling. The main
-purposes of this type of replication are to provide read-scalability and
-hot standby servers.
-
-It is possible to connect multiple _Followers_ to the same _Leader_. _Followers_
-should be used as read-only instances, and no user-initiated write operations
-should be carried out on them. Otherwise, data conflicts may occur that cannot
-be solved automatically, and this makes the replication stop.
-
-In an asynchronous replication scenario, _Followers_ _pull_ changes
-from the _Leader_. _Followers_ need to know which _Leader_ they should
-connect to, but a _Leader_ is not aware of the _Followers_ that replicate from it.
-When the network connection between the _Leader_ and a _Follower_ goes down, write
-operations on the _Leader_ can continue normally. When the network is up again, _Followers_
-can reconnect to the _Leader_ and transfer the remaining changes. This
-happens automatically, provided _Followers_ are configured appropriately.
-
-### Replication lag
-
-As described above, write operations are applied first on the _Leader_ and then
-on the _Followers_.
-
-For example, let's assume a write operation is executed on the _Leader_
-at point in time _t0_. To make a _Follower_ apply the same operation, it must first
-fetch the write operation's data from the Leader's write-ahead log, then parse it and
-apply it locally. This happens at some point in time after _t0_, let's say _t1_.
-
-The difference between _t1_ and _t0_ is called the _replication lag_, and it is unavoidable
-in asynchronous replication. The amount of replication _lag_ depends on many factors, a
-few of which are:
-
-- the network capacity between the _Followers_ and the _Leader_
-- the load of the _Leader_ and the _Followers_
-- the frequency at which _Followers_ poll the _Leader_ for updates
-
-Between _t0_ and _t1_, the state of data on the _Leader_ is newer than the state of data
-on the _Followers_. At point in time _t1_, the state of data on the _Leader_ and _Followers_
-is consistent again (provided no new data modifications happened on the _Leader_ in
-between). Thus, the replication leads to an _eventually consistent_ state of data.
-
-### Replication overhead
-
-As the _Leader_ servers log every write operation in the _write-ahead log_
-anyway, replication doesn't cause any extra overhead on the _Leader_. However, it
-does cause some overhead for the _Leader_ to serve the incoming read
-requests of the _Followers_. Returning the requested data is a trivial
-task for the _Leader_, though, and should not result in a notable performance
-degradation in production.
+followers are available.
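
The `writeConcern` mentioned in the retained lines is a per-collection property that is set together with `replicationFactor` when the collection is created. The following TypeScript sketch is illustrative only and not part of this commit; it assumes the arangojs driver and a cluster Coordinator reachable at `http://localhost:8529`, and the collection name `myCollection` is a placeholder.

```ts
// Illustrative sketch only (not part of this commit). Assumes the arangojs
// driver and a cluster Coordinator at http://localhost:8529.
import { Database } from "arangojs";

const db = new Database({ url: "http://localhost:8529" });

async function main(): Promise<void> {
  // Keep each shard on 3 servers (1 leader + 2 followers). With writeConcern 2,
  // the leader shard is locked down for writing as soon as too few in-sync
  // followers are available, as described in the retained documentation lines.
  const collection = await db.createCollection("myCollection", {
    replicationFactor: 3,
    writeConcern: 2,
  });
  console.log(`Created collection: ${collection.name}`);
}

main().catch(console.error);
```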
