Project Overview
Building a fault-tolerant key/value storage service on top of Raft that supports `Get(key)`, `Put(key, value)` and `Append(key, value)`.
Implementation
Architecture
A client connects to a clerk. The clerk sends requests to the kvserver that holds the leader Raft instance.
The files are `client.go` and `server.go`.
Client
A client does the following things:
- Sends the request to the kvserver.
- Handles the reply. When something goes wrong, such as sending the request to a follower kvserver or hitting a timeout, it resends the request until it succeeds.

The client only receives an OK when it communicates with the kvserver that holds the leader Raft instance. Otherwise it has to try the next kvserver in its list until it finds the leader.
From the behavior of a client, it is easy to see that a client (clerk) holds a list of servers (`servers []*labrpc.ClientEnd`) and the index of the leader (`leader int`). Other fields will be discussed in the following sections.
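The clerk described above can be sketched as follows. This is a minimal, standalone sketch: `GetFn` is a hypothetical stand-in for an RPC stub to one server, since the real clerk holds `[]*labrpc.ClientEnd`, and `MakeClerk` and the retry shape are illustrative rather than the lab's required API.

```go
package main

import "math/rand"

// GetFn stands in for an RPC stub to one kvserver in this sketch;
// the real clerk would hold []*labrpc.ClientEnd instead.
type GetFn func(key string, clientId int64, seq int) (value string, isLeader bool)

// Clerk keeps the server list, the index of the last known leader,
// a randomly generated clientId, and an auto-increment seq.
type Clerk struct {
	servers  []GetFn
	leader   int
	clientId int64
	seq      int
}

func MakeClerk(servers []GetFn) *Clerk {
	return &Clerk{servers: servers, clientId: rand.Int63()}
}

// Get retries, moving on to the next server, until one answers as leader.
func (ck *Clerk) Get(key string) string {
	ck.seq++ // a fresh sequence number for this request
	for {
		value, isLeader := ck.servers[ck.leader](key, ck.clientId, ck.seq)
		if isLeader {
			return value
		}
		ck.leader = (ck.leader + 1) % len(ck.servers) // try the next one
	}
}
```

Remembering `leader` between calls means a clerk usually hits the right server on the first try once the cluster is stable.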
Server
First of all, a server holds a Raft instance and its `applyCh`.
When a log entry is committed by the Raft cluster, a message is sent on `applyCh`, so the kvserver needs to read messages from the channel repeatedly (`for applyMsg := range kv.applyCh`) and handle each one. This should run in a dedicated goroutine.
When receiving a committed command message, the server should apply it to its own state. A hash map (`kv map[string]string`) is enough: according to the type of command, update the entries in the hash map.
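The state update can be sketched like this; the `Op` struct and its field names are illustrative choices for this write-up, not mandated by the lab.

```go
package main

// Op mirrors a command taken from the Raft log; the field names
// (Type, Key, Value, ClientId, Seq) follow the text but are illustrative.
type Op struct {
	Type     string // "Get", "Put" or "Append"
	Key      string
	Value    string
	ClientId int64
	Seq      int
}

// applyOp updates the in-memory hash map according to the command type.
func applyOp(kv map[string]string, op Op) {
	switch op.Type {
	case "Put":
		kv[op.Key] = op.Value // overwrite
	case "Append":
		kv[op.Key] += op.Value // concatenate; missing key acts as ""
	// "Get" mutates nothing; its value is read out when notifying the RPC.
	}
}
```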
Order Keeping
The client sends requests serially.
On the client side, it keeps `clientId` and `seq`:
- `clientId` is generated randomly
- `seq` is an auto-increment sequence number starting from 0
On the server side, it keeps a `lastApplied map[int64]int` to remember the executed sequence number of each client. The key of `lastApplied` is the client id and the value is the last applied sequence number of that client.
When receiving messages from `applyCh`, the kvserver checks whether the `seq` is exactly the next one expected: `op.Seq == kv.lastApplied[op.ClientId] + 1`. If the sequence number is smaller, the message is out of date: do nothing to `kv` but still notify the waiting RPC. If the sequence number is bigger, return directly without applying (after a timeout the client will resend the request).
Update `lastApplied` after each eligible execution.
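The whole eligibility check can be sketched as one function; `tryApply` and its signature are hypothetical names for illustration, and the `Op` fields follow the text above.

```go
package main

// Op mirrors a command taken from the Raft log; field names follow the text.
type Op struct {
	Type     string // "Put" or "Append" ("Get" changes nothing)
	Key      string
	Value    string
	ClientId int64
	Seq      int
}

// tryApply executes op only when it is exactly the next sequence number
// expected from its client. It reports whether the state changed and
// whether the waiting RPC should be notified.
func tryApply(kv map[string]string, lastApplied map[int64]int, op Op) (applied, notify bool) {
	switch {
	case op.Seq == lastApplied[op.ClientId]+1:
		// Exactly eligible: execute and record it.
		switch op.Type {
		case "Put":
			kv[op.Key] = op.Value
		case "Append":
			kv[op.Key] += op.Value
		}
		lastApplied[op.ClientId] = op.Seq
		return true, true
	case op.Seq <= lastApplied[op.ClientId]:
		// Too small: a duplicate that was already executed.
		// Do nothing to kv, but still notify the waiting RPC.
		return false, true
	default:
		// Too big: out of order; the client will resend after a timeout.
		return false, false
	}
}
```

Notifying on duplicates is what makes a resent `Put`/`Append` return OK without being executed twice.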
Notifying Mechanism
For each request handled by `KVServer#Get` or `KVServer#PutAppend`, a channel is set up for the request's lifecycle.
We use a map as a kvserver field. The key is `ClientId-Seq`, which uniquely identifies every request; the value is a `NotifyCh`.
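A minimal sketch of that field, assuming the key is formatted as the string "ClientId-Seq" and the channel carries the `Get` result (empty for `Put`/`Append`); the names here are illustrative.

```go
package main

import "fmt"

// The notify map would be a field of the kvserver: one entry per
// in-flight request. chan string is an assumption of this sketch;
// the channel could equally carry a richer result struct.
type KVServer struct {
	notifyChs map[string]chan string
}

// notifyKey uniquely identifies a request as "ClientId-Seq".
func notifyKey(clientId int64, seq int) string {
	return fmt.Sprintf("%d-%d", clientId, seq)
}
```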
In the kvserver's `Get` and `PutAppend` methods, the channel is created and registered in the map. In the kvserver's `apply` method, after a command is executed, the result is sent on the matching channel to notify the waiting RPC handler. The `Get` and `PutAppend` methods then wait on the channel, with a timeout so that a request which can never commit does not block forever. At last, the channel is deleted to free memory.
Concurrency Control
The client does not need concurrency control, as it sends requests to the kvserver from a single thread.
A lock is necessary when a kvserver modifies its fields. We try to hold the lock for as short a time as possible: on the one hand this gives better performance, and on the other hand it reduces the risk of deadlock.
Note that any mutex must be unlocked before the goroutine blocks, for example while waiting for a message from a channel.
Test
Passed.
Conclusion
In this part of the lab we did several things:
- Communications between objects
- The usage of Raft
- Applying committed logs to the state machine
- Order keeping
- Notification mechanism
- Concurrency control