

CSE 486/586 : Class Project Handout


In the previous phase, we worked on leader election and how a new leader is elected after a timeout. To recap: once a follower times out, it increments its term number, converts to the candidate state, and sends a RequestVote RPC to all other nodes. On the receiver side, when a node receives a RequestVote RPC, it compares its own term number with the candidate's term, and its own log entries with the lastLogIndex and lastLogTerm values carried in the RPC, and decides whether or not to grant its vote.
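The receiver-side decision above can be sketched as follows. This is a minimal illustration, not the required project API: `Node`, `handle_request_vote`, and their fields are hypothetical names chosen for this sketch, and the log is treated as 1-indexed.

```python
# Sketch of the receiver-side vote decision. Node is a hypothetical
# container for the state this check needs, not the project's required API.

class Node:
    def __init__(self):
        self.current_term = 1
        self.voted_for = None
        self.log = []          # list of {"Term": ..., "Key": ..., "Value": ...}

def handle_request_vote(node, term, candidate_id, last_log_index, last_log_term):
    """Return (term, vote_granted) for an incoming RequestVote RPC."""
    if term < node.current_term:
        return (node.current_term, False)      # candidate's term is stale
    if term > node.current_term:
        node.current_term = term               # adopt the newer term
        node.voted_for = None
    # The candidate's log must be at least as up-to-date as ours:
    my_last_term = node.log[-1]["Term"] if node.log else 0
    my_last_index = len(node.log)              # 1-based index of the last entry
    log_ok = (last_log_term > my_last_term or
              (last_log_term == my_last_term and last_log_index >= my_last_index))
    if log_ok and node.voted_for in (None, candidate_id):
        node.voted_for = candidate_id
        return (node.current_term, True)
    return (node.current_term, False)
```

Note that a node grants at most one vote per term: once `voted_for` is set for the current term, later candidates in that term are refused.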

In phase 3, the lastLogIndex and lastLogTerm comparisons were trivial, since the logs were always empty (no client requests were being made). In this phase, however, we will see how these comparisons play an important role in deciding who can and who cannot become the leader.

Note : All references to AppendEntry RPCs also include heartbeats, since a heartbeat is simply an AppendEntry RPC carrying no entries.


The controller from phase 3 has a new purpose in this phase: it will be used to send STORE requests to the RAFT cluster. This is similar to a client making a request to the RAFT cluster, where the request gets appended to the leader's log and eventually to the followers' logs. You will use the STORE command to make client requests.
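A sketch of how the leader side of a STORE request might look. The node object and its attributes (`state`, `leader_id`, `current_term`, `log`) are illustrative assumptions; only the entry field names follow the format given later in this handout.

```python
# Hypothetical STORE handler: only the leader appends; everyone else
# redirects the controller to the current leader.

def handle_store(node, key, value):
    if node.state != "Leader":
        return {"leader": node.leader_id}      # redirect to the leader
    entry = {"Term": node.current_term, "Key": key, "Value": value}
    node.log.append(entry)                     # replicated later via AppendEntry RPCs
    return {"index": len(node.log)}            # 1-based position in the log
```

The append alone does not make the entry committed; commitment happens only after replication to a majority, as described below.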

The controller can send a request to any of the nodes to RETRIEVE all the committed entries at that particular node. However, only the Leader will respond with the committed entries [{entry1}, {entry2}, {entry3}]; any other node responds with the Leader Info, as depicted in the diagram above. The format of an entry is as follows:

entry = {

"Term": term in which the entry was received (may or may not be the current term),

"Key": key of the message (could be some dummy value),

"Value": actual message (could be some dummy value)

}
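The RETRIEVE behavior can be sketched the same way. Again, the node object and its `commit_index` attribute are illustrative assumptions, not the required API; the log is treated as 1-indexed, so a slice up to `commit_index` yields exactly the committed prefix.

```python
# Hypothetical RETRIEVE handler: the leader returns its committed entries;
# any other node replies with leader info instead.

def handle_retrieve(node):
    if node.state != "Leader":
        return {"leader": node.leader_id}
    # Only entries up to and including commit_index are committed.
    return {"entries": node.log[:node.commit_index]}
```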

In phase 2, you would have built a simple mechanism for forwarding the client's request from the leader node to all the followers and ensuring that the request was executed on all the nodes.

In RAFT, the request is first appended to the leader's log and then replicated to each node's log. Once the request has been appended to a majority of the nodes' logs, it is said to be committed and will be executed by the leader. This is an extremely simplified explanation of what actually happens. There are specific rules governing how a client request is appended to the followers' logs, whether a follower will accept or reject an AppendEntry RPC, and how the logs are replicated. Details are in the paper.
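The majority rule above can be sketched as follows. This assumes a hypothetical `match_index` dict mapping each follower to the highest log index known to be replicated on it (the paper's matchIndex[]); all names are illustrative.

```python
# Sketch of the leader advancing its commit index: an entry is committed
# once it exists on a majority of the cluster (leader included). Raft only
# commits entries from the leader's current term this way.

def advance_commit_index(leader):
    cluster_size = 1 + len(leader.match_index)     # leader plus followers
    majority = cluster_size // 2 + 1
    for n in range(len(leader.log), leader.commit_index, -1):
        replicas = 1 + sum(1 for m in leader.match_index.values() if m >= n)
        if replicas >= majority and leader.log[n - 1]["Term"] == leader.current_term:
            leader.commit_index = n                # everything up to n is committed
            break
```

Scanning from the highest index downward is enough because committing index n implicitly commits every earlier entry as well.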

The leader maintains a nextIndex[] value for each follower, which is the index of the log entry that the leader will send to that follower in the subsequent AppendEntry RPC. When a leader first comes to power (when a candidate first changes state to leader), it initializes all nextIndex[] values to the index just after the last one in its own log; this happens every time a leader is elected. For example, if the last index in the new leader's own log is 5, the leader initializes nextIndex[] for each follower to 6, meaning the leader is prepared to send the entry at position 6.
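The initialization step above can be sketched as a small transition run when a candidate wins the election. The `node`/`peers` names and the `match_index` companion structure are illustrative assumptions, not the required API.

```python
# Sketch of the candidate -> leader transition: every follower's nextIndex
# starts just past the leader's last log entry; matchIndex starts at 0
# because nothing is confirmed replicated yet.

def become_leader(node, peers):
    node.state = "Leader"
    last_index = len(node.log)                     # last index in the leader's own log
    node.next_index = {p: last_index + 1 for p in peers}
    node.match_index = {p: 0 for p in peers}
```

If a follower rejects an AppendEntry RPC because its log is behind, the leader decrements that follower's nextIndex and retries until the logs converge.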