Cristian's Algorithm

Cristian's Algorithm is a clock synchronisation algorithm that client processes use to synchronise their time with a time server. It works best in low-latency networks, where the round trip time is short relative to the required accuracy, but it does not cope well with redundancy-prone distributed systems and applications. Here, Round Trip Time refers to the time interval between the beginning of a Request and the conclusion of the corresponding Response.

An example mimicking the operation of Cristian's algorithm is provided below:

[Figure: Cristian's Algorithm]

Algorithm:

  1. The process on the client machine sends a request to the clock server at time T0, asking for the clock time (the time at the server).
  2. The clock server listens for requests and, on receiving one, responds with its clock server time.
  3. The client process receives the response at time T1 and uses the formula below to determine the synchronised client clock time.

TCLIENT = TSERVER + (T1 - T0)/2

where TCLIENT denotes the synchronised clock time, TSERVER denotes the clock time returned by the server, T0 denotes the time at which the client process sent the request, and T1 denotes the time at which the client process received the response.
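
For illustration, with made-up values: if the client sends its request at T0 = 10:00:00.000 on its own clock, the server replies with TSERVER = 10:00:05.000, and the reply arrives at T1 = 10:00:00.010, then (T1 - T0)/2 = 0.005 seconds and the client sets its clock to TCLIENT = 10:00:05.005.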

The formula above is valid and reliable for the following reasons:

Assuming that the network latencies of the request and of the response are roughly equal, T1 - T0 denotes the total amount of time required by the network and server to transfer the request to the server, process it, and return the result to the client process, so half of it approximates the one-way delay.

The difference between the client-side time and the actual time is therefore no more than (T1 - T0)/2 seconds; in other words, the synchronisation error can be at most (T1 - T0)/2 seconds.

Hence,

Error ∈ [-(T1 - T0)/2, (T1 - T0)/2]
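
As a quick sanity check, here is a minimal sketch that applies the formula and the error bound to made-up timestamps; no real clock server is involved, and all values are assumptions chosen for demonstration:

Python

# Sketch: applying Cristian's formula to made-up timestamps
import datetime

# Hypothetical values: t0 and t1 are read from the client clock,
# t_server is the timestamp returned by the server
t0 = datetime.datetime(2018, 11, 7, 17, 56, 43, 302000)
t1 = datetime.datetime(2018, 11, 7, 17, 56, 43, 302600)
t_server = datetime.datetime(2018, 11, 7, 17, 56, 43, 302379)

round_trip = (t1 - t0).total_seconds()

# TCLIENT = TSERVER + (T1 - T0)/2
t_client = t_server + datetime.timedelta(seconds=round_trip / 2)

print("Synchronized time:", t_client)
# The synchronisation error is at most (T1 - T0)/2
print("Maximum error: +/-", round_trip / 2, "seconds")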

The Python programs below demonstrate how Cristian's algorithm functions over a real socket connection.

To start a clock server prototype on a local machine, run the following code:

Python

# Python3 program imitating a clock server

import socket
import datetime

# function used to initiate the Clock Server
def initiateClockServer():

    s = socket.socket()
    print("Socket successfully created")

    # Server port
    port = 8000

    s.bind(('', port))

    # Start listening to requests
    s.listen(5)
    print("Socket is listening...")

    # Clock Server running forever
    while True:

        # Establish connection with a client
        connection, address = s.accept()
        print('Server connected to', address)

        # Respond to the client with the server clock time
        connection.send(str(
            datetime.datetime.now()).encode())

        # Close the connection with the client process
        connection.close()


# Driver function
if __name__ == '__main__':

    # Trigger the Clock Server
    initiateClockServer()

Output:

Socket successfully created
Socket is listening...

On the local machine, the following code runs a client process prototype:

Python

# Python3 program imitating a client process

import socket
import datetime
from dateutil import parser
from timeit import default_timer as timer

# function used to synchronize the client process time
def synchronizeTime():

    s = socket.socket()

    # Server port
    port = 8000

    # connect to the clock server on the local computer
    s.connect(('127.0.0.1', port))

    request_time = timer()

    # receive data from the server
    server_time = parser.parse(s.recv(1024).decode())
    response_time = timer()
    actual_time = datetime.datetime.now()

    print("Time returned by server: " + str(server_time))

    process_delay_latency = response_time - request_time

    print("Process Delay latency: "
          + str(process_delay_latency)
          + " seconds")

    print("Actual clock time at client side: "
          + str(actual_time))

    # synchronize the client process clock time
    client_time = server_time \
        + datetime.timedelta(seconds=process_delay_latency / 2)

    print("Synchronized process client time: "
          + str(client_time))

    # calculate the synchronization error
    error = actual_time - client_time
    print("Synchronization error : "
          + str(error.total_seconds()) + " seconds")

    s.close()


# Driver function
if __name__ == '__main__':

    # synchronize time using the clock server
    synchronizeTime()

Output:

Time returned by server: 2018-11-07 17:56:43.302379
Process Delay latency: 0.0005150819997652434 seconds
Actual clock time at client side: 2018-11-07 17:56:43.302756
Synchronized process client time: 2018-11-07 17:56:43.302637
Synchronization error : 0.000119 seconds

Through iterative testing over the network, we can determine a minimum transfer time and use it to obtain a more accurate synchronised clock time, i.e. one with a smaller synchronisation error.

In this case, the server time TSERVER will always be generated after T0 + Tmin and before T1 - Tmin, where Tmin is the minimum transfer time, i.e. the minimum value of TREQUEST and TRESPONSE observed across several iterative tests. The synchronisation error can then be formulated as follows:

Error ∈ [-((T1 - T0)/2 - Tmin), ((T1 - T0)/2 - Tmin)]
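
A minimal sketch of this idea is given below. It assumes the clock server from the first program is running locally on port 8000; sampleServerTime is a hypothetical helper, and approximating Tmin as half the smallest observed round trip is an assumption made for simplicity:

Python

# Sketch: estimating Tmin through iterative tests
import socket
import datetime
from dateutil import parser
from timeit import default_timer as timer

# one request/response exchange; returns (T0, TSERVER, T1)
def sampleServerTime(port=8000):
    s = socket.socket()
    s.connect(('127.0.0.1', port))
    t0 = timer()
    server_time = parser.parse(s.recv(1024).decode())
    t1 = timer()
    s.close()
    return t0, server_time, t1

# run several iterative tests against the clock server
samples = [sampleServerTime() for _ in range(10)]

# approximate Tmin as half the smallest round trip observed
# (ideally TREQUEST and TRESPONSE would be measured directly)
t_min = min(t1 - t0 for t0, _, t1 in samples) / 2

# synchronize using the most recent sample
t0, server_time, t1 = samples[-1]
round_trip = t1 - t0
client_time = server_time + datetime.timedelta(seconds=round_trip / 2)

# the error bound (T1 - T0)/2 - Tmin tightens as Tmin grows
max_error = round_trip / 2 - t_min

print("Synchronized time:", client_time)
print("Error within +/-", max_error, "seconds")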

Similarly, if TREQUEST and TRESPONSE differ by a significant amount of time, Tmin1 and Tmin2 may be used in place of Tmin, where Tmin1 denotes the minimum observed request time and Tmin2 denotes the minimum observed response time over the network.

In this scenario, the synchronised clock time can be calculated as follows:

TCLIENT = TSERVER + (T1 - T0)/2 + (Tmin2 - Tmin1)/2

Therefore, by simply treating the request and response times as separate latencies, we can improve the clock time synchronisation and thereby reduce the overall synchronisation error. The number of iterative tests to perform depends on the overall clock drift observed.
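
A minimal sketch of this correction is shown below, using hypothetical, pre-measured minimum latencies; all values here are assumptions for demonstration:

Python

# Sketch: asymmetric correction with separate request/response minima
import datetime

t_min1 = 0.0002   # assumed minimum observed request time (Tmin1), seconds
t_min2 = 0.0004   # assumed minimum observed response time (Tmin2), seconds

# made-up sample from one exchange, as in the client program above
t0, t1 = 0.0, 0.0008                    # client-side timer values, seconds
server_time = datetime.datetime(2018, 11, 7, 17, 56, 43, 302379)

# TCLIENT = TSERVER + (T1 - T0)/2 + (Tmin2 - Tmin1)/2
client_time = server_time + datetime.timedelta(
    seconds=(t1 - t0) / 2 + (t_min2 - t_min1) / 2)

print("Synchronized client time:", client_time)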
