All time data are represented as the number of seconds since 1970-01-01 00:00:00 UTC, stored as a double-precision floating-point value.
This is compatible with Unix timestamps and makes addition and subtraction convenient.
Changing the representation to __int64 would make operations slower, because every operation would require a conversion.
The fractional part expresses sub-second precision (for example, 0.1 means 100 milliseconds).
Because the Unix timestamp for 2020-12-01 00:00:00 UTC is 1,606,780,800, 31 bits are enough to express the integer part (2^31 ≈ 2.1 × 10^9). The IEEE 754 double type has a 52-bit fraction field, so at least 21 bits remain for the sub-second part. Since 2^21 is about 2 million, the resolution is finer than 1 microsecond (about 0.5 µs).
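This resolution estimate can be checked numerically; a sketch using Python's math.ulp (available since Python 3.9), which returns the spacing between adjacent doubles. Near a 2020-era timestamp the spacing is 2^-22 seconds, slightly finer than the conservative 21-bit estimate above because a double carries 53 significant bits including the implicit leading bit.

```python
import math

# Spacing between adjacent double values (the ULP) near a 2020-era timestamp.
t = 1_606_780_800.0        # 2020-12-01 00:00:00 UTC

resolution = math.ulp(t)   # 2**-22 seconds
print(resolution)          # 2.384185791015625e-07

assert resolution < 1e-6   # finer than one microsecond
```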