Event Endpoint

Events are the most numerous and most complex entities in our system. Events are, of course, generated from device button presses, but they can also be created via API actions, such as auto-resets.

The nature of distributed counting makes dealing with discrete events more complicated than you might first imagine. One key issue is consistency. With each device going online and offline depending on ambient conditions, there is no guarantee that we have a complete picture of the current count at any given time. Offline devices will eventually come back online and may upload data that occurred minutes or hours earlier, necessitating a recalculation of the count. Throw in ‘count reset’ events that bring the count back to zero, and the result is that you have to be very aware of the eventually consistent nature of a distributed system.

As such, the event API provides the tools required to synchronize the current state of the event stream without continuously refetching the entire event history, but this means that the client has some responsibility for ordering events and updating calculated counts to account for out-of-order uploads. The one invariant we do guarantee is that as new events arrive, their ids are generated in an always ascending manner.

To live-stream events, you’ll need two pieces of information:

  1. A ‘fixed point’ in time where you have a known absolute count. The TallyFi system generates periodic checkpoints that are used to set an absolute count. These can also be invalidated, but much more rarely. When they do become invalid, their IDs change (indicating that the counts should be completely recalculated based on the new fixed point).

  2. The stream of events that have occurred after said ‘fixed point’. Since events can (and will) be uploaded out of temporal order, you can determine the count at any arbitrary point in time by sorting these relative events (each event is a ‘delta’, e.g. +5 men, -5 women) and updating the global count at each point in time.

The live-update polling can then simply ask the server for new events that fall within the time window of concern and have an ID greater than the last one received. Once the new events have been downloaded, the client is responsible for re-sorting all the events back into temporal order and recalculating the updated totals at each point in time.
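As an illustration, a minimal polling loop might look like the sketch below. It assumes the Python requests library, a placeholder base URL, and the apikey header shown in the examples later in this section; the function name is hypothetical, and error handling and backoff are omitted.

import time
import requests

BASE_URL = "https://example.invalid/api/1.0"  # placeholder host (assumption)
HEADERS = {"apikey": "123456789"}             # API key header as shown in the examples

def poll_events(venue_id, start_time, poll_interval=10):
    """Illustrative polling loop: fetch new events and keep them sorted by time."""
    events = []
    recent_id = None

    while True:
        params = {"venue": venue_id, "start_time": start_time}
        if recent_id is not None:
            # on subsequent polls, only ask for events newer than the last id seen
            params["recent_id"] = recent_id

        resp = requests.get(f"{BASE_URL}/events", params=params, headers=HEADERS)
        resp.raise_for_status()
        payload = resp.json()

        new_events = payload.get("events", [])
        if new_events:
            events.extend(new_events)
            # ids always ascend in upload order, so track the largest id received
            recent_id = max(e["id"] for e in new_events)
            # uploads may arrive out of temporal order: re-sort before recomputing counts
            events.sort(key=lambda e: e["time"])

        time.sleep(poll_interval)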

Event Structure

The anatomy of an individual event is quite simple:

{
"id": 19000000,    /* unique id, always ascending in order of upload */
"maleUp": 1,
"maleDown": 0,
"femaleUp": 0,
"femaleDown": 0,
"reset": false,    /* the total count for this zone should be set to 0 */
"internal": false, /* is this an internal count between zones (redundant data at venue level) */
"nonThroughput": false, /* should this count be ignored for throughput calculations (re-entry) */
"time": 1563889523.0,   /* unix timestamp that the event occurred at */
"venue": 1,             /* venue_id */
"zone": 1,              /* zone_id */
"device": 452           /* device_id, can be null */
}
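For reference, the same structure expressed as a typed record might look like the sketch below. Field names and comments are taken from the JSON above; the use of Python's TypedDict here is purely illustrative and not part of the API.

from typing import Optional, TypedDict

class Event(TypedDict):
    id: int                 # unique id, always ascending in order of upload
    maleUp: int
    maleDown: int
    femaleUp: int
    femaleDown: int
    reset: bool             # the total count for this zone should be set to 0
    internal: bool          # internal count between zones (redundant data at venue level)
    nonThroughput: bool     # ignore for throughput calculations (re-entry)
    time: float             # unix timestamp that the event occurred at
    venue: int              # venue_id
    zone: int               # zone_id
    device: Optional[int]   # device_id, can be null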

Event Streaming

GET /api/1.0/events?venue={venue_id}&recent_id={recent_id}&start_time={start_time}
Synopsis:

Get the events and checkpoints that match the GET parameters

Query Parameters:
  • venue – venue being streamed

  • start_time – Unix timestamp of the earliest event that you seek to visualize/stream. The endpoint will also return events that occurred before that timestamp: it returns all events back to the nearest checkpoint.

  • end_time – Unix timestamp of the latest event that you seek to visualize/stream. If it is not specified, the API assumes now. Unlike start_time, you will not receive events that occur after end_time.

  • recent_id – On the first request, this value should not be specified. On subsequent polling requests, this should be the maximum event id yet received from this endpoint.

  • e.g. Retrieve event list:

GET /api/1.0/events?venue=1&start_time=1546318810 HTTP/1.1
apikey: 123456789
HTTP/1.1 200 OK
Content-Type: application/json

{
     "venueCheckpoint": {
         "id": 4000,
         "maleUpTotal": 0,
         "maleDownTotal": 0,
         "femaleUpTotal": 0,
         "femaleDownTotal": 0,
         "time": 1546318800.0,
         "venue": 1
     },
     "zoneCheckpoints": [
         {
             "id": 5000,
             "maleUpTotal": 0,
             "maleDownTotal": 0,
             "femaleUpTotal": 0,
             "femaleDownTotal": 5,
             "time": 1546318800.0,
             "zone": 2
         },
         {
             "id": 5001,
             "maleUpTotal": 0,
             "maleDownTotal": 0,
             "femaleUpTotal": 0,
             "femaleDownTotal": 0,
             "time": 1546318800.0,
             "zone": 3
         }
     ],
     "events": [
         {
             "id": 16806922,
             "maleUp": 1,
             "maleDown": 0,
             "femaleUp": 0,
             "femaleDown": 0,
             "reset": false,
             "internal": false,
             "nonThroughput": false,
             "time": 1562428900.0,
             "venue": 1,
             "zone": 1,
             "device": 1
         }
     ]
 }

Once you’ve received this structure, in order to graph or otherwise manipulate the data you will need to sort the events list by timestamp, locate the relevant starting count (if the intent is to visualize a zone in isolation, you need to start with that zone’s checkpoint), and iterate through each relevant event, applying its up/down delta to obtain the new count at each point in time.
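As a concrete illustration, the sketch below reconstructs the running count for a single zone from its checkpoint and the downloaded events. It assumes the checkpoint's current count is its up totals minus its down totals, ignores the internal and nonThroughput flags for brevity, and uses a hypothetical function name and arguments.

def count_over_time(zone_checkpoint, events, zone_id):
    """Sketch: rebuild (time, count) pairs for one zone from a checkpoint plus events."""
    # assumed starting count: up totals minus down totals at the checkpoint
    count = (zone_checkpoint["maleUpTotal"] - zone_checkpoint["maleDownTotal"]
             + zone_checkpoint["femaleUpTotal"] - zone_checkpoint["femaleDownTotal"])
    timeline = [(zone_checkpoint["time"], count)]

    # only this zone's events, sorted into temporal order
    relevant = sorted((e for e in events if e["zone"] == zone_id),
                      key=lambda e: e["time"])

    for e in relevant:
        if e["reset"]:
            count = 0  # a reset event brings the zone count back to zero
        else:
            count += (e["maleUp"] - e["maleDown"]
                      + e["femaleUp"] - e["femaleDown"])
        timeline.append((e["time"], count))

    return timeline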