What does it do? – The Port field indicates the network port over which the GazeSense app communicates with a client application. GazeSense always publishes the tracking information to the client through that port in real time. However, GazeSense only listens to the client application when the Attention Context Control is set to API.

See also the GazeSenseExample.py script for an example of how to receive data from the GazeSense app and how to configure the 3D setup from the client application.
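As a minimal sketch of the receiving side, the snippet below decodes one tracking message and extracts a few of the fields listed further down. The wire format is an assumption made purely for illustration (one JSON object per message); the actual protocol is the one implemented in GazeSenseExample.py.

```python
import json

# Hypothetical message, for illustration only: the real GazeSense protocol
# is defined by the app and demonstrated in GazeSenseExample.py.
SAMPLE_MESSAGE = json.dumps({
    "ConnectionOK": True,
    "InTracking": True,
    "GazeCoding": "Screen",
    "timestamp": 1234.56,
})

def parse_tracking_message(raw):
    """Decode one tracking message and pull out a few common fields."""
    data = json.loads(raw)
    return {
        "connected": data.get("ConnectionOK", False),
        "in_tracking": data.get("InTracking", False),
        "gazed_object": data.get("GazeCoding", ""),
        "timestamp": data.get("timestamp", 0.0),
    }

msg = parse_tracking_message(SAMPLE_MESSAGE)
print(msg["gazed_object"])  # → Screen
```

In a real client, `raw` would be read from the socket connected to the port configured above rather than from a hard-coded string.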

The output files generated by GazeSense contain a variety of metrics, including:

InTracking – boolean indicating whether a subject is currently being tracked

ConnectionOK – boolean indicating whether the client successfully established a connection to the GazeSense application

head_pose – rotation matrix and translation vector of the estimated face model, with respect to the reference point

nose_tip – 3D location of the subject’s nose tip

GazeCoding – string with the label of the gazed object

Head Attention Points, Head Attention Scores – head-based intersection points and measured scores for each target

Gaze Attention Points, Gaze Attention Scores – gaze-based intersection points and measured scores for each target

point_of_regard_3D – estimated 3D point where the subject is paying attention, computed as a consensus between the Gaze and Head Attention data

new_tracking_data – boolean that signals that new tracking data is available

timestamp – seconds elapsed since an arbitrary reference

screen_gaze_coordinates – 2D conversion of the point of regard for screen-like targets, to be understood as the screen pixel being gazed at

rgb_video_frame_buffer – raw buffer containing the row-wise, RGB-ordered pixels of the next color video frame, available after the function request_next_video_frame() has been called