Extended Kalman Filter - Basic Questions

Hey guys,

I’m looking through the code for the EKF and the Dynamic Models, and I have a few questions (I’m particularly interested in using these libraries to write my own dynamic model).

  1. I’m curious why you guys use `torch.no_grad()` when writing the Jacobians. For example, I see this code:
```python
def jacobian(self, dt):
    """
    Compute and return cached native state transition Jacobian (F) over
    time interval ``dt``.

    :param dt: time interval to integrate over.
    :return: Read-only Jacobian (F) of integration map (f).
    """
    if dt not in self._F_cache:
        d = self._dimension
        with torch.no_grad():
            F = eye_like(self.sa2, d)
            F[: d // 2, d // 2 :] = dt * eye_like(self.sa2, d // 2)
        self._F_cache[dt] = F

    return self._F_cache[dt]
```

It appears this means that gradients will not propagate through the matrix F for this particular dynamic model. Does this mean that learning the transition Jacobian is not an intended use of the EKF?
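
For concreteness, below is the kind of model I’d like to be able to write; a minimal sketch (entirely my own, not the library’s API) where the Jacobian is built without `no_grad`, so a parameter inside F can receive gradients:

```python
import torch

class DifferentiableNcv(torch.nn.Module):
    """Hypothetical NCV-style dynamic model whose transition Jacobian F
    stays on the autograd graph. Sketch only, not the library's API."""

    def __init__(self, dimension):
        super().__init__()
        self._dimension = dimension
        # Learnable scale on the position-velocity coupling, purely to
        # illustrate gradients flowing through F.
        self.coupling = torch.nn.Parameter(torch.tensor(1.0))

    def jacobian(self, dt):
        d = self._dimension
        F = torch.eye(d)
        # No torch.no_grad() and no caching: F depends on a Parameter,
        # so it must be rebuilt on the current graph at every call.
        F[: d // 2, d // 2 :] = dt * self.coupling * torch.eye(d // 2)
        return F

model = DifferentiableNcv(4)
model.jacobian(dt=0.1).sum().backward()
print(model.coupling.grad)  # non-None: gradients reach parameters inside F
```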

  2. It appears the EKF supports filtering but not smoothing. Is this correct? For example, I see this code in `EKFDistribution`:
```python
def filter_states(self, value):
    """
    Returns the ekf states given measurements

    :param value: measurement means of shape `(time_steps, event_shape)`
    :type value: torch.Tensor
    """
    states = []
    state = EKFState(self.dynamic_model, self.x0, self.P0, time=0.0)
    assert value.shape[-1] == self.event_shape[-1]
    for i, measurement_mean in enumerate(value):
        if i:
            state = state.predict(self.dt)
        measurement = PositionMeasurement(
            measurement_mean, self.measurement_cov, time=state.time
        )
        state, (dz, S) = state.update(measurement)
        states.append(state)
    return states
```
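
For context, here’s roughly how I’m driving the filter by hand, using the same names that appear above. The setup values are toy numbers of my own, and the `NcvContinuous` constructor arguments are my assumption about the API:

```python
import torch
from pyro.contrib.tracking.dynamic_models import NcvContinuous
from pyro.contrib.tracking.extended_kalman_filter import EKFState
from pyro.contrib.tracking.measurements import PositionMeasurement

# Assumed setup: a 4D NCV state [px, py, vx, vy]; I'm guessing that
# NcvContinuous takes (dimension, sa2), based on the self.sa2 above.
model = NcvContinuous(dimension=4, sa2=torch.tensor(2.0))
x0 = torch.zeros(4)
P0 = 10.0 * torch.eye(4)
measurement_cov = 0.1 * torch.eye(2)
dt = 1.0

state = EKFState(model, x0, P0, time=0.0)
for z in torch.randn(10, 2):  # fake position measurements
    state = state.predict(dt)
    measurement = PositionMeasurement(z, measurement_cov, time=state.time)
    state, (dz, S) = state.update(measurement)
# `state` is now the filtered estimate at the final time step; I can't
# find a corresponding backward (smoothing) pass anywhere.
```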
  3. How do you specify the observation matrix? I see that there is `measurement_cov`, but I don’t see where the observation matrix itself goes.
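
To clarify what I mean: in the textbook EKF update the innovation is computed against a measurement model z = Hx + v, and I expected to pass H (or a measurement function h) explicitly somewhere, e.g.:

```python
import torch

# What I expected to supply somewhere (purely hypothetical, to clarify
# the question): an explicit observation matrix H that picks the
# positions out of a state [px, py, vx, vy].
H = torch.tensor([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
x = torch.randn(4)
z_pred = H @ x  # predicted measurement, used in the innovation dz = z - H x
```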

Relatedly, it appears these routines implement only the forward (filtering) pass of message passing; I don’t see any backward pass anywhere. Is that indeed true, and if so, will backward message passing be supported in the future?
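
For reference, the backward pass I have in mind is the Rauch-Tung-Striebel recursion over the filtered estimates. A rough sketch in my own code (assuming a fixed linear F and process noise Q, which simplifies the general EKF case):

```python
import torch

def rts_smooth(ms, Ps, F, Q):
    """Hypothetical Rauch-Tung-Striebel backward pass over filtered means
    `ms` and covariances `Ps` (lists indexed by time), with transition
    Jacobian F and process noise covariance Q. My own sketch of what I
    mean by "smoothing"; not library code."""
    ms, Ps = [m.clone() for m in ms], [P.clone() for P in Ps]
    for t in range(len(ms) - 2, -1, -1):
        m_pred = F @ ms[t]                       # predicted mean at t+1
        P_pred = F @ Ps[t] @ F.T + Q             # predicted cov at t+1
        G = Ps[t] @ F.T @ torch.inverse(P_pred)  # smoother gain
        ms[t] = ms[t] + G @ (ms[t + 1] - m_pred)
        Ps[t] = Ps[t] + G @ (Ps[t + 1] - P_pred) @ G.T
    return ms, Ps
```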