Image from paper
Dynamic Perception is a generative model in which a single hidden state is inferred from observations as it changes over time. Dynamic Perception with Policy Selection extends this model with a decision-making process that scores candidate policies and selects the best action based on the inferred changes.
GNN version and flags
## GNN version and flags
v1
## GNN version and flags
v1
## GNN version and flags
v1
## GNN version and flags
v1
The GNN version and flags sections are identical across all of the models (each is specified as v1), so this section does not distinguish [Dynamic Perception] from [Dynamic Perception with Policy Selection].
Model name
# Static perception v1
# Dynamic perception v1
# Dynamic perception with Policy Selection v1
# Dynamic perception with Flexible Policy Selection v1
The main difference between [Dynamic Perception] and [Dynamic Perception with Policy Selection] is that the latter incorporates a policy selection mechanism: candidate policies are scored and the resulting distribution over policies guides action in dynamic environments.
Model annotation
## Model annotations
Static
Perception
Simple
Snapshot
This model relates a single hidden state to a single observable modality. It is a static model.
## Model annotations
Dynamic
Perception
This model relates a single hidden state to a single observable modality. It is a dynamic model because it tracks changes in the hidden state through time.
## Model annotations
Dynamic
Perception
Action
Variational Free Energy
This model relates a single hidden state to a single observable modality. It is a dynamic model because it tracks changes in the hidden state through time. Action is applied via the policy π.
## Model annotations
Dynamic
Perception
Action
Variational Free Energy
This model relates a single hidden state to a single observable modality. It is a dynamic model because it tracks changes in the hidden state through time. Action is applied via the policy π, and uncertainty about which policy to pursue is captured by the precision parameter γ = 1/β.
The main difference between Dynamic Perception and Dynamic Perception with Policy Selection is the addition of Action and Variational Free Energy in the latter model. Both models relate a single hidden state to a single observable modality and track changes in the hidden state through time.
State space block
## State space block
D[2,1,type=float]
s[2,1,type=float]
A[2,2,type=float]
o[2,1,type=float]
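As a rough illustration of what these dimensions mean, the block below instantiates them as NumPy arrays (a minimal sketch; [2,1] vectors are represented as length-2 arrays, and the likelihood values in A are assumed for illustration, not part of the GNN file):

```python
import numpy as np

# Static perception state space, following the dimensions listed above.
D = np.array([0.5, 0.5])         # D[2,1]: prior over the 2 hidden states
A = np.array([[0.9, 0.1],        # A[2,2]: likelihood mapping p(o|s);
              [0.1, 0.9]])       #         these values are assumed examples
o = np.array([1.0, 0.0])         # o[2,1]: one-hot observation
s = np.array([0.5, 0.5])         # s[2,1]: posterior over hidden states (to be inferred)
```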
## State space block
D[2,1,type=float]
B[2,2,type=float]
s_t[2,1,type=float]
A[2,2,type=float]
o_t[2,1,type=float]
t[1,type=int]
## State space block
A[2,2,type=float]
D[2,1,type=float]
B[2,2,len(π),type=float]
π[2,type=float]
C[2,1,type=float]
G[len(π),type=float]
s_t[2,1,type=float]
o_t[2,1,type=float]
t[1,type=int]
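The richer state space of the policy-selection model can be sketched the same way (a hypothetical instantiation; the per-policy transition matrices, preferences, and observation are illustrative assumptions, not part of the specification):

```python
import numpy as np

# One possible instantiation of the state space above.
num_policies = 2

A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                        # A[2,2]: likelihood p(o|s)
B = np.stack([np.eye(2),                          # B[2,2,len(pi)]: one transition
              np.array([[0., 1.], [1., 0.]])],    # matrix per policy (stay / switch)
             axis=-1)
C = np.array([1.0, 0.0])                          # C[2,1]: preferred outcomes
D = np.array([0.5, 0.5])                          # D[2,1]: prior over the initial state
G = np.zeros(num_policies)                        # G[len(pi)]: expected free energy per policy
pi = np.full(num_policies, 1 / num_policies)      # pi[2]: distribution over policies
s_t = D.copy()                                    # s_t[2,1]: current state estimate
o_t = np.array([1.0, 0.0])                        # o_t[2,1]: current one-hot observation
t = 0                                             # t[1]: discrete time index
```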
## State space block
A[2,2,type=float]
D[2,1,type=float]
B[2,len(π),1,type=float]
π=[2]
C=[2,1]
G=len(π)
s_t[2,1,type=float]
o_t[2,1,type=float]
t[1,type=int]
The main difference between [Dynamic Perception] and [Dynamic Perception with Policy Selection] is that the latter includes a policy selection component, in which candidate policies are scored and actions are chosen from the resulting policy distribution. Accordingly, [Dynamic Perception with Policy Selection] has a richer state space block that includes the additional variables π, C, and G.
Connections
## Connections among variables
D-s
s-A
A-o
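Read as a generative model (a standard interpretation, assuming categorical distributions with D as the state prior and A as the likelihood), these connections correspond to the factorization:

p(o, s) = p(o \mid s)\, p(s) = \mathrm{Cat}(o;\, \mathbf{A} s)\, \mathrm{Cat}(s;\, D)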
## Connections among variables
D-s_t
s_t-A
A-o_t
s_t-B
B-s_t+1
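Under the same assumed categorical reading, the extra s_t-B and B-s_t+1 connections make the hidden state a Markov chain:

p(o_{1:T}, s_{1:T}) = \mathrm{Cat}(s_1;\, D) \prod_{t} \mathrm{Cat}(s_{t+1};\, \mathbf{B} s_t) \prod_{t} \mathrm{Cat}(o_t;\, \mathbf{A} s_t)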
## Connections among variables
D-s_t
s_t-A
A-o_t
s_t-B
B-s_t+1
C>G
G>π
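The C>G and G>π connections can be read (again under the assumed categorical form) as the preferences C entering the expected free energy G, which in turn determines the policy distribution:

p(o_{1:T}, s_{1:T} \mid \pi) = \mathrm{Cat}(s_1;\, D) \prod_{\tau} \mathrm{Cat}(s_{\tau+1};\, \mathbf{B}_\pi s_\tau) \prod_{\tau} \mathrm{Cat}(o_\tau;\, \mathbf{A} s_\tau), \qquad p(\pi) = \sigma(-\mathbf{G})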
## Connections among variables
D-s_t
s_t-A
A-o_t
s_t-B
B-s_t+1
C>G
G>π
E>π
β-γ
γ>π
The main difference between [Dynamic Perception] and [Dynamic Perception with Policy Selection] is that the latter includes additional connections among variables involving C, G, and π.
Initial parameterization
## Initial Parameterization
D={0.5,0.5}
o={1,0}
## Initial Parameterization
## Initial Parameterization
## Initial Parameterization
Only [Static perception] specifies initial values (D and o); the Initial Parameterization sections of [Dynamic Perception] and [Dynamic Perception with Policy Selection] are left empty, so no difference can be drawn between them here.
Equations
## Equations
s = \text{softmax}(\ln(D)+\ln(\mathbf{A}^\top o))
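Using the values from the Initial Parameterization section (D={0.5,0.5}, o={1,0}) together with an assumed likelihood matrix A, a minimal NumPy sketch of this update is:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Parameters from the Initial Parameterization section; A is an assumed example.
D = np.array([0.5, 0.5])                 # prior over hidden states
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])               # likelihood p(o|s)
o = np.array([1.0, 0.0])                 # observed outcome (one-hot)

# Static perception: s = softmax(ln D + ln(A^T o))
s = softmax(np.log(D) + np.log(A.T @ o))
print(s)   # with this assumed A, the posterior works out to (0.9, 0.1)
```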
## Equations
s_{\tau=1} = \text{softmax}\left(\tfrac{1}{2}\left(\ln D + \ln(B^{\dagger}_{\tau} s_{\tau+1})\right) + \ln(\mathbf{A}^\top o_\tau)\right)
s_{\tau>1} = \text{softmax}\left(\tfrac{1}{2}\left(\ln(B_{\tau-1} s_{\tau-1}) + \ln(B^{\dagger}_{\tau} s_{\tau+1})\right) + \ln(\mathbf{A}^\top o_\tau)\right)
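These are fixed-point updates that smooth each state estimate over past and future evidence. A rough NumPy sketch (with assumed A, B, and observations, and with B† approximated here by the column-normalized transpose of B) might look like:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Assumed example parameters (the GNN file does not fix these values).
D = np.array([0.5, 0.5])                          # prior over the initial hidden state
A = np.array([[0.9, 0.1], [0.1, 0.9]])            # likelihood p(o_t | s_t)
B = np.array([[0.8, 0.2], [0.2, 0.8]])            # transition p(s_{t+1} | s_t)
B_dag = B.T / B.T.sum(axis=0, keepdims=True)      # crude stand-in for B^dagger
obs = [np.array([1., 0.]), np.array([1., 0.]), np.array([0., 1.])]

T = len(obs)
s = [np.full(2, 0.5) for _ in range(T)]           # posteriors s_tau, initialized uniform

for _ in range(16):                               # iterate the fixed-point updates
    for tau in range(T):
        past = np.log(D) if tau == 0 else np.log(B @ s[tau - 1])
        future = np.log(B_dag @ s[tau + 1]) if tau < T - 1 else 0.0
        s[tau] = softmax(0.5 * (past + future) + np.log(A.T @ obs[tau]))
```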
## Equations
s_{\pi,\tau=1} = \sigma\left(\tfrac{1}{2}\left(\ln D + \ln(B^{\dagger}_{\pi,\tau} s_{\pi,\tau+1})\right) + \ln(\mathbf{A}^\top o_\tau)\right)
s_{\pi,\tau>1} = \sigma\left(\tfrac{1}{2}\left(\ln(B_{\pi,\tau-1} s_{\pi,\tau-1}) + \ln(B^{\dagger}_{\pi,\tau} s_{\pi,\tau+1})\right) + \ln(\mathbf{A}^\top o_\tau)\right)
G_\pi = \sum_\tau \left( (\mathbf{A} s_{\pi,\tau})^\top \left(\ln(\mathbf{A} s_{\pi,\tau}) - \ln C_\tau\right) - \mathrm{diag}(\mathbf{A}^\top \ln \mathbf{A})^\top s_{\pi,\tau} \right)
\pi = \sigma(-\mathbf{G})
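A minimal sketch of the expected-free-energy calculation and the resulting policy distribution, with assumed A, B, C, and D values (here the per-policy state estimates are simply rolled forward under each policy rather than fully inferred):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Assumed example parameters (the GNN file does not fix these values).
A = np.array([[0.9, 0.1], [0.1, 0.9]])           # likelihood p(o|s)
C = softmax(np.array([2.0, 0.0]))                 # preferences over outcomes
B = np.stack([np.eye(2),                          # policy 0: stay
              np.array([[0., 1.], [1., 0.]])],    # policy 1: switch states
             axis=-1)                             # B[2,2,len(pi)]
D = np.array([0.8, 0.2])                          # assumed prior over the initial state

num_policies = B.shape[-1]
T = 3
G = np.zeros(num_policies)                        # expected free energy per policy

for p in range(num_policies):
    s = D.copy()
    for tau in range(T):
        s = B[:, :, p] @ s                        # predicted state under policy p
        o_pred = A @ s                            # predicted outcome distribution
        risk = o_pred @ (np.log(o_pred) - np.log(C))     # divergence from preferences C
        ambiguity = -np.diag(A.T @ np.log(A)) @ s        # expected likelihood entropy term
        G[p] += risk + ambiguity

pi = softmax(-G)     # policies with lower expected free energy get higher probability
```

With these assumed values, the "stay" policy keeps the agent in the preferred state, so it receives the lower G and the higher probability under π = σ(−G).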
## Equations
F_\pi = \sum_\tau s_{\pi,\tau}^\top \left(\ln s_{\pi,\tau} - \tfrac{1}{2}\left(\ln(B_{\pi,\tau-1} s_{\pi,\tau-1}) + \ln(B^{\dagger}_{\pi,\tau} s_{\pi,\tau+1})\right) - \ln(\mathbf{A}^\top o_\tau)\right)
\pi_0 = \sigma(\ln E - \gamma \mathbf{G})
\pi = \sigma(\ln E - \mathbf{F} - \gamma \mathbf{G})
p(\gamma) = \Gamma(1, \beta)
\mathbb{E}[\gamma] = \gamma = 1/\beta
\beta \leftarrow \beta - \beta_{\text{update}}/\psi
\beta_{\text{update}} = \beta - \beta_0 + (\pi - \pi_0) \cdot (-\mathbf{G})
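A rough sketch of the precision-weighted policy posterior and the β update, assuming per-policy G and F values have already been computed (E, ψ, and the numbers below are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Assumed per-policy quantities (would come from the perception and EFE steps).
G = np.array([1.2, 0.7])            # expected free energy per policy
F = np.array([0.3, 0.4])            # variational free energy per policy
E = np.array([0.5, 0.5])            # prior (habit) over policies
beta, beta_0, psi = 1.0, 1.0, 2.0   # inverse precision, its prior, and step size

gamma = 1.0 / beta                              # E[gamma] = 1/beta
pi_0 = softmax(np.log(E) - gamma * G)           # policy prior (no evidence from F)
pi = softmax(np.log(E) - F - gamma * G)         # full policy posterior

# Gradient-style update of the inverse precision beta.
beta_update = beta - beta_0 + (pi - pi_0) @ (-G)
beta = beta - beta_update / psi
gamma = 1.0 / beta
```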
The main difference between [Dynamic Perception] and [Dynamic Perception with Policy Selection] is that the latter includes a policy selection component: the state posteriors s_{π,τ} are now conditioned on policies, each policy is scored by its expected free energy G_π, and the policy distribution is obtained as π = σ(−G).
Time
## Time
Static
## Time
Dynamic
s_t=DiscreteTime
ModelTimeHorizon=Unbounded
## Time
Dynamic
s_t=DiscreteTime
ModelTimeHorizon=Unbounded
## Time
Dynamic
s_t=DiscreteTime
ModelTimeHorizon=Unbounded
The Time sections of [Dynamic Perception] and [Dynamic Perception with Policy Selection] are identical: both are dynamic models defined over discrete time with an unbounded model time horizon, so this section does not distinguish them.
ActInf Ontology annotation
## Active Inference Ontology
A=RecognitionMatrix
D=Prior
s=HiddenState
o=Observation
## Active Inference Ontology
A=RecognitionMatrix
B=TransitionMatrix
D=Prior
s=HiddenState
o=Observation
t=Time
## Active Inference Ontology
A=RecognitionMatrix
B=TransitionMatrix
C=Preference
D=Prior
G=ExpectedFreeEnergy
s=HiddenState
o=Observation
π=PolicyVector
t=Time
## Active Inference Ontology
A=RecognitionMatrix
B=TransitionMatrix
C=Preference
D=Prior
E=Prior on Action
G=ExpectedFreeEnergy
s=HiddenState
o=Observation
π=PolicyVector
t=Time
The main difference between [Dynamic Perception] and [Dynamic Perception with Policy Selection] is the inclusion of the Preference vector (C), the Expected Free Energy (G), and the Policy Vector (π) in the latter. These additions allow a specific policy to be selected for decision-making, whereas Dynamic Perception alone does not include this feature.
Footer
# Static perception v1
# Dynamic perception v1
# Dynamic perception with Policy Selection v1
# Dynamic perception with Flexible Policy Selection v1
The main difference between Dynamic Perception and Dynamic Perception with Policy Selection is that the latter includes a policy selection mechanism.