A simple neural network module for relational reasoning
- Symbol grounding problem.
- Not robust to small task and input variations.
- Data-poor problems → sparse but complex relations.
- Relation networks (RNs): a single shared function computes/scores all relations.
- RN-augmented architecture.
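The RN idea above — one shared function scoring every object pair, then an aggregate readout, i.e. RN(O) = f_φ(Σ_{i,j} g_θ(o_i, o_j)) — can be sketched in a few lines. This is a minimal NumPy toy (random linear layers standing in for the paper's MLPs), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def g_theta(o_i, o_j, W_g):
    """Shared relation function g: scores one object pair (linear + ReLU here)."""
    return np.maximum(W_g @ np.concatenate([o_i, o_j]), 0.0)

def f_phi(summed, W_f):
    """Readout f: maps the aggregated relation vector to the output."""
    return W_f @ summed

def relation_network(objects, W_g, W_f):
    """RN(O) = f_phi( sum over all ordered pairs of g_theta(o_i, o_j) )."""
    total = sum(g_theta(o_i, o_j, W_g) for o_i in objects for o_j in objects)
    return f_phi(total, W_f)

# toy setting: 4 objects with 3 features each (all weights illustrative)
objects = rng.normal(size=(4, 3))
W_g = rng.normal(size=(8, 6))   # g: object pair (6 dims) -> 8-dim relation vector
W_f = rng.normal(size=(2, 8))   # f: aggregated relations -> 2-dim output
out = relation_network(objects, W_g, W_f)
print(out.shape)
```

Because the pairwise sum is order-independent, the output is invariant to permuting the object set — the property that makes a single g suffice for all relations.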
Deep Logic: Joint Learning of Neural Perception and Logical Reasoning
- Hard to reach global optima. - Not designed to deal with semantic data (images & text).
- Joint learning framework.
- Build a Logical Network consisting of:
  + Multinomial(x; h_k).
  + h_k: output of a BiLSTM (models dependencies between different layers).
  + The representations run a finite recursion, feeding through the logical gates multiple times.
- Training (optimization stage), alternating:
  + Optimize the encoder with the logical network fixed.
  + Optimize the logical network with the encoder fixed.
- Disentangle to reason: disentangled representations serve as symbols.
- The network is not adaptive; the logical nodes are fixed. There should be an approach to reuse the logical nodes (via LSTM/RNN) or via regularization such as pruning/quantization.
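The alternating scheme in the training notes above (fix one module, optimize the other) can be illustrated with a toy two-stage model. Everything here is a hypothetical stand-in — a random-feature "encoder" and a linear "logical" layer — not the paper's architecture; it only shows the fix/optimize alternation:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: binary label from a linear rule on the inputs
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W_enc = rng.normal(size=(3, 4)) * 0.1   # "perception" encoder (stand-in)
w_logic = rng.normal(size=3) * 0.1      # "logical network" weights (stand-in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W_enc, w_logic):
    h = np.maximum(X @ W_enc.T, 0.0)    # encoder output
    return sigmoid(h @ w_logic), h

def loss(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

initial_loss = loss(forward(X, W_enc, w_logic)[0], y)

lr = 0.3
for stage in range(20):
    # Stage A: logical network fixed, update the encoder (numerical gradient for brevity)
    for _ in range(5):
        base = loss(forward(X, W_enc, w_logic)[0], y)
        g = np.zeros_like(W_enc)
        for idx in np.ndindex(*W_enc.shape):
            W2 = W_enc.copy()
            W2[idx] += 1e-5
            g[idx] = (loss(forward(X, W2, w_logic)[0], y) - base) / 1e-5
        W_enc -= lr * g
    # Stage B: encoder fixed, update the logical weights (analytic cross-entropy gradient)
    for _ in range(5):
        p, h = forward(X, W_enc, w_logic)
        w_logic -= lr * (h.T @ (p - y)) / len(y)

final_loss = loss(forward(X, W_enc, w_logic)[0], y)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Each stage descends the same joint loss while holding the other module's parameters fixed, which is the block-coordinate structure the notes describe.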
Bayesian Inverse Contextual Reasoning for Heterogeneous Semantics-Native Communication
Trading off Utility, Informativeness, and Complexity in Emergent Communication.
- Human languages are shaped by task-general communicative constraints.
- Leverages a generative adversarial network.
- Optimizes a tradeoff: maximize utility and informativeness vs. minimize complexity.
- Minimizing the complexity of the signal → explainable.
- Disentanglement of c into invariant and non-invariant signals; reasoning from the invariant signal.
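The utility/informativeness/complexity tradeoff above can be made concrete as a scalarized objective. This is a hypothetical toy with made-up candidate protocols and scores, not the paper's actual objective or numbers — it only shows how the complexity penalty λ shifts which protocol wins:

```python
# Hypothetical scores for three candidate communication protocols.
# utility: downstream task reward; informativeness: proxy for I(meaning; signal);
# complexity: bits needed to encode the signal. All numbers illustrative.
candidates = {
    "one-symbol":    {"utility": 0.40, "informativeness": 0.30, "complexity": 1.0},
    "compositional": {"utility": 0.85, "informativeness": 0.80, "complexity": 3.0},
    "holistic":      {"utility": 0.90, "informativeness": 0.95, "complexity": 8.0},
}

def objective(s, alpha=1.0, lam=0.1):
    """J = utility + alpha * informativeness - lambda * complexity."""
    return s["utility"] + alpha * s["informativeness"] - lam * s["complexity"]

for lam in (0.0, 0.2, 0.5):
    best = max(candidates, key=lambda k: objective(candidates[k], lam=lam))
    print(f"lambda={lam}: best protocol = {best}")
```

As λ grows, the optimum moves from the most informative protocol toward the simplest one — the compression pressure the notes credit with making signals explainable.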
Alexa Arena: A User-Centric Interactive Platform for Embodied AI
Human-like systematic generalization through a meta-learning neural network
SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks
Iterated Learning Improves Compositionality in Large Vision-Language Models
NetHack is Hard to Hack
Grounding Neural Inference with Satisfiability Modulo Theories
Active Reasoning in an Open-World Environment
Egocentric Planning for Scalable Embodied Task Achievement
Efficient Symbolic Policy Learning with Differentiable Symbolic Expression
What’s Left? Concept Grounding with Logic-Enhanced Foundation Models
A social path to human-like artificial intelligence
- When the world is large enough, the initial behavior distribution covers only a relatively tiny subspace.
- Natural intelligence emerges at multiple scales in networks of interacting agents, via collective living, social relationships, and major evolutionary transitions.
- Learning quality depends on the richness and size of the dataset.
- The bottleneck in AI is shifting from data assimilation to novel data generation.
- Exploration and exploitation can be synergistic instead of antagonistic.
- Compounding innovation: exploitation drives exploration in the appropriate direction.
A Dual Representation Framework for Robot Learning with Human Guidance
Neurosymbolic AI for Reasoning Over Knowledge Graphs: A Survey
Scene-Driven Multi-modal Knowledge Graph Construction for Embodied AI
Semantic HELM: A Human-Readable Memory for Reinforcement Learning
Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents
Leveraging Symbolic Knowledge Bases for Commonsense Natural Language Inference Using Pattern Theory
State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding
Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals
3D Concept Learning and Reasoning from Multi-View Images
Large Language Models Are Neurosymbolic Reasoners
Interactive Visual Reasoning under Uncertainty
EgoTV: Egocentric Task Verification from Natural Language Task Descriptions
Interpretable Imitation Learning with Symbolic Rewards
EqMotion: Equivariant Multi-agent Motion Prediction with Invariant Interaction Reasoning
Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction