The paper introduces RuleGuider, a method that combines rules mined by symbolic methods with walk-based reinforcement learning agents to improve knowledge graph reasoning. RuleGuider uses high-quality rules to provide supervisory rewards to the walk-based agent, addressing the sparse reward signal the agent faces while traversing the knowledge graph, and it preserves the interpretability of walk-based models. Experiments demonstrate that RuleGuider achieves state-of-the-art results on benchmark datasets while maintaining interpretability.
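To make the idea of rule-based supervisory rewards concrete, here is a minimal sketch of reward shaping for a walk-based agent. It is not the authors' implementation: the rule format, the confidence values, and the weight `lambda_rule` are illustrative assumptions; the point is only that a walk matching the body of a high-quality rule earns a reward even when the terminal hit signal is missing.

```python
# Minimal sketch (not the paper's code) of rule-guided reward shaping
# for a walk-based KG-reasoning agent. Rules, confidences, and the
# bonus weight `lambda_rule` are hypothetical values for illustration.

from typing import Dict, List, Tuple

# A rule maps a query relation to a body: a sequence of relations that,
# if walked, is assumed to imply the query relation (with a confidence).
Rule = Tuple[Tuple[str, ...], float]  # (relation path, confidence)

RULES: Dict[str, List[Rule]] = {
    "nationality": [(("born_in", "city_of"), 0.9),
                    (("lives_in", "located_in"), 0.7)],
}

def shaped_reward(query_relation: str,
                  walked_relations: List[str],
                  reached_answer: bool,
                  lambda_rule: float = 0.5) -> float:
    """Terminal hit reward plus a bonus when the walked relation path
    matches the body of a high-confidence rule for the query relation."""
    hit_reward = 1.0 if reached_answer else 0.0

    rule_bonus = 0.0
    for body, confidence in RULES.get(query_relation, []):
        if tuple(walked_relations) == body:
            rule_bonus = max(rule_bonus, confidence)

    # Even if the answer entity is missed (the sparse terminal signal),
    # following a plausible rule path still yields a supervisory reward.
    return hit_reward + lambda_rule * rule_bonus

# Example: the agent misses the answer but follows a known rule body.
print(shaped_reward("nationality", ["born_in", "city_of"], reached_answer=False))
# -> 0.45
```

The shaping term densifies the reward landscape: instead of learning only from the rare event of landing on the correct answer entity, the agent also receives graded feedback whenever its relation path agrees with a mined rule, which is the intuition the summary above describes.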