#DARPA concluded its #SAILON program on #OpenWorldLearning #OWL. I just attended its last PI meeting.

The program sought to study a new paradigm of #ML: how do we design #AI systems that can recognize, characterize, and accommodate distributional shifts, transformations, and perturbations in their domains _after_ they have been deployed, _without_ retraining/reprogramming?

This learning paradigm breaks out of the train/test mold that classical ML is set up around.

The problem truly is #DARPAhard!

#AI #ML research/publishing operates in silos - to the detriment of making progress.

Our #IJCAI submission on #OpenWorldLearning #OWL was rejected for good and bad reasons.

The bad reason: "this is not just planning but also something similar to reinforcement learning".

Guess what - that is the point of our research! We are trying to close the gap between designed #AIPlanning systems and adaptive #Learning systems. It is a super-hard direction in which to push #AI #ML algorithmic research.

@mattslocombe @cogsci @cognition

Thank you! I had a blast talking about #AI, #cognition, #analogy, #OpenWorldLearning #InteractiveTaskLearning. The forum was the very BEST: very insightful students & delightful psychologists.

Honestly, I learned quite a bit from the discussions. Amid the worldwide storm of #AI and #ML, sometimes we forget why #HumanIntelligence is so special.

New in #AI #ML that is not #chatgpt

I am STOKED about our research on #OpenWorldLearning at #AAMAS 2023.

#OWL is a novel learning paradigm. The three waves of #AI share a common design pattern: Phase 1 - program/train the inference algorithm; Phase 2 - deploy it. If deployment surfaces unhandled use cases, go back to the first phase.

#OWL breaks this cycle & builds systems that can #learn like #humans - they learn autonomously AFTER they have been deployed.

https://arxiv.org/abs/2303.14272

Learning to Operate in Open Worlds by Adapting Planning Models

Planning agents are ill-equipped to act in novel situations in which their domain model no longer accurately represents the world. We introduce an approach for such agents operating in open worlds that detects the presence of novelties and effectively adapts their domain models and consequent action selection. It uses observations of action execution and measures their divergence from what is expected, according to the environment model, to infer the existence of a novelty. Then, it revises the model through a heuristics-guided search over model changes. We report empirical evaluations on the CartPole problem, a standard Reinforcement Learning (RL) benchmark. The results show that our approach can deal with a class of novelties very quickly and in an interpretable fashion.
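The detect-then-repair loop from the abstract can be sketched roughly as follows. This is a toy illustration, not the paper's implementation (which revises planning domain models): the `NoveltyAwareAgent` class, the linear dynamics, and all parameter names here are illustrative assumptions. The idea is the same, though - flag a novelty when observed transitions diverge from the model's predictions, then search over candidate model changes for one that best explains recent observations.

```python
from collections import deque


class NoveltyAwareAgent:
    """Toy sketch of novelty detection + model repair (names are hypothetical)."""

    def __init__(self, model_param, threshold=0.5, window=10):
        self.model_param = model_param      # the agent's belief about the dynamics
        self.threshold = threshold          # divergence level that signals a novelty
        self.recent = deque(maxlen=window)  # recent (state, action, observed_next)

    def predict(self, state, action):
        # Stand-in environment model: simple linear dynamics.
        return state + action * self.model_param

    def observe(self, state, action, observed_next):
        """Record a transition; return True if a novelty is inferred,
        i.e. mean prediction error over the window exceeds the threshold."""
        self.recent.append((state, action, observed_next))
        divergence = sum(
            abs(self.predict(s, a) - o) for s, a, o in self.recent
        ) / len(self.recent)
        return divergence > self.threshold

    def repair(self, candidate_params):
        """Greedy stand-in for the heuristics-guided search over model changes:
        adopt the candidate parameter that best explains recent observations."""
        def error(p):
            return sum(abs(s + a * p - o) for s, a, o in self.recent)
        self.model_param = min(candidate_params, key=error)
        return self.model_param


# Usage: the agent believes the dynamics parameter is 1.0, but the
# environment has silently changed it to 2.0 (the "novelty").
agent = NoveltyAwareAgent(model_param=1.0)
novelty = agent.observe(state=0.0, action=1.0, observed_next=2.0)
if novelty:
    agent.repair(candidate_params=[0.5, 1.0, 2.0])  # adopts 2.0
```

In the paper the model is a planning domain rather than a single parameter, so "repair" is a search over edits to action preconditions and effects, but the divergence-triggered loop has the same shape.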


January has been an exciting month for #AI #ML fundamental research at #PARC.

Our work on making #AIPlanning methods work/learn in an #OpenWorld will be presented at #AAMAS2023 as well as at #ICAPS2023. AND, an #AIJ article is in the works.

#OpenWorldLearning is a new challenge - the environment introduces novelties while the agent is operating in the world. The agent must detect, characterize, and accommodate novelties at run time. This research is part of the #DARPA #SAILON program.