Meta-Learning Panel Discussion
What are we currently all doing wrong in meta-learning?
What's your definition of meta-learning? If we can do meta-meta-learning, meta-meta-meta-learning, and so on, is there an end to this endless meta... process?
How does meta-overfitting differ from regular overfitting, and how can we address each?
What are the greatest challenges you see in meta-learning?
What do you think is the most promising thing to work on in meta-learning?
Do we need episodes/tasks in meta-learning, or should meta-learning be one continual learning task?
Can we discover completely new/novel learning paradigms/novel architectures through meta-learning?
How can we measure which tasks are useful to learn from and which ones are too dissimilar?
Are we just "transfer learning"? How do we get away from doing this?
Why is there so little meta-learning in the NLP community?
When is meta-learning merely learning? How can we measure task distance, or the degree of generalization, between meta-train and meta-test?
Are some tasks (or family of tasks) easier to meta-learn than others?
Nando said performance for them got better and better with larger and larger networks. Do we only need to scale things up? (And what challenges do you see in this?)
What is the place of meta-learning in continual learning?
Meta-learning seems to help us increase the generalization power of our models, so presumably meta-meta learning can offer a corresponding increase, and so on. But there is the no free lunch theorem, so there has to be an asymptote. Where do you think that asymptote lies?
Can we make sure that meta-learned algorithms are interpretable?
Why hasn’t Neural Bayesian Optimization taken off in meta-learning in the same way that search by gradient descent has?
It seems that the current state-of-the-art in meta-reinforcement-learning presents results on multi-task domains that can be easily solved by contextual policy learning. What is the bottleneck?
How are images represented in meta-learning? Is classification done at the pixel level? Don’t we need a huge model and a lot of data to learn a good representation?
How should we combine meta-learning with continual learning, and how can meta-learning avoid catastrophic forgetting?
Do you think, in principle, we can solve all machine learning problems with meta-learning?
Where do we draw the line between meta-learning and transfer learning?
The majority of meta-learning techniques have been applied/evaluated on imaging or RL tasks. Can these same techniques be easily applied to NLP problems?
What is the role of self improvement?
Why are we even comparing to the human brain? Can’t we have meta-learning algorithms that are superior to it?
How do we bridge the gap between few-shot and many-shot learning? What challenges do we need to overcome in order to simultaneously solve both?