Topics and questions to discuss during the Q&A sessions of the DLTM workshop.

What are the tasks deep learning can and cannot solve?

by Francois Fleuret

39 votes

Laleh, amina, Pierre-Alexandre and 36 more

What are the insights on choosing a deep learning architecture for a particular task?

by Andre Anjos

35 votes

amina, Pierre-Alexandre, Serife and 32 more

What are the limitations of the current deep learning frameworks?

by Francois Fleuret

31 votes

Laleh, amina, Pierre-Alexandre and 28 more

How do you interpret (gain insight into) what the network learns?

by Carlos Peña

20 votes

Thibault GROUEIX, amina, Pierre-Alexandre and 17 more

How much performance improvement in practice comes from hyperparameter tuning, as opposed to exploring different (e.g. deeper) architectures?

18 votes

Darshan, Michaël, Paulina G. and 15 more

What are the unavoidable trade-offs when designing a deep-learning framework?

by Francois Fleuret

16 votes

Luca Baldassarre, Anaïs Badoual, Armand Valsesia and 13 more

What are good recommendations for gathering sufficient data (variability) to train a particular deep learning architecture? Is more always better?

by Andre Anjos

14 votes

Amir Mohammadi, Young Joo Seo, Pierre-Alexandre and 11 more

What is the most promising challenger among other machine learning techniques?

by Francois Fleuret

14 votes

amina, Pierre-Alexandre, Serife and 11 more

What optimization method would you recommend for training a neural network? SGD, Adagrad, Adam, LBFGS, with or without momentum, ...

by Aurelien Lucchi

11 votes

Young Joo Seo, amina, Serife and 8 more

Are there aspects of deep learning that still lack theoretical foundations?

by Pedro Gusmao

10 votes

Damien, amina, Serife and 7 more

How should the academic value of a new architecture be judged?

by Florian S.

10 votes

Serife, James Newling, Cijo Jose and 7 more

What is the scale of the brute-force grid search for meta-parameters at Google and Facebook?

by Francois Fleuret

9 votes

Young Joo Seo, Damien, Olivier and 6 more

When should we use deep neural networks rather than other machine learning techniques?

by Ateneken

8 votes

Damien, Serife, Stepan Tulyakov and 5 more

What are the expected "solved task" milestones on 1-, 3- and 10-year horizons?

by Francois Fleuret

6 votes

Pierre-Edouard, James Newling, Cijo Jose and 3 more

How can we exploit reinforcement learning in deep architectures? What do we need to make it actually work?

by Prodromos

6 votes

Pierre-Alexandre, Michael, Tulyakov and 3 more

What are the current limitations of RNNs, and how do memory-based models solve them?

by Jason Ramapuram

6 votes

Gulcan, Young Joo Seo, Sébastien Piccand and 3 more

What are the books any deep-learning practitioner must have read?

by Francois Fleuret

5 votes

Reinforcement learning, Michael, Sara Sedlar and 2 more

What are the potentials and limitations of transfer learning in deep neural networks? How different can the datasets and tasks of the base and target networks be?

by Sara Sedlar

5 votes

LI Xuhong, Michael, Tulyakov and 2 more

How well does deep learning perform with online training?

by Alex Nanchen

5 votes

Pierre-Alexandre, Pierre-Edouard, Sara Sedlar and 2 more

Deep learning for NLP - state-of-the-art network architectures, limitations, data representation (word-level, character-level, N-grams of words or characters, ...), word embeddings

by Jasmina

4 votes

amina, Pierre-Alexandre, Maxime Darçot and 1 more

Should AlexNet/CNN features now be taken for granted and used like HOG/SIFT?

by Olivier Canévet

4 votes

Gulcan, Cijo Jose, Francois Fleuret and 1 more

Will deep learning architectures become deeper and deeper in the future?

by Paolo russo

4 votes

Ivan Štajduhar, Stepan Tulyakov, Hesam Setareh and 1 more

Is there any computationally intensive deep-learning task/algorithm which cannot easily be accelerated with GPUs?

by Mikhail Asiatici

4 votes

James Newling, Sofiane Sarni, Sara Sedlar and 1 more

Is there a future for mass-market deep-learning-specific hardware?

by Francois Fleuret

4 votes

Olivier, Tiago, Cijo Jose and 1 more

Does deep learning create a need for new engineering profiles?

by Francois Fleuret

4 votes

Pierre-Alexandre, Olivier, James Newling and 1 more

The idea of growing neural networks was very popular in the early '90s, for example the famous cascade-correlation. But now the trend is to use fixed large-capacity networks. Do you see a future for growing networks? If not, why?

by Cijo Jose

I think it is similar to my question about non-predefined structure.

by Stepan Tulyakov

3 votes

James, Olivier, Stepan Tulyakov

Deep learning's success is due in part to (1) improved algorithms, (2) improved hardware and (3) more abundant data. Please rank the importance of these 3 factors. Does your importance ranking match current research budgets, and should it?

by James Newling

3 votes

Cijo Jose, James, Olivier

Adversarial examples (Szegedy '14): (1) what does their existence suggest, and (2) how can we best use them?

by James Newling

3 votes

James, Olivier, Tatjana Chavdarova

How successful are DNNs without a predefined structure? Is there any progress with such DNNs?

by Tulyakov

3 votes

Mikhail Asiatici, Olivier, Tulyakov

What are the recent progress, future prospects and difficulties of semi-supervised and unsupervised learning with DNNs?

by Tulyakov

3 votes

amina, Gulcan, Tulyakov

What is the most principled way of comparing two DL architectures?

by Tatjana Chavdarova

3 votes

Olivier, Sara Sedlar, Serife

How will the interactions between the private sector and the academic world evolve?

by Francois Fleuret

3 votes

El Mahdi El Mhamdi, Francois Fleuret, Marta Martinez

What are the key deep learning factors for good generalisation? The amount of data? Sparse representations? Depth? The objective function? Etc.

by Sébastien Piccand

3 votes

Luca Baldassarre, Sara Sedlar, Tiago

Which is the simplest framework for deep learning, in terms of coding and use?

by Fotini

3 votes

Anaïs Badoual, Olivier, Tulyakov

Do you see a potential for FPGAs in deep learning?

by Mikhail Asiatici

3 votes

Mikhail Asiatici, Olivier, Serife

How much does guiding the learning process help? E.g. pre-training the first layers for manifold learning, adding layers for specific tasks (alignment, normalization, feature reduction), and finally adding final layers for the final decision.

by Sébastien Piccand

This could be counterproductive and limit the capabilities of the deep architecture.

This could make the learning process faster, as it simplifies the big picture.

2 votes

amina, James

Is there a sound alternative to "baby-sitting the learning rate/the experiment"?

by Olivier

2 votes

Cijo Jose, James

What are the current ideas/strategies in the deep learning community to create networks being able to do one-shot learning?

by Michael

2 votes

El Mahdi El Mhamdi, Gulcan

What are the pros and cons of the deep learning libraries Theano, TensorFlow, Caffe and Torch7?

by Young Joo Seo

1 vote

Thibault Groueix

Are there theory-grounded rules to determine the ideal size of a DNN?

by El Mahdi El Mhamdi

1 vote

amina

What are the recent advances in zero-shot/one-shot/data-efficient learning with DNNs/RNNs? What are the future prospects for these tasks?

by Gulcan Can

0 votes

In theory, what are the characteristics of a self-supervised task (if any exists) through which a convnet learns richer representations than those it learns via human supervision? In practice, what challenges do we need to address to implement such tasks?

by Mehdi Noroozi

0 votes

Is RTRL still a valid solution to online problems, given that all modern RNN training algorithms use BPTT?

by Jason Ramapuram

0 votes

Mon, Jun 27, 2016

http://www.tricider.com/brainstorming/38Q6gxl5sNZ