
01:20:42
Prof. Michael Hintermüller, thank you very much for your interesting talk! Could you please explain a little more about how to choose the activation functions (smooth or nonsmooth) based on prior information?

01:23:49
Prof. Hintermüller, that was a really nice and interesting talk. Thank you very much. I have a couple of questions. Why is the machine learning technique used only to approximate the optimization constraint, rather than the solution of the whole inverse/optimization problem? For the neural network, how large is the training data set, and how is the training data generated to obtain the learning-based model?

01:25:36
Prof. Hintermüller, thanks for the very inspiring talk. For the optimization framework, can the current method achieve better reconstruction results with regularization terms such as TV or rank(X) in certain scenarios?

01:31:41
Thank you very much for your explanation, and thanks again for the very nice talk!

01:31:54
Prof. Hintermüller, thank you for the wonderful talk. For the optimization framework, you mentioned that the dictionary space is a high-dimensional manifold. I wonder whether there are any studies on the geometric properties of this space? Since the objective function is relatively simple compared to the dictionary space, it seems possible to try a manifold optimization approach if we have some geometric understanding of the space.

01:33:17
Thank you for your great talk. May I ask, after you train the network to approximate the optimization constraint, how do you incorporate it into the optimization process? Do you use algorithms such as ADMM?

01:49:59
Thank you for your great talk.