
01:38:58
Thanks for Prof. Shen's wonderful talk! In your talk, you mentioned the Floor-ReLU network. May I ask how you replace ReLU with Floor? Is the replacement chosen randomly, or do you use some other method to decide it?
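
A minimal sketch, assuming a PyTorch setting, of what a layer mixing Floor and ReLU activations could look like; the per-neuron split below is purely illustrative and is not the construction from the talk.

```python
import torch

def floor_relu_layer(x, weight, bias, floor_mask):
    # Affine map followed by a per-neuron activation:
    # neurons where floor_mask is True use floor, the rest use ReLU.
    # Which neurons get which activation is exactly the open question above.
    z = x @ weight.T + bias
    return torch.where(floor_mask, torch.floor(z), torch.relu(z))

# toy usage: 4 inputs -> 6 neurons, first half Floor, second half ReLU
x = torch.randn(2, 4)
w = torch.randn(6, 4)
b = torch.randn(6)
mask = torch.tensor([True, True, True, False, False, False])
print(floor_relu_layer(x, w, b, mask).shape)  # torch.Size([2, 6])
```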

01:39:22
Thank you, Prof. Shen, for the nice presentation; it is quite interesting! I have a few questions. 1. Is your theorem related to the architecture of the DNN? 2. If I use some other activation function, can I get a similar result? 3. Can I use this in a U-Net?

01:40:20
Thanks!

01:40:38
Thank you for your talk. Have you ever tried smoother activation functions, such as RePU (max{0, x}^s)?
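
A minimal sketch, assuming PyTorch, of the RePU activation referred to above, RePU_s(x) = max{0, x}^s, with s = 2 and s = 3 shown as illustrative choices:

```python
import torch

def repu(x, s=2):
    # Rectified Power Unit: max{0, x}^s, i.e. ReLU raised to the power s.
    # s = 1 recovers ReLU; larger s gives a smoother activation.
    return torch.relu(x) ** s

x = torch.linspace(-2.0, 2.0, 5)
print(repu(x))        # tensor([0., 0., 0., 1., 4.])
print(repu(x, s=3))   # tensor([0., 0., 0., 1., 8.])
```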

01:41:50
That's okay! Thank you!

01:42:39
I see. Thank you!

01:43:06
Thank Prof. Shen for the nice talk. Do you think it is more efficient to enforce some properties of the dictionary/generators by introducing a loss in the deep learning approximation? For example, orthogonality.
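
A minimal sketch, assuming PyTorch, of the kind of orthogonality loss the question suggests: an extra penalty ||W W^T - I||_F^2 on a dictionary/generator matrix W, added to the usual training loss. The weight lambda_ortho is an illustrative hyperparameter, not something from the talk.

```python
import torch

def orthogonality_penalty(W):
    # Squared Frobenius deviation of W W^T from the identity;
    # it is zero exactly when the rows of W are orthonormal.
    eye = torch.eye(W.shape[0], device=W.device, dtype=W.dtype)
    return ((W @ W.T - eye) ** 2).sum()

# toy usage: add the penalty to the data-fitting loss during training
W = torch.randn(8, 32, requires_grad=True)
task_loss = torch.tensor(0.0)     # stand-in for the approximation/fitting loss
lambda_ortho = 1e-2               # illustrative weighting
total_loss = task_loss + lambda_ortho * orthogonality_penalty(W)
total_loss.backward()
```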

01:43:14
Do you consider the complexity of the NN?

01:46:28
I see. Thanks a lot.

01:47:05
Prof. Shen, thank you for the wonderful talk. Do you think it is possible to extend the above analysis to semi-supervised or unsupervised learning? Thanks.

01:47:25
Can you explain more about the equation on page 38?

01:47:41
How do you construct the function g? Is there any special requirement for the vector a?