MATH-IMS Seminar - Shared screen with speaker view
Yuchen Guo
01:38:58
Thanks for Prof. Shen's wonderful talk! In your talk, you mentioned the Floor-ReLU network. May I ask how you replace ReLU with Floor? Is the replacement chosen randomly, or do you use some other method?
Chenran LIN
01:39:22
Thanks, Prof. Shen, for your nice presentation; it is quite interesting! I have a few questions: 1. Is your theorem related to the architecture of the DNN? 2. If I use some other activation function, can I get a similar result? 3. Can I use this in a U-Net?
Yuchen Guo
01:40:20
Thanks!
zoey
01:40:38
Thank you for your talk. Have you ever tried smoother activation functions, such as RePU (max{0, x}^s)?
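(Editor's note: for readers unfamiliar with the activation functions mentioned in the questions above, here is a minimal Python sketch of ReLU, Floor, and RePU; the function names and the exponent s are illustrative and are not notation taken from the talk.)

import numpy as np

def relu(x):
    # Standard ReLU activation: max{0, x}
    return np.maximum(0.0, x)

def floor_activation(x):
    # Floor activation; in a Floor-ReLU network, a neuron may use either floor or ReLU
    return np.floor(x)

def repu(x, s=2):
    # RePU (rectified power unit): max{0, x}^s, a smoother power of ReLU
    return np.maximum(0.0, x) ** s

x = np.linspace(-2.0, 2.0, 9)
print(relu(x))
print(floor_activation(x))
print(repu(x, s=3))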
Chenran LIN
01:41:50
That's okay! Thank you!
zoey
01:42:39
I see. Thank you!
Zhiwen LI
01:43:06
Thanks, Prof. Shen, for the nice talk. Do you think it would be more efficient to enforce some properties of the dictionary/generators, for example orthogonality, by introducing a loss term in the deep learning approximation?
zoey
01:43:14
Do you consider the complexity of the NN?
Zhiwen LI
01:46:28
I see. Thanks a lot.
Hok Shing Wong
01:47:05
Prof. Shen, thank you for the wonderful talk. Do you think it is possible to extend the above analysis to semi-supervised or unsupervised learning? Thanks.
Xuran Meng HKU
01:47:25
Can you explain more about the equation on page 38?
yiting
01:47:41
How do you construct the function g? Is there any special requirement for the vector a?