EE 378A: Projects

Rules

  1. Projects may consist of a literature review or a mixture of literature review and original work (simulations, calculations, proofs).

  2. Project groups should preferably consist of two students.

  3. You are required to

    1. Communicate the group composition and project topic to Daria (resh@stanford.edu) by April 23.

    2. Record a 20-minute presentation on your project and upload it, together with your slides, to Canvas by May 20 (more detailed instructions will be provided).

    3. Watch the other groups' presentations. We will discuss them together during the week of May 23.


Ideas

Several of the ideas below refer to topics that will be touched on in the lectures. A project is an opportunity to take a deep dive into one such topic.

The references listed here are meant to suggest possible starting points for your investigation. You are strongly encouraged to search the literature and read/report on whatever you find interesting.


  • Deblurring

Whyte, O., Sivic, J., Zisserman, A. and Ponce, J., 2012. Non-uniform deblurring for shaken images. International Journal of Computer Vision, 98(2), pp.168-186.

Cho, S. and Lee, S., 2009. Fast motion deblurring. In ACM SIGGRAPH Asia 2009 papers (pp. 1-8).

Levin, A., 2006. Blind motion deblurring using image statistics. Advances in Neural Information Processing Systems, 19.
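The papers above address blind deblurring, where the blur kernel is unknown. A useful first step for this project is the non-blind baseline, in which the kernel is given. Below is a minimal 1-D Wiener-deconvolution sketch; the signal, kernel, and noise-to-signal ratio are illustrative assumptions, not taken from the references.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_to_signal=1e-2):
    """Non-blind Wiener deconvolution via the FFT (blur kernel assumed known)."""
    n = blurred.shape[0]
    K = np.fft.fft(kernel, n)                 # transfer function of the blur
    B = np.fft.fft(blurred)
    # Wiener filter: conj(K) / (|K|^2 + NSR) regularizes the inversion
    W = np.conj(K) / (np.abs(K) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft(W * B))

# toy example: blur a step signal with a box kernel, add noise, restore
rng = np.random.default_rng(0)
x = np.zeros(128); x[40:90] = 1.0             # clean piecewise-constant signal
kernel = np.ones(7) / 7.0                     # 7-tap box blur
y = np.real(np.fft.ifft(np.fft.fft(kernel, 128) * np.fft.fft(x)))
y += 0.01 * rng.standard_normal(128)          # mild measurement noise
x_hat = wiener_deconvolve(y, kernel)
```

The restored `x_hat` should be noticeably closer to the clean signal than the blurred input; estimating `kernel` itself from `y` alone is exactly what the blind-deblurring papers above tackle.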

  • Dictionary learning

Tošić, I. and Frossard, P., 2011. Dictionary learning. IEEE Signal Processing Magazine, 28(2), pp.27-38.

Mairal, J., Bach, F., Ponce, J. and Sapiro, G., 2009, June. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning.

Spielman, D.A., Wang, H. and Wright, J., 2012. Exact recovery of sparsely-used dictionaries. In Conference on Learning Theory.

Sun, J., Qu, Q. and Wright, J., 2016. Complete dictionary recovery over the sphere I: Overview and the geometric picture. IEEE Transactions on Information Theory, 63(2), pp.853-884.
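To get a feel for the problem, a minimal alternating-minimization sketch can help: sparse coding with orthogonal matching pursuit alternated with a least-squares dictionary update (the method-of-optimal-directions variant, not the specific algorithms of the papers above). All dimensions and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse coding of x over dictionary D."""
    residual, support = x.copy(), []
    c = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        c, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ c             # re-fit on current support
    coef = np.zeros(D.shape[1])
    coef[support] = c
    return coef

def mod_dictionary_learning(X, n_atoms, k, n_iter=20, seed=0):
    """Alternate OMP sparse coding with a least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    errs = []
    for _ in range(n_iter):
        C = np.stack([omp(D, x, k) for x in X.T], axis=1)  # sparse codes
        errs.append(np.linalg.norm(X - D @ C) / np.linalg.norm(X))
        D = X @ np.linalg.pinv(C)                          # least-squares update
        D /= np.linalg.norm(D, axis=0) + 1e-12             # renormalize atoms
    return D, errs

# synthetic data: each column is a 3-sparse combination of hidden atoms
rng = np.random.default_rng(1)
D0 = rng.standard_normal((16, 24)); D0 /= np.linalg.norm(D0, axis=0)
C0 = np.zeros((24, 200))
for i in range(200):
    C0[rng.choice(24, 3, replace=False), i] = rng.standard_normal(3)
X = D0 @ C0
D, errs = mod_dictionary_learning(X, n_atoms=24, k=3)
```

Tracking the relative reconstruction error `errs` across iterations makes the alternating structure visible; when and why such schemes recover the true dictionary is the subject of the Spielman et al. and Sun et al. papers.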

  • Compressed sensing beyond sparsity

Chandrasekaran, V., Recht, B., Parrilo, P.A. and Willsky, A.S., 2012. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6), pp.805-849.

Bhaskar, B.N., Tang, G. and Recht, B., 2013. Atomic norm denoising with applications to line spectral estimation. IEEE Transactions on Signal Processing, 61(23), pp.5987-5999.
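Before going beyond sparsity, it helps to have the classical sparse baseline in hand: l1-regularized recovery from random linear measurements, which the atomic-norm framework above generalizes. A minimal iterative-soft-thresholding (ISTA) sketch in numpy; the problem sizes and regularization weight are illustrative assumptions.

```python
import numpy as np

def ista(A, b, lam, n_iter=5000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L              # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# sparse ground truth recovered from m << n random Gaussian measurements
rng = np.random.default_rng(0)
n, m, s = 200, 60, 5
x0 = np.zeros(n)
idx = rng.choice(n, s, replace=False)
x0[idx] = rng.choice([-1.0, 1.0], s) * (1.0 + rng.random(s))  # magnitudes in [1, 2]
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x0
x_hat = ista(A, b, lam=0.1)
```

Replacing the l1 norm here by an atomic norm for a different structure (low rank, line spectra) is precisely the move made by Chandrasekaran et al. and Bhaskar et al.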

  • Total variation denoising

Rudin, L.I., Osher, S. and Fatemi, E., 1992. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4), pp.259-268.

Chambolle, A., Caselles, V., Cremers, D., Novaga, M. and Pock, T., 2010. An introduction to total variation for image analysis. Theoretical Foundations and Numerical Methods for Sparse Recovery, 9, pp.263-340.

Hütter, J.C. and Rigollet, P., 2016, June. Optimal rates for total variation denoising. In Conference on Learning Theory (pp. 1115-1146). PMLR.
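A minimal numpy sketch of the 1-D ROF problem, solved by projected gradient ascent on its dual in the spirit of Chambolle's projection algorithm (the signal, noise level, step size, and regularization weight are illustrative assumptions):

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=1000, tau=0.25):
    """Solve min_x 0.5*||x - y||^2 + lam*sum_i |x_{i+1} - x_i| via its dual.

    The dual variable p (one entry per difference) is updated by projected
    gradient ascent; the primal solution is x = y - D^T p with |p_i| <= lam,
    where D is the forward-difference operator. tau <= 1/||D||^2 = 1/4.
    """
    p = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        x = y.copy()
        x[:-1] += p                        # x = y - D^T p
        x[1:] -= p
        # ascent step along the dual gradient Dx, then project onto |p_i| <= lam
        p = np.clip(p + tau * np.diff(x), -lam, lam)
    x = y.copy()
    x[:-1] += p
    x[1:] -= p
    return x

# piecewise-constant signal in Gaussian noise
rng = np.random.default_rng(0)
clean = np.where(np.arange(200) < 100, 0.0, 2.0)
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = tv_denoise_1d(noisy, lam=0.5)
```

The denoised signal should both lower the mean-squared error against the clean signal and shrink the total variation, while keeping the jump; the rate at which this estimator approaches the truth is the subject of the Hütter-Rigollet paper.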

  • RKHS methods

Berlinet, A. and Thomas-Agnan, C., 2011. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media.

Caponnetto, A. and De Vito, E., 2007. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7.

Mei, S., Misiakiewicz, T. and Montanari, A., 2021, July. Learning with invariances in random features and kernel models. In Conference on Learning Theory.
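Kernel ridge regression is the basic RKHS estimator whose rates Caponnetto and De Vito analyze. A minimal numpy sketch with an RBF kernel; the bandwidth, regularization, and toy data are illustrative assumptions.

```python
import numpy as np

def kernel_ridge(X_train, y_train, X_test, gamma=10.0, reg=1e-3):
    """RBF kernel ridge regression: predict with k(., X) @ (K + n*reg*I)^{-1} y."""
    def rbf(A, B):
        # squared distances between all rows of A and B, then Gaussian kernel
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * sq)
    n = len(X_train)
    K = rbf(X_train, X_train)
    alpha = np.linalg.solve(K + n * reg * np.eye(n), y_train)  # representer weights
    return rbf(X_test, X_train) @ alpha

# noisy samples of a smooth function on [0, 1]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (80, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(80)
Xt = np.linspace(0.0, 1.0, 50)[:, None]
pred = kernel_ridge(X, y, Xt)
```

By the representer theorem the solution lives in the span of the kernel functions at the training points, which is why an n-by-n linear solve suffices; how the optimal `reg` should scale with n is the content of the rates paper above.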

  • Deep learning approaches to denoising

Wang, X., Girshick, R., Gupta, A. and He, K., 2018. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Zhang, K., Zuo, W., Chen, Y., Meng, D. and Zhang, L., 2017. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7).

  • Deep learning approaches to compression

Toderici, G., O'Malley, S.M., Hwang, S.J., Vincent, D., Minnen, D., Baluja, S., Covell, M. and Sukthankar, R., 2015. Variable rate image compression with recurrent neural networks. arXiv:1511.06085.

Theis, L., Shi, W., Cunningham, A. and Huszár, F., 2017. Lossy image compression with compressive autoencoders. arXiv:1703.00395.

Mentzer, F., Toderici, G.D., Tschannen, M. and Agustsson, E., 2020. High-fidelity generative image compression. Advances in Neural Information Processing Systems, 33, pp.11913-11924.

  • Deep learning approaches to inverse problems

Bora, A., Jalal, A., Price, E. and Dimakis, A.G., 2017, July. Compressed sensing using generative models. In International Conference on Machine Learning (pp. 537-546).

Lucas, A., Iliadis, M., Molina, R. and Katsaggelos, A.K., 2018. Using deep neural networks for inverse problems in imaging: beyond analytical methods. IEEE Signal Processing Magazine, 35(1), pp.20-36.

Gottschling, N.M., Antun, V., Adcock, B. and Hansen, A.C., 2020. The troublesome kernel: why deep learning for inverse problems is typically unstable. arXiv:2001.01258.