CSE 455/555 Introduction to Pattern Recognition
In this updated Pattern Recognition course, we'll learn not only through lectures and problem sets but also by diving directly into the original research papers that shaped the field. Each week, you'll read several papers, ranging from classic template-matching methods and statistical classifiers to modern deep-learning approaches, then discuss and implement their key ideas.
Our in-class sessions will be a mix of presentations, guided code walkthroughs, and Q&A. You’ll write brief critical analyses for each paper, code up core algorithms in Python (using NumPy, scikit-learn or PyTorch), and compare performance on benchmark datasets. By the end of the semester, you’ll not only master pattern-recognition techniques but also develop the critical reading and implementation skills needed to engage with cutting-edge research.
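As a flavor of the kind of exercise described above, here is a minimal NumPy sketch of a classic template-matching (nearest-centroid) classifier on synthetic data. The data and all names here are illustrative, not part of any course assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian classes, stand-ins for a benchmark dataset.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# "Training": each class template is simply its mean feature vector.
templates = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Prediction: assign each point to the nearest template (Euclidean distance).
dists = np.linalg.norm(X[:, None, :] - templates[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == y).mean()
print(f"nearest-centroid accuracy: {accuracy:.3f}")
```

In the course itself you would swap the synthetic blobs for a real benchmark dataset and compare this baseline against the learned models from the readings.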
Prerequisites: CSE 250 or EAS 230 or EAS 240 or CSE 115; and EAS 305 or STA 301; and STA 301 or MTH 411. Computer Science, Computer Engineering, or Bioinformatics majors only. Students must complete a mandatory advisement session with their faculty advisor.
Resources: A curated PDF reading list will be provided; expect weekly code exercises and written critiques.
Instructor Information
Course Instructor: Jue Guo
- Research Area: Optimization for machine learning, Adversarial Learning, Continual Learning and Graph Learning
- Interested in participating in our research? Reach out to me by email.
Course Outline and Logistics
Check out the course material under lecture notes.
Credits: 3
Course Hours: Lecture; TuTh 6:30 PM – 9:10 PM (Remote)
Term Dates: Jun 23 – Aug 1, 2025
Office Hours: Email to Request
Grader: Kristopher Kodweis (kkodweis@buffalo.edu)
Grader Office Hours: Email to Request
| Week | Session | Date | Paper(s) / Topic |
|---|---|---|---|
| 1 | Session 1 | June 24, 2025 | Back-Propagation (Rumelhart, Hinton & Williams, 1986); Dropout (Srivastava et al., 2014) |
| 1 | Session 2 | June 26, 2025 | ReLU (Nair & Hinton, 2010); Adam (Kingma & Ba, 2014) |
| 2 | Session 3 | July 1, 2025 | Batch Normalization (Ioffe & Szegedy, 2015); RNN Encoder–Decoder (Cho et al., 2014) |
| 2 | Session 4 | July 3, 2025 | AlexNet (Krizhevsky et al., 2012); VGG (Simonyan & Zisserman, 2014) |
| 3 | Session 5 | July 8, 2025 | ResNet (He et al., 2015); Inception (Szegedy et al., 2015) |
| 3 | Session 6 | July 10, 2025 | U-Net (Ronneberger et al., 2015); Faster R-CNN (Ren et al., 2015) |
| 4 | Session 7 | July 15, 2025 | YOLO (Redmon et al., 2016); Mask R-CNN (He et al., 2017) |
| 4 | Session 8 | July 17, 2025 | EfficientNet (Tan & Le, 2019); Attention Is All You Need (Vaswani et al., 2017) |
| 5 | Session 9 | July 22, 2025 | BERT (Devlin et al., 2018); GPT-2 (Radford et al., 2019) |
| 5 | Session 10 | July 24, 2025 | RoBERTa (Liu et al., 2019); T5 (Raffel et al., 2020) |
| 6 | Session 11 | July 29, 2025 | Graph Attention Networks (Veličković et al., 2018); Deep Graph Infomax (Veličković et al., 2019) |
| 6 | Session 12 | July 31, 2025 | Adversarial Examples in the Physical World (Kurakin et al., 2016); Fast is Better than Free (Wong et al., 2020) |
| 7 | Capstone | August 1, 2025 | Course Wrap-Up & Future Directions |
Evaluation Components
| Component | Weight / Details |
|---|---|
| Attendance | 15% (Random Attendance Check) |
| Exam 1 | 35% |
| Project | 25% |
| Exam 2 | 25% |
Note on Logistics
- Notice for the midterm will be given a week ahead, based on the pace of the course.
- These logistics are subject to change based on the overall pace and performance of the class.
Grading
The grading breakdown is as follows:
Grading Rubric
This course uses absolute grading, meaning no curve, as there is a standard we must uphold to ensure students gain a solid knowledge of the algorithms.
| Percentage | Letter Grade | Percentage | Letter Grade |
|---|---|---|---|
| 95-100 | A | 70-74 | C+ |
| 90-94 | A- | 65-69 | C |
| 85-89 | B+ | 60-64 | C- |
| 80-84 | B | 55-59 | D |
| 75-79 | B- | 0-54 | F |
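The rubric above can be expressed as a simple cutoff lookup. This small helper (not part of the course materials; the cutoffs are taken directly from the table) shows the mapping:

```python
def letter_grade(pct: float) -> str:
    """Map a course percentage to a letter grade per the rubric table."""
    cutoffs = [
        (95, "A"), (90, "A-"), (85, "B+"), (80, "B"), (75, "B-"),
        (70, "C+"), (65, "C"), (60, "C-"), (55, "D"),
    ]
    for lo, grade in cutoffs:
        if pct >= lo:
            return grade
    return "F"  # anything below 55

print(letter_grade(87))  # B+
print(letter_grade(54))  # F
```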
AI Research Reading List
Neural Network Foundations
- Back-Propagation (Rumelhart, Hinton & Williams, 1986) – PDF
- Dropout (Srivastava et al., 2014) – PDF
- ReLU (Nair & Hinton, 2010) – PDF
- Adam (Kingma & Ba, 2014) – arXiv:1412.6980
- Batch Normalization (Ioffe & Szegedy, 2015) – arXiv:1502.03167
- RNN Encoder–Decoder (Cho et al., 2014) – PDF
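To give a flavor of the foundational techniques listed above, here is a minimal NumPy sketch of ReLU and inverted dropout. This is an illustration under simplified assumptions, not a substitute for the papers:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    # ReLU (Nair & Hinton, 2010): elementwise max(0, x).
    return np.maximum(0.0, x)

def dropout(x, p=0.5, training=True):
    # Inverted dropout (Srivastava et al., 2014): zero each unit with
    # probability p during training, rescaling survivors by 1/(1-p)
    # so the expected activation is unchanged at test time.
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

h = relu(np.array([-1.0, 0.5, 2.0]))
print(h)  # [0.  0.5 2. ]

d = dropout(np.ones(10000), p=0.5)
print(d.mean())  # close to 1.0: expectation is preserved
```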
Computer Vision
- AlexNet (Krizhevsky, Sutskever & Hinton, 2012) – NeurIPS 2012 PDF
- VGG (Simonyan & Zisserman, 2014) – arXiv:1409.1556
- ResNet (He et al., 2015) – arXiv:1512.03385
- Inception Net (Szegedy et al., 2015) – arXiv:1409.4842
- U-Net (Ronneberger et al., 2015) – arXiv:1505.04597
- Faster R-CNN (Ren et al., 2015) – arXiv:1506.01497
- YOLO (Redmon et al., 2016) – arXiv:1506.02640
- Mask R-CNN (He et al., 2017) – arXiv:1703.06870
- EfficientNet (Tan & Le, 2019) – arXiv:1905.11946
Natural Language Processing
- BERT (Devlin et al., 2018) – arXiv:1810.04805
- GPT-2 (Radford et al., 2019) – PDF
- RoBERTa (Liu et al., 2019) – arXiv:1907.11692
- T5 (Raffel et al., 2020) – arXiv:1910.10683
- GPT-3 (Brown et al., 2020) – arXiv:2005.14165
Graph Neural Networks
- Graph Attention Networks (Veličković et al., 2018) – arXiv:1710.10903
- Deep Graph Infomax (Veličković et al., 2019) – arXiv:1809.10341
Adversarial Machine Learning
- Adversarial Examples in the Physical World (Kurakin, Goodfellow & Bengio, 2016) – arXiv:1607.02533
- Fast Is Better Than Free (Wong, Rice & Kolter, 2020) – arXiv:2001.03994
Continual Learning
- Overcoming Catastrophic Forgetting (Kirkpatrick et al., 2017) – arXiv:1612.00796
- Learning without Forgetting (Li & Hoiem, 2018) – arXiv:1606.09282