<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>1 | Alvin Chan</title><link>https://www.alvinchan.io/publication-type/1/</link><atom:link href="https://www.alvinchan.io/publication-type/1/index.xml" rel="self" type="application/rss+xml"/><description>1</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 01 Oct 2025 00:00:00 +0000</lastBuildDate><image><url>https://www.alvinchan.io/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>1</title><link>https://www.alvinchan.io/publication-type/1/</link></image><item><title>Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning</title><link>https://www.alvinchan.io/publication/zhang-2025-icrl/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/zhang-2025-icrl/</guid><description/></item><item><title>Designing Lipid Nanoparticles Using a Transformer-Based Neural Network</title><link>https://www.alvinchan.io/publication/chan-2025-comet/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/chan-2025-comet/</guid><description/></item><item><title>Deep Extrapolation for Attribute-Enhanced Generation</title><link>https://www.alvinchan.io/publication/chan-2021-deep/</link><pubDate>Fri, 10 Dec 2021 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/chan-2021-deep/</guid><description/></item><item><title>Self-Instantiated Recurrent Units with Dynamic Soft Recursion</title><link>https://www.alvinchan.io/publication/zhang-2021-self/</link><pubDate>Wed, 01 Dec 2021 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/zhang-2021-self/</guid><description/></item><item><title>On Orthogonality Constraints for Transformers</title><link>https://www.alvinchan.io/publication/zhang-2021-orthogonality/</link><pubDate>Sun, 01 Aug 2021 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/zhang-2021-orthogonality/</guid><description/></item><item><title>CoCon: A Self-Supervised Approach for Controlled Text Generation</title><link>https://www.alvinchan.io/publication/chan-cocon-2021/</link><pubDate>Wed, 13 Jan 2021 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/chan-cocon-2021/</guid><description/></item><item><title>Parameterization of Hypercomplex Multiplications</title><link>https://www.alvinchan.io/publication/zhang-2021-parameterization/</link><pubDate>Tue, 12 Jan 2021 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/zhang-2021-parameterization/</guid><description/></item><item><title>RNA Alternative Splicing Prediction with Discrete Compositional Energy Network</title><link>https://www.alvinchan.io/publication/chan-rna-2021/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/chan-rna-2021/</guid><description/></item><item><title>Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder</title><link>https://www.alvinchan.io/publication/chan-poisontext-2020/</link><pubDate>Tue, 06 Oct 2020 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/chan-poisontext-2020/</guid><description/></item><item><title>What it Thinks is Important is Important: Robustness Transfers through Input 
Gradients</title><link>https://www.alvinchan.io/publication/chan-what-2019/</link><pubDate>Sun, 01 Dec 2019 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/chan-what-2019/</guid><description/></item><item><title>Jacobian Adversarially Regularized Networks for Robustness</title><link>https://www.alvinchan.io/publication/chan-jacobian-2019/</link><pubDate>Sun, 01 Sep 2019 00:00:00 +0000</pubDate><guid>https://www.alvinchan.io/publication/chan-jacobian-2019/</guid><description>&lt;!-- We show that training classifiers to produce salient input Jacobian matrices with a GAN-like regularization can boost adversarial robustness. -->
Adversarial examples are crafted with imperceptible perturbations intended to fool neural networks. Against such attacks, adversarial training and its variants stand as the strongest defense to date. Previous studies have pointed out that robust models that have undergone adversarial training tend to produce more salient and interpretable Jacobian matrices than their non-robust counterparts. A natural question is whether a model trained with an objective to produce a salient Jacobian can result in better robustness. This paper answers this question in the affirmative with empirical results. We propose Jacobian Adversarially Regularized Networks (JARN) as a method to optimize the saliency of a classifier's Jacobian by adversarially regularizing it to resemble natural training images. Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial training.</description></item></channel></rss>