Example implementation for the paper: (ICLR Oral) Learning Robust Representations by Projecting Superficial Statistics Out


HaohanWang/HEX


Learning Robust Representations by Projecting Superficial Statistics Out

Example implementation of the paper:

Code structure:

Replication

For the code used to replicate the experiments in the paper, please visit HaohanWang/HEX_experiments

FAQ

  • This method does not seem to converge.

    • As mentioned in the paper, we also had a hard time optimizing AlexNet with our method from scratch; however, we noticed several tricks that can help.

      • We recommend training the network in the standard manner first and then fine-tuning with our method.
      • The initialization of NGLCM plays an important role; in fact, it matters more than the optimization process of NGLCM itself. Therefore, if you notice that the initialization scales the representations too much and leads to NaN, we recommend freezing the optimization of NGLCM (it then remains at least a standard GLCM) rather than altering the initialization. Another useful strategy (thanks to Songwei) is to normalize the representations to avoid scaling.
  • This method does not seem to help improve the performance of my CNN.

    • Since CNNs are known to take advantage of superficial (non-semantic) information in the data, we do not guarantee that our method improves performance when the testing and training data come from the same distribution (where simply predicting from superficial information also helps). In other words, our method only excels in settings where learning semantic information plays an important role, such as domain adaptation/generalization.
  • It seems hard to apply this method to other applications.

    • Unfortunately, it seems so, especially as the scale of experiments grows larger every day. Also, the method is specifically designed for the superficial features that NGLCM can handle, so it should not be a surprise if it fails on experiments outside that scope.
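For reference, the core step of the paper — projecting the superficial-statistics representation out of the task representation — can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the repository's implementation: the names `project_out`, `f_task`, and `f_superficial` are hypothetical, and the row-wise normalization corresponds to the scaling tip (thanks to Songwei) mentioned above.

```python
import numpy as np

def project_out(f_task, f_superficial, eps=1e-6):
    """Sketch of projecting superficial statistics out of a representation.

    f_task:        (n, d) batch of task-network representations
    f_superficial: (n, k) batch of superficial (e.g. NGLCM-style) features

    Returns the component of f_task orthogonal (along the batch
    dimension) to the column space of f_superficial.
    """
    # Row-wise normalization to avoid the scaling blow-ups / NaNs
    # discussed in the FAQ above.
    f_task = f_task / (np.linalg.norm(f_task, axis=1, keepdims=True) + eps)
    g = f_superficial / (np.linalg.norm(f_superficial, axis=1, keepdims=True) + eps)

    # Remove the projection onto the column space of g:
    #   f_robust = (I - G (G^T G)^{-1} G^T) f_task
    # A small ridge term keeps the Gram matrix invertible.
    gram_inv = np.linalg.inv(g.T @ g + eps * np.eye(g.shape[1]))
    return f_task - g @ (gram_inv @ (g.T @ f_task))
```

The returned representation is (approximately) orthogonal to the superficial features, so a classifier trained on it cannot rely on them; in the paper this projection is applied to the network's logits during training.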

Bibtex

@inproceedings{
wang2018learning,
title={Learning Robust Representations by Projecting Superficial Statistics Out},
author={Haohan Wang and Zexue He and Zachary L. Lipton and Eric P. Xing},
booktitle={International Conference on Learning Representations},
year={2019},
url={https://openreview.net/forum?id=rJEjjoR9K7},
}

Contact

Other Resources
