- Research article, December 2024
  Customizing Text-to-Image Models with a Single Image Pair
  SA '24: SIGGRAPH Asia 2024 Conference Papers, Article No. 6, Pages 1–13. https://doi.org/10.1145/3680528.3687642
  Art reinterpretation is the practice of creating a variation of a reference work, making a paired artwork that exhibits a distinct artistic style. We ask if such an image pair can be used to customize a generative model to capture the demonstrated ...
- Research article, December 2024
  Customizing Text-to-Image Diffusion with Object Viewpoint Control
  SA '24: SIGGRAPH Asia 2024 Conference Papers, Article No. 7, Pages 1–13. https://doi.org/10.1145/3680528.3687564
  Model customization introduces new concepts to existing text-to-image models, enabling the generation of these new concepts/objects in novel contexts. However, such methods lack accurate camera view control with respect to the new object, and users must ...
- Research article, December 2023
  Content-based Search for Deep Generative Models
  SA '23: SIGGRAPH Asia 2023 Conference Papers, Article No. 71, Pages 1–12. https://doi.org/10.1145/3610548.3618189
  The growing proliferation of customized and pretrained generative models has made it infeasible for a user to be fully cognizant of every model in existence. To address this need, we introduce the task of content-based model search: given a query and a ...
- Article, August 2020
- Research article, April 2020
  ShapeVis: High-dimensional Data Visualization at Scale
  WWW '20: Proceedings of The Web Conference 2020, Pages 2920–2926. https://doi.org/10.1145/3366423.3380058
  We present ShapeVis, a scalable visualization technique for point cloud data inspired from topological data analysis. Our method captures the underlying geometric and topological structure of the data in a compressed graphical representation. Much ...
- Article, August 2019
  Harnessing the vulnerability of latent layers in adversarially trained models
  Nupur Kumari, Mayank Singh, Abhishek Sinha, Harshitha Machiraju, Balaji Krishnamurthy, Vineeth N. Balasubramanian
  IJCAI'19: Proceedings of the 28th International Joint Conference on Artificial Intelligence, Pages 2779–2785
  Neural networks are vulnerable to adversarial attacks: small, visually imperceptible crafted noise which, when added to the input, drastically changes the output. The most effective method of defending against adversarial attacks is to use the methodology ...
- Article, February 2019
  Understanding Adversarial Space Through the Lens of Attribution
  Neural networks have been shown to be vulnerable to adversarial perturbations. Although adversarially crafted examples look visually similar to the unaltered original image, neural networks behave abnormally on these modified images. Image ...