42nd SIGGRAPH 2015: Los Angeles, CA, USA - Posters Proceedings
- Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH '15, Los Angeles, CA, USA, August 9-13, 2015, Posters Proceedings. ACM 2015, ISBN 978-1-4503-3632-1
- Chun-Chia Chiu, Yi-Hsiang Lo, Wei-Ting Ruan, Cheng-Han Yang, Ruen-Rone Lee, Hung-Kuo Chu: Continuous circular scribble arts. 1:1
- Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi: Crowd-powered parameter analysis for computational design exploration. 2:1
- Xiang 'Anthony' Chen, Stelian Coros, Jennifer Mankoff, Scott E. Hudson: Encore: 3D printed augmentation of everyday objects with printed-over, affixed and interlocked attachments. 3:1
- Kazutaka Nakashima, Takeo Igarashi: Extraction of a smooth surface from voxels preserving sharp creases. 4:1
- Chengcheng Tang, Xiang Sun, Alexandra Gomes, Johannes Wallner, Helmut Pottmann: Form-finding with polyhedral meshes made simple. 5:1
- Rukmini Goswami, Tim Tregubov, Lorie Loeb: FrameShift: shift your attention, shift the story. 6:1
- EunJin Kim, Hyeon-Jeong Suk: Hue extraction and tone match: generating a theme color to enhance the emotional quality of an image. 7:1
- Azusa Mama, Yuki Morimoto, Katsuto Nakajima: Interactive tree illustration generation system. 8:1
- Raf Ramakers, Kashyap Todi, Kris Luyten: PaperPulse: an integrated approach for embedding electronics in paper designs. 9:1
- Yuki Igarashi, Jun Mitani: Patchy: an interactive patchwork design system. 10:1
- Shinji Mizuno, Marino Isoda, Rei Ito, Mei Okamoto, Momoko Kondo, Saya Sugiura, Yuki Nakatani, Motomi Hirose: Sketch dance stage. 11:1
- Daria Tsoupikova, Scott Rettberg, Roderick Coover, Arthur Nishimoto: The battle for hearts and minds: interrogation and torture in the age of war. 12:1
- Rébecca Kleinberger: V3: an interactive real-time visualization of vocal vibrations. 13:1
- Shogo Fukushima, Takeshi Naemura: Wobble strings: spatially divided stroboscopic effect for augmenting wobbly motion of stringed instruments. 14:1
- Sang-won Leigh, Harshit Agrawal, Pattie Maes: Z-drawing: a flying agent system for computer-assisted drawing. 15:1
- Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, Sho Shimamura, Kai Kunze, Masahiko Inami, Maki Sugimoto: AffectiveWear: toward recognizing facial expression. 16:1
- Hugo Talbot, Frédérick Roy, Stéphane Cotin: Augmented reality for cryoablation procedures. 17:1
- Jun Nishida, Hikaru Takatori, Kosuke Sato, Kenji Suzuki: CHILDHOOD: wearable suit for augmented child experience. 18:1
- Mark T. Bolas, Ashok Kuruvilla, Shravani Chintalapudi, Fernando Rabelo, Vangelis Lympouridis, Christine Barron, Evan A. Suma, Catalina Matamoros, Cristina Brous, Alicja Jasina, Yawen Zheng, Andrew Jones, Paul E. Debevec, David M. Krum: Creating near-field VR using stop motion characters and a touch of light-field rendering. 19:1
- Liang-Chen Wu, Jia-Ye Li, Yu-Hsuan Huang, Ming Ouhyoung: First-person view animation editing utilizing video see-through augmented reality. 20:1
- Michael Saenz, Joshua Strunk, Kelly Maset, Jinsil Hwaryoung Seo, Erica Malone: FlexAR: anatomy education through kinetic tangible augmented reality. 21:1
- Nazim Haouchine, Alexandre Bilger, Jérémie Dequidt, Stephane Cotin: Fracture in augmented reality. 22:1
- Toshiaki Nakasu, Tsukasa Ike, Kazunori Imoto, Yasunobu Yamauchi: Hands-free gesture operation for maintenance work using finger-mounted acceleration sensor. 23:1
- Bruno Marques, Nazim Haouchine, Rosalie Plantefève, Stephane Cotin: Improving depth perception during surgical augmented reality. 24:1
- Prashanth Bollam, Eesha Gothwal, G. B. C. S. Tejaswi Vinnakota, Shailesh Kumar, Soumyajit Deb: Mobile collaborative augmented reality with real-time AR/VR switching. 25:1
- Xing Zhang, Umur A. Ciftci, Lijun Yin: Mouth gesture based emotion awareness and interaction in virtual reality. 26:1
- Toshikazu Ohshima, Shun Kawaguchi, Yuma Tanaka: MR coral sea evolved: mixed reality aquarium with physical MR displays. 27:1
- Yong Yi Lee, Junho Choi, Yong Hwi Kim, Jong Hun Lee, Moon Gu Son, Bilal Ahmed, Kwan H. Lee: RiSE: reflectance transformation imaging in spatial augmented reality for exhibition of cultural heritage. 28:1
- Masasuke Yasumoto, Takehiro Teraoka: Shadow shooter: 360-degree all-around virtual 3D interactive content. 29:1
- Paul E. Debevec, Greg Downing, Mark T. Bolas, Hsuen-Yueh Peng, Jules Urbach: Spherical light field environment capture for virtual reality using a motorized pan/tilt head and offset camera. 30:1
- Stefano Scheggi, Leonardo Meli, Claudio Pacchierotti, Domenico Prattichizzo: Touch the virtual reality: using the leap motion controller for hand tracking and wearable tactile devices for immersive haptic rendering. 31:1
- Seunghyun Woo, Daeyun An, Jongmin Oh, Gibeom Hong: WAOH: virtual automotive HMI evaluation tool. 32:1
- Nobuki Yoda, Takeo Igarashi: Decomposition of 32 bpp into 16 bpp textures with alpha. 33:1
- Shaohui Jiao, Xiaofeng Tong, Eric Li, Wenlong Li: Dynamic fur on mobile using textured offset surfaces. 34:1
- Kai-Wen Liu, I-Peng Lin, Shih-Wei Sun, Wen-Huang Cheng, Xiaoniu Su-Chu Hsu: G-spacing: a gyro sensor based relative 3D space positioning scheme. 35:1
- Jinhong Park, Minkyu Kim, Sunho Ki, Youngduke Seo, Chulho Shin: Half frame forwarding: frame-rate up conversion for tiled rendering GPU. 36:1
- Antoinette Leanna Bumatay, Jinsil Hwaryoung Seo: Mobile haptic system design to evoke relaxation through paced breathing. 37:1
- Ravi Krishnaswamy: Performance and precision: mobile solutions for high quality engineering drawings. 38:1
- Kristian Sons, Felix Klein, Jan Sutter, Philipp Slusallek: The XML3D architecture. 39:1
- Nobuhisa Hanamitsu, Kanata Nakamura, M. H. D. Yamen Saraiji, Kouta Minamizawa, Susumu Tachi: Twech: a mobile platform to search and share visuo-tactile experiences. 40:1
- Masasuke Yasumoto, Takehiro Teraoka: VISTouch. 41:1
- Haruki Sato, Tatsunori Hirai, Tomoyasu Nakano, Masataka Goto, Shigeo Morishima: A music video authoring system synchronizing climax of video clips and music via rearrangement of musical bars. 42:1
- Ergun Akleman, Siran Liu, Donald H. House: Art directed rendering & shading using control images. 43:1
- Hiroki Kagiyama, Masahide Kawai, Daiki Kuwahara, Takuya Kato, Shigeo Morishima: Automatic synthesis of eye and head animation according to duration and point of gaze. 44:1
- Shugo Yamaguchi, Chie Furusawa, Takuya Kato, Tsukasa Fukusato, Shigeo Morishima: BGMaker: example-based anime background image creation from a photograph. 45:1
- Siran Liu, Ergun Akleman: Chinese ink and brush painting with reflections. 46:1
- Jonah Friedman, Andrew C. Jones: Fully automatic ID mattes with support for motion blur and transparency. 47:1
- Benjamin Knowles, Oleg Fryazinov: Increasing realism of animated grass in real-time game environments. 48:1
- Seungbae Bang, Byungkuk Choi, Roger Blanco Ribera, Meekyoung Kim, Sung-Hee Lee, Jun-yong Noh: Interactive rigging. 49:1
- Simon Pabst, Hansung Kim, Lukás Polok, Viorela Ila, Ted Waine, Adrian Hilton, Jeff Clifford: Jigsaw: multi-modal big data management in digital film production. 50:1
- Daniel Camozzato, Leandro Lorenzett Dihl, Ivan Silveira, Fernando Marson, Soraia Raupp Musse: Procedural floor plan generation from building sketches. 51:1
- Chun-Kai Huang, Yi-Ling Chen, I-Chao Shen, Bing-Yu Chen: Retargeting 3D objects and scenes. 52:1
- Yu Wang, Marc Olano: Rigid fluid. 53:1
- Katsuhisa Kanazawa, Ryoma Tanabe, Tomoaki Moriya, Tokiichiro Takahashi: Rust aging simulation considering object's geometries. 54:1
- I. Chiang, Po-Han Lin, Yuan-Hung Chang, Ming Ouhyoung: Synthesizing close combat using sequential Monte Carlo. 55:1
- Jaehwan Kim, Jongyoul Park, Kyoung Park: UnAMT: unsupervised adaptive matting tool for large-scale object collections. 56:1
- Naoki Nozawa, Daiki Kuwahara, Shigeo Morishima: 3D face reconstruction from a single non-frontal face image. 57:1
- Toshihiko Yamasaki, Yusuke Nakano, Kiyoharu Aizawa: A prediction model on 3D model compression and its printed quality based on subjective study. 58:1
- Toru Kawanabe, Tomoko Hashida: atmoRefractor: spatial display by controlling heat haze. 59:1
- Tony Tung: Augmented dynamic shape for live high quality rendering. 60:1
- Nobuhiko Mukai, Naoki Mita, Youngha Chang: Bubble rupture simulation by considering high density ratio. 61:1
- Yajie Yan, Tao Ju, David Letscher, Erin W. Chambers: Burning the medial axis. 62:1
- Hisashi Watanabe, Toshiya Fujii, Tatsuya Nakamura, Tsuguhiro Korenaga: Color perception difference: white and gold, or black and blue? 63:1
- Yang Kang, Chi Xu, Shujin Lin, Songhua Xu, Xiaonan Luo, Qiang Chen: Component segmentation of sketches used in 3D model retrieval. 64:1
- Afsaneh Rafighi, Sahand Seifi, Oscar Meruvia Pastor: Continuous and automatic registration of live RGBD video streams with partial overlapping views. 65:1
- Michelle Holloway, Tao Ju, Cindy Grimm: Contour guided surface deformation for volumetric segmentation. 66:1
- Peihong Guo, Ergun Akleman, Ying He, Xiaoning Wang, Wei Liu: Critical points with discrete Morse theory. 67:1
- Nahomi Maki, Kazuhisa Yanaka: Display of diamond dispersion using wavelength-division rendering and integral photography. 68:1
- Slim Ouni, Guillaume Gris: Dynamic realistic lip animation using a limited number of control points. 69:1
- Byeongjun Choi, Woong Seo, Insung Ihm: Enhancing time and space efficiency of kd-tree for ray-tracing static scenes. 70:1
- Hisataka Suzuki, Rex Hsieh, Ryotaro Tsuda, Akihiko Shirai: ExPixel FPGA: multiplex hidden imagery for HDMI video sources. 71:1
- Yoichi Ochiai, Kota Kumagai, Takayuki Hoshi, Jun Rekimoto, Satoshi Hasegawa, Yoshio Hayasaki: Fairy lights in femtoseconds: aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields. 72:1
- Jérémy Levallois, David Coeurjolly, Jacques-Olivier Lachaud: Feature extraction on digital snow microstructures. 73:1
- Anthousis Andreadis, Robert Gregor, Ivan Sipiran, Pavlos Mavridis, Georgios Papaioannou, Tobias Schreck: Fractured 3D object restoration and completion. 74:1
- Caigui Jiang, Chengcheng Tang, Jun Wang, Johannes Wallner, Helmut Pottmann: Freeform honeycomb structures and lobel frames. 75:1
- Antoine Toisoul, Abhijeet Ghosh: Image based relighting using room lighting basis. 76:1
- Daniel Rakita, Tomislav Pejsa, Bilge Mutlu, Michael Gleicher: Inferring gaze shifts from captured body motion. 77:1
- Hiroki Yamamoto, Hajime Kajita, Hanyuool Kim, Naoya Koizumi, Takeshi Naemura: Mid-air plus: a 2.5D cross-sectional mid-air display with transparency control. 78:1
- Takuya Kato, Akira Kato, Naomi Okamura, Taro Kanai, Ryo Suzuki, Yuko Shirai: Musasabi: 2D/3D intuitive and detailed visualization system for the forest. 79:1
- Beibei Wang, Xiangxu Meng, Tamy Boubekeur: Non-diffuse effects for point-based global illumination. 80:1
- Hajime Kajita, Naoya Koizumi, Takeshi Naemura: OpaqueLusion: opaque mid-air images using dynamic mask for occlusion expression. 81:1
- Christian Hafner, Przemyslaw Musialski, Thomas Auzinger, Michael Wimmer, Leif Kobbelt: Optimization of natural frequencies for fabrication-aware shape modeling. 82:1
- Junichi Sugita, Tokiichiro Takahashi: Paint-like compositing based on RYB color model. 83:1
- Naoki Hashimoto, Koki Kosaka: Photometric compensation for practical and complex textures. 84:1
- Takefumi Hiraki, Issei Takahashi, Shotaro Goto, Shogo Fukushima, Takeshi Naemura: Phygital field: integrated field with visible images and robot swarm controlled by invisible images. 85:1
- Ari Rapkin Blenkhorn: Real-time rendering of atmospheric glories. 86:1
- Hiroyuki Kubo, Kohe Tokoi, Yasuhiro Mukaigawa: Real-time rendering of subsurface scattering according to translucency magnitude. 87:1
- Kang Zhang, Wuyi Yu, Mary Manhein, Warren N. Waggenspack, Xin Li: Reassembling 3D thin shells using integrated template guidance and fracture region matching. 88:1
- Keita Sekijima, Hiroya Tanaka: Reconfigurable three-dimensional prototype system using digital materials. 89:1
- Francisco Inácio, Jan P. Springer: Reducing geometry-processing overhead for novel viewpoint creation. 90:1
- Fumiya Narita, Shunsuke Saito, Takuya Kato, Tsukasa Fukusato, Shigeo Morishima: Texture preserving garment transfer. 91:1
- Paul Kilgo, Jerry Tessendorf: Toward validation of a Monte Carlo rendering technique. 92:1
- Caleb Brose, Martin Thuo, Jeremy W. Sheaffer: Tracking water droplets under descent and deformation. 93:1
- Xueming Yu, Shanhe Wang, Jay Busch, Thai Phan, Tracy McSheery, Mark T. Bolas, Paul E. Debevec: Virtual headcam: pan/tilt mirror-based facial performance tracking. 94:1
- Kendra A. Schmal, Christoph Thomas, Judy Cushing, Genevieve Orr: Visualizing valley wind flow. 95:1