This repository has been archived by the owner on Sep 2, 2024. It is now read-only.

Offline evaluation reference alignment #12

Open
alessandroTorresani opened this issue May 23, 2018 · 12 comments

Comments

@alessandroTorresani
alessandroTorresani commented May 23, 2018

Hi,
I am trying to evaluate a 3D model produced with COLMAP, and I got a little confused about the files needed to run the evaluation correctly.
I ran COLMAP on the Barn dataset, producing a Barn_COLMAP_my.ply file and Barn_COLMAP_SfM_my.log (converted from the binary files using the script "convert_to_logfile.py").
Then I set up the parameters in setup.py, but I have two doubts:

  1. Should I provide both .log files (namely the Python variables "colmap_ref_logfile" and "new_logfile" in run.py) to align the trajectory?
  2. Is the file "Barn_trans.txt" tied only to the provided Barn_COLMAP_SfM.log file, or can I also use it with my own log file (Barn_COLMAP_SfM_my.log)?

Thank you in advance, and congratulations on your work.

Alessandro

@arknapit
Contributor

Hello Alessandro

Thank you for your question. It just made me realize that the crop files are missing from our download page. We will put them up there soon; in the meantime you can find them here

For the evaluation, the best approach is to start with one full example from our training set to check that everything in the pipeline works (as mentioned in the toolbox here).

Just create a folder named “Ignatius” and put these five files in there (downloadable from https://tanksandtemples.org/download/; for the crop file, use the link above for now):

  • Ignatius.ply # Ground-truth geometry acquired by a high-fidelity 3D scanner
  • Ignatius.json # Cropping information to limit the spatial boundary for evaluation
  • Ignatius_COLMAP.ply # Reconstruction to be tested
  • Ignatius_COLMAP_SfM.log # Reference camera poses obtained by a successful reconstruction algorithm
  • Ignatius_trans.txt # Transformation matrix that aligns the reference poses with the ground truth
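
In case it helps, here is a quick sanity-check sketch (the folder path is a placeholder) to verify that all five files are in place before running the evaluation:

import os

# Placeholder path; point this at the folder you just created.
folder = "Ignatius"
expected = [
    "Ignatius.ply",
    "Ignatius.json",
    "Ignatius_COLMAP.ply",
    "Ignatius_COLMAP_SfM.log",
    "Ignatius_trans.txt",
]
missing = [f for f in expected if not os.path.isfile(os.path.join(folder, f))]
print("missing files:", missing or "none")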

Point the paths in setup.py to the corresponding folder and use these postfixes:

MY_LOG_POSTFIX = "_COLMAP_SfM.log"
MY_RECONSTRUCTION_POSTFIX = "_COLMAP.ply"

And just have the Ignatius model in the scene dictionary in setup.py:
scenes_tau_dict = { "Ignatius": 0.003 }

Then python run.py should work and give you the evaluation results, just as shown here

Once this works, you can replace the reconstruction (Ignatius_COLMAP.ply) with your own *.ply file and add the corresponding log file to the folder (don't forget to adapt MY_LOG_POSTFIX accordingly).
However, the *.log file "Ignatius_COLMAP_SfM.log" needs to stay in the folder, since it is needed for the alignment.

Let me know if there are more questions
Greetings
Arno

@alessandroTorresani
Author

Thank you, I can confirm that the test example with Ignatius works.
However, now I want to evaluate a model produced from my own selected frames.
If I understood correctly, I should follow "Case 2: Reconstruction from a video" from the tutorial page. Am I right?
Is there an automated way to generate "mapping.txt", or should I create this file by hand?
Thank you again

@arknapit
Contributor

Hello Alessandro

Yes, you need to create the "mapping.txt" yourself, since it is hard to figure out automatically which video frames people extracted for their own image set. Just check the tutorial for the format of "mapping.txt"; it should be easy to write a script for this (see the sketch below), so you should not have to write the file by hand.
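
As a starting point, here is a rough sketch of such a script. It assumes your extracted frames are named like frame_000123.jpg, where the number is the video frame index, and that each line of mapping.txt is "<image_index> <video_frame_index>"; please verify the exact format against the tutorial page before using it.

import os
import re

# Placeholder: folder containing your extracted video frames.
image_dir = "my_frames"

with open("mapping.txt", "w") as f:
    for image_index, name in enumerate(sorted(os.listdir(image_dir)), start=1):
        # Assumes the frame number is the first integer in the file name.
        frame_index = int(re.search(r"(\d+)", name).group(1))
        f.write("{} {}\n".format(image_index, frame_index))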
However, the "*.log" file, which is produced by our script:
TanksAndTemples/python_toolbox/interpolate_log_file.py
so far only works when submitting a reconstruction from the test-set to our evaluation website. So far, the automatic alignment procedure from our toolbox only works for reconstructions using the prepared image set. We will soon update the toolbox scripts accordingly and let you know when it is done.

greetings
Arno

@alessandroTorresani
Author

It would be amazing if you added this feature.
In the meantime, I wish you good luck with this project, and thank you again.

Alessandro

@arknapit
Contributor

Hello Alessandro

We just updated the toolbox to include automatic registration with individually selected frames from the training-set videos. Just go ahead and create the mapping.txt as mentioned on our tutorial site, then create a *.log file with our interpolate_log_file.py script, which will include an interpolated camera pose for every frame in the video. When you use this *.log file instead of the *.log file with the standard camera poses, our new evaluation script will automatically detect it, and the evaluation should work the same way as before.
Also, you will need to add a mapping reference file to your evaluation folder. For example, if you evaluate Barn, you need to add Barn_mapping_reference.txt to the Barn folder. You can download all the mapping reference files from this link.
An archive of the whole training dataset, already including all necessary files, can be downloaded here (6.2 GB).

For the automatic evaluation to work properly, you will need a reasonable sample size from the video; a minimum of 100 images should work. If you have a small number of images, please also make sure they are sampled roughly uniformly over the whole video length.
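
For instance, a simple way to pick roughly uniform frame indices (total_frames is a placeholder for the actual length of your video):

import numpy as np

total_frames = 12000  # placeholder: your video's actual frame count
num_samples = 100     # the recommended minimum mentioned above
indices = np.linspace(0, total_frames - 1, num_samples).astype(int)
print(indices[:5], "...", indices[-5:])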

Let me know if it works.

@alessandroTorresani
Author

Hi!
Sorry for my late reply. I followed your steps, and after checking the cropped point clouds produced by the evaluation script (precision.ply and recall.ply), I can confirm that the alignment worked. Thank you very much for this feature.

@TruongKhang

Hi @arknapit and @alessandroTorresani,

I have just read the discussion above, but I still can't find the answer to my problem. Do you provide code to help produce the alignment file (.txt) for my own dataset? If not, can you give me some guidance on implementing this alignment algorithm?

Thank you very much!
Khang Truong.

@arknapit
Contributor

Hello Khang

Can you specify your problem?

greetings
Arno

@TruongKhang

@arknapit, my point is that I have my own dataset, which only has a set of images and a ground truth of the scene (it looks like your testing data). Now I want to evaluate the accuracy of the scene reconstructed by COLMAP against the ground truth. As I see in your evaluation code, you already have the alignment files for the training datasets (which store the transformation matrix that aligns the reference poses with the ground truth, e.g. Ignatius_trans.txt). I don't know how to create this file for my own dataset. Your code doesn't support producing this file, right?

@arknapit
Contributor

@TruongKhang, the alignment files and procedures here are specific to the Tanks and Temples dataset, so they won't work with any other dataset, including yours. If you want to register your own MVS reconstruction with your ground truth (GT) in a similar fashion, have a look at the function https://github.com/intel-isl/TanksAndTemples/blob/master/python_toolbox/evaluation/registration.py and check out the Open3D tutorials: http://www.open3d.org/docs/release/tutorial/Basic/icp_registration.html. You will need a good pre-alignment, though (usually done manually). Once your models are well aligned, you should be able to use our evaluation script with only a few lines of code changed. Good luck!
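
As a rough illustration of the Open3D route, here is a minimal ICP sketch; the file names, the initial transform, and the distance threshold are placeholders for your own data and manual pre-alignment (note that older Open3D versions expose the module as o3d.registration instead of o3d.pipelines.registration):

import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("my_reconstruction.ply")  # your MVS output
target = o3d.io.read_point_cloud("my_ground_truth.ply")    # your GT scan

init = np.identity(4)  # replace with your manual pre-alignment
threshold = 0.05       # max correspondence distance, in GT units

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.transformation)  # 4x4 matrix, analogous to *_trans.txt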

@TruongKhang

@arknapit, thank you so much. I'll read more of your code and give it a try. However, do you still have the script used to generate the alignment files? If you do, please share it with me. I want to read it to understand more about how you did it.

@arknapit
Contributor

Cleaning up a bit here. @alessandroTorresani, can this be closed?
