Sunday 24 January 2016

Latest V-HACD source code

The latest version of the V-HACD code is available here: https://github.com/kmammou/v-hacd

19 comments:

  1. Hi Khaled, the osx binaries for testVHACD are crashing with the error:
    dyld: Library not loaded: /opt/local/lib/libgcc/libgomp.1.dylib
    Reason: image not found
    Trace/BPT trap: 5
    I tried rebuilding from source and the resulting exec crashes at 15%. I'm using OSX 10.10.5. Do you have any idea how I may fix this? Very excited to use your tools.
    Many thanks, Mark

  2. Please, could you share the mesh and settings you are using?

    I tested several of the sample meshes from the git repo, using parameters from your Nov 2012 blog post:

    testVHACD --input /.../sledge.off --depth 30 --maxConcavity 0.01 --invertInputFaces 0 --posSampling 64 --angleSampling 32 --posRefine 8 --angleRefine 64 --alpha 0.001 --targetNTrianglesDecimatedMesh 2000

    The compiled version gets stuck at:
    15% [ Approximate Convex Decomposition 0% ] Subdivision level 1 0%

    It works fine on my Windows 10 VM though.

  4. Is it possible to save each of the resulting convex geoms into separate STL files?
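
    A minimal C++ sketch of one way to do this, assuming the V-HACD 2.0 IVHACD interface (GetNConvexHulls/GetConvexHull, with ConvexHull exposing m_points, m_triangles, m_nPoints and m_nTriangles); the helper name and the file naming are made up for illustration:

    // Hypothetical helper: writes every hull returned by V-HACD to its own ASCII STL file.
    #include <cstdio>
    #include "VHACD.h"

    void SaveHullsAsSTL(VHACD::IVHACD* ivhacd, const char* prefix)
    {
        VHACD::IVHACD::ConvexHull ch;
        for (unsigned int i = 0; i < ivhacd->GetNConvexHulls(); ++i)
        {
            ivhacd->GetConvexHull(i, ch);
            char name[256];
            snprintf(name, sizeof(name), "%s_%03u.stl", prefix, i);
            FILE* f = fopen(name, "w");
            if (!f) continue;
            fprintf(f, "solid hull_%u\n", i);
            for (unsigned int t = 0; t < ch.m_nTriangles; ++t)
            {
                const int* tri = &ch.m_triangles[3 * t];
                // STL wants a facet normal; most importers recompute it, so zeros are acceptable here.
                fprintf(f, "  facet normal 0 0 0\n    outer loop\n");
                for (int k = 0; k < 3; ++k)
                {
                    const double* p = &ch.m_points[3 * tri[k]];
                    fprintf(f, "      vertex %f %f %f\n", p[0], p[1], p[2]);
                }
                fprintf(f, "    endloop\n  endfacet\n");
            }
            fprintf(f, "endsolid hull_%u\n", i);
            fclose(f);
        }
    }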

  5. Hi Khaled. Currently, using the HACD implementation in Bullet it takes 10+ minutes to generate some models, while using V-HACD 2.0 it only takes a couple of seconds. Am I doing something wrong? These are my HACD parameters:
    mDeviceHACDProperty.nClusters = 2;
    mDeviceHACDProperty.concavity = 20;
    mDeviceHACDProperty.nVerticesPerCH = 100;
    mDeviceHACDProperty.invert = false;
    mDeviceHACDProperty.addExtraDistPoints = false;
    mDeviceHACDProperty.addNeighboursDistPoints = false;
    mDeviceHACDProperty.addFacesPoints = false;
    Could I use V_HACD 2.0 in Bullet?

    Replies
    1. I have tried different concavities, clusters and verticesPerCh as well

    2. Why do you need to use HACD? Bullet ships with VHACD in Extras/VHACD, right? (See the sketch just below for how it is typically called from C++.)

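    A minimal sketch of feeding V-HACD 2.0 output into Bullet, assuming the IVHACD::Compute/GetConvexHull signatures from the 2.x VHACD.h header; the resolution and concavity values are arbitrary starting points:

    #include <vector>
    #include "VHACD.h"
    #include "btBulletCollisionCommon.h"

    // Hypothetical helper: decomposes a triangle mesh and builds one
    // btConvexHullShape per hull inside a btCompoundShape.
    btCompoundShape* BuildCompoundShape(const std::vector<float>& points,   // x,y,z triplets
                                        const std::vector<int>& triangles)  // index triplets
    {
        VHACD::IVHACD* ivhacd = VHACD::CreateVHACD();
        VHACD::IVHACD::Parameters params;   // defaults are usually a good start
        params.m_resolution = 100000;       // voxelization resolution
        params.m_concavity  = 0.0025;       // lower value -> more, tighter hulls

        btCompoundShape* compound = new btCompoundShape();
        if (ivhacd->Compute(points.data(), 3, (unsigned int)points.size() / 3,
                            triangles.data(), 3, (unsigned int)triangles.size() / 3,
                            params))
        {
            VHACD::IVHACD::ConvexHull ch;
            for (unsigned int i = 0; i < ivhacd->GetNConvexHulls(); ++i)
            {
                ivhacd->GetConvexHull(i, ch);
                btConvexHullShape* hull = new btConvexHullShape();
                for (unsigned int p = 0; p < ch.m_nPoints; ++p)
                    hull->addPoint(btVector3((btScalar)ch.m_points[3 * p + 0],
                                             (btScalar)ch.m_points[3 * p + 1],
                                             (btScalar)ch.m_points[3 * p + 2]), false);
                hull->recalcLocalAabb();
                compound->addChildShape(btTransform::getIdentity(), hull);
            }
        }
        ivhacd->Clean();
        ivhacd->Release();
        return compound;
    }
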
  6. This comment has been removed by the author.

  7. Hi Khaled, I am impressed with the V-HACD code that you have created. I am experimenting with the various settings. Do you have a set of parameters that achieves the highest possible level of detail in the convex decomposition (capturing even small holes in the model)?

  8. Hi Khaled, this work is awesome. After decomposition into convex shapes, the simulation is indeed faster. I am just wondering a little bit further: is it possible to produce only spheres, capsules, and boxes during the decomposition? If so, it would be the ultimate solution for convex decomposition for collision detection.

  9. Hi, is it possible to get the list of original triangles for each convex hull? I know that it is possible to query them afterwards, but is such information available during the compute process?

  10. Hi Khaled.

    About 12 years ago I found your paper on the net and implemented a version of it.

    I see that your library is now open source, and I just want to integrate it as an add-on to my new release.

    I see you made very nice improvements which are really good, like the voxelization, but you also made a few compromises, like adding pieces of code from other libraries.

    I have made some optimizations to your existing code that I can contribute, if you are interested.

    These are the changes that I made:

    1- An option to use a generic convex hull function that runs in time linear in the number of vertices in the hull.
    It takes roughly the same time to generate, say, a 1000-point convex hull from a million-point vertex cloud as it would from a 1000-point vertex cloud. It also generates exact hulls.

    2- I added an option that uses standard C++ threads rather than OpenMP and OpenCL. It doesn't really need them, since a decomposition of a concave mesh now takes just a few seconds rather than dozens or hundreds of seconds, but using the threads it is done in 2 to 3 seconds (the sketch after this list shows the general pattern).

    By using this, I can use an unlimited cluster size, instead of the limitation that you have: const size_t CLUSTER_SIZE = 65536;

    3- I also added a parameter m_concavityToVolumeWeigh, which seems more effective for controlling the resolution of the generated decomposition than the concavity.
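
    Not Julio's actual patch, just an illustration of the general pattern behind point 2: replacing an OpenMP parallel-for with plain std::thread so that no OpenMP or OpenCL runtime is required.

    #include <algorithm>
    #include <functional>
    #include <thread>
    #include <vector>

    // Runs body(0..count-1) across hardware threads, roughly what
    // "#pragma omp parallel for" would do, but in plain C++11.
    void ParallelFor(size_t count, const std::function<void(size_t)>& body)
    {
        const size_t nThreads = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        for (size_t t = 0; t < nThreads; ++t)
        {
            workers.emplace_back([=, &body]()
            {
                // each worker processes a strided subset of the iterations
                for (size_t i = t; i < count; i += nThreads)
                    body(i);
            });
        }
        for (auto& w : workers)
            w.join();
    }

    A loop over clusters would then read ParallelFor(clusterCount, [&](size_t i) { /* process cluster i */ }); where clusterCount stands in for the real count.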

    I also have another suggestion that I think can make your library far faster, without sacrificing quality and without needing the GPU or third-party extensions, just plain vanilla C++.

    Anyway, I made those changes, and they are in my library, but in my version I removed all the third-party stuff.

    I also have a version of your GitHub repo that compiles, with those modifications made conditional under a preprocessor define, USE_GENERIC_CPP_11.

    I can send you a pull request, if you are interested.

    Replies
    1. Hi Julio, just sent you an email. Please let me know if you did not receive it

  11. Oh cool.
    Did you send it to my Google email?
    If you sent it to the sbcglobal one, I have not had that for more than 10 years now.

    I am out now, but I will check very soon.

  12. I just verified my user profile, and my email is the correct one.
    Google does not have it in junk, so I am not sure where the email went.
    If you can, please send the email again.

    Julio

  13. One more thing, FYI:
    I just tested the most complex models that you have in your database of samples.
    The biggest ones you have are the Greek statue and the elephant.

    The result is less than a second for each; in fact, if it were not for the log, it could not be measured without a profiler.

    I loaded the output in Max and compared it with the images in the doc folder, and they seem identical.

    I believe that if all the debug code and the log are removed, and you do the second optimization, it could probably be used for real time, if it was sent to a background thread.

    Anyway, let me know if it could be of any use to you.



  14. I don't think I can see your Google email address. Please could you send me an email at kmamou(at)gmail.com?

  15. Julio,

    I'm currently in the process of rewriting v-hacd. One of my tasks is to rewrite the convex hull code. But if you have already done that I would be happy to look at your code.

    The new version is faster and more robust. It also detects if the source object is already sufficiently convex and can early out, something the old version doesn't do. This new version works entirely off of volume conservation. I have a tweaked plane splitting algorithm which is much faster than the old one.

    Thanks,

    John

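    A rough sketch of the volume-conservation early-out described above, not the actual rewrite: compare the volume of the closed input mesh with the volume of its convex hull and stop early when convexification adds almost nothing. The MeshVolume helper and the 1% tolerance are illustrative assumptions; the hull volume would come from whatever convex hull routine is in use.

    #include <cmath>
    #include <cstdint>

    // Volume of a closed, consistently wound triangle mesh
    // (sum of signed tetrahedra against the origin).
    double MeshVolume(const double* points, const uint32_t* tris, uint32_t triCount)
    {
        double vol = 0.0;
        for (uint32_t t = 0; t < triCount; ++t)
        {
            const double* a = &points[3 * tris[3 * t + 0]];
            const double* b = &points[3 * tris[3 * t + 1]];
            const double* c = &points[3 * tris[3 * t + 2]];
            vol += a[0] * (b[1] * c[2] - b[2] * c[1])
                 - a[1] * (b[0] * c[2] - b[2] * c[0])
                 + a[2] * (b[0] * c[1] - b[1] * c[0]);
        }
        return std::fabs(vol) / 6.0;
    }

    // Early-out test: hullVolume >= meshVolume for a valid hull, so the
    // relative difference measures how much volume convexification adds.
    bool AlreadyConvexEnough(double meshVolume, double hullVolume, double tolerance = 0.01)
    {
        return (hullVolume - meshVolume) <= tolerance * hullVolume;
    }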