One, two … many

It took me a while to get started on my second blog post, blame the bugs! Now that the showstoppers are fixed, I was able to create a new demo video for you to enjoy. I will explain the new features below. Forgive my unorganized talk and my funny German accent ;)


The most important new feature I implemented is the ability to handle data sets, as opposed to single data elements (subsequently called “singletons”). To give you an idea of what this means, let’s first take a brief look at how the current node trees work.

Shader and texture node trees are used to calculate the values of a render or texture sample, respectively. They are basically functional expressions used to calculate a set of singleton values. The input to these trees (e.g. the “Geometry” node in shader trees) consists of exactly one value per socket:

  • the UV coordinates of the shader sample
  • the camera distance for that sample
  • the coordinates of a texture sample

All nodes in these trees consequently also calculate exactly one value per output socket. The important thing to note here is that although the tree is evaluated for a large number of shader/texture samples, this looping is not part of the tree itself, but of the underlying render system.
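
To make the singleton model concrete, here is a minimal sketch in C. This is hypothetical code, not Blender’s actual API: `SampleInput`, `eval_tree` and `render_samples` are made-up names, and the “tree” is collapsed into a single function.

```c
#include <assert.h>

/* Hypothetical sketch, not Blender's actual API: a shader tree acts as a
 * pure function from one sample's singleton inputs to one output value.
 * The loop over samples belongs to the render system, not to the tree. */
typedef struct {
    float camera_dist;  /* one value per socket, per sample */
} SampleInput;

/* Stand-in for evaluating the whole tree for a single sample,
 * e.g. a node that fades a factor out with camera distance. */
static float eval_tree(const SampleInput *in)
{
    return 1.0f / (1.0f + in->camera_dist);
}

/* The render system, outside the tree, runs the tree once per sample. */
static void render_samples(const SampleInput *samples, float *out, int n)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = eval_tree(&samples[i]);
}
```

The point of the sketch is that `eval_tree` knows nothing about how many samples exist; the iteration lives entirely in `render_samples`.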

For compositor trees, the data that moves around the tree is a little more sophisticated: each socket basically holds a full image buffer. The type of the socket (value, vector or color) just tells the compositor nodes how to interpret the pixel values. Most simple nodes (like color mixing) work on individual pixels, but some also operate on neighbouring pixels (e.g. Blur) or even the full set of pixels (e.g. Defocus). The data set for all nodes in a compositor tree is basically the same, however: they all operate on an image the size of the render layer. In a sense, the compositor data can be seen as singleton data, just like the single values, vectors and colors in shader and texture trees.
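
A minimal sketch of that model (again hypothetical names, not the actual compositor code): a socket carries a whole buffer, and a simple per-pixel node just loops over it once.

```c
#include <assert.h>

/* Hypothetical sketch: a compositor socket effectively holds a full image
 * buffer; the socket type only tells a node how to interpret the pixels. */
typedef struct {
    int width, height;
    float *rect;  /* width * height pixel values */
} ImageBuf;

/* A simple per-pixel node: linear mix of two buffers of equal size.
 * out = (1 - fac) * a + fac * b, applied to every pixel. */
static void node_mix(const ImageBuf *a, const ImageBuf *b, float fac,
                     ImageBuf *out)
{
    int i, n = a->width * a->height;
    for (i = 0; i < n; i++)
        out->rect[i] = (1.0f - fac) * a->rect[i] + fac * b->rect[i];
}
```

A node like Blur would read neighbouring entries of `rect` instead of just index `i`, but the socket payload, one full buffer per socket, stays the same.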

For simulation trees, the situation is different: a simulation node tree should be able to act on very different types of objects. A node socket can hold singleton data, such as the position of an object, but also “collection” data, e.g. vertex locations, face normals, particle masses, etc. This requires a much more generic system of data storage and reference than the other node tree types with their very specific purposes. For this reason, each socket now has, in addition to its data type, a context type and a context source. The context type can be singleton, vertex, edge, face, particle or any other kind of collection used in Blender. A node will generally perform its operation for all elements in the data set plugged into its inputs: an “Add” node can not only add two singletons, but also two larger sets of values. The only restriction is that data from different contexts cannot be combined: you cannot simply add the face normals of a mesh to the positions of particles, because in general the data sets have different sizes.
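
To illustrate the idea (hypothetical types and names; the real implementation differs): a socket’s payload carries its context alongside the data, and an elementwise node first checks that the contexts match before doing any work.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the data-set idea: each socket value carries a
 * context describing which collection its elements belong to. */
typedef enum { CTX_SINGLETON, CTX_VERTEX, CTX_FACE, CTX_PARTICLE } ContextType;

typedef struct {
    ContextType context;
    size_t size;   /* 1 for singletons, element count for collections */
    float *data;
} DataSet;

/* An "Add" node operates elementwise over whole sets, but refuses to
 * combine sets from different contexts (their sizes differ in general).
 * Returns 1 on success, 0 on a context mismatch. */
static int node_add(const DataSet *a, const DataSet *b, DataSet *out)
{
    size_t i;
    if (a->context != b->context)
        return 0;  /* e.g. face normals + particle positions: rejected */
    out->context = a->context;
    out->size = a->size;
    for (i = 0; i < a->size; i++)
        out->data[i] = a->data[i] + b->data[i];
    return 1;
}
```

The same `node_add` works unchanged for two singletons (`size == 1`) and for two particle sets of a thousand elements; only the mismatched-context case is ruled out.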

EDIT:
I can see this question coming up more often, so I’d like to answer it here:
Yes, there are ways to combine data from different sets; it’s just not as simple as an Add node with particle locations on one input and face data on the other. If you want to, let’s say, assign the color of a texture to a particle, you’d use a special “GetClosestSurfacePoint” node (or similar), which takes the particle location and finds a point on the surface, then calculates that point’s texture color and outputs it (in the particle context!). A similar problem is finding “neighbouring” particles, which also gets a special node (one that would use a BVH tree internally).

Batch Processing

There are two extremes when it comes to copying data from node to node and processing it:

  1. Process each element one-by-one (for 1000 particles, execute the tree 1000 times)
  2. Process all elements in one go (execute the tree 1 time, but with the full data set)

Both of these methods have disadvantages:

  1. Executing the tree has an overhead, which accumulates into significant delays when using single-element processing. This also prevents us from using SIMD instructions (more on that later)
  2. Storing the full data set multiple times can easily fill your memory for larger simulations (as currently happens with compositor trees)

The solution to this problem is a batch processing system: each data set is split into a number of fixed-size batches. The number of batches is much smaller than the number of elements in the data set, while the size of a batch is still small enough to avoid memory problems. Another advantage is that this allows the efficient use of multiple threads: each thread processes one batch of data at a time for a particular node.
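
A single-threaded sketch of the batching itself (hypothetical code; the batch size of 256 is an arbitrary illustration, and each `process_batch` call is exactly the unit of work a worker thread would pick up):

```c
#include <assert.h>
#include <stddef.h>

/* Arbitrary fixed batch size, for illustration only. */
#define BATCH_SIZE 256

/* Stand-in for one node's per-element work on a single batch. In the
 * multithreaded version, each call would run on a worker thread. */
static void process_batch(float *data, size_t count)
{
    size_t i;
    for (i = 0; i < count; i++)
        data[i] *= 2.0f;
}

/* Cuts a data set into fixed-size batches: far fewer tree executions
 * than one call per element, and far less peak memory than storing the
 * whole set per node. Returns the number of batches processed. */
static size_t process_in_batches(float *data, size_t total)
{
    size_t start, count, nbatches = 0;
    for (start = 0; start < total; start += BATCH_SIZE) {
        count = total - start;
        if (count > BATCH_SIZE)
            count = BATCH_SIZE;
        process_batch(data + start, count);
        nbatches++;
    }
    return nbatches;
}
```

For 1000 particles this yields four batches (256 + 256 + 256 + 232) instead of 1000 single-element executions, while a node only ever needs one batch-sized buffer at a time.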

I will stop here to finally get this post online and continue in future posts.

50 Comments

  1. Carl Erik says:

    Extremely cool dude.. :D Me likes a lot! Thanks for contributing to making Blender even cooler than it already is.. :)
    -c-

  2. Roger says:

    Amazing, simply amazing!!!! And a very interesting clarification about the node system. One question: as you said in the post, you will not be able to interoperate between, for example, a face normals data set and a vertex positions data set. This is logical, but could there be a “For loop” node or some iterator using vector decompose to interoperate between data sets of different sizes?

    Anyway, the current state of the project is so useful that I can’t ask for more!
    Thank you so much!

  3. tstscr says:

    Amazingly cool.
    I have a question regarding your getData Nodes:
    At the moment it seems you have hard-coded the possible choices of data to get (e.g. vert locations).
    Are you planning to do that for all the data, or maybe implement something where one can simply type the data path in? So one can access all the possible things stored in the objects, even possibly custom RNA data paths?
    Very nice work. You can imagine me excited :)

    • phonybone says:

      You’re right, that is among the most important features still missing. Your idea is also pretty much what I have in mind: use a string reference (RNA-path-ish) to describe the source of the data, then allow the user to select properties from that source to be displayed as input/output sockets. This will greatly improve usability and make it much easier to create reusable node groups. I will cover this in the following blog post(s).

  4. MAx Hammond says:

    Can this work for Objects ie Rotation Scale Translation?

    So you could get the distance an object has traveled since frame 001 / the diameter of the object * 360 to get the object to rotate like a wheel, or something like that.

    I only ask as this would be an awesome way to redo my Epicyclic gear system :)
    Link:http://vimeo.com/4697569

    Or maybe even My Ant Rig Link:http://vimeo.com/14277043 if it worked for pose bones!

    Basically I’m saying I love it and can’t wait to try it out!

    • phonybone says:

      I haven’t added a quaternion or matrix socket type yet, but when this is in, object rotations are also possible (adding a socket type is fairly simple). The get/set node sockets should be based on RNA properties, so that virtually any object property is available in the node trees.

  5. Malcolm says:

    As a Softimage ICE guy, I’m absolutely amazed by your work here!
    I’m thinking about visual feedback of states and values (like “show values” on ports of ICE). Is it possible here?
    And I see an intelligent caching system here in the future… (the ICE one is very user-unfriendly).
    Oh, I have another question: is it possible to pass some node data in or out to the shader system?

    thanks man.

    • phonybone says:

      I haven’t thought about details yet, but a visual “debugging” system would be desirable. More basic features will have to come first, though. Hm, I’m not sure what you mean by “pass some node data in or out to the shader system”?

      • Malcolm says:

        Oh, I mean getting data from and setting data to shader nodes…
        For example, to set object color based on some DSP (Data Set Processing, oh, pretty name!) code, or to get some data from shaders… like blend mode results for image processing, and use this data inside Blender DSP (OK to use this name scheme?!). Thanks for your attention, Lukas! I’m following your project enthusiastically.

      • Malcolm says:

        Hi Lukas, I recreated your setup in ICE just for fun, but I see that you don’t reuse your vector decompose, you decompose two times. Any reason for this?
        The setup runs perfectly, without adaptation (I only reused the 3D-vector-to-scalar “vector decompose”), but I don’t know what your POWER node does… can you explain?
        thanks man.

  6. joshwedlake says:

    This is very cool. I guess data sets will/are also enabling parallel computations? Is this all CPU-based at the moment, and is it multithreaded? Thanks a lot!

    • phonybone says:

      The system is fully multithreaded and CPU-based :)
      Doing this on the GPU might be an option for the future, but a big part of the code would have to be ported over, so for me it would be a waste of time at this point. With SIMD the CPU implementation should be fast enough even for complex simulations (which would have to be baked & cached anyway; this is not meant for realtime calculations).

  7. Lyle Walsh says:

    Would it be feasible to include some sort of buffer to facilitate iterative calculations? To my limited understanding, as Blender exists today, only motion blur retains multiple frames (subframes) of information in the compositor. For multi-frame effects, such as ghosting, we have to use the VSE, which lacks most of the power of the compositor.

    • phonybone says:

      Well, it wouldn’t be a problem to simply execute the node tree several times to make use of subframe calculations, storing the resulting particle or mesh state after each execution. If you want to use these intermediate results for rendering, though, this would require support in other parts of Blender which are not part of this work.

      • Roger says:

        Just the ability to process subframes is really enough; without subframe calculation it will not be possible to program stable custom solvers in nodes or other critical calculations that require more accurate results… Did you read that? Custom solvers! I am loving this project so much! :)

        What would really make this totally powerful is the ability to increase/decrease the subframe iterations at run time to make adaptive time step simulations; that’s really important.
        Anyway, this kind of request is for a second phase, I think!
        Thanks for everything!

  8. Seo4you says:

    Small patch
    http://www.pasteall.org/15170/diff

    I downloaded your code but I can’t compile it in Visual Studio 2008.

    • phonybone says:

      Thanks for the patch :)
      I work on Linux, so I haven’t updated the project files for MSVC yet. It would be great if you could fix this, because I don’t have a build system for Windows set up. You’d need to add these folders:
      source/blender/editors/particleset
      source/blender/nodes/intern/SIM_nodes
      There may be more to do, but I’m not familiar with the MSVC build files.

      • Seo4you says:

        Next bug, in static int set_update_flag_recursive(bNodeTree *ntree):

        Is:
        int update;

        Should be:
        int update = 0;
        // without this I get the exception “The variable ‘update’ is being used without being initialized”

        • phonybone says:

          Ah yes, that’s right, thanks again. The worst thing that could happen is unnecessary updates, but it’s good to have it fixed anyway :)

          • Seo4you says:

            [1]
            You’re welcome ;)

            [2]
            Can you publish or send your examples?
            demo_modyfier_01.blend
            demo_particles_08.blend?

            [3]
            What is “Debug print”?

            [4]
            Have you thought about connecting Python scripts in the node editor (in your chain processing)?

  9. toontje says:

    I’m glad that this is possible now, because you can build feedbacks here and there and voilà, you’ve got oscillating systems, filters/dampers, fractals, etc. It’s very interesting to see where this is going…

  10. Gianmichele says:

    This is really interesting. Have you got plan to make these nodes multithreaded?

  11. Wow, I am so excited. This gives us non-scripters a chance to create behaviors.

  12. Edward says:

    Amazing! I’m playing now with the particles-2010 build… great work!

  13. Rodrigo says:

    This is awesome! It reminds me of Houdini.
    This is so powerful, and that’s what I believe to be the next generation of 3D programs, indeed. NODES!
    Great work!
    Congratulations!

    • Litilu says:

      It would be neat to have an API or a way to use this node system to make user interfaces for python scripts (instead of the current way with the regular UI)

      • phonybone says:

        Yeah, it would be neat to have node types (or even completely dynamic nodes) defined in Python scripts, but you have to keep in mind that nodes make heavy use of C function pointers for their execution code. It might be possible to use RNA function properties for this. As a first step it’s desirable to make the definitions more localized, so you don’t have to edit little pieces of code scattered over 5-7 files.

  14. delic says:

    What about a donate link in the blog ?

  15. [...] is to be found. Unfortunately there are no test builds of ICE for Blender yet, but the developer Lukas Tönne [phonybone] continues to work diligently on the Autodesk [...]

  16. [...] of reference Phonybone’s Blender Blog [Blender] Platform : Windows MacOSX Linux Correspondence Software : [...]

  17. mb10 says:

    Unfortunately, as far as I understand, no builds are available for Windows yet, so I cannot test it directly.
    However, as a Simulink/Matlab user, I think this is an extremely promising feature for simulating physical phenomena in the Blender graphic environment, and it doesn’t need any programming skill.
    The current block library, even if incomplete, appears smart and efficient.
    Looking forward to testing and giving feedback!

    mb

  18. Malcolm says:

    Hi Lukas, I’m just playing with the Blender “particles (particles-2010) rev31577 64bit + API Doc” build from graphicall.com, but I can’t connect ports vectorCompose to setVertexData (grid selected); they just don’t connect…
    Is this a bug in this release? (31577, Windows)
    Thanks in advance!

    • phonybone says:

      @mb10, Malcolm:
      The current SVN version is not working; I apologize for that. I’m currently working on a more generic data access system that should reward your patience :)

  19. [...] video was from one of his blog posts and it demonstrated a possible use of his node system to generate some waves and another one showing [...]

  20. [...] video was from one of his blog post and it demonstrated a possible use of his node system to generate some waves and another one [...]


  23. AlexDS says:

    i loooove it!

