The scene pipeline supports the export of scenes from 3D modeling packages via Collada.
For a given Collada scene the dae2json tool will export the following elements:
Materials, effects and lights can be overloaded at this stage or later at runtime. The dae2json tool supports passing include files that provide material, effect and light overloads. Overloads from the include files will be matched by name and will replace the elements from the Collada scene.
This is an example of how a JSON include file would look:
{
    "version": 1,
    "lights":
    {
        "point_light_1":
        {
            "type": "point",
            "color": [1, 1, 1],
            "material": "squarelight",
            "halfextents": [10, 20, 10]
        },
        "spot_light_1":
        {
            "type": "spot",
            "color": [0.89, 1, 0.99],
            "material": "conelight",
            "right": [0, 6, 26],
            "up": [21, 0, 0],
            "target": [0, -36, 9],
            "shadows": true
        },
        "directional_light":
        {
            "type": "directional",
            "color": [0.2, 0.2, 0.8],
            "direction": [0, -1, 0],
            "global": true
        },
        "red_ambient":
        {
            "type": "ambient",
            "color": [0.1, 0, 0],
            "global": true
        }
    },
    "effects":
    {
        "lambert-fx":
        {
            "type": "lambert",
            "meta":
            {
                "normals": true
            }
        }
    },
    "materials":
    {
        "squarelight":
        {
            "parameters":
            {
                "lightfalloff": "squarelight.dds",
                "lightprojection": "squarelight.dds"
            }
        },
        "conelight":
        {
            "parameters":
            {
                "lightfalloff": "white",
                "lightprojection": "roundlight.jpg"
            }
        },
        "rocket_colored":
        {
            "effect": "lambert-fx",
            "parameters":
            {
                "color": [1, 0.2, 0, 0.5],
                "diffuse": "textures/rocket.dds"
            },
            "meta":
            {
                "collisionFilter": [ "ALL" ],
                "noshadows": true,
                "transparent": true,
                "materialcolor": true
            }
        }
    }
}
These include files can be generated by hand or exported from other source assets. A Collada scene file can be exported passing as many of them as needed as parameters to the tool.
The dae2json tool automatically converts Collada scenes to have 1 unit per meter and the Y axis pointing upwards; some modeling packages may use different conventions.
For details on generating your own JSON files see the Turbulenz Engine JSON formats documentation.
Certain features supported in the Collada specification are unsupported or have limited support; these limitations are documented below.
The animation pipeline supports the export of keyframed animation for hierarchies of nodes from 3D modeling packages via Collada.
For a given Collada scene the dae2json tool will export one or more animation clips along with the geometry and nodes to be animated. It is also possible to run the tool with a switch to export only the animation clips from the file; this is useful where the animations are modeled in multiple scenes within the modeling package, and hence exported to multiple Collada scenes. Animation clips can be set up in a Collada scene with tools such as the Trax editor in Maya. Where multiple animation clips are present in the Collada scene, multiple animations will be exported to the output scene with matching names. Where no clips are present, all the animations in the scene will be grouped into a single animation named “default” (the name can be overridden via a tool parameter).
We support conversion of the CgFX shader file format to our internal format. For the broadest compatibility we recommend targeting the OpenGL ES 2.0 feature set in order to be compatible with both WebGL and our compatibility mode.
We provide a tool for converting CgFX shaders to the Turbulenz Engine shader format.
For a given CgFX file the cgfx2json tool will create a JSON file containing a shader definition:
Shader parameter semantics are ignored by the cgfx2json tool, parameters will be matched at runtime by the variable name.
It is recommended that the CgFX file compiles program code either into GLSL profiles or into ‘latest’.
For more information about the CgFX file format please read the NVidia tutorial.
For more information about JSON please visit json.org.
Loading a shader definition file
Inlining a shader definition
Steps 1 and 2 are the same as in the loading case.
This workflow is less flexible than loading the shader definition file at runtime but it avoids the added latency of requesting the file. The CPU cost of parsing the JSON string from the JSON file to create a JavaScript object is about the same as the cost of parsing and executing the JavaScript code that contains the shader definition.
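A minimal sketch contrasting the two workflows, assuming an asynchronous file request via TurbulenzEngine.request and shader creation via graphicsDevice.createShader (the file name is illustrative):

// 1) Loading: request the JSON file at runtime and parse it.
TurbulenzEngine.request("shaders/standard.cgfx.json", function (text) {
    var shaderDefinition = JSON.parse(text);
    var shader = graphicsDevice.createShader(shaderDefinition);
});

// 2) Inlining: ship the definition as JavaScript, avoiding the request
//    but paying a similar parse cost when the script executes.
var inlinedShaderDefinition = { /* contents of the JSON file */ };
var inlinedShader = graphicsDevice.createShader(inlinedShaderDefinition);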
We provide a set of tools for our JSON format:
A JSON file is an object which can in turn contain more objects.
Objects are defined in a similar format as in JavaScript:
{
    "objectName": {
        "objectProperty1": "String",
        "objectProperty2": ["Array1", "Array2"],
        "anotherObject": {
            "anotherObjectProperty": 5
        }
    }
}
In our JSON format we have two object types: objects and collections.
Objects have well defined property names. For example, the “geometries” object will always have “inputs”, “sources” and “surfaces” properties. Any other properties on the “geometries” object would be ignored.
An object is a collection if it does not have well defined property names. Generally each property of a collection refers to an object and the property name is used as the name of the object. For example:
"roomItems": {
"rug": {
"color": [1, 0, 0]
},
"table": {
"color": [0, 1, 0]
},
"chair": {
"color": [0, 0, 1]
}
}
Here roomItems is a collection which contains three objects: rug, table and chair.
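Because a collection's property names are themselves data, code typically iterates over a collection rather than accessing fixed property names. A minimal sketch, assuming sceneData is the parsed JSON file:

var roomItems = sceneData.roomItems;
for (var itemName in roomItems)
{
    if (roomItems.hasOwnProperty(itemName))
    {
        // itemName is "rug", "table" or "chair"
        var color = roomItems[itemName].color;
    }
}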
Note
We will refer to the objects in the JSON file as JSON objects to avoid confusion with their similarly named JavaScript object equivalents.
All of the matrices in a Turbulenz JSON file are 4 rows of 3 columns and should be given as a row major order array of 12 numbers.
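For example, a transform with an identity rotation (first three rows) and a translation of [-5, 4, 2] (last row) is written as:

"matrix": [1, 0, 0,
           0, 1, 0,
           0, 0, 1,
           -5, 4, 2]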
The top level object accepts the following properties:
The JSON geometries object is a collection of JSON geometry objects. Each JSON geometry object is used to create a Geometry object in the scene.
Here is an example of the JSON geometries object:
"geometries": {
"floor": {
"inputs": {
"NORMAL": {
"offset": 0,
"source": "normal"
},
"POSITION": {
"offset": 0,
"source": "position"
},
"TEXCOORD0": {
"offset": 1,
"source": "texturemap"
}
},
"sources": {
"texturemap": {
"data": [0, 1, 1, 1, 0, 0, 1, 0],
"max": [1, 1],
"min": [0, 0],
"stride": 2
},
"normal": {
"data": [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
"max": [0, 1, 0],
"min": [0, 1, 0],
"stride": 3
},
"position": {
"data": [-10, 0, 10,
10, 0, 10,
-10, 0, -10,
10, 0, -10],
"max": [10, 0, 10],
"min": [-10, 0, -10],
"stride": 3
}
},
"surfaces": {
"phong_floorSG": {
"numPrimitives": 2,
"triangles": [1, 3, 2, 0, 0, 2, 1, 3, 3, 1, 2, 0]
}
},
"meta": {
"graphics": true
}
}
}
Each JSON geometry object contains the following:
“inputs” - Has some of the following properties (up to 16) describing the inputs for semantic types:
ATTR0, POSITION0, POSITION
ATTR1, BLENDWEIGHT0, BLENDWEIGHT
ATTR2, NORMAL0, NORMAL
ATTR3, COLOR0, COLOR
ATTR4, COLOR1, SPECULAR
ATTR5, FOGCOORD, TESSFACTOR
ATTR6, PSIZE0, PSIZE
ATTR7, BLENDINDICES0, BLENDINDICES
ATTR8, TEXCOORD0, TEXCOORD
ATTR9, TEXCOORD1
ATTR10, TEXCOORD2
ATTR11, TEXCOORD3
ATTR12, TEXCOORD4
ATTR13, TEXCOORD5
ATTR14, TEXCOORD6, TANGENT0, TANGENT
ATTR15, TEXCOORD7, BINORMAL0, BINORMAL
The semantics on the same line are equivalent, so each geometry can have only one semantic from each line. These are the semantics supported by CgFX. See the “(GP4GP) Semantics” section in the documentation: http://developer.download.nvidia.com/cg/Cg_3.0/Cg-3.0_July2010_ReferenceManual.pdf
Each semantic property object contains:
“offset” - The offset of this input in the surface definitions. So for the example above we have the surface indices list (split into triangles and vertices):
"triangles": [((1, 3), (2, 0), (0, 2)), ((1, 3), (3, 1), (2, 0))]
So the offset gives the position in a vertex. In the example above, NORMAL and POSITION have offset 0 and TEXCOORD0 has offset 1.
So the first vertex for this surface has index 1 for POSITION and NORMAL, and index 3 for TEXCOORD0.
If TEXCOORD0 had offset 0, NORMAL had offset 1 and POSITION had offset 2 then the surface indices list would be:
"triangles": [((3, 1, 1), (0, 2, 2), (2, 0, 0)), ((3, 1, 1), (1, 3, 3), (0, 2, 2))]
“source” - A string reference of the source for this input.
“sources” - A collection of sources for the inputs. Each source object contains:
“surfaces” - A collection of JSON surface objects. Each JSON surface object links together the inputs for a geometry in order to create a surface.
Each JSON surface object contains:
“numPrimitives” - The number of primitives that make up this surface.
“lines”, “triangles” - An array of vertex indices which connect the input objects. This array is of length numPrimitives * primitiveSize * (maxOffsets + 1). Here, maxOffsets is the maximum offset value of all of the inputs for this geometry and primitiveSize is the number of vertices for the selected primitive. If you take the triangles array from the example above:
"triangles": [1, 3, 2, 0, 0, 2, 1, 3, 3, 1, 2, 0]
[(1, 3, 2, 0, 0, 2), (1, 3, 3, 1, 2, 0)]
[((1, 3), (2, 0), (0, 2)), ((1, 3), (3, 1), (2, 0))]
The second line here shows how the array is grouped into triangles. The third line shows how the triangles are grouped into vertices.
In this example both the inputs POSITION and NORMAL have offset 0 and so they both share the same indices. The input TEXCOORD0 has offset 1. This means that each vertex is made up of 2 (maxOffsets + 1) indices. The first value is the index in the POSITION and NORMAL inputs. The second value is the index in the TEXCOORD0 input.
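A minimal sketch of splitting a surface's index array into per-vertex index tuples using these rules (the helper function is illustrative):

function gatherVertexIndices(indices, maxOffsets)
{
    var indicesPerVertex = (maxOffsets + 1);
    var vertices = [];
    for (var i = 0; i < indices.length; i += indicesPerVertex)
    {
        // One index per offset, e.g. [positionAndNormalIndex, texcoordIndex]
        vertices.push(indices.slice(i, i + indicesPerVertex));
    }
    return vertices;
}

For the triangles array above, gatherVertexIndices(triangles, 1) returns [[1, 3], [2, 0], [0, 2], [1, 3], [3, 1], [2, 0]], and every three consecutive tuples form one triangle.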
“meta” - Meta data for the JSON geometry object.
The JSON images object is a collection of file references. Each file reference is a string containing the relative path to the image.
Effects and materials use the images object as a reference for image files. For example:
"effects": {
"duck": {
"parameters": {
"diffuse": "duckImage"
},
"type": "blinn"
}
"crate": {
"parameters": {
"diffuse": "textures/crate.png"
},
"type": "blinn"
}
},
"images "{
"duckImage": "textures/duck.png"
}
Then at load time the duck effect diffuse string would be replaced with “textures/duck.png”.
Effects and materials can also reference a file directly (as the example crate effect does). Direct referencing should be used when the image is only used a few times or by unrelated effects or materials.
The images object prevents image sources from being duplicated and makes maintenance easier.
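A minimal sketch of the substitution, assuming sceneData is the parsed JSON file (the helper function is illustrative):

function resolveImageReference(sceneData, value)
{
    var images = sceneData.images;
    if (images && images.hasOwnProperty(value))
    {
        return images[value]; // named reference, e.g. "duckImage"
    }
    return value;             // direct path, e.g. "textures/crate.png"
}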
The JSON lights object is a collection of JSON light objects. Each JSON light object is used to create a Light object in the scene.
A JSON light is a flexible object allowing light objects to contain the parameters required by any custom renderer. This means that JSON light objects can have any properties on them. The light object’s prototype is set to its JSON light object, allowing access to any custom properties on the JSON light object.
For possible JSON light properties see the documentation for the light.create function with the exception of the following:
A string with one of the following values:
For supported light types check:
See the Light object documentation for more information on these properties.
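A minimal sketch of the prototype behavior described above (the custom property name is illustrative):

var jsonLight = {
    type: "point",
    color: [1, 1, 1],
    myCustomFalloff: 2.5              // any custom renderer parameter
};
var light = Object.create(jsonLight); // prototype is the JSON light object
light.color = [1, 0, 0];              // shadows the JSON value
var falloff = light.myCustomFalloff;  // 2.5, read through the prototype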
The JSON materials object is a collection of JSON material objects. The JSON effects object is a collection of JSON effect objects. Each JSON material object is used to create a Material object in the scene. JSON effect objects are used for multiple materials with similar effects to reduce the duplication of data.
The JSON material objects have the following properties:
These two parameters are both used in the construction of material.techniqueParameters.
Initially, the effect property string is checked for a reference to a JSON effect. If it is a reference then the JSON effects parameters are used to populate the techniqueParameters. Then the JSON material parameters properties, if they are defined, are used to overwrite techniqueParameters.
This is best explained with an example:
"effects": {
"colouredMaterial": {
"parameters": {
"ambient": [0, 0, 0, 1],
"diffuse": "grey.png"
},
"type": "phong"
}
}
"materials": {
"grey-material": {
"effect": "colouredMaterial"
},
"yellow-material": {
"effect": "colouredMaterial",
"parameters": {
"diffuse": "yellow.png"
}
},
"green-material": {
"effect": "blinn",
"parameters": {
"diffuse": "green.png"
}
}
}
The first two materials use the same effect. However, the “yellow” material overwrites the diffuse texture set by the effect. Both materials will have effect type “phong”, while the “green” material has effect type “blinn”. The material.techniqueParameters objects for each material will be as follows:
grey: {
    techniqueParameters: {
        ambient: [0, 0, 0, 1],
        diffuse: "grey.png"
    }
}

yellow: {
    techniqueParameters: {
        ambient: [0, 0, 0, 1],
        diffuse: "yellow.png"
    }
}

green: {
    techniqueParameters: {
        diffuse: "green.png"
    }
}
This example is not in JSON format since it is showing the values of the JavaScript objects after they have been loaded.
Any properties on the parameters objects with string values are assumed to be file references. See the JSON images object for more information on file references.
If the effect property is not a reference then it is taken as the material's effect type. For supported effects see the rendering documentation.
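Putting these rules together, a minimal sketch of how techniqueParameters could be assembled (sceneData and the helper function are illustrative):

function buildTechniqueParameters(sceneData, material)
{
    var techniqueParameters = {};
    // If this lookup fails, material.effect names an effect type (e.g. "blinn")
    var effect = sceneData.effects && sceneData.effects[material.effect];
    var effectParameters = (effect && effect.parameters) || {};
    var materialParameters = material.parameters || {};
    var p;
    for (p in effectParameters)
    {
        if (effectParameters.hasOwnProperty(p))
        {
            techniqueParameters[p] = effectParameters[p];
        }
    }
    for (p in materialParameters)
    {
        if (materialParameters.hasOwnProperty(p))
        {
            // Material parameters overwrite the effect's values
            techniqueParameters[p] = materialParameters[p];
        }
    }
    return techniqueParameters;
}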
The meta object contains possible extra information needed by the renderers. See the rendering documentation for valid values.
The JSON effect has the following parameters:
The JSON nodes object is a collection of JSON node objects. Each JSON node object is used to create a SceneNode object in the scene. Since nodes are referenced by their paths in the node hierarchy it is possible to have two nodes with the same name. However, two sibling nodes should not have the same name since they would then have the same path (this also applies to root nodes).
The JSON node objects have the following properties:
This object is a collection of JSON geometryinstance objects. Each JSON geometryinstance object is used to create a GeometryInstance object in the scene.
The JSON geometryinstance objects have the following properties:
A JSON nodes object for the children nodes. For example:
"nodes": {
"character": {
"dynamic": true,
"nodes": {
"root": {
"dynamic": true,
"nodes": {
"chest": {
"dynamic": true
},
"head": {
"dynamic": true
},
"legs": {
"dynamic": true,
"nodes":
{
"leftLeg": {
"dynamic": true
},
"rightLeg": {
"dynamic": true
}
}
}
}
}
}
}
}
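With the hierarchy above, a nested node can then be looked up by its path, a sketch assuming “/” as the path separator:

var leftLegNode = scene.findNode("character/root/legs/leftLeg");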
This object will be copied onto the SceneNode object's camera property. You can access this object later on with:
var cameraNode = scene.findNode("cameraNode");
var camera = cameraNode.camera;
A JSON lightinstances object is a collection of JSON lightinstance objects. Each JSON lightinstance object is used to create a LightInstance object on the scene node.
Each JSON lightinstances object has the following property:
A string reference to another Turbulenz JSON file object. Currently, the hash character, “#”, is not allowed in file references and any reference containing a hash will be ignored.
If the inplace flag is set to true then the external reference is loaded in at the top level object. Be careful about name clashes when using this flag. If the flag is false, the external reference's JSON node objects are loaded in as this JSON node object's children (added to its nodes property).
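As an illustrative sketch only (the “reference” property name is an assumption; “inplace” is described above), an external reference from a JSON nodes object might look like:

"nodes": {
    "level-section": {
        "reference": "models/level_section.json",
        "inplace": false
    }
}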
Here is an example of a JSON nodes object which represents a collection containing a camera node in the scene:
"nodes": {
"cameraNode": {
"geometryinstances": {
"geometry": "geometry-camera",
"material": "material-camera",
"surface": "geometry-camera-surface0",
"skinning": false
}
"camera": {
"comment0": "You can put any custom properties in here.",
"comment1": "They will be copied onto scene nodes camera property.",
"comment2": "For example:",
"cameraOffset": [0.1, 0.5, 0]
},
"matrix": [1, 0, 0,
0, 1, 0,
0, 0, 1,
-5, 4, 2],
"dynamic": false,
"disabled": false,
"kinematic": false,
"lightinstances": "light-camera"
}
}
The JSON skeletons object is a collection of JSON skeleton objects. Each JSON skeleton object has the following properties:
“invBoneLTMs” - An array of bone inverse local transform matrices (4 by 3). This can be computed by the following method:
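A minimal sketch of one such method, assuming bindPoses hold each bone's local bind transform, that parents always precede their children (as in the example below), and that mathDevice is a MathDevice instance:

function computeInvBoneLTMs(mathDevice, bindPoses, parents)
{
    var numNodes = parents.length;
    var ltms = [];
    var invBoneLTMs = [];
    for (var b = 0; b < numNodes; b += 1)
    {
        var parent = parents[b];
        // Compose each bone's local-to-model transform up the hierarchy
        ltms[b] = (parent === -1) ? bindPoses[b]
                                  : mathDevice.m43Mul(bindPoses[b], ltms[parent]);
        // Rigid transforms allow the cheaper orthonormal inverse
        invBoneLTMs[b] = mathDevice.m43InverseOrthonormal(ltms[b]);
    }
    return invBoneLTMs;
}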
Each index in the 4 arrays represents a bone in the skeleton. Here is an example of a JSON skeletons object which represents a collection containing a basic human skeleton:
"skeletons": {
"basicHuman": {
"numNodes": 10,
"names": ["head",
"chest",
"upperRightLeg",
"lowerRightLeg",
"upperRightArm",
"lowerRightArm",
"upperLeftLeg",
"lowerLeftLeg",
"upperLeftArm",
"lowerLeftArm"],
"parents": [-1, 0, 1, 2, 1, 4, 1, 6, 1, 8],
"bindPoses": [[1, 0, 0,
0, 1, 0,
0, 0, 1,
0, 0, 0],
[1, 0, 0,
0, 1, 0,
0, 0, 1,
0, -5, 0],
... 8 more bind pose matrices],
"invBoneLTMs": [[1, 0, 0,
0, 1, 0,
0, 0, 1,
0, 0, 0],
[1, 0, 0,
0, 1, 0,
0, 0, 1,
0, 5, 0],
... 8 more inverse local transform matrices]
}
}
The JSON animations object is a collection of JSON animation objects.
Each JSON animation object can have the following properties:
An array of objects giving the axis aligned bounding box of the mesh for a set of keyframes of the animation. Each object in the array has the following properties:
The channels that this animation affects. Supported properties are:
This property is similar to a JSON skeleton object without the binding information. It takes the following properties:
This hierarchy need not be the same as the skeleton that the geometry uses. However, the input to a GPUSkinController object must have the same skeleton as the geometry.
An array of nodeData JSON objects. This array gives the inputs for each bone's animation and has the same length as the hierarchy object's numNodes property. Each nodeData JSON object can have a baseframe object property, a keyframes object property, or both.
A baseframe should be provided for channels on the bone that do not change during the animation. If a keyframe object attempts to use a channel defined by the baseframe then the keyframe object’s values for that channel will be ignored. If a baseframe is provided for each channel then the bone’s transform will not change during the animation.
A keyframes object should be provided when the bone is animated. The keyframes object is an array of keyframe objects. Each keyframe object gives the transform of the bone at a certain time to be interpolated by an InterpolatorController object.
Both baseframe and keyframe can have the following properties which form a transform for each bone:
As well as any other custom channel properties (a custom channel's format must be an array of numbers) that are set on the channels object. The keyframe object also requires a “time” property.
The keyframes array must contain at least two keyframe objects: a start and an end.
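A minimal sketch of sampling one channel of a bone under these rules (the interpolation step is left schematic; rotations would really use quaternion interpolation):

function sampleChannel(nodeData, channel, time)
{
    var baseframe = nodeData.baseframe;
    if (baseframe && baseframe[channel] !== undefined)
    {
        // Baseframe channels are constant and take priority over keyframes
        return baseframe[channel];
    }
    var keyframes = nodeData.keyframes;
    var k = 0;
    while (k < (keyframes.length - 1) && keyframes[k + 1].time <= time)
    {
        k += 1;
    }
    // Interpolate between keyframes[k] and keyframes[k + 1] here
    return keyframes[k][channel];
}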
Here is an example of a JSON animations object which represents a collection containing a robot arm animation:
"animations": {
"robotArmPickUp": {
"hierarchy": {
"numNodes": 5,
"names": ["base",
"upperArm",
"lowerArm",
"leftClaw",
"rightClaw"],
"parents": [-1, 0, 1, 2, 2]
}
"numNodes": 5,
"length": 2.5,
"channels": {
"rotation": true,
"scale": true
},
"bounds": [
{
"center": [3, 3, 0],
"halfExtent": [3, 3, 1],
"time": 0
},
{
"center": [4, 4, 4],
"halfExtent": [4, 4, 4],
"time": 1.0
},
{
"center": [0, 4, 4],
"halfExtent": [1, 4, 4],
"time": 2
}
],
"nodeData": [
{
"keyframes": [
{
"rotation": [0, 0, 0, 1],
"scale": [1, 1, 1],
"time": 0
},
{
"rotation": [0, 0.706, 0, 0.707],
"scale": [1, 1, 1],
"time": 1
},
{
"rotation": [0, 1, 0, 0],
"scale": [1, 1, 1],
"time": 2
}
]
},
{
"baseframe":
{
"rotation": [0, 0, 0, 1],
"scale": [1, 1, 1],
}
},
{
"keyframes": [
{
"rotation": [0, 0, 0, 1],
"scale": [1, 1, 1],
"time": 0.5
},
{
"rotation": [0, 0, 0, 1],
"scale": [1, 1.25, 1],
"time": 1.5
}
]
},
{
"baseframe": {
"scale": [1, 1, 1]
},
"keyframes": [
{
"rotation": [0, 0, 0, 1],
"time": 1
},
{
"rotation": [1, 0, 0, 1.57],
"time": 2.5
}
]
},
{
"baseframe": {
"scale": [1, 1, 1]
},
"keyframes": [
{
"rotation": [0, 0, 0, 1],
"time": 1
},
{
"rotation": [1, 0, 0, -1.57],
"time": 2.5
}
]
}
]
}
}
The JSON physicsmaterials object is a collection of JSON physicsmaterial objects.
A JSON physicsmaterial object has the following properties.
An array of strings of the following types:
For more information see PhysicsDevice filters.
The JSON physicsmodels object is a collection of JSON physicsmodel objects. Each JSON physicsmodel object is used to create:
Here is an example of a JSON physicsmodels object:
"physicsmodels": {
"capsule": {
"dynamic": true,
"mass": 1,
"material": "Cone-PhysicsMaterial",
"height": 1,
"radius": 1,
"shape": "cone"
}
"cube": {
"dynamic": true,
"mass": 1,
"material": "Cube-PhysicsMaterial",
"halfExtents": [1, 3, 0.5],
"shape": "box"
}
"sphere": {
"dynamic": true,
"mass": 1,
"material": "Sphere-PhysicsMaterial",
"radius": 1,
"shape": "sphere"
}
"mesh": {
"dynamic": true,
"mass": 1,
"material": "Convexhull-PhysicsMaterial",
"geometry": "phong_floorSG",
"shape": "mesh"
}
}
A JSON physicsmodel object has the following properties.
A string representing the model's collision object shape; possible values are:
For more information see the PhysicsDevice object.
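As an illustrative sketch (not the loader's actual code), a “box” physicsmodel might map onto the PhysicsDevice like this:

var model = sceneData.physicsmodels.cube;     // sceneData is the parsed file
var shape = physicsDevice.createBoxShape({
    halfExtents: model.halfExtents
});
var body = physicsDevice.createRigidBody({
    shape: shape,
    mass: model.mass,
    transform: mathDevice.m43BuildIdentity()
});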
The JSON physicsnodes object is a collection of JSON physicsnode objects. Each JSON physicsnode object links a JSON node up to a JSON physicsmodel.
A JSON physicsnode object has the following properties.
A JSON node object.