Interestingly, SDFs have also become a widely used tool for analyzing data from 3D sensors such as lidar and RGBD cameras (Kinect, RealSense). Specifically, truncated SDFs, which cap the magnitude of the function; this helps with outliers and makes the representation sparser. They're a nice way to summarize noisy range measurements in a form that makes it easy to extract surfaces for visualization or further processing. The technique has been around for a while, going back at least to Curless and Levoy's volumetric range-merging paper in the '90s [1], but KinectFusion [2] made it a lot more popular in recent years.
> As Signed Distance Functions start making it into mainstream and commercial applications, it's important to find replacements or alternatives to common things artists used to do in polygon-land.
So are SDFs really going to replace polygons then? Is this for just the demoscene, or for some specific applications, or do experts think this is the future for all 3D graphics?
I know enough about SDFs to know that they are really cool, but I have no idea what the practical considerations are now, let alone what they are likely to be in five years' time.
It won't replace polygons. But it has been a very useful tool in the gfx programmer toolbox.
It can be used for font/vector rendering and for modelling/rendering complex shapes that are difficult to create with traditional modelling methods, and it has a lot of niche uses (I remember one talk about using SDFs to procedurally generate fakes).
> So, when adding a regular fBM, sine wave or any other displacement function to a "host" SDF, we don't get a valid SDF anymore (we violate the principle that the gradient of an SDF must have length 1.0).
Can someone elaborate or point to an elaboration of this point? I guess there must be a good reason why the gradient cannot just be normalised, but I don't know what it is.
I'm not the greatest at explaining things, but I'll give it a shot.
Just so everyone is on the same page, an SDF is a function that simply returns the distance to the closest surface for any input point. If the distance is positive, the point is outside of the geometry. If the distance is negative, the point is inside the geometry. The gradient of the SDF is calculated by sampling the SDF across multiple points, so the SDF itself doesn't return a gradient.
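Concretely, the usual trick is central differences over the SDF itself; a minimal GLSL sketch, assuming the scene SDF is a function called map():

// Hypothetical sketch: numerical gradient of an SDF via central differences.
// Normalized, this doubles as the surface normal near the surface.
vec3 calcNormal(vec3 p) {
    const float eps = 0.001;  // sampling offset
    return normalize(vec3(
        map(p + vec3(eps, 0.0, 0.0)) - map(p - vec3(eps, 0.0, 0.0)),
        map(p + vec3(0.0, eps, 0.0)) - map(p - vec3(0.0, eps, 0.0)),
        map(p + vec3(0.0, 0.0, eps)) - map(p - vec3(0.0, 0.0, eps))));
}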
An SDF is no longer a valid SDF if the distance it returns isn't the true distance to the closest surface. This is usually caused by transforming the SDF non-uniformly, such as adding a sine wave. Another way to think about this is that a valid SDF lives in Euclidean space while an invalid SDF lives in non-Euclidean space: applying a displacement function compresses and stretches space non-uniformly.
Imagine you have an SDF function that just represents a sphere. Then you transform the vertical position of the sphere based on the absolute value of the horizontal position of the input point. This will turn the sphere into a V shape. However, the SDF is no longer a valid SDF. If you sampled a point next to one of the inner walls of the V shape, it wouldn't return the distance to the wall, it would return the distance to the point on the shape below it.
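In code, that kind of distortion might look like this (my own toy example, not anything from the article):

float sdSphere(vec3 p, float r) { return length(p) - r; }

// Shear the input point vertically by |x|: the sphere bends into a V shape,
// but the returned value is no longer the true distance to that shape.
float map(vec3 p) {
    return sdSphere(vec3(p.x, p.y - abs(p.x), p.z), 1.0);
}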
The gradient at every point of a valid SDF will be a unit vector, because space is uniform. An invalid SDF will contain non-unit-length gradients wherever space has been stretched or compressed.
Hopefully that made sense. If you are still confused, I would suggest writing a simple raymarcher on Shadertoy and playing around with distorting SDFs yourself. A simple raymarcher only takes a few lines of code (probably less than 20 lines of GLSL).
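Something like this rough sketch is the core of it (assuming a scene SDF called map(), as in the snippets above; not production code):

// Hypothetical sketch of sphere tracing: march along the ray, stepping by the
// SDF value, until we get close enough to a surface or give up.
float raymarch(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < 100; i++) {
        float d = map(ro + rd * t);   // distance to the nearest surface
        if (d < 0.001) return t;      // hit
        t += d;                       // safe step: we can't skip past a surface
        if (t > 100.0) break;         // ray escaped the scene
    }
    return -1.0;                      // miss
}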
Because you're displacing the existing gradient. The existing SDF is by definition valid but now being transformed. It's been a while since I dabbled in this but I'm not sure how you'd normalize and maintain the transform. If you naively normalize then you lose all the smooth continuity of the function, which is no bueno.
Could you naively compose the distance functions and then use automatic differentiation to compute the gradient? As long as the distance functions are differentiable, so should their composition be.
Certainly, it reduces to reverse-mode AD on fairly arbitrary (although usually scalar) code. You definitely need a compiler or preprocessor to do that for you.
After deforming the space with a sine function there is no problem in calculating the gradient. The problem is calculating the new correct distance to the nearest surface after a space transformation. You can see this problem in 2D; I made an example here: https://www.shadertoy.com/view/NtsSRf
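Simplified, the idea in that shader is roughly this (not the exact code from the link):

float sdCircle(vec2 p, float r) { return length(p) - r; }

// Deform space with a sine wave: the gradient is still easy to evaluate,
// but the returned value is no longer the true distance to the wavy circle.
float map(vec2 p) {
    return sdCircle(vec2(p.x, p.y + 0.3 * sin(4.0 * p.x)), 0.5);
}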
Can someone help me understand why SDF-only techniques are valuable (aside from being interesting in their own right)?
I know very little about gfx, but so far I've mostly seen SDFs used to accomplish things in the fragment shader that would otherwise be done with geometry, kind of as a workaround for the limitations of the Shadertoy environment.
One huge benefit is the way you describe them. It's very concise.
// This is a sphere
length(p) - size;
There are no polygons, just surfaces described with math. You can do operations with them quite easily, like addition, subtraction and deformation, which are much more complex to do with polygon-based 3D models.
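For example, the standard CSG-style combinations are one-liners; these are the commonly used formulas from Quilez's distance-function reference (exact names vary):

float opUnion(float d1, float d2)        { return min(d1, d2); }
float opSubtraction(float d1, float d2)  { return max(-d1, d2); }
float opIntersection(float d1, float d2) { return max(d1, d2); }

// Smooth union ("smin"): blends two shapes together over a radius k.
float opSmoothUnion(float d1, float d2, float k) {
    float h = clamp(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0);
    return mix(d2, d1, h) - k * h * (1.0 - h);
}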
MagicaCSG is a graphical signed distance field editor (by the author of Magica Voxel). The example video and images demonstrate how SDFs can be used for complex modelling!
I believe the reason is that they allow for efficient raytracing via a technique called raymarching, which requires a way to query the minimum distance to a surface from any arbitrary point, hence the need for an SDF. Quilez has many examples of this technique on his site.
Raytracing with SDFs is actually not that efficient compared to all the optimizations available for raytracing polygons. And raymarching in particular can be quite slow due to the constant iterative nature of that process.
To elaborate (and slightly correct) on this point:
The problem that classic primitive raytracing has always had is that it never scaled well to complex primitives, since you need to solve equations that can have many intersection points per ray.
A plane (and a triangle) has 0 or 1 intersection points, which is quite trivial to solve.
Insert the ray equation (x' = x0 + dx*t, y' = y0 + dy*t, z' = z0 + dz*t) into the plane equation (Ax + By + Cz = D) and solve for t; that t is the intersection distance from the ray origin [x0, y0, z0], measured in units of the direction vector.
A sphere has 0, 1 or 2 intersections.
You again insert the ray equation into the sphere equation, and after expanding you end up with a classic second-degree (quadratic) equation, solved with a square root.
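For reference, the sphere case in GLSL looks something like this (standard textbook math, my own variable names):

// Hypothetical sketch: ray-sphere intersection (sphere centered at c, radius r).
// Returns the t of the nearest hit, or -1.0 if the ray misses.
float intersectSphere(vec3 ro, vec3 rd, vec3 c, float r) {
    vec3 oc = ro - c;
    float b = dot(oc, rd);           // assumes rd is normalized
    float cc = dot(oc, oc) - r * r;
    float disc = b * b - cc;         // discriminant of the quadratic
    if (disc < 0.0) return -1.0;     // 0 intersections
    return -b - sqrt(disc);          // nearest of the (up to) 2 intersections
}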
A donut can have 0, 1, 2, 3 or 4 intersections.
All of these are fairly simple shapes, but you can see how the code complexity goes up very quickly as there are more and more intersection cases to handle.
Raytracing was used with triangle meshes (with acceleration structures) in offline renderers, since it still allowed for better lighting models with freer ray sampling than ordinary rasterization (and now RTX has put this ability in hardware for triangle soups).
Now, for something as simple as a sphere or a plane, raytracing is faster than raymarching.
The difference, however, is that since distance field raymarching relies only on a single value (the distance to the nearest surface), and SDF functions are trivially composable, there is really no bound on how complex a shape you can render with the technique, as long as the core and combining distance functions are at least conservatively "correct" in producing a single distance value (raymarching can be forgiving of inaccuracies as long as they don't overshoot).
So while raymarching isn't really super-fast, it's fast enough that most GPUs eat it up, since it can all be run in parallel with almost no data bandwidth.
(and the fact that raymarching often converges on "big" surfaces within a few iterations, only needing more iterations in areas like silhouette borders, which take up only a small part of the screen real estate)
Nontrivial renderers always make use of acceleration structures, and that’s actually much harder to implement with SDFs. At least with polygons, you can very precisely divide up your space or your objects and know exactly in which region what it is you’re hitting. You don’t get that benefit so easily with complex SDFs and thus many optimizations are not available.
There was a really interesting paper here on HN a few months back: basically, if your SDF is data-based, you can do a low-resolution pruning pre-pass over the screen and remove the data points of the SDF that don't contribute anything in that screen-space region.
Other than that, you can actually re-use simpler SDFs as bounding spheres, etc. For example, I did a kind of skinned character rendering that started getting expensive, and a simple way to accelerate it was to first evaluate a sphere that bounded the character: if the sphere calculation produced a large enough value, it was output directly instead of the "detail" value of the actual character. Now, for a GPU that isn't ideal if only some of the "threads" enter the character's bound, but I think most GPUs do their work one tile at a time, so these threads will most of the time have converging calculations within each tile.
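A rough sketch of that kind of early-out (characterCenter, characterRadius and sdCharacterDetail are placeholders, not real code from that project):

// If the point is far from a cheap bounding sphere around the character,
// return that distance directly and skip the expensive detailed SDF.
// The bound is an underestimate of the true distance, so marching stays safe.
float sdCharacter(vec3 p) {
    float bound = length(p - characterCenter) - characterRadius;
    if (bound > 0.5) return bound;    // far away: the bound is good enough
    return sdCharacterDetail(p);      // close: evaluate the expensive SDF
}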
> Can someone help me understand why SDF-only techniques are valuable
Outside of computer graphics, I have used them a lot for modeling the evolution of curves and surfaces in a topologically-oblivious way. The terminology is slightly different (they go by silly names like "implicit snakes"), but it is the same thing. If you have ever used an "adaptive select" or "intelligent scissors" tool in a drawing program, it was probably implemented using this technique.
It's an easy to use and general mathematical language for describing geometry. It lends itself easily to parametrization for things like procedural generation of infinite variations or for animation. And it is very "local" so requires O(1) memory to evaluate.
For a nonstandard use of the idea, see font rendering on the GPU with signed distance field textures:
If you want to generate textures which you then read at runtime, this kind of technique is very useful - for instance, imagine you want to create the normal map of a cloud, or create a height map to displace vertices with (for a procedural landscape).
Well yeah, but you can obviously use signed distance functions to create a texture (that's what a shader is doing) that can be used at runtime. Imagine you want to make the normal map for a bubbly surface, for example, but you also want the bubbles to be animated. One way of doing this would be to render the normals from a bunch of sdf spheres onto a 3d texture, which would then be used later.
I don't use this at work - it's just an approach that I've had some success with at home. On my machine, complex fragment shaders are always slow - so maybe this kind of SDF-based approach works in realtime for other people.
SDFs are another tool. Very useful for content creation, although there are still very few tools for authoring SDFs; you can't create SDFs in Blender, for example. SDFs will be used more in the future, polygons are a mess... :) Games like Dreams (PS4/PS5) make extensive use of SDFs.
The opportunity for optimizing polygons in the rendering pipeline is extraordinary compared to SDFs. For that reason alone, it will not likely be economical to use them anytime in the near future for the majority of modern commercial 3D work, perhaps ever.
This is great. One of those obvious-in-hindsight techniques. Fractal noise is pretty simple, with Perlin noise being the most popular, and it's easy to think of this as the SDF equivalent.
I can't think of how a Simplex-noise equivalent might work, as displacing spheres by a vector might have the same issues as naive noise in terms of continuity.
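For context, the naive displacement that the quoted passage warns about is just something like this (sdSphere and fbm assumed defined elsewhere); a common workaround, not necessarily what the article proposes, is to scale down the march step so the broken distance doesn't overshoot:

// Naive displacement: add noise to a host SDF. The gradient no longer has
// length 1, so this is not a true SDF; stepping by e.g. t += 0.5 * d instead
// of t += d is a common way to keep the raymarcher from tunneling through.
float map(vec3 p) {
    return sdSphere(p, 1.0) + 0.2 * fbm(3.0 * p);
}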
It's an interesting and very clever technique. But those landscapes don't look real in any sense. They look like painted clouds. Uncanny valley? Maybe not even that close to reality.
[1] https://graphics.stanford.edu/papers/volrange/volrange.pdf
[2] https://www.microsoft.com/en-us/research/publication/kinectf...