Category Archives: OpenGL

Vertex Displacement Mapping in GLSL Now Available on Radeon!

As I said in this news, the release of Catalyst 8.10 BETA comes with a nice bugfix: vertex texture fetching is now operational on Radeon (at least on my Radeon HD 4850). For the last two or three months, Catalyst has made it possible to fetch textures from inside a vertex shader. You can see with GPU Caps Viewer how many texture units are exposed in a vertex shader for your Radeon:


Until now, though, vertex texture fetching in GLSL didn’t work due to a bug in the driver. That is an old story now, since VTF works well. For more details about vertex displacement mapping, you can read this rather old (2 years!) tutorial: Vertex Displacement Mapping using GLSL.
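To give an idea of what VTF enables, here is a minimal vertex displacement mapping sketch in GLSL (the heightMap sampler, the displacementScale uniform and the displacement along the normal are my assumptions, not code taken from the tutorial):

#version 120
uniform sampler2D heightMap;        // displacement texture bound by the application
uniform float displacementScale;    // how far to push the vertex along its normal

void main()
{
  // Vertex texture fetch: an explicit LOD is required in a vertex shader.
  float height = texture2DLod(heightMap, gl_MultiTexCoord0.xy, 0.0).r;

  // Displace the vertex along its normal and transform as usual.
  vec4 displacedVertex = gl_Vertex + vec4(gl_Normal * height * displacementScale, 0.0);
  gl_Position = gl_ModelViewProjectionMatrix * displacedVertex;
  gl_TexCoord[0] = gl_MultiTexCoord0;
}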

This very cool news makes me want to create a new benchmark based on VTF!

I’ve only tested the XP version of Catalyst 8.10. If someone has tested the Vista version, feel free to post a comment…

Next step for the ATI driver team: enable geometry texture fetching, that is, texture fetching inside a geometry shader…

See you soon!


Saturate function in GLSL

When converting shaders written in Cg/HLSL, we often come across the saturate() function. This function is not valid in GLSL, even though NVIDIA’s GLSL compiler accepts it (do not forget that NVIDIA’s GLSL compiler is based on the Cg compiler). ATI’s GLSL compiler, however, will reject saturate() with a nice error. This function clamps the value of a variable to the range [0.0, 1.0]. In GLSL, there is a simple way to do the same thing: clamp().

Cg code:

float3 result = saturate(texCol0.rgb - Density*(texCol1.rgb));

GLSL equivalent:

vec3 result = clamp(texCol0.rgb - Density*(texCol1.rgb), 0.0, 1.0);

BTW, don’t forget all the float4, float3 and float2 types, whose correct GLSL syntax is vec4, vec3 and vec2.
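If you have a lot of Cg/HLSL code to port, one optional convenience (my own habit, not something the GLSL spec provides) is to define saturate() on top of clamp() so the original lines compile unchanged:

// Portability helper for Cg/HLSL code: saturate() implemented with clamp().
// clamp(genType, float, float) works for float, vec2, vec3 and vec4 alike.
#define saturate(x) clamp(x, 0.0, 1.0)

// With this define, the Cg line above works as-is in GLSL:
// vec3 result = saturate(texCol0.rgb - Density*(texCol1.rgb));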



GLSL support in Intel graphics drivers

A user from the oZone3D.Net forum asked me for some info about the GLSL support of Intel graphics chips. It is well known (sorry Intel) that Intel’s OpenGL support in its Windows drivers is poor, and even though Intel’s graphics drivers support OpenGL 1.5, GLSL support is still missing. The GL_ARB_shading_language_100 extension (this extension means the graphics driver supports the OpenGL Shading Language) is not exposed, although it should be supported by any OpenGL 1.5 compliant graphics driver. You can use GPU Caps Viewer to check for the availability of GL_ARB_shading_language_100 (in the OpenGL Caps tab).

Here is an example of an Intel graphics driver that supports OpenGL 1.5 without supporting GLSL:
Mobile Intel(R) 965 Express Chipset Family

For more examples, look at users’ submissions here: www.ozone3d.net/gpu/db/

Okay, that is my analysis, but what is Intel’s point of view? Here is the answer:
x3100 & OpenGL Shader (GLSL) thread
Intel’s answer

I think GLSL support on Windows is not a priority for Intel…


GLSL float to RGBA8 encoder

Packing a [0-1] float value into a 4D vector where each component will be an 8-bit integer:

vec4 packFloatToVec4i(const float value)
{
  // Shift the value into the range of each byte and keep the fractional part,
  // then remove the bits already stored in the higher-order components.
  const vec4 bitSh = vec4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);
  const vec4 bitMsk = vec4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);
  vec4 res = fract(value * bitSh);
  res -= res.xxyz * bitMsk;
  return res;
}

Unpacking a [0-1] float value from a 4D vector where each component was an 8-bit integer:

float unpackFloatFromVec4i(const vec4 value)
{
  // Weight each 8-bit component by the inverse shift and sum them back into a float.
  const vec4 bitSh = vec4(1.0/(256.0*256.0*256.0), 1.0/(256.0*256.0), 1.0/256.0, 1.0);
  return dot(value, bitSh);
}

Source of this code: the GameDev forums
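As a usage sketch (the depth-in-RGBA8 scenario is my assumption, not something taken from the GameDev thread), these two functions are typically used to store a high-precision value, such as a depth, in a plain RGBA8 render target: pack when writing, unpack when reading back. Assuming both functions are declared in the respective shaders, the key lines look like this:

// Pass 1, fragment shader: write the fragment depth packed into an RGBA8 target.
gl_FragColor = packFloatToVec4i(gl_FragCoord.z);

// Pass 2, fragment shader: recover the stored depth when sampling the map
// (depthMap and shadowCoord are assumed to be declared elsewhere).
float storedDepth = unpackFloatFromVec4i(texture2D(depthMap, shadowCoord.xy));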