# Creating Daylight | The Shadows

Alright, let's dive in! Our latest project for Daylight is live, and it's not just another website: it's our bread and butter. We aimed to create something that feels cozy, calm, and natural, and from shaders to optimization, it has it all. If you're curious, you can peek behind the curtain by visiting the debug view at Daylight Computer Debug. Once there, press "o" to enter the Orbit Camera.

Okay, there is a lot to unpack here, so be patient: we'll publish three posts to shed light on the tricks we used. We know you want everything at once, but good things take time, right?

The topic of the day: Rendering soft shadows.

# Why

Early in the project, we aimed to make a calm, warm, and natural site. We experimented with different approaches to achieve that, but the aha moment came when one of the designers created this draft:

The images and background interacting with the shadow created the illusion that the image "was there," as if we were looking at a wall with pictures on it. It made perfect sense, since the whole idea of the product is for you to read using natural light.

# OGL

Since we only wanted to add soft shadows, we chose OGL as our WebGL library. Why? Because it's lightweight and easy to use, and honestly, we just wanted to try it.

To integrate it with React, we added pmndrs/react-ogl. The API is similar to react-three-fiber, so adopting it was simple for us.

However, because it is minimalistic, creating advanced effects requires a bit more effort. The upside is that this gives us complete artistic control.

# Soft shadows algorithm

Currently, OGL does not implement soft shadows, so we had to create our own. During this process, we learned several useful techniques that we would like to share.

We began by adding a cube that casts a shadow, which can be done by following the shadow-maps example on the OGL website.

Then, we started testing how to blur the shadows. At first, we edited the shader and added a Gaussian blur, which created the illusion that the objects were far away.
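For reference, here is a minimal JavaScript sketch of how such a Gaussian kernel can be built (the radius and sigma values are illustrative, not the exact ones from our shader):

```javascript
// Build a 1D Gaussian blur kernel of (2 * radius + 1) weights.
// The weights are normalized to sum to 1, so blurring with them
// does not change the shadow's overall brightness.
function gaussianKernel(radius, sigma) {
  const weights = [];
  for (let i = -radius; i <= radius; i++) {
    weights.push(Math.exp(-(i * i) / (2 * sigma * sigma)));
  }
  const sum = weights.reduce((a, b) => a + b, 0);
  return weights.map((w) => w / sum); // normalize
}
```

Applied the same way to every pixel, this gives a uniform blur, which is exactly the limitation we ran into next.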

The effect started to look good, but something was off; real shadows don’t behave that way. In real life, the farther an object is from the surface, the more blurred its shadow appears. Here's a reference image of a real wall with shadows to illustrate this concept:

To achieve this effect, our shader first needs to know the object's distance from the wall. This is easily done, since we can access the distance information during rendering. The darker the pixel, the closer it is to the wall, and that pixel should have less blur. Here is an example of what the light's camera in the Daylight scene "sees":

Here is the same light camera viewed from outside:

# Calculating soft shadows

To simplify the problem, we created this sandbox to play with. The background on the sandbox is a depth map similar to the one above.

On this depth map, we have two shapes: a star and a square. The star is closer to the wall than the square, producing a darker color. We added a small stroke to the shapes to make them easier to distinguish:

If we were using a program like Figma or Photoshop, we would blur each object separately. But that's not something we can do here: in a shader, we have to calculate every blurred shadow simultaneously.

Remember, each operation described here will run for every pixel.

So, how can we solve this problem? One way is to search in areas close to our pixel to see if there is an object that causes a shadow. Let's sample a grid around our pixel to search for objects:

**Hint: Click anywhere on the canvas to see how the shader works on that pixel.**

Here is the fragment shader used to sample the texture:

```glsl
precision mediump float;
uniform sampler2D uTexture;
uniform float wSize;

varying vec2 vTexCoord;

const int gridSize = 15;
const int gridDivisions = 4;

void main() {
  vec2 uv = vTexCoord;
  uv.y = 1.0 - uv.y;

  int shadowCounter = 0;
  for (int i = -gridDivisions; i <= gridDivisions; i++) {
    for (int j = -gridDivisions; j <= gridDivisions; j++) {
      vec2 offset = vec2(i * gridSize, j * gridSize);
      vec4 color = texture2D(uTexture, uv + offset / wSize);
      if (color.r > 0.0 && color.g == 1.) {
        shadowCounter++;
      }
    }
  }

  float shadowFactor = float(shadowCounter) / float((gridDivisions * 2 + 1) * (gridDivisions * 2 + 1));
  vec3 color = vec3(1. - shadowFactor);

  gl_FragColor = vec4(color, 1.0);
}
```

In this example, we sample a grid of points around our pixel and evaluate whether the point intersects with an object; if it does, it will “add a shadow”.
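Outside the shader, the same search can be sketched in plain JavaScript. Here `sampleDepth` is a hypothetical stand-in for the texture lookup, returning `null` when a sample misses every object:

```javascript
// CPU sketch of the grid search: count how many nearby samples hit
// an occluder, then darken the pixel proportionally.
const gridSize = 15;     // distance between samples, in pixels
const gridDivisions = 4; // samples per axis: 2 * 4 + 1 = 9

function shadowFactor(x, y, sampleDepth) {
  let shadowCounter = 0;
  for (let i = -gridDivisions; i <= gridDivisions; i++) {
    for (let j = -gridDivisions; j <= gridDivisions; j++) {
      // sampleDepth(x, y) returns null when no object covers that point
      if (sampleDepth(x + i * gridSize, y + j * gridSize) !== null) {
        shadowCounter++;
      }
    }
  }
  const totalSamples = (gridDivisions * 2 + 1) ** 2;
  return shadowCounter / totalSamples; // 0 = fully lit, 1 = fully shadowed
}
```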

We can already spot our first issue: the produced shadow is pixelated. Let's add a random rotation to the grid; this will help smooth our result:

Note: to simplify the visualization, the grid will not rotate on the debug view, but it is rotating on the shader.

```glsl
precision mediump float;
uniform sampler2D uTexture;
uniform float wSize;
uniform float hSize;

varying vec2 vTexCoord;

const int gridSize = 15;
const int gridDivisions = 4;

vec3 random3(vec3 c) {
  float j = 4096.0 * sin(dot(c, vec3(17.0, 59.4, 15.0)));
  vec3 r;
  r.z = fract(512.0 * j);
  j *= .125;
  r.x = fract(512.0 * j);
  j *= .125;
  r.y = fract(512.0 * j);
  return r - 0.5;
}

float getNoise(vec2 uv, float screenWidth) {
  vec2 scaledUV = uv * screenWidth;
  vec3 seed = vec3(scaledUV, mod(scaledUV.x + scaledUV.y, screenWidth));
  vec3 noise = random3(seed);
  return noise.x * 0.3 + noise.y * 0.3 + noise.z * 0.4;
}

void main() {
  vec2 uv = vTexCoord;
  uv.y = 1.0 - uv.y;

  float noiseSample = getNoise(uv, wSize);

  float angle = noiseSample * 3.14159265;
  float cosAngle = cos(angle);
  float sinAngle = sin(angle);

  int shadowCounter = 0;
  for (int i = -gridDivisions; i <= gridDivisions; i++) {
    for (int j = -gridDivisions; j <= gridDivisions; j++) {
      vec2 offset = vec2(i * gridSize, j * gridSize);

      vec2 rotatedOffset;
      rotatedOffset.x = cosAngle * offset.x - sinAngle * offset.y;
      rotatedOffset.y = sinAngle * offset.x + cosAngle * offset.y;

      vec4 color = texture2D(uTexture, uv + rotatedOffset / vec2(wSize, hSize));

      if (color.r > 0.0 && color.g == 1.) {
        shadowCounter++;
      }
    }
  }

  float shadowFactor = float(shadowCounter) / float((gridDivisions * 2 + 1) * (gridDivisions * 2 + 1));
  vec3 color = vec3(1. - shadowFactor);

  gl_FragColor = vec4(color, 1.0);
}
```

This works; we could call it a day, but every shadow has the same size! In real life, objects near the wall should produce a *smaller shadow.*

To figure out which points create shadows that affect our pixel, we must consider the size of the shadow.

Let's illustrate this by drawing a `circle` to represent the size of the shadow produced by each point.

Each circle represents the **radius of the shadow produced by that point in space**. The radius was calculated by sampling the `depth map` at that point. Because the square is farther away, its shadow will cover a larger area.

A green circle means that, for that given sample, a surface contributes a shadow to our pixel.
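The test behind those green circles can be sketched like this. The 20–300 pixel range matches the `minSize`/`maxSize` constants used in the final shader, and `contributesShadow` is a name we made up for illustration:

```javascript
// A sample occludes our pixel only if the shadow implied by its depth
// reaches from the sample's position back to the pixel being shaded.
const minSize = 20;  // shadow diameter for objects touching the wall
const maxSize = 300; // shadow diameter at the far end of the depth range

function contributesShadow(depth, distToSample) {
  // depth is the sampled red channel in [0, 1]; darker = closer to the wall
  const size = depth * (maxSize - minSize) + minSize; // shadow diameter
  return size / 2 >= distToSample; // is our pixel within the shadow's radius?
}
```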

Remember, we are debugging just one pixel at a time; this process has to run for every pixel on your screen. On our designer's 2K monitor, that's about `2442240 pixels`, and he also has 30 other design tools open, so we'd better optimize this thing.

Also, there is another problem that you might have noticed by now: the shadows look “blocky.” This is mainly because we are using a square grid to sample our depth map.

So, how can we solve this problem? While searching for the answer, we remembered this awesome library called pmndrs/drei, a collection of useful helpers for `react-three-fiber`. We couldn't use the library itself (we are using OGL), but thankfully it's open source, so we got to see how they implemented it.

They used something called `vogelDisk` sampling to sample the depth map.

# Vogel disk sampling

The Vogel Disk algorithm is a method for distributing points evenly within a circular area, using the golden angle to achieve uniform spacing. This approach significantly enhances sampling by reducing clustering and gaps compared to grid-based methods, leading to improved sampling quality, reduced aliasing artifacts, and more natural, visually appealing results.
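As a sketch of the idea in plain JavaScript (the function name is ours), the sample positions come straight from the golden angle:

```javascript
// Distribute n points evenly over a disk of the given radius.
const goldenAngle = Math.PI * (3 - Math.sqrt(5)); // ≈ 2.39996 radians

function vogelDisk(n, radius) {
  const points = [];
  for (let i = 1; i <= n; i++) {
    // sqrt keeps the point density uniform over the disk's area;
    // the golden angle spreads successive angles without repetition
    const r = radius * Math.sqrt(i / n);
    const theta = i * goldenAngle;
    points.push([r * Math.cos(theta), r * Math.sin(theta)]);
  }
  return points;
}
```

The same two lines computing `r` and `theta` appear in the fragment shader below; everything else is just the shadow test applied at each sample.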

Here is how we implemented `Vogel Disk Sampling`:

```glsl
precision highp float;
uniform sampler2D uTexture;
uniform highp float wSize;
uniform highp float hSize;

varying vec2 vTexCoord;

const float pi = 3.1415926535897932384626433832795;
const float goldenAngle = pi * (3.0 - sqrt(5.0)); // Golden angle in radians
const float diskSize = 80.0;
const int diskSamples = 100;
const float minSize = 20.;
const float maxSize = 300.;

vec3 rand(vec2 uv) {
  return vec3(
    fract(sin(dot(uv, vec2(12.75613, 38.12123))) * 13234.76575),
    fract(sin(dot(uv, vec2(19.45531, 58.46547))) * 43678.23431),
    fract(sin(dot(uv, vec2(23.67817, 78.23121))) * 93567.23423)
  );
}

void main() {
  vec2 uv = vTexCoord;
  uv.y = 1.0 - uv.y;

  int shadowCounter = 0;
  float shadowInfluence = 0.0;
  float noiseSample = rand(uv).x;

  float angle = noiseSample * pi;
  float cosAngle = cos(angle);
  float sinAngle = sin(angle);

  for (int i = 1; i <= diskSamples; i++) {
    float r = diskSize * sqrt(float(i) / float(diskSamples));
    float theta = float(i) * goldenAngle;

    vec2 offset;
    offset.x = r * cos(theta);
    offset.y = r * sin(theta);

    vec2 rotatedOffset;
    rotatedOffset.x = cosAngle * offset.x - sinAngle * offset.y;
    rotatedOffset.y = sinAngle * offset.x + cosAngle * offset.y;

    vec4 color = texture2D(uTexture, uv + rotatedOffset / vec2(wSize, hSize));
    if (color.r > 0.0 && color.g == 1.0) {
      float dist = length(offset);
      float size = color.r;
      size = (size * (maxSize - minSize)) + minSize;

      if (size / 2.0 >= dist) {
        shadowInfluence += mix(8.0, 0.5, size / maxSize);
        shadowCounter++;
      }
    }
  }

  float shadowFactor = shadowInfluence / float(diskSamples);
  shadowFactor = clamp(shadowFactor, 0.0, 0.8);
  vec3 color = vec3(1.0 - shadowFactor);

  gl_FragColor = vec4(color, 1.0);
}
```

Here is the debug version for it:

As we can see in the final render, the star has no hard edges, and the square shadows seem much smoother. The best part? It uses fewer samples than the grid approach.

Here is the same technique working on the Daylight site.

# Final Thoughts

Being involved in the design process from the very beginning was a key part of achieving a great result, as it allowed us to understand how the site should feel.

We focused our technical efforts on making the experience feel "calm and natural," which is why we chose to develop the soft shadows effect.

This discussion is just the beginning. Stay tuned for the next two articles, where we will dive deeper into using Canvas for rendering and into advanced debugging techniques for optimizing performance and visual fidelity.
