Demystifying the Interactive Grid and 3D Face Particle System of Phantom.land

This article takes a close look at how the Phantom.land website uses React Three Fiber, GLSL shaders, and GSAP to build its dynamic interactive grid and 3D face particle system, covering custom shaders, the particle generation algorithm, real-time animation control, and other core implementation details.

Invisible Forces: Building Phantom.land's Interactive Grid and 3D Face Particle System

From the very beginning, we wanted to break away from the conventions of the typical agency website. Inspired by the invisible energies that drive creativity, connection, and change, we arrived at the concept of "invisible forces": could we give digital form to the powerful yet unseen elements that shape our world, such as motion, emotion, intuition, and inspiration?

Technology Choices

We chose to build on Next.js/React, which let us seamlessly use the React Three Fiber library to bridge DOM components and the WebGL context used across the site. For styling we used custom CSS components and SASS.

For interaction and animation we chose GSAP, for two main reasons: it includes plugins we know and love, such as SplitText, CustomEase, and ScrollTrigger; and it lets us use a single animation framework across both DOM and WebGL components.
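
As a minimal sketch of what that unified framework buys us, the snippet below drives a DOM element and a WebGL uniform from the same timeline. The '.hero-title' selector and the distortion uniform are illustrative stand-ins, not taken from the production code:

// unified-animation.ts (a sketch, not production code)
import gsap from 'gsap';
import type {ShaderMaterial} from 'three';

export function introTimeline(material: ShaderMaterial) {
  return gsap
    .timeline()
    // DOM: reveal a (hypothetical) headline element
    .from('.hero-title', {autoAlpha: 0, y: 40, duration: 0.8})
    // WebGL: tween a shader uniform on the same timeline, same easing system
    .to(material.uniforms.distortion.value, {x: 1, y: 1, duration: 0.8}, 0);
}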

Homepage Grid System

Grid View Implementation

The project grid view is implemented by integrating a plain Three.js object into the React Three Fiber scene via the primitive element:

// GridView.tsx
const GridView = () => {
  return (
    <Canvas>
      <ProjectsGrid />
      <Postprocessing />
    </Canvas>
  );
}

// ProjectsGrid.tsx
const ProjectsGrid = ({atlases, tiles}: Props) => {
  // useThree exposes the renderer (gl); its domElement is the underlying canvas
  const {gl, camera} = useThree();

  const grid = useMemo(() => {
    return new Grid(gl.domElement, camera, atlases, tiles);
  }, [gl, camera, atlases, tiles]);

  return <primitive object={grid} />;
}
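
For context, primitive mounts any existing Three.js Object3D into the R3F scene graph, so Grid can be a plain class. A minimal skeleton of that shape, assuming the constructor signature from the call above (the internals are purely illustrative):

// Grid.ts (illustrative skeleton only; the real Grid class is far more involved)
import {Object3D, PerspectiveCamera, Texture} from 'three';

type TileData = Record<string, unknown>; // hypothetical tile metadata

class Grid extends Object3D {
  constructor(
    canvas: HTMLCanvasElement,
    camera: PerspectiveCamera,
    atlases: Texture[],
    tiles: TileData[],
  ) {
    super();
    // Build tile meshes from the atlases and attach pointer
    // listeners to the canvas here.
  }

  update() {
    // Per-frame logic: drag inertia, ambient cursor offset, etc.
  }
}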

Post-Processing Distortion Effect

One of the grid's signature features is our post-processing distortion effect, implemented as a custom shader pass in the post-processing pipeline:

// Postprocessing.tsx
import {useMemo} from 'react';
import {useFrame, useThree} from '@react-three/fiber';
import {EffectComposer} from 'three/examples/jsm/postprocessing/EffectComposer.js';
import {RenderPass} from 'three/examples/jsm/postprocessing/RenderPass.js';
import {ShaderPass} from 'three/examples/jsm/postprocessing/ShaderPass.js';
import {OutputPass} from 'three/examples/jsm/postprocessing/OutputPass.js';

const Postprocessing = () => {
  const {gl, scene, camera} = useThree();

  const {effectComposer, distortionShader} = useMemo(() => {
    const renderPass = new RenderPass(scene, camera);
    const distortionShader = new DistortionShader();
    const distortionPass = new ShaderPass(distortionShader);
    const outputPass = new OutputPass();

    const effectComposer = new EffectComposer(gl);
    effectComposer.addPass(renderPass);
    effectComposer.addPass(distortionPass);
    effectComposer.addPass(outputPass);

    return {effectComposer, distortionShader};
  }, [gl, scene, camera]);

  useFrame(() => {
    effectComposer.render();
  }, 1);

  return null;
}
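
Note the second argument to useFrame: passing a render priority of 1 tells React Three Fiber that we are taking over rendering, so its default render call is skipped and the EffectComposer becomes the sole render path.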

Custom Distortion Shader

class DistortionShader extends ShaderMaterial {
  private distortionIntensity = 0;

  constructor() {
    super({
      name: 'DistortionShader',
      uniforms: {
        distortion: {value: new Vector2()},
      },
      vertexShader,
      fragmentShader,
    });
  }

  setDistortion(value: number) {
    gsap.to(this, {
      distortionIntensity: value,
      duration: 1,
      ease: 'power2.out',
      onUpdate: () => this.update(),
    });
  }

  // Push the tweened scalar into the vec2 uniform read by the fragment shader
  private update() {
    this.uniforms.distortion.value.setScalar(this.distortionIntensity);
  }
}

Fragment Shader Implementation

// fragment.ts
export const fragmentShader = /* glsl */ `
uniform sampler2D tDiffuse;
uniform vec2 distortion;
uniform float vignetteOffset;
uniform float vignetteDarkness;

varying vec2 vUv;

// Shift UVs so (0, 0) sits at the screen centre, and back again
vec2 getShiftedUv(vec2 uv) { return uv - 0.5; }
vec2 getUnshiftedUv(vec2 uv) { return uv + 0.5; }

void main() {
  vec2 shiftedUv = getShiftedUv(vUv);
  float distanceToCenter = length(shiftedUv);

  // Lens distortion: scale UVs by their squared distance from the centre
  shiftedUv *= (0.88 + distortion * dot(shiftedUv, shiftedUv));
  vec2 transformedUv = getUnshiftedUv(shiftedUv);

  // Vignette
  float vignetteIntensity = smoothstep(0.8, vignetteOffset * 0.799,
                     (vignetteDarkness + vignetteOffset) * distanceToCenter);

  vec3 color = texture2D(tDiffuse, transformedUv).rgb * vignetteIntensity;
  gl_FragColor = vec4(color, 1.);
}
`;

Micro-Interaction Design

Ambient Mouse Offset

getAmbientCursorOffset() {
  // Pointer position in UV space (0 to 1); clone so the stored vector isn't mutated
  const uv = this.navigation.pointerUv.clone();
  const offset = uv.subScalar(0.5).multiplyScalar(0.2);
  return offset;
}

update() {
  const cursorOffset = this.getAmbientCursorOffset();
  this.mesh.position.x += cursorOffset.x;
  this.mesh.position.y += cursorOffset.y;
}

Press-and-Drag Zoom

onPressStart = () => {
  this.animateCameraZ(0.5, 1);
}

animateCameraZ(distance: number, duration: number) {
  gsap.to(this.camera.position, {
    z: distance,
    duration,
    ease: CustomEase.create('cameraZoom', '.23,1,0.32,1'),
  });
}

Drag Inertia

drag(offset: Vector2) {
  this.dragAction = offset;
  // Track a smoothed velocity while dragging
  this.velocity.lerp(offset, 0.8);
}

update() {
  if (this.isDragAction) {
    // While dragging, apply the drag offset directly
    this.positionOffset.add(this.dragAction);
  } else {
    // After release, keep drifting with the residual velocity
    this.positionOffset.add(this.velocity);
  }
  // Decay the velocity back toward zero
  this.velocity.lerp(new Vector2(), 0.1);
}

Face Particle System

Core Concept: Depth-Driven Particle Generation

We build each face from two optimized 256×256 WebP images, each under 15KB. The faces were 3D-scanned with RealityScan, the position and color channels were rendered out of Cinema4D, and the result was converted into a grayscale depth map in Photoshop.
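
To make the data flow concrete, here is a minimal sketch of how one face's two maps could be loaded as shader uniforms. The file paths and uniform names are assumptions, not taken from the production code:

// faceTextures.ts (illustrative loading of one face's two maps)
import {TextureLoader} from 'three';

const loader = new TextureLoader();

export const uniforms = {
  depthMap1: {value: loader.load('/faces/face-01-depth.webp')}, // grayscale depth
  colorMap1: {value: loader.load('/faces/face-01-color.webp')}, // color channel
  transition: {value: 0},
  time: {value: 0},
};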

Building the Particle System

const POINT_AMOUNT = 280;

const points = useMemo(() => {
  const length = POINT_AMOUNT * POINT_AMOUNT;
  const vPositions = new Float32Array(length * 3);
  const vIndex = new Float32Array(length * 2);
  const vRandom = new Float32Array(length * 4);

  for (let i = 0; i < length; i++) {
      // UV-style index used to sample the depth/color maps in the shaders
      const i2 = i * 2;
      vIndex[i2] = (i % POINT_AMOUNT) / POINT_AMOUNT;
      vIndex[i2 + 1] = Math.floor(i / POINT_AMOUNT) / POINT_AMOUNT;

      // Random starting position on a unit sphere (angles in radians)
      const i3 = i * 3;
      const theta = Math.random() * Math.PI * 2;
      const phi = Math.acos(2 * Math.random() - 1);
      vPositions[i3] = Math.sin(phi) * Math.cos(theta);
      vPositions[i3 + 1] = Math.sin(phi) * Math.sin(theta);
      vPositions[i3 + 2] = Math.cos(phi);

      // Per-particle random seeds consumed by the shaders
      const i4 = i * 4;
      for (let j = 0; j < 4; j++) vRandom[i4 + j] = Math.random();
  }

  return {vPositions, vRandom, vIndex};
}, []);

React Three Fiber Component Structure

const FaceParticleSystem = ({ particlesData, currentDataIndex }) => {
  return (
    <points ref={pointsRef} position={pointsPosition}>
      <bufferGeometry>
        <bufferAttribute attach="attributes-vIndex" args={[points.vIndex, 2]} />
        <bufferAttribute attach="attributes-position" args={[points.vPositions, 3]} />
        <bufferAttribute attach="attributes-vRandom" args={[points.vRandom, 4]} />
      </bufferGeometry>
      
      <shaderMaterial
        blending={NormalBlending}
        transparent={true}
        fragmentShader={faceFrag}
        vertexShader={faceVert}
        uniforms={uniforms}
      />
    </points>
  );
};

Dynamic Particle Scaling

/* vertex shader */
// Sample the color map and use pixel brightness to scale each particle
vec3 mainColorTexture = texture2D(colorMap1, vIndex.xy).xyz;
float density = (mainColorTexture.x + mainColorTexture.y + mainColorTexture.z) / 3.;
float pScale = mix(pScaleMin, pScaleMax, density);

Ambient Noise Animation

/* vertex shader */
// curlNoise is a standard GLSL curl-noise helper compiled into the shader
pos += curlNoise(pos * curlFreq1 + time) * noiseScale * 0.1;

Face Transition Animation

timelineRef.current = gsap
  .timeline()
  .fromTo(uniforms.transition, {value: 0}, {value: 1.3, duration: 1.6})
  .to(uniforms.posZ, {value: particlesParams.offset_z, duration: 1.6}, 0)
  .to(uniforms.zScale, {value: particlesParams.face_scale_z, duration: 1.6}, 0);
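
On the shader side, the transition uniform can then blend each particle from its random sphere position toward its depth-map position. A sketch of that blend, assuming the depthMap1, zScale, and posZ uniforms from the snippets above (the production reconstruction and easing certainly differ in detail):

/* vertex shader (illustrative sphere-to-face blend) */
float depth = texture2D(depthMap1, vIndex).r;                 // grayscale depth map
vec3 facePos = vec3(vIndex - 0.5, depth * zScale + posZ);     // hypothetical face point
vec3 pos = mix(position, facePos, clamp(transition, 0., 1.)); // sphere -> face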

Custom Depth-of-Field Effect

/* vertex shader */
vec4 viewPosition = viewMatrix * modelPosition;
vDistance = abs(focus + viewPosition.z); // distance from the focal plane
gl_PointSize = pointSize * pScale * vDistance * blur * totalScale;

/* fragment shader */
// Points far from the focal plane fade out as well as growing larger
float alpha = (1.04 - clamp(vDistance * 1.5, 0.0, 1.0));
gl_FragColor = vec4(color, alpha);

Technical Challenges and Solutions

Our main challenge was achieving visual consistency across photos of different team members. Each photo was captured under slightly different conditions, with different lighting, camera distance, and facial proportions. We therefore hand-calibrated several scale factors per face, exposed through the params shown after this list:

  • Depth scale calibration: keeps noses from protruding too far
  • Color density balancing: maintains consistent particle size relationships
  • Focal plane tuning: prevents any single face from becoming overly blurred
particle_params: { 
  offset_z: 0,           // overall Z position
  z_depth_scale: 0,      // depth map scale factor
  face_size: 0,          // overall face scale
}
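
A sketch of how these per-face params might be tweened into the shader uniforms when switching faces; the helper and the uniform mapping (posZ, zScale, totalScale) are assumptions based on the snippets above:

// applyFaceParams.ts (illustrative only)
import gsap from 'gsap';

type ParticleParams = {offset_z: number; z_depth_scale: number; face_size: number};

function applyFaceParams(
  uniforms: {[name: string]: {value: number}},
  p: ParticleParams,
) {
  // Tween each calibrated value into its corresponding uniform
  gsap.to(uniforms.posZ, {value: p.offset_z, duration: 1.6});
  gsap.to(uniforms.zScale, {value: p.z_depth_scale, duration: 1.6});
  gsap.to(uniforms.totalScale, {value: p.face_size, duration: 1.6});
}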

Our face particle system shows how simple but carefully tuned techniques can create an engaging visual experience from minimal assets. By combining lightweight WebP textures, custom shader materials, and animation, we built a system that turns simple 2D portraits into interactive 3D graphics.
