Multi-Light Handling for Unity Mobile Games (Low-End Devices)
In my view, multi-light schemes for Unity mobile games split into two camps depending on the hardware. The first covers phones that support compute shaders and structured buffers, such as iPhones and some high-end Android devices; for those there are plenty of options, e.g. Forward+, Clustered Lighting, or Tile-Based Deferred Shading. I have written about the first two before, but those notes live on a company intranet machine and I am not keen to rewrite them; if you are interested, refer to those two articles. Here I want to focus on handling multiple lights on low-end devices.
On low-end devices, if there are few lights and the scene is simple, plain Forward Rendering is fine, or an Uber Shader that evaluates all the lights in a single pass. But as the light count grows, the rendering cost climbs, so we need light culling. One approach in Unity URP is per-object light assignment: much like a ReflectionProbe, the lights affecting a model are assigned to that model. That works well enough for small models, but it breaks down for large objects such as terrain.
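To make the per-object idea concrete, here is a small sketch of how a per-object light list might be built on the CPU. The function name, the sphere-overlap test, and the four-light cap are my own illustration (loosely modeled on URP's per-object light limit), not URP's actual implementation:

```python
import math

def lights_for_object(obj_center, obj_radius, lights, max_per_object=4):
    """Pick up to max_per_object lights whose range sphere overlaps the
    object's bounding sphere, nearest first (rough per-object-list sketch)."""
    hits = []
    for i, (pos, rng) in enumerate(lights):
        d = math.dist(obj_center, pos)
        if d <= rng + obj_radius:  # spheres overlap -> light can touch the object
            hits.append((d, i))
    hits.sort()
    return [i for _, i in hits[:max_per_object]]

# (position, range) pairs
lights = [((0, 0, 0), 5.0), ((20, 0, 0), 3.0), ((2, 0, 0), 4.0)]
print(lights_for_object((1.5, 0, 0), 1.0, lights))  # -> [2, 0]
```

The weakness the paragraph above points out is visible here: a terrain-sized `obj_radius` overlaps almost every light, so the cap throws away lights that genuinely matter somewhere on the object.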
A while ago I came across the point-light scheme used in LifeAfter (《明日之后》): a Tiled Point Light technique that splits the screen into tiles, uses the previous frame's depth to compute each tile's world position, and then picks the two point lights that contribute most to the tile, so each vertex/pixel only ever evaluates two point lights. This keeps the cost low (it even ran on an iPhone 5s) while greatly enriching the scene lighting, so I consider it workable. It does have significant limitations, but on low-end hardware you can't ask for the moon, so now that I have some time I am writing it up. I chose Unity SRP because it is flexible and convenient: no extra depth pass is needed, the previous frame's depth buffer is reused (on low-end devices, save wherever you can), and it can switch seamlessly with the culling schemes used on high-end devices. My version is modified from render-pipelines.universal@7.4.1. For as long as URP has been out, I still think it's half-finished; they might as well have released the Built-in pipeline source, which is at least complete.
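The core of the approach, keeping only the two nearest in-range point lights, can be sketched on the CPU. This mirrors the R/G slot insertion the shader below performs; `MAX_D2` stands in for the shader's `MAX_POINT_LIGHT_DISTANCE` (a squared distance), and the 1-based indexing reserves 0 for "no light":

```python
MAX_D2 = float('inf')  # stand-in for MAX_POINT_LIGHT_DISTANCE (squared)

def nearest_two(pixel_pos, lights):
    """Return the 1-based indices of the two closest in-range point lights,
    exactly like the shader's R/G slot insertion. 0 means 'no light'."""
    best_d, best_i = MAX_D2, 0      # R slot: closest
    second_d, second_i = MAX_D2, 0  # G slot: second closest
    for i, (pos, rng) in enumerate(lights):
        d2 = sum((a - b) ** 2 for a, b in zip(pixel_pos, pos))
        if d2 > rng * rng:
            continue                          # out of range: contributes nothing
        if d2 < best_d:
            second_d, second_i = best_d, best_i  # demote the old closest
            best_d, best_i = d2, i + 1
        elif d2 < second_d:
            second_d, second_i = d2, i + 1
    return best_i, second_i

# (position, range) pairs
lights = [((0, 0, 0), 10.0), ((3, 0, 0), 10.0), ((100, 0, 0), 10.0)]
print(nearest_two((1.0, 0.0, 0.0), lights))  # -> (1, 2)
```

Note the single-pass insertion keeps the loop O(n) per pixel with no sorting, which is what makes it viable in a fragment shader.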
Step 1: write a pixel shader. I still haven't figured out how to split tiles inside a pixel shader (with mipmaps?), and I don't think it's really necessary, so I work per pixel. The idea: sample _CameraDepthTexture, build the NDC coordinate, transform it back to world space, then compare each point light's distance against that world position (a spot light could be wrapped in a bounding cone and treated as a point light; I didn't do that here). The code:
float4 frag(Varyings input) : SV_Target
{
    float depthValue = SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, input.depthuv).r;
    int lightIndexR = 0;
    int lightIndexG = 0;
    // Rebuild the world position from the depth buffer
    float4 ndc = float4(input.uv.x * 2 - 1, input.uv.y * 2 - 1, depthValue, 1);
    float4 worldPos = mul(unity_MatrixInvVP, ndc);
    worldPos /= worldPos.w;
    // Track the two nearest light indices; both slots start at the (squared) max distance
    float LightDistanceR = MAX_POINT_LIGHT_DISTANCE;
    float LightDistanceG = MAX_POINT_LIGHT_DISTANCE;
    for (int i = 0; i < _AdditionalLightsCount.x; i++)
    {
        float3 tolight = worldPos.xyz - _AdditionalLightsPosition[i].xyz;
        float lightSqr = dot(tolight, tolight);
        if (lightSqr > _AdditionalLightsRange[i] * _AdditionalLightsRange[i])
            continue;
        if (lightSqr < LightDistanceR)
        {
            // New closest light: demote the old closest into the G slot
            LightDistanceG = LightDistanceR;
            lightIndexG = lightIndexR;
            LightDistanceR = lightSqr;
            lightIndexR = i + 1; // 0 is reserved for "no light"
        }
        else if (lightSqr < LightDistanceG)
        {
            LightDistanceG = lightSqr;
            lightIndexG = i + 1;
        }
    }
    // Encode the 1-based indices into the R/G channels (POINT_NUM = 1 / MaxNum)
    return float4(lightIndexR * POINT_NUM, lightIndexG * POINT_NUM, 0, 1);
}

Step 2: write a LightCullPass that generates m_LightIndexTexture. The code:
using System;

namespace UnityEngine.Rendering.Universal.Internal
{
    /// <summary>
    /// Generate the light-index texture: blit the previous frame's depth
    /// through the light-culling material so each pixel ends up holding
    /// the indices of its two nearest point lights.
    /// </summary>
    public class LightCullPass : ScriptableRenderPass
    {
        Material m_SamplingMaterial;
        private RenderTargetIdentifier source { get; set; }
        private RenderTargetHandle destination { get; set; }
        const string m_ProfilerTag = "LightCull Prepass";
        ProfilingSampler m_ProfilingSampler = new ProfilingSampler(m_ProfilerTag);

        /// <summary>
        /// Create the LightCullPass
        /// </summary>
        public LightCullPass(RenderPassEvent evt, Material samplingMaterial)
        {
            m_SamplingMaterial = samplingMaterial;
            renderPassEvent = evt;
        }

        /// <summary>
        /// Configure the pass
        /// </summary>
        public void Setup(RenderTargetIdentifier source, RenderTargetHandle destination)
        {
            this.source = source;
            this.destination = destination;
        }

        public bool Setup(ref RenderingData renderingData)
        {
            // Skip the pass entirely when there are no additional lights
            int lightCount = renderingData.lightData.additionalLightsCount;
            if (lightCount == 0)
                return false;
            return true;
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            RenderTextureDescriptor descriptor = cameraTextureDescriptor;
            descriptor.colorFormat = RenderTextureFormat.RGB565; // RenderTextureFormat.RGInt on capable hardware
            descriptor.bindMS = false;
            descriptor.enableRandomWrite = false;
            descriptor.depthBufferBits = 0;
            descriptor.sRGB = false;
            descriptor.useMipMap = false;
            descriptor.dimension = TextureDimension.Tex2D;
            cmd.GetTemporaryRT(destination.id, descriptor, FilterMode.Point);
        }

        /// <inheritdoc/>
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            if (m_SamplingMaterial == null)
            {
                Debug.LogErrorFormat("Missing {0}. {1} render pass will not execute. Check for missing reference in the renderer resources.", m_SamplingMaterial, GetType().Name);
                return;
            }
            CommandBuffer cmd = CommandBufferPool.Get(m_ProfilerTag);
            RenderTargetIdentifier opaqueColorRT = destination.Identifier();
            Blit(cmd, source, opaqueColorRT, m_SamplingMaterial);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }

        /// <inheritdoc/>
        public override void FrameCleanup(CommandBuffer cmd)
        {
            if (cmd == null)
                throw new ArgumentNullException("cmd");
            if (destination != RenderTargetHandle.CameraTarget)
            {
                cmd.ReleaseTemporaryRT(destination.id);
                destination = RenderTargetHandle.CameraTarget;
            }
        }
    }
}

Step 3: in the object shader, compute the screen-position UV, fetch the light indices, and then shade those lights as usual:
inputData.screenPosition = (input.clipPosition.xy / input.clipPosition.w) * 0.5 + 0.5;
float4 lightCull = SAMPLE_TEXTURE2D(_LightIndexTexture, sampler_LightIndexTexture, inputData.screenPosition);
// Decode the 1-based indices written by the cull pass (MaxNum = 1 / POINT_NUM)
int lightIndexR = round(lightCull.r * MaxNum);
int lightIndexG = round(lightCull.g * MaxNum);

Finally, let's look at the result.
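A side note on precision: the indices pass through an RGB565 target, so the R channel only keeps 5 bits. A quick sketch (with an assumed MaxNum of 16; the actual value in the project may differ) confirms the encode/quantize/decode round trip survives:

```python
def encode(index, max_num):
    return index / max_num          # shader: lightIndexR * POINT_NUM, POINT_NUM = 1/MaxNum

def quantize(v, bits):
    levels = (1 << bits) - 1        # RGB565: 5 bits in R, 6 bits in G
    return round(v * levels) / levels

def decode(v, max_num):
    return round(v * max_num)       # shader: round(lightCull.r * MaxNum)

MAX_NUM = 16  # assumed max light count for this sketch
ok = all(decode(quantize(encode(i, MAX_NUM), 5), MAX_NUM) == i
         for i in range(MAX_NUM + 1))
print(ok)  # -> True
```

The round trip is exact as long as the quantization error (at most 0.5/31 for 5 bits) stays below half a decode step (0.5/MaxNum), which is why a small MaxNum is safe in this format; the 6-bit G channel gives a little more headroom.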
The project is attached. I threw this together quickly, and our own project doesn't actually use it, so it surely has problems; if you have better schemes, please share.
Link: https://pan.baidu.com/s/1esO8XgsOafdKKiXLDH4i3w
Extraction code: 62w2

Comments:
- "Are any mobile games using deferred rendering at the moment?" / "Not on mobile yet, I think." / "There will be soon." / "Thanks for sharing!"
- "Did you measure performance on an actual device? In particular, how many milliseconds does the cull pass take, and roughly how many point lights sit inside the view frustum?"
- "A loop that big makes me nervous. To tile this in the fragment shader, a first pass could reduce each 16×16 pixel block to its zmax/zmin and store them in an RG float texture, then a second pass does the culling."
- "Are our phone batteries too big for you…"
- "My take: anything with ES3 should just use Forward+, the code is easier to write; and on ES2 phones… what lights do they need?"
- "Our deferred pipeline is heavily optimized; its bandwidth cost is very low, far below forward."
- Author: "No device measurements; this is just a small demo, and shipping it in a project would need a lot of validation and tuning. Twenty or thirty lights on screen is already a lot for a mobile game."
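The two-pass tiling idea raised in the comments (reduce each 16×16 block to zmin/zmax, then cull against it) can be sketched on the CPU. This is a simplified depth-slab test only; a real implementation would also test each tile's xy frustum against the light sphere:

```python
def tile_minmax(depth, tile=16):
    """Pass 1: reduce the depth buffer to per-tile (zmin, zmax)."""
    h, w = len(depth), len(depth[0])
    out = {}
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            vals = [depth[y][x] for y in range(ty, min(ty + tile, h))
                                for x in range(tx, min(tx + tile, w))]
            out[(tx // tile, ty // tile)] = (min(vals), max(vals))
    return out

def cull_by_depth(tiles, lights):
    """Pass 2: per tile, keep lights whose [z-r, z+r] slab overlaps the
    tile's depth range. lights is a list of (view_z, radius) pairs."""
    return {t: [i for i, (z, r) in enumerate(lights)
                if z - r <= zmax and z + r >= zmin]
            for t, (zmin, zmax) in tiles.items()}

# Tiny demo with 2x2 tiles over a 4x4 "depth buffer"
depth = [[1, 1, 9, 9], [1, 1, 9, 9], [5, 5, 5, 5], [5, 5, 5, 5]]
print(cull_by_depth(tile_minmax(depth, tile=2), [(1.5, 1.0), (8.5, 0.5)]))
```

The payoff is the same as in the article's per-pixel version, but the light loop runs once per tile instead of once per pixel, at the cost of an extra reduction pass.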