nerfacc.render_transmittance_from_density¶
- nerfacc.render_transmittance_from_density(t_starts, t_ends, sigmas, *, packed_info=None, ray_indices=None, n_rays=None)¶
Compute transmittance \(T_i\) from density \(\sigma_i\).
\[T_i = \exp\left(-\sum_{j=1}^{i-1}\sigma_j\delta_j\right)\]
Note
Either ray_indices or packed_info should be provided. If ray_indices is provided, CUB acceleration will be used if available (CUDA >= 11.6). Otherwise, we will use the naive implementation with packed_info.
- Parameters:
t_starts (Tensor) – Where the frustum-shape sample starts along a ray. Tensor with shape (n_samples, 1).
t_ends (Tensor) – Where the frustum-shape sample ends along a ray. Tensor with shape (n_samples, 1).
sigmas (Tensor) – The density values of the samples. Tensor with shape (n_samples, 1).
packed_info (Optional[Tensor]) – Optional. Stores information on which samples belong to the same ray. See nerfacc.ray_marching() for details. LongTensor with shape (n_rays, 2).
ray_indices (Optional[Tensor]) – Optional. Ray index of each sample. LongTensor with shape (n_samples,).
n_rays (Optional[int]) – Optional. Number of rays. Only used when ray_indices is provided but CUB acceleration is not available; in that case we implicitly convert ray_indices to packed_info and fall back to the naive implementation. If not provided, it is inferred from ray_indices, which is slower.
- Returns:
The rendering transmittance. Tensor with shape (n_samples, 1).
- Return type:
Tensor
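The implicit ray_indices-to-packed_info conversion mentioned under n_rays can be illustrated with a small NumPy sketch. The helper name below is hypothetical and not part of nerfacc's public API; it only shows the relationship between the two sample layouts (packed_info stores, per ray, the index of its first sample and its sample count).

```python
import numpy as np

def ray_indices_to_packed_info(ray_indices, n_rays=None):
    """Hypothetical helper: build (n_rays, 2) packed_info from per-sample ray ids.

    Assumes samples belonging to the same ray are stored contiguously,
    as produced by ray marching.
    """
    if n_rays is None:
        # Inferring the ray count requires a full pass over ray_indices.
        n_rays = int(ray_indices.max()) + 1
    # counts[r]: number of samples on ray r
    counts = np.bincount(ray_indices, minlength=n_rays)
    # starts[r]: index of the first sample on ray r (exclusive prefix sum)
    starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
    return np.stack([starts, counts], axis=-1)
```

For the ray_indices `[0, 0, 0, 1, 1, 2, 2]` from the example below, this yields `[[0, 3], [3, 2], [5, 2]]`: ray 0 owns samples 0-2, ray 1 owns samples 3-4, ray 2 owns samples 5-6.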
Examples:
>>> t_starts = torch.tensor(
...     [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0], [6.0]], device="cuda")
>>> t_ends = torch.tensor(
...     [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0]], device="cuda")
>>> sigmas = torch.tensor(
...     [[0.4], [0.8], [0.1], [0.8], [0.1], [0.0], [0.9]], device="cuda")
>>> ray_indices = torch.tensor([0, 0, 0, 1, 1, 2, 2], device="cuda")
>>> transmittance = render_transmittance_from_density(
...     t_starts, t_ends, sigmas, ray_indices=ray_indices)
[[1.00], [0.67], [0.30], [1.00], [0.45], [1.00], [1.00]]
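The formula above can be checked with a minimal NumPy reference sketch (not nerfacc's actual CUDA implementation): for each ray, accumulate an exclusive prefix sum of \(\sigma_j\delta_j\) and exponentiate its negation, so the first sample on every ray has transmittance \(\exp(0) = 1\).

```python
import numpy as np

def transmittance_from_density(t_starts, t_ends, sigmas, ray_indices):
    """Reference sketch of T_i = exp(-sum_{j<i} sigma_j * delta_j), per ray."""
    # delta_j: length of each sample interval along its ray
    deltas = (t_ends - t_starts).ravel()
    sd = sigmas.ravel() * deltas
    accum = np.zeros_like(sd)
    for ray in np.unique(ray_indices):
        mask = ray_indices == ray
        csum = np.cumsum(sd[mask])
        # Exclusive prefix sum: shift right so sample i excludes its own term.
        accum[mask] = np.concatenate(([0.0], csum[:-1]))
    return np.exp(-accum).reshape(-1, 1)

# Same inputs as the doctest above, on CPU:
t_starts = np.arange(7.0).reshape(-1, 1)
t_ends = t_starts + 1.0
sigmas = np.array([[0.4], [0.8], [0.1], [0.8], [0.1], [0.0], [0.9]])
ray_indices = np.array([0, 0, 0, 1, 1, 2, 2])
transmittance = transmittance_from_density(t_starts, t_ends, sigmas, ray_indices)
# Rounded to two decimals this reproduces the documented output:
# [[1.00], [0.67], [0.30], [1.00], [0.45], [1.00], [1.00]]
```

Note how transmittance resets to 1.0 at the first sample of each ray (indices 0, 3, and 5), since no density has been accumulated yet along that ray.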