
I have implemented all regularizations; is anybody interested in the same surface reconstruction and willing to discuss it with me?

Open · yuedajiong opened this issue 2 years ago · 8 comments

I urgently need a discussion partner who is working in the same direction.

yuedajiong avatar Dec 18 '23 09:12 yuedajiong

I am also working on implementing the regularizations; I am on the last one, the smoothing.

cdcseacave avatar Dec 18 '23 09:12 cdcseacave

I have experience with Poisson reconstruction, if that is what you need.

cdcseacave avatar Dec 18 '23 09:12 cdcseacave

Hello guys,

Sorry for not answering much recently; I was working hard on several projects.

As I explained in other issues, the code is basically finished and coming soon. Very, very soon, actually: I plan to release it today.

I just have a few tests to perform to check that the env file is working, and that I've not broken the code when reorganizing it.

Stay tuned: if everything goes well (and no bugs show up), the code will be out today.

Anttwo avatar Dec 18 '23 09:12 Anttwo

@Anttwo You're awesome, you're my idol.

I'm waiting for your code and will try it.

If you're willing, I'm happy to help by testing and optimizing the code.

The surface reconstruction is the most important part.

yuedajiong avatar Dec 18 '23 10:12 yuedajiong

Hello @yuedajiong, I have contacted you via email. I am looking forward to your reply!

XOURTNEY avatar Dec 19 '23 02:12 XOURTNEY

Hi @yuedajiong and @cdcseacave, I am trying to implement the SDF and normal regularizations on the original 3DGS. However, I ran into the problem that the density becomes a very large number when computed with the equation in the paper. The code is attached below; could you please take a look? Thanks!
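For reference, the density and SDF formulas from the paper, as I understand them, are

$$d(p) = \sum_g \alpha_g \, \exp\left(-\frac{1}{2}(p - \mu_g)^\top \Sigma_g^{-1} (p - \mu_g)\right), \qquad f(p) = s_{g^*} \sqrt{-2 \log d(p)},$$

where $\mu_g$, $\Sigma_g$, and $\alpha_g$ are the center, covariance, and opacity of Gaussian $g$, and $s_{g^*}$ is the smallest scaling factor of the Gaussian closest to $p$.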

```python
# SDF loss on gaussians._xyz

# Randomly pick current Gaussian centers as sample points.
valid_indices = torch.arange(30000, device="cuda")
cum_probs = valid_indices / 29999
random_indices = torch.multinomial(cum_probs, 30000, replacement=True)

# Random perturbation with the same shape as gaussians._xyz[random_indices].
random_change = torch.randn_like(gaussians._xyz[random_indices]) * 10
sdf_samples = gaussians._xyz[random_indices] + random_change

# Compute the SDF estimation.
sdf_samples_z = sdf_samples[..., 2] + 0
sdf_samples_xy = sdf_samples[..., 0:2] + 0
proj_mask = sdf_samples_z > viewpoint_cam.znear

# Look up the points' depths in the rendered depth map.
# Normalize the point coordinates to the depth map's range (-1 to 1).
points_normalized = sdf_samples_xy.clone()
points_normalized[:, 0] = sdf_samples_xy[:, 0] / (depth.shape[2] - 1)
points_normalized[:, 1] = sdf_samples_xy[:, 1] / (depth.shape[1] - 1)

# Reshape the point coordinates into a grid of shape (1, num_points, 1, 2).
grid = points_normalized.view(1, -1, 1, 2)
sdf_samples_map_z = torch.nn.functional.grid_sample(rendered_depths, grid, mode='bilinear',
                                                    padding_mode='border')[0, 0, :, 0]

sdf_estimation = sdf_samples_map_z[proj_mask] - sdf_samples_z[proj_mask]

# Real SDF, derived from the density.
beta = gaussians._scaling.min(dim=-1)[0][random_indices].mean(dim=0)

opacity = gaussians._opacity[random_indices]
distance = sdf_samples - gaussians._xyz[random_indices]
covariance = gaussians.get_actual_covariance()[random_indices]

densities = torch.zeros(30000, 1)
for i in range(30000):
    density = opacity[i] * (
        -0.5 * distance[i, :].T @ torch.inverse(torch.squeeze(covariance[i, :, :])) @ distance[i, :])
    densities[i, 0] = density

density_threshold = 1.
opacity_min_clamp = 1e-16
clamped_densities = densities.clamp(min=opacity_min_clamp).to("cuda")
sdf_values = beta * torch.sqrt(-2. * torch.log(clamped_densities))

# SDF standard deviation.
sdf_estimation_loss = torch.mean(sdf_values - sdf_estimation.abs())
```
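For comparison, here is a minimal vectorized sketch of the density evaluation with the exponential from the formula above (the names `mu`, `inv_cov`, and `alpha` are placeholders for the per-sample Gaussian quantities, not actual 3DGS attributes); note that without the `exp`, the quadratic form alone can grow arbitrarily large in magnitude:

```python
import torch

def gaussian_density(points, mu, inv_cov, alpha):
    """d(p) = alpha * exp(-0.5 * (p - mu)^T Sigma^{-1} (p - mu)),
    evaluated per sample against its own Gaussian, without a Python loop.

    points, mu: (N, 3); inv_cov: (N, 3, 3); alpha: (N, 1), already activated.
    """
    diff = points - mu                                               # (N, 3)
    # Batched quadratic form (p - mu)^T Sigma^{-1} (p - mu).
    mahalanobis = torch.einsum('ni,nij,nj->n', diff, inv_cov, diff)  # (N,)
    # The exponential bounds the density in (0, alpha]; without it the
    # quadratic form is unbounded as samples move away from the center.
    return alpha.squeeze(-1) * torch.exp(-0.5 * mahalanobis)        # (N,)
```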

neneyork avatar Apr 15 '24 22:04 neneyork

Hi @neneyork, did you manage to implement the loss successfully? I'm also trying to do that.

JiajieLi7012 avatar Jun 25 '24 02:06 JiajieLi7012

> Hi @neneyork, did you manage to implement the loss successfully? I'm also trying to do that.

Hi @JiajieLi7012, I have implemented the loss. If you have any difficulties, please refer to EndoGS.

neneyork avatar Jun 25 '24 19:06 neneyork