Note: cluster 6192, rmax=7 -> natommax=36. Set npointc=36, batch=3096 (=6192/2).

[2024-10-25 17:06:12,325::train::INFO] Namespace(memo='nc36_batch3096', Rscale=3.0, latent_dim=256, num_steps=200, beta_1=0.0001, beta_T=0.05, sched_mode='linear', flexibility=0.0, truncate_std=2.0, latent_flow_depth=14, latent_flow_hidden_dim=256, num_samples=4, sample_num_points=128, kl_weight=0.001, residual=True, train_batch_size=3096, lr=0.002, weight_decay=0, max_grad_norm=10, end_lr=0.0001, sched_start_epoch=200000, sched_end_epoch=400000, seed=2020, logging=True, log_root='./logs_gen', device='cuda:0', max_iters=inf, val_freq=10000, test_freq=30000, test_size=400, tag=None, npointc=36, resume=None, cifdir='CCIF/CIF', randomrot=True, randomcenter=False)

Test command:
python test_gen_ccif.py --ckpt logs_gen/GEN_2024_10_25__17_06_12/ckpt_0.000000_780000.pt --seed 2349 --sample_num_points 2048

(Larger values of sample_num_points do not seem to give better results; the samples look too smooth, perhaps.)

Sampling change: we used zlatent = z_mu instead of zlatent = reparameterize_gaussian(mean=z_mu, logvar=z_sigma) in def sample_simple (and/or def sample) in class DiffusionPoint(Module) in diffusion.py, i.e. the latent is set to the mean z_mu deterministically rather than sampled from the Gaussian.
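A minimal sketch of the difference between the two latent choices. The body of reparameterize_gaussian below is an assumption based on the standard VAE reparameterization trick (z = mean + std * eps); the actual function in diffusion.py may differ in details.

```python
import torch

def reparameterize_gaussian(mean, logvar):
    # Standard reparameterization trick: z = mean + std * eps, eps ~ N(0, I).
    # (Assumed implementation; the project's diffusion.py may differ.)
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mean + std * eps

# Example encoder outputs (shapes match latent_dim=256 from the Namespace above).
z_mu = torch.zeros(4, 256)     # posterior mean
z_sigma = torch.zeros(4, 256)  # log-variance; 0 -> std = 1

# Original (stochastic): a fresh Gaussian sample around z_mu each call.
z_stochastic = reparameterize_gaussian(mean=z_mu, logvar=z_sigma)

# Change used here (deterministic): take the mean directly.
z_deterministic = z_mu
```

Using z_mu removes sampling noise from the latent, so repeated generations with the same input produce the same latent code; only the diffusion sampling itself remains stochastic.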