Performance of 'topiq_nr-face' metric
Good day. First of all, thanks for all the work done in this repo!
I have noticed that the performance of the topiq_nr-face metric decays over time. I ran it on 30k images of various dimensions: it starts at about 9 it/s and eventually decays to about 2.5 it/s. Here is the code I used for the test:

```python
import os
import time

import pandas as pd
import pyiqa
import torch
import torch.distributed as dist
import torchvision.transforms as transforms
from PIL import Image
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm


class ImageDataset(Dataset):
    def __init__(self, image_paths):
        self.image_paths = image_paths
        # Create the Compose object
        # self.transforms = transforms.Compose(
        #     [transforms.Resize((960, 960)),
        #      transforms.ToTensor()])

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img_path = self.image_paths[idx]
        # img = Image.open(img_path).convert("RGB")
        # img_tensor = self.transforms(img)
        return img_path  # , img_tensor


def main():
    # Initialize the distributed environment.
    dist.init_process_group(backend='nccl')
    rank = int(os.environ["LOCAL_RANK"])
    print(rank)

    df = pd.read_csv('dataset.csv')
    inference_set = df['image_path'].tolist()

    # Create dataset and dataloader
    dataset = ImageDataset(inference_set)
    sampler = DistributedSampler(dataset)
    dataloader = DataLoader(dataset, batch_size=1, pin_memory=True, sampler=sampler)

    METRIC = 'topiq_nr-face'
    print(f'Metric: {METRIC}')

    # Create the metric and move it to the GPU with id rank
    device = torch.device(f"cuda:{rank}" if torch.cuda.is_available() else "cpu")
    torch.cuda.set_device(device)
    iqa_metric = pyiqa.create_metric(METRIC, device=device)
    iqa_metric = iqa_metric.to(device)
    ddp_model = DDP(iqa_metric, device_ids=[rank])

    # Collect results, wrapping the iterable with tqdm() for a progress bar
    results = []
    for img_path in tqdm(dataloader):
        try:
            # source = source.unsqueeze(0)
            # source = source.to(rank)
            score_fr = ddp_model(img_path[0]).item()
            results.append({'image_path': img_path[0], METRIC: score_fr})
        except Exception:
            results.append({'image_path': img_path[0], METRIC: -1})

    # Create a DataFrame from the list of results
    result_df = pd.DataFrame(results)

    # If the CSV file already exists, append the new results to it
    csv_filename = 'dataset_with_metric.csv'
    if os.path.exists(csv_filename):
        existing_df = pd.read_csv(csv_filename)
        result_df = pd.concat([existing_df, result_df], ignore_index=True)

    # Save the updated DataFrame to CSV
    result_df.to_csv(csv_filename, index=False)

    # Clean up the distributed environment
    dist.destroy_process_group()


if __name__ == "__main__":
    marktime = time.time()
    main()
    print(time.time() - marktime)
```
Don't mind the pandas operations. Do you see anything wrong? Has anyone hit the same scenario and found a solution? Thank you in advance, and have a nice day!
Thanks for your interest!
I checked the code and did not find anything obviously wrong. The topiq_nr-face metric performs the following steps at test time:
- Detect and extract the face in the image (if multiple faces exist, only the first one is evaluated).
- Align the face to a specific template, with a fixed size of $512\times512$.
- Use the pre-trained model to evaluate the score.
The second and third steps should take roughly the same time for every image, but the cost of the first step can depend on the specific input image. To check where the time goes, you can time the end-to-end metric call on a few images of different resolutions, as in the sketch below.
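A minimal sketch of such a check, assuming the metric accepts image paths as in your script (the file names below are placeholders):

```python
import time

import pyiqa
import torch
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
metric = pyiqa.create_metric('topiq_nr-face', device=device)

# Placeholder paths: pick a few representative images of different sizes.
image_paths = ["small_1mp.jpg", "medium_12mp.jpg", "large_48mp.jpg"]

for path in image_paths:
    w, h = Image.open(path).size  # lazy open: size comes from the header, pixels are not decoded yet
    start = time.time()
    score = metric(path).item()  # .item() moves the result to CPU, so the GPU work has finished here
    elapsed = time.time() - start
    print(f"{path}: {w}x{h} ({w * h / 1e6:.1f} MP) -> score {score:.4f}, {elapsed:.2f} s")
```

If the per-image time grows with resolution, the image loading / face detection stage is the likely bottleneck.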
Thank you. Any advice on preprocessing the input images before inference? I have images ranging from less than 1 megapixel up to 48 megapixels, and I am wondering about the input size of the model.
It seems that your images are too large. Reading and processing a 48 megapixel image can be slow. You may try resizing all images (keeping the aspect ratio) to less than 1 megapixel; a possible preprocessing sketch is below.
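A minimal resizing sketch, assuming you are fine with writing downscaled copies to disk (the paths and the 1 MP budget are placeholders to adapt):

```python
from PIL import Image

MAX_PIXELS = 1_000_000  # rough budget of one megapixel


def resize_to_max_pixels(src_path, dst_path, max_pixels=MAX_PIXELS):
    """Downscale an image so that width * height <= max_pixels, keeping the aspect ratio."""
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    if w * h > max_pixels:
        scale = (max_pixels / (w * h)) ** 0.5
        new_size = (max(1, int(w * scale)), max(1, int(h * scale)))
        img = img.resize(new_size, Image.LANCZOS)
    img.save(dst_path, quality=95)  # the quality flag applies to JPEG output


# Hypothetical usage:
# resize_to_max_pixels("raw/IMG_0001.jpg", "resized/IMG_0001.jpg")
```

You would run this once over the dataset and point `dataset.csv` at the resized copies, so the metric never has to decode a 48 megapixel file during inference.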