I am trying to filter a single-channel 2-D image of size 256×256 using the unfold method, creating 16×16 blocks with an overlap of 8, as follows:
```python
# I = [256, 256] image
kernel_size = 16
stride = bx / 2
patches = I.unfold(1, kernel_size, int(stride)).unfold(0, kernel_size, int(stride))
# size = [31, 31, 16, 16]
```
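For reference, a self-contained, runnable version of the unfold step above (assuming `bx` is the block size, 16, so the stride is 8):

```python
import torch

I = torch.randn(256, 256)   # single-channel 2-D image
kernel_size = 16
stride = kernel_size // 2   # overlap of 8

# unfold along each spatial dimension in turn:
# dim 1 first, then dim 0, yielding a grid of 16x16 patches
patches = I.unfold(1, kernel_size, stride).unfold(0, kernel_size, stride)
print(patches.shape)  # torch.Size([31, 31, 16, 16])
```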
I have started trying to reassemble the image with the fold method, but I haven't quite managed it. I have tried using view to get the image into the shape it should be in, but I don't see how that would preserve the original image. Perhaps I am overthinking this.
```python
# patches.shape = [31, 31, 16, 16]
patches = filt_data_block.contiguous().view(-1, kernel_size * kernel_size)  # [961, 256]
patches = patches.permute(1, 0)  # size = [256, 961]
```
Any help would be greatly appreciated. Many thanks.
Answer:
A slightly less elegant solution than the one proposed by Gil:
I took inspiration from this post on the PyTorch forums, formatting my image tensor to the standard shape B x C x H x W (1 x 1 x 256 x 256). Unfolding:
```python
# CREATE THE UNFOLDED IMAGE SLICES
I = image             # shape [256, 256]
kernel_size = bx      # 16
stride = int(bx / 2)  # 8
I2 = I.unsqueeze(0).unsqueeze(0)  # shape [1, 1, 256, 256]
patches2 = I2.unfold(2, kernel_size, stride).unfold(3, kernel_size, stride)
# shape [1, 1, 31, 31, 16, 16]
```
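As a variation (my sketch, not the answer's original code), `torch.nn.functional.unfold` produces the flattened `[N, C*kernel_size*kernel_size, L]` layout in one call, which is also the layout that `F.fold` consumes later:

```python
import torch
import torch.nn.functional as F

I2 = torch.randn(1, 1, 256, 256)  # B x C x H x W
cols = F.unfold(I2, kernel_size=16, stride=8)
# 16*16 = 256 values per patch, 31*31 = 961 patch positions
print(cols.shape)  # torch.Size([1, 256, 961])
```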
Next, I applied some transformations and filtering to my stack of patches. Before that, I applied a cosine window and normalised:
```python
# NORMALISE AND WINDOW
Pvv = torch.mean(torch.pow(win, 2)) * torch.numel(win) * (noise_std ** 2)
Pvv = Pvv.double()
mean_patches = torch.mean(patches2, (4, 5), keepdim=True)
mean_patches = mean_patches.repeat(1, 1, 1, 1, 16, 16)
window_patches = win.unsqueeze(0).unsqueeze(0).unsqueeze(0).unsqueeze(0).repeat(1, 1, 31, 31, 1, 1)
zero_mean = patches2 - mean_patches
windowed_patches = zero_mean * window_patches

# SOME FILTERING ....

# ADD MEAN AND WINDOW BEFORE FOLDING BACK TOGETHER.
filt_data_block = (filt_data_block + mean_patches * window_patches) * window_patches
```
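Note that `win` and `noise_std` are not defined in the snippet above. One plausible construction for the 2-D cosine window (an assumption on my part, not necessarily the author's exact window) is the outer product of two 1-D Hann windows:

```python
import torch

kernel_size = 16
w1 = torch.hann_window(kernel_size, periodic=False)  # 1-D cosine (Hann) window
win = torch.outer(w1, w1)  # separable [16, 16] 2-D window, tapering to zero at the edges
print(win.shape)  # torch.Size([16, 16])
```

A tapering window like this makes the overlapping patches blend smoothly when they are summed back together by fold.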
The above worked for me, though a mask would be simpler. Next, I was ready to convert the tensor of shape [1, 1, 31, 31, 16, 16] back to the original [1, 1, 256, 256]:
```python
# REASSEMBLE THE IMAGE USING FOLD
patches = filt_data_block.contiguous().view(1, 1, -1, kernel_size * kernel_size)
patches = patches.permute(0, 1, 3, 2)
patches = patches.contiguous().view(1, kernel_size * kernel_size, -1)
IR = F.fold(patches, output_size=(256, 256), kernel_size=kernel_size, stride=stride)
IR = IR.squeeze()
```
This allowed me to create an overlapping sliding window and seamlessly stitch the image back together. If the filtering step is omitted, the resulting image is identical.
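One caveat worth adding: `F.fold` sums overlapping patches rather than averaging them. The windowing above handles the blending, but for a plain unfold/fold round trip without a window you can recover the exact input by dividing by the per-pixel overlap count, obtained by folding a tensor of ones (a common trick, sketched here):

```python
import torch
import torch.nn.functional as F

I = torch.arange(256 * 256, dtype=torch.float64).reshape(256, 256)
kernel_size, stride = 16, 8

I2 = I.unsqueeze(0).unsqueeze(0)  # [1, 1, 256, 256]
patches = I2.unfold(2, kernel_size, stride).unfold(3, kernel_size, stride)  # [1, 1, 31, 31, 16, 16]

# Flatten to the [N, C*k*k, L] layout that F.fold expects
cols = patches.contiguous().view(1, 1, -1, kernel_size * kernel_size)
cols = cols.permute(0, 1, 3, 2).contiguous().view(1, kernel_size * kernel_size, -1)

summed = F.fold(cols, output_size=(256, 256), kernel_size=kernel_size, stride=stride)

# fold() SUMS overlapping contributions; folding ones gives the overlap
# count at every pixel, and dividing by it averages the sums back out.
counts = F.fold(torch.ones_like(cols), output_size=(256, 256),
                kernel_size=kernel_size, stride=stride)
IR = (summed / counts).squeeze()

print(torch.allclose(IR, I))  # True: the round trip is lossless
```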